
IRC log for #gluster-dev, 2015-05-09


All times shown according to UTC.

Time Nick Message
00:25 dlambrig left #gluster-dev
00:56 shyam1 left #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:38 kshlm joined #gluster-dev
03:09 badone_ joined #gluster-dev
03:27 kshlm joined #gluster-dev
03:33 JustinClift joined #gluster-dev
03:40 kotreshhr joined #gluster-dev
04:18 kshlm joined #gluster-dev
04:18 kshlm joined #gluster-dev
04:26 atalur joined #gluster-dev
04:32 atinmu joined #gluster-dev
04:42 atalur hchiramm, hagarth : gluster build system doesn't seem to be posting comments on review.gluster for build failures.
04:42 atalur hchiramm, hagarth : is it correct or am I missing some info here?
04:42 anoopcs joined #gluster-dev
04:46 atinmu atalur, its been the story since we upgraded gerrit
04:46 atinmu atalur, all those sets of jobs are not running, which includes smoke too..
04:46 atalur atinmu, oh! Thanks. I might have missed the mailing thread. :-/
04:46 atinmu atalur, we
04:46 atinmu oops
04:52 anoopcs joined #gluster-dev
04:53 schandra joined #gluster-dev
04:58 krishnan_p joined #gluster-dev
05:04 atalur joined #gluster-dev
05:12 anrao joined #gluster-dev
05:13 pranithk joined #gluster-dev
05:14 rjoseph|afk joined #gluster-dev
05:14 pranithk atalur: madam talur, gods are still not with you? :-(
05:15 gem joined #gluster-dev
05:15 atalur pranithk, no :( That patch that I had sent was failing build due to a prev patch. I rebased and sent again.
05:15 atalur pranithk, Could you please +1 it? It is an exact backport of the master one.
05:16 pranithk atalur: alright
05:16 pranithk atalur: You don't worry I will give +2 and merge it as soon as it passes regression :-)
05:17 atalur pranithk, Thanks :) but I wish I had caught this rebase issue yesterday night! I would have sent it then itself.
05:17 pranithk atalur: Things rarely will be perfect :-). Don't worry too much
05:18 atalur pranithk, true :) Thanks again :)
05:18 pranithk atalur: Is rastar_afk around? when does he wakeup? you know?
05:19 atalur pranithk, I will let you know in another 5 minutes or so. I'm in Mysore. Will call and ask his status.
05:19 pranithk atalur: ayya no
05:19 pranithk atalur: I can only call.
05:19 atalur pranithk, ok :)
05:20 pranithk atalur: whats the special occasion for mysore?
05:21 atalur pranithk, you will catch him if you are at office. My mom said he is leaving for office soon.
05:21 atalur pranithk, get together with college friends :)
05:21 hchiramm http://build.gluster.org/job/rackspace-regression-2GB-triggered/8826/consoleFull pranithk
05:22 hchiramm any idea if this ec failure is fixed in the release-3.7 branch?
05:22 pranithk atalur: nice nice :-). Enjoy. I will take care of it
05:22 pranithk hchiramm: no :-(
05:23 hchiramm pranithk, hmmm..
05:23 pranithk hchiramm: Let me run it locally once...
05:23 hchiramm ok.. thanks
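(For reference: regression tests are plain shell scripts under tests/, so "run it locally" means invoking the harness on a single file. A minimal sketch, assuming run-tests.sh accepts an individual test path; the ec test name here is only an example.)

    # run one regression test locally
    ./run-tests.sh tests/basic/ec/ec.t
    # or drive the same script directly through prove
    prove -vf tests/basic/ec/ec.t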
05:24 atalur pranithk, In case I'm able to monitor, I will ping you once regression succeeds :)
05:25 hchiramm atalur, just noticed ur ping
05:25 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:25 hchiramm yeah, smoke is not running after the gerrit upgrade
05:25 atalur hchiramm, okay :) Atin answered my questions.
05:25 * atinmu is stepping out for an hour
05:26 atalur hchiramm, thanks :)
05:26 hchiramm atinmu, njoy ur time :)
05:26 hchiramm atalur++ :)
05:26 glusterbot hchiramm: atalur's karma is now 1
05:26 hchiramm pranithk++
05:26 atinmu hchiramm, :)
05:26 glusterbot hchiramm: pranithk's karma is now 14
05:26 atinmu hchiramm, I don't have time :(
05:26 atinmu hchiramm, :) :)
05:26 hchiramm atinmu, u r not alone :)
05:27 anoopcs joined #gluster-dev
05:27 hchiramm anoopcs, good morning
05:28 anoopcs hchiramm: Good morning
05:29 pranithk anoopcs: Sir! morning. How is your night out friend doing? wokeup?
05:29 anoopcs pranithk: Not yet :)
05:30 pranithk anoopcs: you guys are done for 3.7? anything pending?
05:31 pranithk hchiramm: Doesn't seem obvious man, I will try to re-create it. Could you please send a mail on gluster-devel?
05:31 pranithk hchiramm: Feel free to re-trigger
05:31 hchiramm I am adding u to another loop :)
05:31 hchiramm pranithk, I did
05:32 pranithk hchiramm: cool
05:32 hchiramm pranithk, thanks for looking into it
05:32 anoopcs pranithk: Mostly
05:32 pranithk anoopcs: good!
05:33 * anoopcs will be back in a minute
05:33 hchiramm pranithk, u r in a thread
05:33 hchiramm please check ur mail
05:34 anoopcs joined #gluster-dev
05:37 schandra hchiramm, good morning
05:37 hchiramm schandra, I will come to ur seat :)
05:37 hchiramm Good morning
05:40 pranithk hchiramm: Do you really need this patch in 3.7.0 now? What's so important about it? By the time memory allocation fails, the kernel would have done an OOM kill anyway.
06:04 nishanth joined #gluster-dev
06:20 atinmu joined #gluster-dev
06:26 anrao joined #gluster-dev
06:30 kotreshhr joined #gluster-dev
06:30 ashiq joined #gluster-dev
06:38 atinmu joined #gluster-dev
06:44 hchiramm joined #gluster-dev
06:48 hchiramm ndevos, ping
06:48 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:49 hchiramm rastar++ thanks !
06:49 glusterbot hchiramm: rastar's karma is now 3
06:58 kshlm joined #gluster-dev
06:58 schandra joined #gluster-dev
06:59 atinmu http://review.gluster.org/10702 fixes bitrot spurious failures
06:59 atinmu need review
07:04 pranithk atinmu: I am on it
07:04 atinmu pranithk, thanks
07:05 pranithk atinmu: done
07:05 atinmu kshlm, we should take in http://review.gluster.org/#/c/10622/
07:05 atinmu pranithk, thanks
07:05 atinmu pranithk++
07:05 glusterbot atinmu: pranithk's karma is now 15
07:05 kshlm atinmu, sure
07:06 atinmu kshlm, regression has not finished in linux build system
07:06 atinmu kshlm, wondering why
07:07 kshlm I was just checking that.
07:09 hagarth joined #gluster-dev
07:11 hchiramm kshlm, can u please merge this http://review.gluster.org/#/c/10669/ ? it's a backport and regression has passed
07:11 kshlm The build passed regression, but failed to report back (ssh failure). I ran the ssh command manually, the +1 should be visible now.
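(For reference: Jenkins votes on patches through Gerrit's ssh command-line API, which is what kshlm re-ran by hand. A hedged sketch of the kind of command involved, assuming Gerrit's default ssh port and a "build" account with voting rights; the SHA is a placeholder.)

    # re-post a Verified vote when Jenkins fails to report back
    ssh -p 29418 build@review.gluster.org gerrit review \
        --message "'Regression passed (reported manually)'" \
        --verified +1 \
        <full-sha-of-the-tested-patch-set>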
07:13 atinmu kshlm, cool
07:13 kshlm hchiramm, 10669 has been already merged.
07:13 atinmu kshlm, merge it
07:13 atinmu kshlm, we need to backport it as well
07:13 hchiramm ahh.. my mistake
07:14 hchiramm wrong url :)
07:14 atinmu kshlm, i believe anand is not there
07:14 hchiramm http://review.gluster.org/#/c/10674/ kshlm is the correct url
07:14 kshlm I'm trying to merge but gerrit is acting up for me.
07:14 atinmu is kp around?
07:14 hchiramm http://build.gluster.org/job/rackspace-regression-2GB-triggered/8888/console can u please retrigger this?
07:15 hchiramm atinmu, ^^
07:15 atinmu I wish I would have had the permission
07:15 atinmu sure
07:15 hchiramm thanks atinmu++
07:15 glusterbot hchiramm: atinmu's karma is now 17
07:15 hchiramm kshlm++
07:15 glusterbot hchiramm: kshlm's karma is now 11
07:15 hagarth has anybody triggered a regression run for rafi's tiering patches?
07:15 atinmu done
07:16 atinmu hagarth, I've not
07:16 hagarth atinmu: ok
07:16 atinmu hagarth, surprisingly all his patches failed in single test
07:16 hagarth atinmu: yeah..
07:16 atinmu hagarth, I was wondering whether he should rebase on top of my fix and resend
07:17 hagarth atinmu: I think that should happen.. am about to merge your patch now
07:17 atinmu hagarth, ok
07:17 atinmu hagarth, thanks
07:21 kshlm atinmu, 10622 is merged.
07:21 atinmu kshlm, would u backport 10622?
07:23 kshlm hchiramm, backports require a separate bug-id. Can you do that for 10674?
07:23 kshlm atinmu, I'll try.
07:23 atinmu kshlm, or I will do it
07:24 hchiramm kshlm, ok.. I will clone the master bug
07:24 kshlm hchiramm, please
07:24 kshlm just update the bug-id I'll merge.
07:24 kshlm atinmu, I'll do it.
07:25 atinmu kshlm, I am already in progress :)
07:25 kshlm okay then. I was just about to click the button to clone the bug.
07:27 hchiramm kshlm, done.. http://review.gluster.org/#/c/10674/
07:27 hchiramm can u check ?
07:30 hchiramm atinmu, did u retrigger that ?
07:33 hchiramm yes, u did .. thanks
07:33 atinmu hchiramm, I did
07:33 hchiramm :)
07:33 hchiramm thanks atinmu
07:34 hchiramm kshlm++ thanks !
07:34 glusterbot hchiramm: kshlm's karma is now 12
07:50 ndevos pong hchiramm?
08:00 hchiramm ndevos, HI.. Good morning
08:00 hchiramm its regarding client xlators patch
08:15 pranithk atinmu: there?
08:17 pranithk atinmu: just wanted to know how you like the mail I sent about breaking glusterd into small parts
08:24 kshlm pranithk, Could you share the mail with others as well?
08:37 anrao joined #gluster-dev
08:39 pranithk kshlm|afk: I sent it on glusterdevel, you didn't get?
08:39 pranithk kshlm|afk: "break glusterd into small parts" is the subject
08:40 ndevos hchiramm, kkeithley: please review http://review.gluster.org/10706 when you have a minute
08:44 atinmu pranithk, sorry I was afk, will go through it and let u know
08:45 rafi joined #gluster-dev
08:48 atinmu pranithk, sounds interesting, but have to be thought out well
08:49 atinmu pranithk, we do have a plan to make core glusterd algorithms work as a glusterd engine, and other features will have interfaces to connect to it. Your proposal looks like another alternative. I think we should give it a little bit more thinking, and then proceed
08:59 pranithk hagarth: http://review.gluster.com/10701 was failing for some case which I fixed and resent, it would be nice if he reviews but what if he doesn't come online?
09:02 pranithk hagarth: http://review.gluster.com/#/c/9407/ is the patch on master
09:06 ndevos hmm, there are quite a lot of unmerged changes for 3.7: http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.7
09:14 atinmu joined #gluster-dev
09:18 hagarth ndevos: I intend not merging any logging related changes for release-3.7
09:18 hagarth pranithk: will merge that after regressions pass
09:19 hagarth ndevos: changes from tiering & bitrot are the major needs for 3.7.0 as of now.
09:20 ndevos hagarth: no, sure, I understand, I was just surprised there were so many patches posted and not pushed further
09:20 pranithk hagarth: I did a stupid thing by modifying the commit message of the master patch, adding "backport of ...", instead of the release-3.7 patch :-(
09:20 pranithk kshlm: See the mail now :-)
09:20 hagarth ndevos: right, some patches have been posted even before the corresponding mainline one was reviewed  & merged.
09:20 kshlm pranithk, Just saw it.
09:21 ndevos and "pushed" as in ("fix regression testing", "ask for reviews", "ask for merging")
09:21 ndevos hagarth: yeah, the two libgfapi patches are like that, the mainline fixes look different than the backports :-(
09:21 hagarth ndevos: :-/
09:22 hagarth rafi: why is this patch needed - http://review.gluster.org/#/c/10358/ ?
09:24 hagarth ndevos: looks like we will miss a few fixes in libgfapi too
09:25 gem joined #gluster-dev
09:25 RaSTar anyone getting this error when signing in using github ?
09:25 RaSTar http://fpaste.org/220138/14311635/
09:25 rafi hagarth: Samba required this
09:26 ndevos RaSTar: instead of the "login" in the url, try "logout"
09:26 rafi hagarth: to copy from snapshot folder
09:26 hagarth rafi: ok
09:26 pranithk kshlm: Do you think it is useful at all?
09:26 ndevos RaSTar: after logging out (or deleting the cookies for review.gluster.org) you should be able to login again
09:27 rafi hagarth: I think , they call statfs before copy, i'm not sure, guessing it :)
09:27 kshlm Yup. I'm typing out my reply to it right now.
09:27 kshlm pranithk^
09:27 pranithk kshlm: cool
09:27 RaSTar hagarth: regarding 10358 , Samba makes statfs calls to decide how to go about copying file etc.
09:27 ndevos hagarth: yes, it looks like we'll miss the two libgfapi changes
09:27 RaSTar USS needed handle based statfs call to get the data.
09:28 hagarth ok
09:32 RaSTar ndevos++ , clearing cookies on github worked
09:33 glusterbot RaSTar: ndevos's karma is now 121
09:34 ndevos hmm, vdsm still tries to install glusterfs-server :-(
09:36 kshlm ndevos, vdsm does need the server-bits
09:36 kshlm it needs glusterd and cli to do its stuff.
09:36 ndevos kshlm: okay, but why is a oVirt dev complaining about it?
09:36 kshlm I dunno.
09:36 kshlm Oh!
09:37 ndevos kshlm: the cli is in glusterfs-cli, and that does not require the server bits (for --remote-host=...)
09:37 kshlm I was thinking about managing glusterfs with oVirt.
09:37 kshlm If you were using glusterfs as just a storage domain then you wouldn't need it.
09:37 ndevos ah, okay, then I'll check out where the dependency comes from :)
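(For reference: the --remote-host option ndevos mentions lets the bare cli package drive a glusterd running elsewhere, so no server bits are needed locally. A minimal sketch; the hostname is a placeholder.)

    # query a remote glusterd without glusterfs-server installed locally
    gluster --remote-host=storage01.example.com volume info
    gluster --remote-host=storage01.example.com peer status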
09:44 krishnan_p joined #gluster-dev
09:44 ndevos kshlm: aha, vdsm has a requirement on glusterfs-geo-replication, which pulls in -server, do you know if geo-replication is a correct requirement?
09:46 kshlm Afaik, ovirt doesn't manage geo-replication. But my information is a little stale.
09:48 kshlm ndevos, It looks like the oVirt team is working on geo-rep management. http://www.ovirt.org/Features/Gluster_Geo_Replication
09:48 gem joined #gluster-dev
09:48 kshlm Seems like a pretty new feature.
09:49 ndevos kshlm++ thanks, looks like we can blame vdsm at least a little bit ;-)
09:49 glusterbot ndevos: kshlm's karma is now 13
09:52 RaSTar ndevos: I used gerrit interface to backport 9797 to 10414. It did without conflicts.
09:52 RaSTar I will review it once and give a +1
09:52 ndevos RaSTar: but that was done on an older version of 9797, I think?
09:52 RaSTar I did just now
09:52 ndevos oh, ok
09:54 kotreshhr hagarth: For http://review.gluster.org/#/c/10639/ , regression succeeded, but it is not updated
10:00 Debloper joined #gluster-dev
10:05 hchiramm ndevos++
10:05 glusterbot hchiramm: ndevos's karma is now 122
10:09 kshlm kotreshhr, can you give the link to the regression page?
10:10 kshlm NVM, got it from the review page
10:10 kotreshhr kshlm: http://build.gluster.org/job/rackspace-regression-2GB-triggered/8896/consoleFull
10:10 anrao joined #gluster-dev
10:11 kshlm kotreshhr, the flag should be set now.
10:12 kotreshhr kshlm: Thanks
10:24 hagarth can somebody pls +2 this - http://review.gluster.org/#/c/10704/ ?
10:25 kshlm Is quota-nfs.t a spurious failure?
10:25 hagarth kshlm: yes
10:26 kshlm Thanks hagarth.
10:26 hchiramm schandra++ awesome progress !
10:26 glusterbot hchiramm: schandra's karma is now 3
10:26 hagarth bitrot tests are the latest offenders today :-/
10:27 rafi hagarth: that was genuine failure
10:27 hagarth rafi: really?
10:27 Gaurav_ joined #gluster-dev
10:27 rafi hagarth: yes, i have send an updated patch set
10:28 hagarth rafi: however there are patches like this as well - http://review.gluster.org/#/c/10707/
10:28 Gaurav_ hagarth: there was a comment on the http://review.gluster.org/#/c/10702/ patch
10:29 Gaurav_ hagarth: http://review.gluster.org/#/c/10707/ will solve that comment. The http://review.gluster.org/#/c/10702/ patch is already merged
10:29 ndevos whenever I read bitrot, I have to think how overclk puts the emphasis on the last t - bitroT
10:29 RaSTar hagarth: gave +2 to http://review.gluster.org/#/c/10704/
10:30 hagarth RaSTar: thanks!
10:31 hagarth Gaurav_: let us merge your change once regressions pass
10:31 Gaurav_ hagarth: sure
10:32 Gaurav_ hagarth: the same patch that atin had sent was backported to 3.7, http://review.gluster.org/#/c/10705/ , and it's failing regression
10:32 hagarth Gaurav_: ok
10:42 anrao_ joined #gluster-dev
10:53 krishnan_p joined #gluster-dev
10:55 krishnan_p joined #gluster-dev
11:01 hchiramm kkeithley, ping .. Hi
11:06 msvbhat joined #gluster-dev
11:07 anrao joined #gluster-dev
11:07 anrao_ joined #gluster-dev
11:20 rafi1 joined #gluster-dev
11:22 shyam1 joined #gluster-dev
11:30 rafi1 hagarth: waiting for http://review.gluster.org/#/c/10707/
11:32 kaushal_ joined #gluster-dev
11:34 hchiramm kkeithley, there
11:34 pranithk shyam1: thanks dude!!!
11:35 pranithk shyam1++
11:35 glusterbot pranithk: shyam1's karma is now 1
11:35 hchiramm reg# the glupy change, it looks like a couple of commits came in the same area and changed the glupy structure itself :)
11:35 hchiramm I was not aware of those changes though
11:38 hchiramm schandra, can I merge the last pull request?
11:38 schandra hchiramm, give me 5 more mins.. will notify when to merge
11:38 hchiramm sure.. nw ..
11:47 shyam1 joined #gluster-dev
11:49 kshlm joined #gluster-dev
11:51 shyam1 left #gluster-dev
11:53 shyam1 joined #gluster-dev
11:54 shyam1 left #gluster-dev
11:55 shyam1 joined #gluster-dev
11:59 shyam1 left #gluster-dev
11:59 shyam1 joined #gluster-dev
12:03 ira joined #gluster-dev
12:10 atinmu joined #gluster-dev
12:12 atinmu hagarth, we need to take in http://review.gluster.org/#/c/10707
12:13 atinmu hagarth, unfortunately http://review.gluster.org/#/c/10702 has broken the regression
12:19 shyam1 left #gluster-dev
12:19 shyam1 joined #gluster-dev
12:19 hchiramm schandra++++++++++++++++++
12:19 glusterbot hchiramm: schandra++++++++++++++++'s karma is now 1
12:19 hchiramm schandra++
12:19 glusterbot hchiramm: schandra's karma is now 4
12:20 schandra hchiramm, 2 + are enough to increase 1 karma :P
12:20 ndevos RaSTar: I think there is a problem in glfs.c in http://review.gluster.org/10414 :-(
12:20 ndevos around line 639...
12:27 hagarth atinmu: shall we merge 10707 ?
12:28 atinmu hagarth, yes, but one thing which I noticed is that when bitrot is enabled, for some timeframe I could see two bitd instances spawned, and then it came down to 1 after a few seconds
12:28 atinmu hagarth, I am not sure whether we spawn any temporary daemon like this
12:28 hagarth atinmu: that almost sounds freakish :D
12:29 atinmu hagarth, that's why my patch passed locally
12:29 hagarth atinmu: right
12:29 atinmu hagarth, I checked for bitd_count = 2 by mistake
12:29 atinmu hagarth, but logically 10707 is correct and we should take that in
12:30 hchiramm_home joined #gluster-dev
12:30 hagarth atinmu: let us await a clean regression run for 10707
12:30 atinmu hagarth, for 3.7 I've taken care of the same changes in my patch, no need to backport 10707 explicitly
12:30 atinmu hagarth, yeah
12:31 atinmu hagarth, how about http://review.gluster.org/#/c/10417/
12:31 hagarth atinmu: was just thinking about that
12:32 atinmu hagarth, I feel the crash that we see in cleanup_and_exit() will be addressed by the above
12:32 hagarth atinmu: right
12:33 hagarth atinmu: would it be possible for you to backport 10417 to release-3.7 if we merge it in master now?
12:33 pranithk hagarth: Backport of 9407 passed regression http://review.gluster.com/10701, but the patch on master is running into different failures. 1) jenkins infra problem 2) http://build.gluster.org/job/rackspace-regression-2GB-triggered/8918/console, strange errors :-/
12:34 pranithk hagarth: I restarted the run
12:34 hagarth pranithk: ok
12:34 hagarth pranithk: game for a quick review of 10417?
12:36 atinmu hagarth, I will
12:37 pranithk hagarth: I see that it is already reviewed...
12:37 atinmu hagarth, 10707 passed all but with a core
12:37 hagarth atinmu: wonder what the core is about
12:37 hagarth pranithk: one more review will be helpful
12:38 atinmu hagarth, let me check
12:41 ndevos oh no! hagarth, http://review.gluster.org/10706 passed regression but dropped a core too...
12:42 hagarth ndevos: it's all happening out there :-/
12:42 ndevos thats sad :-(
12:43 RaSTar ndevos: That is ok, check my comment there on 10414
12:43 atinmu hagarth, ndevos : the reason of core from 10707 happens to be the same which kotreshhr was analyzing
12:43 hagarth atinmu: I see
12:44 atinmu hagarth, let me add a note in gerrit
12:46 ndevos RaSTar: ah, right!
12:52 * atinmu is logging off now
12:59 hagarth ndevos: let us override patches that are failing due to this core
13:00 RaSTar hagarth: regarding 10414, will do THIS->global_xlator in a different patch later?
13:00 hagarth RaSTar: ok
13:00 RaSTar hagarth: thanks
13:04 RaSTar hagarth: regarding http://review.gluster.org/#/c/10529 , have got +1 from KP and regression passed.
13:05 hagarth RaSTar: I am inclined to take it in after 3.7.0. would that work?
13:05 RaSTar then we will have to use > /dev/null in the rpm spec file
13:05 RaSTar which may mask valid errors from users during upgrade
13:05 RaSTar not sure which is better
13:08 dlambrig joined #gluster-dev
13:08 hagarth ndevos: thoughts on the upgrade?
13:08 ndevos hagarth: uh, what upgrade?
13:09 hagarth ndevos: check RaSTar's notes above
13:10 ndevos hagarth: oh, I'd leave it as it is for now, those messages are not critical
13:10 hagarth ndevos: inclined to agree with you .. let us get it in post 3.7.0
13:11 ndevos hagarth: I do not want to add more >/dev/null, because indeed it could hide real errors
13:11 rjoseph|afk joined #gluster-dev
13:13 hagarth ndevos: ok
13:17 RaSTar ndevos and hagarth: Agreed.  Lets not take this in. We can look at it after 3.7.
13:19 hagarth ndevos: do you foresee any problems with 10417 and NetBSD?
13:20 ndevos hagarth: no, and patch #2 passed on NetBSD too... but I'll quickly check if there was a run with patch #3
13:20 pranithk hagarth: done, gave +2
13:20 hagarth ndevos: ok
13:20 hagarth pranithk: cool
13:20 hagarth pranithk++
13:20 glusterbot hagarth: pranithk's karma is now 16
13:20 pranithk hagarth: neat patch
13:21 hagarth pranithk: yeah, looks nice
13:21 ndevos hagarth: ah, well, a run was done, but Jenkins failed: http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4503/console
13:21 ndevos hagarth: I think the patch should be fine :)
13:22 hagarth ndevos: great!
13:22 hagarth I don't want to face Manu's wrath if anything that we are merging today breaks NetBSD ;)
13:23 ndevos oh, no indeed!
13:26 ndevos rafi1: http://review.gluster.org/10677 failed on the quota-anon-fd-nfs.t, it needs a new run of the regression test
13:26 ndevos rafi1, hagarth: is that something that needs to get in 3.7.0?
13:27 hagarth ndevos: looks like a good fix to have
13:27 hagarth even the ec patch failed on quota-nfs.t
13:27 ndevos hagarth: okay, I'll retrigger it
13:27 pranithk ndevos: I already retriggered ec patch
13:28 * hagarth merged 10417
13:28 ndevos pranithk: okay, the oen from rafi is: glusterd/tiering : cksum mismatch for tiered volume
13:29 pranithk ndevos: sorry? what oen?
13:29 pranithk one
13:29 ndevos hagarth: http://review.gluster.org/10706 should be ready for merging too
13:29 pranithk got it
13:29 ndevos something not ec, I think :)
13:29 pranithk ndevos: It is tiering one..
13:30 pranithk brb in 10-15 minutes
13:32 ndevos hagarth: was someone going to backport 10417 to release-3.7?
13:32 Gaurav_ joined #gluster-dev
13:33 hagarth ndevos: WIP from me
13:33 ndevos hagarth: ok!
13:33 hagarth ndevos: running into checkpatch.pl errors on release-3.7
13:33 ndevos haha
13:34 ndevos hagarth: maybe the double-signoff warnings?
13:35 rafi1 ndevos: http://review.gluster.org/#/c/10707/, i'm planning to rebase it, otherwise it will fail on scrub :)
13:35 rafi1 ndevos: sorry, this http://review.gluster.org/#/c/10707/ one
13:35 ndevos rafi1: uhhh, which one?
13:36 rafi1 ndevos: again http://review.gluster.org/#/c/1677
13:36 hagarth ndevos: no, C99 style comments and space missing after comma
13:36 hagarth wonder how jeff overcame that on master
13:36 rafi1 ndevos: http://review.gluster.org/#/c/10977
13:36 rafi1 ndevos: http://review.gluster.org/#/c/10677
13:36 rafi1 ndevos: the last one
13:37 rafi1 ndevos: since the Gaurav_ patch got merged, i guess it would be better to rebase it
13:37 rafi1 ndevos: right ?
13:37 kshlm joined #gluster-dev
13:37 ndevos rafi1: oh, okay, I'll see if I can cancel the regression test
13:38 rafi1 ndevos: ok, I was waiting to get 10707 to merge :)
13:38 ndevos rafi1: 10677 was only scheduled to get run, it should have been removed from the scheduled jobs now
13:39 rafi1 ndevos: ok, then sending the rebased patch
13:39 ndevos rafi1: sure, go for it!
13:40 rafi1 ndevos: hagarth
13:40 rafi1 remote:   http://review.gluster.org/10339
13:40 rafi1 remote:   http://review.gluster.org/10449
13:40 rafi1 remote:   http://review.gluster.org/10406
13:40 rafi1 remote:   http://review.gluster.org/10328
13:40 ndevos hagarth: maybe Jeff posts with "git review", that does not run the checkpatch script?
13:40 rafi1 ndevos: hagarth :
13:40 rafi1 I expect regression to pass for those patch sets
13:41 Gaurav_ joined #gluster-dev
13:42 ndevos rafi1: are those all 3.7 patches? then they should be on http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.7
13:43 ndevos (there are more on the 2nd page)
13:43 ndevos hagarth: http://review.gluster.org/10723 has been posted against the mainline bug, can you update the commit message to a 3.7 one?
13:44 dlambrig left #gluster-dev
13:45 rafi1 ndevos: they should go into 3.7 :), but still in master
13:46 ndevos rafi1: oh, ok, I dont know how long hagarth wants to wait for those then... the plan was to have all merged in 2:15 hours from now
13:47 ndevos and with a 2+ hour regression test run, patches should have been posted by now, I guess
13:48 rafi1 ndevos: I know, But :(
13:48 ndevos rafi1: if they dont make it in 3.7.0, there is always 3.7.1 around the corner?
13:48 rafi1 ndevos: Ya, now i'm looking to get merged in master
13:49 hagarth ndevos: will clone and update later
13:49 hagarth ndevos: bitrot patches also got dumped in a burst now
13:50 ndevos hagarth: I dont see them yet, or is that on the master branch?
13:50 Supermathie joined #gluster-dev
13:50 hagarth ndevos: no, on release-3.7
13:51 hagarth http://review.gluster.org/#/q/status:+open+branch:+release-3.7
13:51 ndevos hagarth: oh, you mean new/updated patches posted? yeah, about 40 minutes ago....
13:51 hagarth check all from Raghavendra Bhat
13:51 hagarth ndevos: 10414 good to get in?
13:52 * ndevos checks again, cant remember what that one was
13:52 ndevos hagarth: yes, looks good to me
13:52 hagarth ndevos: cool, thanks for checking
13:52 rafi1 does any one know about the spurious failure of ./tests/bugs/snapshot/bug-1166197.t ?
13:53 rafi1 http://build.gluster.org/job/rackspace-r​egression-2GB-triggered/8935/consoleFull
13:53 pk1 joined #gluster-dev
13:55 ndevos rafi1: I dont know about that one
13:55 hagarth rafi1: first time I am seeing this
13:56 hagarth looks like 23 TEST mount_nfs $H0:/$V0 $N0 nolock failed?
13:56 ndevos rafi1: ah, pranith found that one earlier: http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10786
13:56 ndevos but, no responses on the list :-/
13:58 pk1 rafi1: when did you wakeup dude?
13:59 * hagarth has to headout now.. will come back in ~90 minutes and take stock of our gerrit queue
13:59 ndevos pk1: 10:45 -!- rafi [~Thunderbi@103.16.70.11] has joined #gluster-dev
14:00 ndevos pk1: that is CEST, so try: date -d "10:45 CEST"
14:00 RaSTar ndevos, rafi1, pk1: isn't that again a problem of not waiting for NFS exports to be ready?
14:01 RaSTar should adding this line fix it? EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available;
14:01 ndevos RaSTar: possibly, if that line is not there after starting the volume or enabling nfs
14:02 ndevos RaSTar: yeah, looks like there is only a check to see if the volume was started
14:03 ndevos RaSTar++
14:03 glusterbot ndevos: RaSTar's karma is now 4
14:03 RaSTar yup
14:03 RaSTar will send a patch now
14:03 ndevos okay, thanks!
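(For reference: the one-liner RaSTar proposes uses the test framework's polling helper, which retries a check until it matches or times out. A sketch of how it would slot into bug-1166197.t, with the surrounding lines assumed from the console output above.)

    TEST $CLI volume start $V0
    # wait until gluster/nfs actually exports the volume; without this the
    # mount can race the export and fail spuriously, as in test 23 above
    EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available
    TEST mount_nfs $H0:/$V0 $N0 nolock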
14:03 * ndevos needs to get some shopping done, will be back in ~60 minutes or so
14:05 pk1 RaSTar: rastar you rock star :-D
14:06 ndevos you know that "pk" translates to horse power in Dutch? like when talking about cars?
14:06 ndevos 1 is not very much in that case ;-)
14:07 * ndevos will be back later
14:10 rafi1 pk1: I joined back at 2:00 PM
14:11 rafi1 pk1: I'm done for 3.7.0
14:11 rafi1 pk1: Next target is 3.7.1 :)
14:12 * rafi1 is leaving and taking some rest :)
14:13 pk1 ndevos: :-)
14:38 poornimag joined #gluster-dev
14:45 gem joined #gluster-dev
14:53 pk1 ndevos: I am moving quota-nfs.t and quota-anon-fd-nfs.t to bad tests
14:54 pk1 ndevos: it is failing tests too many times
15:33 hagarth pk1: +1
15:33 hagarth pk1: maybe you should try a nick like pk1000 :D
15:39 pk1 hagarth: Its mental out there :-(. All spurious failures are acting up at once
15:39 hagarth pk1: I know :(
15:39 hagarth ec.t nuked bitrot patches
15:39 pk1 hagarth: :-(
15:40 hagarth pk1: how about boldly proclaiming all known regression tests in is_bad_test() ?
15:42 pk1 hagarth: that may not be bad.
15:43 hagarth pk1: go ahead and send a patch. those are the test units that we would need to fix before merging anything for 3.7.1
15:43 pk1 hagarth: Are you doing it already or shall I do? Naga asked me to triage all 34 bugs by tonight... I am doing that.
15:43 pk1 hagarth: okay will quickly do it
15:43 hagarth pk1: thanks!
15:44 pk1 hagarth: Damn, we will have to wait for 9407 to merge because it already started the regression where I made the bad-tests change. It will conflict :-/
15:45 hagarth pk1: nuke it .. I will merge the new patch without a regression run
15:45 pk1 hagarth: on it then :-D
15:46 hagarth pk1: nice :)
15:57 pk1 hagarth: http://review.gluster.org/10725
15:57 pk1 hagarth: that is from the pad
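(For reference: a hedged sketch of the knock-out mechanism in run-tests.sh that patch 10725 extends; the function name comes from the discussion above, the two test paths are the ones pk1 mentioned and their exact locations are assumed.)

    # returns success (0) when a test is known-bad and should be skipped
    function is_bad_test ()
    {
        local name=$1
        for bt in ./tests/basic/quota-nfs.t \
                  ./tests/basic/quota-anon-fd-nfs.t; do
            if [ x"$name" = x"$bt" ]; then
                return 0    # shell "true": skip this test
            fi
        done
        return 1
    }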
15:59 pk1 hagarth: going for dinner...
15:59 hagarth pk1: nice, merging this now. Let us backport to release-3.7 too.
15:59 hagarth pk1: ok, bon appetit
15:59 pk1 hagarth: Let me send on 3.7 and go then
15:59 pk1 hagarth: one minute
16:02 ira joined #gluster-dev
16:03 pk1 hagarth: http://review.gluster.org/10726
16:03 pk1 hagarth: that is for 3.7
16:03 pk1 hagarth: brb in 15
16:04 hagarth pk1: likewise here .. bbiab
16:11 gem joined #gluster-dev
16:25 Supermathie joined #gluster-dev
16:55 pk1 joined #gluster-dev
17:03 hagarth joined #gluster-dev
17:10 hagarth pk1: any luck with the ec patch after disabling tests?
17:13 pk1 hagarth: Just resubmitted the patch
17:13 hagarth pk1: ok
17:17 hagarth looks like only 4 or 5 regression tests are running atm
17:18 shyam1 hagarth: pk1: Anything I can step in to help?
17:18 gem joined #gluster-dev
17:18 ndevos hagarth: I was just looking at that too, but the reboot-vm job to reboot slave20 is not loading
17:18 ndevos and... the rest of my network is also lagging like crazy
17:19 hagarth pk1: did you release the slave VM which was disconnected yesterday?
17:19 ndevos try http://build.gluster.org/job/reboot-vm/build and pick slave20 to reboot, that one looks non-responsive
17:19 hagarth shyam1: more reviews for any last few patches can be one area.. but I doubt if we have too many waiting on master in the review queue
17:20 hagarth ndevos: did that
17:20 ndevos hagarth: ah, ok
17:20 shyam1 hagarth: anything specific that you are fighting now then?
17:21 pk1 hagarth: rafi was still using it when I went to sleep yesterday.
17:21 hagarth shyam1: we were fighting the regression tests till a few hours back .. after pk1 sent across the patch to disable bad tests, we should be good there.
17:21 hagarth shyam1: shall we get lookup-unhashed in now?
17:21 hagarth given that bitrot and a few other patches are bound to take some time.
17:21 shyam1 We could, test passed, Jeff had reviewed and provided a +1
17:22 hagarth shyam1: let me sanity check and merge that
17:22 hagarth shyam1: anything on the tiering front that needs to be pulled in?
17:22 shyam1 Ok, I will start working on the backport to 3.7 and post that
17:23 shyam1 hagarth: Nothing that I am aware of on the top of my head...
17:23 hagarth shyam1: ok
17:24 hagarth ndevos: looks like the reboot vm failed
17:24 hagarth http://build.gluster.org/job/reboot-vm/112/console
17:24 ndevos hagarth: that happens often, you mostly need to try 2-3x
17:25 hagarth ndevos: I see, will do
17:25 ndevos something on build.gluster.org returns the gluster.org homepage (a 404) instead of the rackspace api endpoints
17:26 ndevos nobody seems to know why that happens on some connections, and not on others...
17:27 dlambrig_ joined #gluster-dev
17:27 dlambrig_ left #gluster-dev
17:27 hagarth i see
17:28 pk1 guys, is bugzilla accessible for you?
17:29 pk1 seems like an intermittent issue, it works now..
17:32 ndevos pk1: I think someone has broken the internet... downloading the dinner pdf for the summit takes loooooong
17:32 ndevos peaking at 12.4KB/s
17:33 hagarth are any bitrot devs around?
17:34 pk1 hagarth: you can call one up ;-)
17:34 hagarth pk1: http://review.gluster.org/#/c/10705/5/tests/bugs/bitrot/bug-1207029-bitrot-daemon-should-start-on-valid-node.t
17:34 hagarth was trying to figure out if the count should be "1" or "2"
17:37 pk1 hagarth: I have no idea but atin worked on this right?
17:37 pk1 hagarth: oh but he said something like on his machine he saw 2 processes for a while etc
17:37 hagarth pk1: yes, have dropped a note to rabhat to review that.
17:40 shyam1 hagarth: autogen.sh on 3.7 branch gives some warnings, like "xlators/features/glupy/src/glupy/Makefile.am:7: warning: pyglupydir multiply defined in condition TRUE ..." just a heads up, not failing, but just warnings by the look of it
17:40 hagarth ndevos, hchiramm: any thoughts on ^^^?
17:41 shyam1 ndevos: "glfs-internal.h:216:14: warning: 'old_THIS' may be used uninitialized in this function [-Wmaybe-uninitialized]" I know earlier in the day you were talking about this, seeing this compile time warning on 3.7, do we have a resolution? (BTW, I am merging auto unhashed and seeing this, in case someone is wondering what I am upto :) )
17:44 ndevos shyam1, hagarth: no, I have not noticed that, I'll check it out
17:44 Gaurav_ joined #gluster-dev
17:48 hagarth Gaurav_: http://review.gluster.org/#/c/10705/5/tests/bugs/bitrot/bug-1207029-bitrot-daemon-should-start-on-valid-node.t is this correct?
17:50 lalatenduM joined #gluster-dev
17:53 raghu joined #gluster-dev
17:54 pk1 raghu: Johnny! good to see you :-)
17:56 raghu Pk1: good to see u too
17:57 ndevos oh, this must be coincidence!
17:59 hagarth pk1: ./tests/basic/afr/self-heal.t (Wstat: 0 Tests: 145 Failed: 1)   Failed test:  29 on NetBSD now :(
18:01 pk1 hagarth: Devuda
18:02 Gaurav_ hagarth: sorry i couldn't catch you. i went somewhere outside. just now i came from lazer wedding. now i am checking.
18:02 raghu Pk1: there is a patch for 3.7 for bitd.....patchid is 10705.....that patch says there should be as many bitds as the number of nodes.....but I am wondering how that check can pass if u r running the regression test on a single node......can u please remove that check And the last cleanup and run the test? It should tell us how many bitds should be there.......my place power outage.....I am typing from my mobile......
18:03 raghu If possible pls update hagarth about it
18:04 ndevos oh, this is cool, raghu! We should put that in the release notes :D
18:05 hagarth ndevos: lol
18:05 ndevos shyam1: http://review.gluster.org/10728 fixes the compile warning, I'll send that as backport now too
18:05 raghu He he
18:09 shyam1 ndevos: I think the first in-args check should return ret; if we goto out we could end up doing things like subvol_done etc.
18:09 pk1 raghu: will do
18:11 ndevos shyam1: oh, yes, of course
18:12 pk1 raghu: Except for one of the tests, the rest all say that the count should be 1
18:12 pk1 raghu: this should also be 2 right?
18:12 raghu If the test running on only one node....how can there be 2 bitds?
18:13 pk1 raghu: sorry, I meant 1 :-D
18:13 pk1 raghu:  you want me to resubmit it changing it to 1 is it?
18:14 ndevos shyam1: updated http://review.gluster.org/10728 with just a "return ret;"
18:15 raghu Nope...I just want to make sure if that test is ok....if u remove cleanup of the test case in the end then running the test will show u how many bitds r running.......if they r 2 then the test case is fine
18:16 pk1 raghu: Let me run it once. wait
18:19 Gaurav_ hagarth: actually that test case is right. but test case name is wrong. it should have another name and another bug id.
18:20 hagarth Gaurav_: ok
18:20 Gaurav_ Gaurav_: can i send patch now
18:20 Gaurav_ hagarth: can i send the patch now, for a better rename?
18:20 Gaurav_ hagarth: this test case name should be for volume status.
18:21 hagarth Gaurav_: please do .. also can you explain how the count is 2? do you make use of 2 virtual nodes?
18:21 Gaurav_ hagarth: yes
18:21 hagarth raghu, pk1: ^^
18:21 Gaurav_ hagarth: we are using two virtual node
18:21 Gaurav_ hagarth: by using launch_cluster 2
18:21 hagarth Gaurav_: ok
18:22 Gaurav_ hagarth: actually launch_cluster 2 launches 2 virtual IPs
18:22 hagarth Gaurav_: right
18:22 Gaurav_ hagarth: i am renaming test case right now
18:22 hagarth ok
18:22 raghu Hagarth: since there r 2 glusterds running and bitd is spawned by glusterd.....I think the count of bitds should be 2
18:23 ndevos hagarth: the compile warning for gfapi is fixed in http://review.gluster.org/10728 (master) and http://review.gluster.org/10728 (3.7)
18:24 hagarth raghu: right
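(For reference: the "two virtual nodes" come from the test framework's cluster helpers, which start several glusterd instances on one machine behind virtual IPs, so per-node daemons such as bitd genuinely run twice. A hedged sketch of the pattern; get_bitd_count is an assumed helper and the include path depth may differ.)

    . $(dirname $0)/../../cluster.rc

    TEST launch_cluster 2                 # two glusterds, two virtual IPs
    TEST $CLI_1 peer probe $H2
    TEST $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
    TEST $CLI_1 volume start $V0
    TEST $CLI_1 volume bitrot $V0 enable
    # one bitd per simulated node, hence an expected count of 2
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "2" get_bitd_count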
18:24 ndevos hagarth: I think those are not critical, but it may depend on the compiler flags used, it would be good to have those fixes in
18:24 hagarth I propose that we extend the deadline till 6:30 UTC tomorrow for things to settle down
18:24 hagarth we seem to be in a bit of flux here
18:25 raghu hagarth: a big +1 to that
18:26 shyam1 ndevos: Gerrit links are the same above, righ tone for 3.7 would be: http://review.gluster.org/#/c/10730/
18:26 ndevos shyam1: oh, yes, sorry
18:26 shyam1 hagarth: for you as well (tracking) ^^^
18:26 pk1 raghu: it succeeds. Cool Gaurav_
18:27 pk1 hagarth: That line makes me so sleepy suddenly, I'm out
18:27 pk1 hagarth: raghu Gaurav_ good night
18:27 ndevos cya pk1!
18:27 Gaurav_ pk1 good night
18:27 pk1 ndevos: shyam1 bbye
18:28 shyam1 hagarth: unhashed auto patch backported: http://review.gluster.org/#/c/10729/
18:28 hagarth shyam1: cool
18:28 raghu hagarth: I think that patch is fine
18:28 hagarth raghu: right
18:32 shyam1 hagarth: getting some lunch, cya in a bit
18:34 raghu hagarth: probably u can merge the patch if it has passed regressions
18:35 hagarth shyam1: ok
18:41 gem joined #gluster-dev
18:43 ndevos hagarth: all the regular slaves are back online now, but there are still some regression tests in the queue
18:43 hagarth ndevos: more tests than slaves?
18:43 ndevos hagarth: yes, 3 regression tests are waiting
18:44 hagarth ndevos: ok
18:45 ndevos Gaurav_: http://build.gluster.org/job/rackspace-regression-2GB-triggered/8962/console seems to be hanging (4+ hours running), I'll restart that one now
18:45 Gaurav_ ndevos: cool .. thanx
18:48 ndevos Gaurav_: it should finish in ~3 hours from now, other tests are finishing up before ours will start
18:49 ndevos I suggest that you do not wait for it to complete ;-)
18:49 Gaurav_ ndevos, i need to backport this patch to 3.7 also
18:49 ndevos Gaurav_: if you feel confident that the change is good, you can already submit the backport
18:50 raghu joined #gluster-dev
18:50 Gaurav_ ndevos: ya i am confident about change. i should have backported it :P
18:50 ndevos Gaurav_: I have done that too, http://review.gluster.org/10730 is an example that contains a reference to the change in the master branch
18:51 ndevos Gaurav_: use the same Change-Id for the backport, a different bug, and add a link to the patch for the master branch
18:51 Gaurav_ ndevos: i m doing right now
18:51 Gaurav_ ndevos: thnx :)
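(For reference: the backport recipe ndevos outlines, same Change-Id, freshly cloned bug, link back to the master patch, maps onto a short git workflow. A sketch; the review number and bug id are placeholders.)

    git checkout release-3.7
    git cherry-pick -x <master-commit>   # carries the message and Change-Id over
    git commit --amend
    # then edit the message so it ends roughly like:
    #
    #   Backport of http://review.gluster.org/NNNNN
    #
    #   BUG: <id of the bug cloned for release-3.7>
    #   Change-Id: <unchanged from the master patch>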
18:52 ndevos shyam++ thanks for the reviews for the compile warning :)
18:52 glusterbot ndevos: shyam's karma is now 4
18:52 ndevos Gaurav_: okay, sounds good! when you submit the change, it should be tested tomorrow morning :)
18:52 hagarth I am off for now, shall resume tomorrow morning :)
18:53 Gaurav_ ndevos: ya i will keep looking about regression test
18:53 ndevos bye hagarth!
18:53 hagarth bye everyone!
18:55 raghu hagarth: see u tomorrow..... Good night......
18:55 raghu Me too out......see u all tomorrow......
19:46 hchiramm hagarth_afk, I have the fix for that makefile issue
19:46 hchiramm I am sending it out.
19:55 hchiramm hagarth_afk, shyam1 http://review.gluster.org/#/c/10734/ this should fix that warning from glupy
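(For reference: automake prints "multiply defined in condition TRUE" when the same variable is assigned twice unconditionally. A schematic Makefile.am fragment, not the actual gluster file, showing how two overlapping commits can produce the glupy warning shyam1 saw.)

    # first commit installs the bindings here:
    pyglupydir = $(pythondir)/gluster/glupy
    # ...
    # a second commit re-defines the same directory variable; automake warns
    pyglupydir = $(glupydir)
    # the fix keeps a single assignment (or guards one behind an
    # automake conditional)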
20:19 ndevos hchiramm: is that problem also present on the master branch?
20:20 shyam1 joined #gluster-dev
21:06 kripper joined #gluster-dev
21:06 dlambrig_ joined #gluster-dev
21:08 dlambrig_ left #gluster-dev
21:08 hagarth joined #gluster-dev
21:22 shyam1 left #gluster-dev
21:24 shyam1 joined #gluster-dev
21:26 shyam1 left #gluster-dev
21:26 shyam1 joined #gluster-dev
21:27 shyam1 left #gluster-dev
21:27 shyam1 joined #gluster-dev
21:28 shyam1 left #gluster-dev
21:28 shyam1 joined #gluster-dev
21:30 shyam1 left #gluster-dev
21:31 shyam1 joined #gluster-dev
21:38 shyam joined #gluster-dev
22:23 nishanth joined #gluster-dev
22:36 soumya joined #gluster-dev
