
IRC log for #gluster-dev, 2015-03-30


All times shown according to UTC.

Time Nick Message
01:00 badone_ joined #gluster-dev
01:01 bala joined #gluster-dev
01:50 hagarth joined #gluster-dev
02:46 soumya_ joined #gluster-dev
03:05 kshlm joined #gluster-dev
03:30 ppai joined #gluster-dev
03:46 overclk joined #gluster-dev
03:54 nkhare joined #gluster-dev
03:58 kanagaraj joined #gluster-dev
04:02 shubhendu joined #gluster-dev
04:02 spandit joined #gluster-dev
04:03 itisravi joined #gluster-dev
04:05 atinmu joined #gluster-dev
04:12 nishanth joined #gluster-dev
04:24 badone_ joined #gluster-dev
04:27 kasturi joined #gluster-dev
04:29 soumya joined #gluster-dev
04:32 nishanth joined #gluster-dev
04:37 kshlm joined #gluster-dev
04:40 itisravi joined #gluster-dev
04:45 kshlm joined #gluster-dev
04:47 ppai_ joined #gluster-dev
04:48 anoopcs joined #gluster-dev
04:53 kdhananjay joined #gluster-dev
04:53 ndarshan joined #gluster-dev
05:03 rafi joined #gluster-dev
05:06 vimal joined #gluster-dev
05:13 hagarth joined #gluster-dev
05:19 jiffin joined #gluster-dev
05:26 spandit joined #gluster-dev
05:33 Manikandan joined #gluster-dev
05:38 atinmu Folks, can I get some review attention for http://review.gluster.org/#/c/9462/ & http://review.gluster.org/#/c/8380/
05:39 hagarth atinmu: I will review 8380 later today
05:39 atinmu hagarth: Thanks!
05:39 atinmu hagarth++
05:39 glusterbot atinmu: hagarth's karma is now 47
05:42 itisravi atinmu: wondering if you should also fix the calls to glusterd_op_get_op() in glusterd_profile_volume_use_rsp_dict() and glusterd_volume_rebalance_use_rsp_dict().
05:42 atinmu itisravi, no, they are op-sm transactions and can be read from global opinfo
05:43 itisravi atinmu: oh they aren't called from a syncop task is it?
05:44 atinmu itisravi, rebalance, profile & top commands go through state machines
05:44 atinmu itisravi, rest all r syncop
05:44 itisravi atinmu: ah okay.
05:45 atinmu itisravi, does it make sense now why I have this fix only in heal path :)
05:45 itisravi atinmu: yes ;)
05:55 krishnan_p joined #gluster-dev
05:56 gem joined #gluster-dev
06:03 foster joined #gluster-dev
06:06 raghu joined #gluster-dev
06:19 lalatenduM joined #gluster-dev
06:22 deepakcs joined #gluster-dev
06:29 jiffin can anyone please review  review.gluster.org/#/c/9984/ ?
06:33 anrao joined #gluster-dev
06:36 pranithk joined #gluster-dev
06:45 gem joined #gluster-dev
06:48 ndarshan joined #gluster-dev
06:48 hagarth jiffin: what is 9984 about?
06:50 shubhendu joined #gluster-dev
07:01 jiffin hagarth: as per the current policy, the original file should be copied to the trash directory before truncating
07:03 jiffin but a truncate operation that enlarges the file does not need to be handled by the trash translator
07:03 jiffin since the original contents are retained in that file itself
07:05 hagarth jiffin: ok
07:06 hagarth will review the change, this is the same issue which affects NetBSD right?
07:07 jiffin seems to be
07:07 hagarth jiffin: ok
07:07 jiffin thanks
07:07 jiffin hagarth++
07:07 glusterbot jiffin: hagarth's karma is now 48
07:08 krishnan_p joined #gluster-dev
07:09 anrao joined #gluster-dev
07:11 kshlm hagarth, when is the 3.7 code freeze?
07:11 kshlm End of April?
07:11 hagarth kshlm: GA end of April .. code freeze - at least a week before
07:12 kshlm So we still have around ~3 weeks then.
07:12 kshlm Cool.
07:12 hagarth kshlm: right
07:13 soumya joined #gluster-dev
07:14 krishnan_p joined #gluster-dev
07:19 krishnan_p hagarth, are all bugs that are present in 3.7 tracker bug blockers for GA? i.e, we wouldn't GA until we have them all fixed right?
07:20 aravindavk joined #gluster-dev
07:21 pranithk joined #gluster-dev
07:21 hagarth krishnan_p: ideally yes, unless we find some of them as non-blockers.
07:23 krishnan_p hagarth, I would ideally like to know in advance which patches are fixing blockers. This allows me to defer reviews (and merge) of other non-critical patches.
07:25 hagarth krishnan_p: I think a gerrit search by topic that contains the blocker bug IDs might help there.
07:26 hagarth Manikandan: ping, when do you plan to populate actual messages in client-messages.h - http://review.gluster.org/#/c/9834/7/xlators/protocol/client/src/client-messages.h?
07:29 krishnan_p hagarth, yes. IIUC, we don't have a list of blocker bugs. Do we?
07:30 hagarth krishnan_p: https://bugzilla.redhat.com/showdependencytree.cgi?id=1199352&hide_resolved=0
07:31 hagarth krishnan_p: all bugs listed against the tracker considered as blockers for now
07:32 badone__ joined #gluster-dev
07:33 krishnan_p hagarth, thanks. that helps.
07:33 krishnan_p hagarth++
07:33 glusterbot krishnan_p: hagarth's karma is now 49
07:35 kasturi joined #gluster-dev
07:40 krishnan_p Here is a shortened url to the gerrit query for patches that need review attention for 3.7 - http://bit.ly/1F8F9OV
07:41 krishnan_p For any new bug that is added to the tracker, one must append "OR topic:bug-BUGID" to the end of the query url. HTH
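(The shortened URL above hides the actual query; in Gerrit's search syntax it is just a matter of OR-ing one topic: term per blocker bug, roughly like this, with BUGID1/BUGID2 as placeholders:)

    status:open project:glusterfs (topic:bug-BUGID1 OR topic:bug-BUGID2)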
07:41 shubhendu joined #gluster-dev
07:43 hagarth krishnan_p: fantastic, thanks!
07:43 hagarth krishnan_p++
07:43 glusterbot hagarth: krishnan_p's karma is now 5
07:44 hagarth coverity fixes needing review attention - http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-789278,n,z
07:45 hagarth one of our aims is to reduce the outstanding coverity issues for 3.7, so your help in reviewing as many fixes from this list would be appreciated :)
07:50 ndarshan joined #gluster-dev
07:51 ndevos hagarth, krishnan_p: when you merge patches, do you make sure that the bug moves from POST -> MODIFIED too (on the last patch)? otherwise the tracker wont show the correct info
07:52 hagarth ndevos: the onus is on the patch submitter there
07:52 ndevos hagarth: I think its on the one merging it...
07:53 hagarth ndevos: Don't think so, the patch submitter is in the best position to determine the transition
07:53 hagarth or rather when the transition should happen
07:53 ndevos hagarth: well, it depends...
07:53 ndevos but I do not think any patch submitter changes the status to MODIFIED...
07:54 hagarth ndevos: yes, we would need to get more/all submitters to start doing this.
07:54 ndevos okay, I'll send a quick note about to the -devel list
07:54 hagarth ndevos: maybe we could send out a reminder at some frequency with the list of bugs in POST against which patches have been merged?
07:54 ndevos I guess http://bugs.cloud.gluster.org/ should help too?
07:55 hagarth ndevos: absolutely .. we need to publicize bugs.c.g.o more :)
07:57 ppai_ joined #gluster-dev
08:29 krishnan_p ndevos, thus far, I have been of the impression that the submitter is managing the life cycle of her bug and naturally the responsibility to make any bug status change is hers.
08:30 rjoseph joined #gluster-dev
08:31 ndevos krishnan_p: I think that would be an ideal situation, but I have not seen many developers changing the status of a bug to MODIFIED, so at least for 3.5 bugs I tend to do that when I merge them
08:31 krishnan_p ndevos, I agree.
08:31 krishnan_p ndevos, thanks for starting the mail thread on "... you're not finished yet .."!
08:32 krishnan_p ndevos++
08:32 glusterbot krishnan_p: ndevos's karma is now 101
08:33 ndevos krishnan_p: sure, no problem, please respond with notes/ideas/objections/... and suggest others to respond too :)
08:33 krishnan_p ndevos, sure.
08:34 ndevos soumya++ for fixing the status of her bugs :D
08:34 glusterbot ndevos: soumya's karma is now 4
09:01 lalatenduM joined #gluster-dev
09:03 Debloper joined #gluster-dev
09:04 Debloper http://code.debs.io/glusterweb/
09:05 Debloper https://github.com/debloper/glusterweb
09:05 DV_ joined #gluster-dev
09:07 ppai_ joined #gluster-dev
09:07 atinmu Debloper, good to see my name has a mention as top contributor in ur home page :)
09:10 anoopcs rafi++
09:10 glusterbot anoopcs: rafi's karma is now 6
09:13 foster joined #gluster-dev
09:22 pranithk joined #gluster-dev
09:25 rafi anoopcs++
09:25 glusterbot rafi: anoopcs's karma is now 3
09:29 ndevos atinmu: hey, yeah, those stats are quite fun, they come from openhub.net , you can create/update your account and contributions to open source projects there
09:30 atinmu ndevos, I thought we already track it in bitergia
09:30 foster joined #gluster-dev
09:32 ndevos atinmu: bitergia is for our glusterfs community, including code, mailinglists, bugs, ... - openhub.net is available for all OSS projects, but it may have code changes only
09:32 atinmu ndevos, ok
09:32 ndevos atinmu: I think bitergia is a paid service to see how a community is doing, it is less interesting for developer profiles
09:33 atinmu ndevos, ohh!! I didn't know its a paid service
09:34 ndevos atinmu: I'm pretty sure they are an open source company like Red Hat :)
09:34 atinmu ndevos, :)
09:40 kasturi joined #gluster-dev
09:43 pranithk joined #gluster-dev
09:47 kshlm joined #gluster-dev
09:50 rjoseph joined #gluster-dev
09:52 hagarth joined #gluster-dev
09:57 hchiramm joined #gluster-dev
10:00 bala joined #gluster-dev
10:06 ira joined #gluster-dev
10:07 aravindavk joined #gluster-dev
10:08 hagarth aravindavk: http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/
10:09 kotreshhr joined #gluster-dev
10:10 shubhendu joined #gluster-dev
10:11 itisravi joined #gluster-dev
10:14 Manikandan joined #gluster-dev
10:16 ndevos hagarth, hchiramm: dont you think it would be good to get rid of the "pre-release" version in Bugzilla? its confusing if a bug has been open for a while (like a 3.5 pre-release)
10:16 hchiramm pre-release meant for qa releases right?
10:17 ndevos hchiramm: why not use a 3.5 version for a 3.5.4beta?
10:17 ndevos hchiramm: pre-release can be anything, and it is difficult to track non-versions
10:17 atinmu rafi, have u tested http://review.gluster.org/#/c/10046/
10:18 hchiramm I think the bugs against qa releases of upcoming GA version should be marked  as  pre-release bugs
10:18 hchiramm ndevos,
10:19 ndevos hchiramm: and when do we move bugs from pre-3.7 to 3.7 then?
10:19 hchiramm with the release of 3.7
10:19 hchiramm if its not address
10:19 hchiramm addressed
10:19 ndevos hchiramm: issue is that we still have bugs filed against pre-release, but those are filed before 3.4, 3.5 and 3.6 ....
10:20 hchiramm yeah, that need to be cleaned up ..
10:20 hchiramm otherwise we need to have a tag for each GA pre-release
10:21 ndevos instead of pre-release, I would prefer to just file them against 3.7.0, and not have a pre-release selection at all
10:21 nishanth joined #gluster-dev
10:21 hchiramm then how do u know whether its a defect in beta 1 or beta 4 ?
10:22 ndevos does that really matter? there is not difference with pre-release in that case
10:22 ndevos no-t
10:23 ndevos or, if we really like, have a 3.7dev bug, for nightly issues, or a pre-3.7.0 version
10:23 hchiramm the bugs against qa release can be marked as "pre-release" bug and in the BZ description u can say the version, whether it's 'beta1' or 'beta3'
10:23 ndevos s/3.7dev bug/3.7dev version/
10:23 ndevos thats the same as with a 3.7.0 bug, or not?
10:24 ndevos when the release is made, all bugs that are filed against 3.7.0 and are in status MODIFIED should get closed
10:24 hchiramm the pre-release bugs can be closed even before the GA release
10:25 hchiramm thats what I meant  as an advantage of having the pre-release component
10:25 ndevos only if they got fixed
10:25 hchiramm yes :)
10:25 hchiramm thats true
10:26 rafi atinmu: yes i have tested this for snapshot list and volume info
10:26 ndevos we would need to update the pre-release bugs even if they did not get fixed, and that has not been done for other releases, so it is a mess now
10:26 atinmu rafi, why would you need to maintain the history?
10:27 ndevos I would like to prevent a mess like that in the future, I think it is easy to forget about moving pre-release bugs to bugs for an actual version
10:28 rafi atinmu: Previously we were comparing from last to first , after introducing rcu list that was changed
10:29 atinmu rafi, that's correct
10:29 atinmu rafi, but my question is ur traversal is still forward
10:29 ndevos rafi: ultimately, those details should be captured in the commit message ;-)
10:30 rafi atinmu: so now new entry will be added just after the first occurrence of an entry that has a greater lesser value than the new entry
10:31 rafi atinmu: sorry i will update the commit message
10:31 atinmu rafi : I am not keen to know how exactly it works now, my question is with ur change how it will work...
10:32 atinmu rafi, I am failing to understand why do you need an extra list head local pointer
10:32 * ndevos wonders about adding a smoke test that forces a non-empty commit message
10:32 atinmu rafi, I already logged a comment in your patch
10:32 atinmu ndevos, :)
10:33 rafi atinmu: i will explain as reply for the comment
10:33 rafi ndevos: :)
10:33 lalatenduM joined #gluster-dev
10:34 ndevos rafi: and that would not only be for you :)
10:34 atinmu ndevos, there are a few patches where you could get away without adding a commit message, but yes, having a good commit message eases the review
10:35 ndevos atinmu: not only for reviewers, but think of people that are looking into a bug and would like to understand why a section of a function changed
10:35 atinmu ndevos, yes that's also a valid point
10:37 ndevos atinmu: when I see a patch, I would like to get an understanding of why the change is needed, whether something else was introduced that requires the change/fix, and maybe a little explanation of how the fix works
10:38 ndevos if things like that are missing, I do not hesitate to -1 it :)
10:38 atinmu ndevos, probably we should try to adhere to a standard template of it as much as possible
10:39 ndevos atinmu: template or guidelines, yes
10:39 ndevos atinmu: hows your perl?
10:39 atinmu ndevos, pretty bad :(
10:40 atinmu ndevos, I know shell scripts, neither python nor perl :(
10:40 ndevos atinmu: hmm, thats a shame, it would be nice to have a check in extras/checkpatch.pl
10:40 ndevos atinmu: do you know anyone who could code a check like that in perl?
10:41 Manikandan joined #gluster-dev
10:41 ppai_ joined #gluster-dev
10:41 atinmu ndevos, Kaushal must be knowing
10:41 ndevos atinmu++ please bug him a little then :)
10:41 glusterbot ndevos: atinmu's karma is now 7
10:42 atinmu ndevos, sure I can do that
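(For reference, a minimal sketch of the kind of check being discussed, written as a plain shell snippet rather than as a rule inside extras/checkpatch.pl; the two-line threshold and the wording of the error are assumptions, not something agreed in the channel.)

    #!/bin/sh
    # Reject a commit whose message is only a subject line, i.e. has no body
    # explaining why the change is needed.
    lines=$(git log -1 --format=%B HEAD | sed '/^[[:space:]]*$/d' | wc -l)
    if [ "$lines" -lt 2 ]; then
        echo "Commit message has no body; please describe why the change is needed." >&2
        exit 1
    fi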
10:47 Debloper joined #gluster-dev
10:53 aravindavk joined #gluster-dev
11:03 rafi atinmu++
11:03 glusterbot rafi: atinmu's karma is now 8
11:06 firemanxbr joined #gluster-dev
11:07 shubhendu joined #gluster-dev
11:07 kotreshhr joined #gluster-dev
11:08 rjoseph joined #gluster-dev
11:12 kdhananjay joined #gluster-dev
11:16 jiffin ndevos++
11:16 glusterbot jiffin: ndevos's karma is now 102
11:19 * ndevos goes for lunch. ttyl!
11:20 nishanth joined #gluster-dev
11:23 pranithk joined #gluster-dev
11:34 atinmu joined #gluster-dev
11:35 nkhare joined #gluster-dev
11:38 DV__ joined #gluster-dev
11:44 soumya joined #gluster-dev
11:47 kdhananjay1 joined #gluster-dev
12:04 rjoseph joined #gluster-dev
12:14 pranithk joined #gluster-dev
12:25 anoopcs joined #gluster-dev
12:29 kotreshhr left #gluster-dev
12:31 rjoseph joined #gluster-dev
12:48 shyam joined #gluster-dev
12:49 DV__ joined #gluster-dev
13:12 vipulnayyar joined #gluster-dev
13:27 firemanxbr joined #gluster-dev
13:32 overclk joined #gluster-dev
13:34 kshlm joined #gluster-dev
13:35 ndevos kkeithley: oh, seems you found the [Submit] button?
13:37 kkeithley I found it a while ago
13:38 kkeithley I've been saving using it for the right occasion
13:39 deepakcs joined #gluster-dev
13:39 kkeithley I don't want to over do it though
13:46 kkeithley whoo, look at that backlog of regression tests
13:49 firemanxbr joined #gluster-dev
13:49 ndevos wow, yes, lets clean some!
13:49 * JustinClift looks
13:50 JustinClift Hmmm, need to make some more rpms-and-smoke-builds-only nodes
13:50 JustinClift Else stuff will get out of order again
13:51 JustinClift k, slave28 is going to be building just rpms and doing smoke stuff
13:51 JustinClift For now anyway
13:51 shyam joined #gluster-dev
13:52 JustinClift slave24 too
13:52 ndevos JustinClift: I think smoke tests should have a higher priority than regression tests, smoke is fast anyway
13:52 JustinClift Picked both of those because they're nearly done with the regression test they're operating atm
13:52 JustinClift Ahhh, good point
13:52 JustinClift I'll check that
13:53 JustinClift ndevos: I haven't been thinking of them in terms of test priority.  That might do the trick :)
13:55 ndevos JustinClift: probably just set the regression-test-triggered to a higher value, and things should become better?
13:57 JustinClift Sounds like a good thing to try
13:57 JustinClift The rpm building and rackspace triggered test are all at 100 presently.
13:57 JustinClift I'll adjust that now, and see what happens.
13:58 ndevos JustinClift: yeah, set regression testing to 500 or so? although it may not affect the already scheduled jobs.., I guess we'll see
13:59 JustinClift I just set it to 50, as the help text in the UI says the higher ones go faster
13:59 ndevos oh, ok :)
13:59 JustinClift I think that means the 50 will give it lower priority
13:59 JustinClift But yeah, lets see.  Shouldn't be too long to find out. :)
13:59 JustinClift Good thinking btw :)
13:59 ndevos it would be nice if there was a global concept of high/low priorities and their matching values...
14:00 * ndevos points to the 'nice' command
14:01 JustinClift Explain that more?
14:01 JustinClift "global concept of" is not clear to me :)
14:04 ndevos well, some priorities use a high value for a high priority, while others use a low value for a high priority - thats confusing and I'd like to see only one way, where 0 is the highest priority
14:04 JustinClift If this works, then raising the priority of the non-triggered rackspace tests is probably a goer too
14:04 JustinClift Ahh yeah, gotcha
14:04 JustinClift Well, I might have i back to front.  We'll see soon :)
14:05 ndevos oh, I did not check what low/high values Jenkins uses :)
14:05 ndevos are there still people running non-triggered (and non scheduled) tests?
14:08 ndevos JustinClift: maybe there is also an option to kill a triggered regression test when it runs for more than 4 hours?
14:09 vipulnayyar joined #gluster-dev
14:11 JustinClift Hmmmm, if we can automate that, it would be useful
14:11 JustinClift Along with a message to the submitter like "your job ran 4 hours+ and was killed"
14:20 ndevos I was hoping it would be an option of the job in Jenkins... the log should then contain the message why it was killed
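(If Jenkins itself offers nothing suitable, one low-tech way to get this behaviour would be to wrap the regression run in coreutils timeout inside the job's shell step; the script name below is only a placeholder.)

    # kill the regression run after 4 hours; timeout exits with 124 when it fires
    timeout --signal=TERM 4h ./run-tests.sh
    rc=$?
    if [ "$rc" -eq 124 ]; then
        echo "Regression run exceeded 4 hours and was killed." >&2
    fi
    exit $rc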
14:27 hagarth joined #gluster-dev
14:28 JustinClift ndevos: Looks like the prioritisation works. :)
14:31 JustinClift Everything except slave47 can now run regression tests, rpm building, and smoke tests
14:31 JustinClift Leaving slave47 just for smoke and rpm building atm
14:32 ndevos sounds good, that should make a better experience :)
14:38 shyam joined #gluster-dev
14:45 ndevos hchiramm: did you blog post about the gluster conference in Nmamit? if not, do you want to do that and include the poster design?
14:48 kdhananjay joined #gluster-dev
15:00 vipulnayyar joined #gluster-dev
15:06 anrao joined #gluster-dev
15:07 ndevos kkeithley: btw, could you run the Bug Triage meeting tomorrow again?
15:07 ndevos I'll be attending the Ceph Day in Amsterdam and have no idea if I'll be online
15:07 kkeithley It was such a stunning success last week
15:10 ndevos oh, I dont know, I did not really catch up on it... I mostly do that in the morning before the next oe
15:10 ndevos *one
15:10 kkeithley I was actually referring to the lack of quorum
15:10 ndevos the most important thing is that the bugs get triaged, if we dont do that during the meeting, nobody will
15:10 wushudoin joined #gluster-dev
15:11 ndevos I dont think any quorum is needed, there are hardly any deciscions to take
15:12 kkeithley well, sorry, I'm being obtuse.  one or two people doing the triage doesn't scale.
15:12 kkeithley But yes, I'll run the meeting.
15:13 ndevos yes, people need a lot of poking to be active during the meeting, but rafi and atin could get convinced to do some of the triaging too
15:14 ndevos actually, I hope my email from last week about responsibilities/expectations from maintainers would attract more attendees
15:15 shubhendu joined #gluster-dev
15:15 kkeithley we can hope
15:16 * ndevos will step out for a bit, and will be back later (probably)
15:21 bala joined #gluster-dev
15:32 lalatenduM kkeithley, btw have you seen this http://flocktofedora.com/
15:32 overclk joined #gluster-dev
15:33 kkeithley lalatenduM: I have now
15:34 lalatenduM kkeithley, flock is considered to have good gathering of fedora developers
15:35 kkeithley yes. Flock has replaced FUDCON in North America IIRC
15:35 lalatenduM yup
15:37 kkeithley I went to FUDCON at Virginia Tech three years ago. Rochester is close enough it might be good to get to it.
15:40 lalatenduM kkeithley, its 3 hours drive isn't it ?
15:42 vipulnayyar joined #gluster-dev
15:50 shyam joined #gluster-dev
16:06 overclk joined #gluster-dev
16:09 anrao joined #gluster-dev
16:25 shyam joined #gluster-dev
16:43 vipulnayyar joined #gluster-dev
16:49 kkeithley @later tell lalatenduM yes, three or four hours drive
16:49 glusterbot kkeithley: The operation succeeded.
16:52 kkeithley @later tell lalatenduM: more like six hours
16:52 glusterbot kkeithley: The operation succeeded.
16:54 anrao joined #gluster-dev
17:31 _Bryan_ joined #gluster-dev
17:39 vipulnayyar joined #gluster-dev
17:56 hagarth JustinClift: ping, around?
18:10 JustinClift hagarth: Yep, what's up?
18:10 hagarth JustinClift: hmm, recollecting why I pinged you now :)
18:10 JustinClift Ahhh, cool, looking ar your email now
18:11 JustinClift hagarth: Want me to run a bulk test and see what happens on the branches?
18:11 hagarth JustinClift: got it, any further progress on the gerrit upgrade?
18:11 JustinClift Not yet ;)
18:11 hagarth JustinClift: I think we could concentrate more on master as of now. the release branches do not have as many pending patches.
18:11 hagarth JustinClift: OK, 04/20 is looming near and hence that thought passed through my mind :)
18:12 JustinClift Sure
18:12 JustinClift It's looming large in my head too :)
18:12 JustinClift With the bulk tests, is there use in running say 20x runs on master head and seeing if new ones shows up?
18:12 hagarth JustinClift: I think that would be useful, I am sure we have missed a few in the etherpad atm.
18:13 JustinClift Cool.  I'll kick them off now.
18:14 hagarth JustinClift: working with spot to find some time that works for him to attend one of our backlog meetings
18:15 JustinClift Ahhh yeah.  I'm not sure if I have that in my calender yet
18:15 jobewan joined #gluster-dev
18:15 JustinClift hagarth: When's the next one?
18:15 ndevos hagarth, JustinClift: why not file a bug per regular 'spurious' regression test failure?
18:15 hagarth JustinClift: sometime this week
18:15 ndevos and have that block the glusterfs-3.7.0 tracker?
18:16 vipulnayyar joined #gluster-dev
18:16 JustinClift ndevos: We could do that.  There is a bug for the overall spurious failures though too I think... would that conflict?
18:16 JustinClift hagarth: k, I don't have the backlog meeting in my calender this week.
18:17 hagarth JustinClift: yet to post one, will do so after spot confirms
18:17 JustinClift hagarth: If someone can send me an invite, or just tell me the time and I'll manually add one to iCal :)
18:17 JustinClift hagarth: Sure
18:17 ndevos JustinClift: there would not be a conflict, we need to track the current very regular failures, not any rare spurious ones
18:18 JustinClift ndevos: We plan to have no spurious failures
18:19 lalatenduM joined #gluster-dev
18:20 ndevos JustinClift: okay, so lets add the current collector bug for any spurious regression to the 3.7.0 tracker, *and* file a bug per important regression failure
18:23 ndevos JustinClift: I've added the spurious regression test tracker as a blocker to 3.7.0
18:23 JustinClift Cool :)
18:24 ndevos JustinClift: now, just file more bugs per failing regression, add those bugs to the etherpad and have the new bugs block 1163543
18:24 JustinClift Lets make sure we have the BZ #'s for each on the etherpad too then.  Some are already there.
18:24 * hagarth ponders about doing a daily stand up style status update for 3.7.0
18:24 misc how long does it take to run the test suit on our slaves ?
18:25 misc ( ballpark estimation )
18:25 hagarth facing this problem with yum after I tried adding epel repo on one of my test machines:
18:25 hagarth Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
18:25 JustinClift misc: ~2 hours on each slave
18:25 hagarth does it ring a bell to anyone?
18:25 JustinClift ~Nope
18:26 JustinClift hagarth: Sounds like a busted path on the server, or a network failure?
18:26 hagarth I cannot install userspace-rcu and my testing is stalled due to this :|
18:26 JustinClift Either that or busted .repo failure
18:26 JustinClift Which OS?
18:26 hagarth JustinClift: RHEL6, installed rpm for epel repo
18:27 JustinClift Ahhh
18:27 JustinClift There is sometimes some conflict between RHEL packages and what people can get in CentOS
18:28 JustinClift hagarth: This is the epel.repo file the CentOS slaves are currently using: http://fpaste.org/204888/77400211/
18:29 ndevos hagarth: sounds like a network issue, do you need a proxy, or is there one set in the environment?
18:29 JustinClift hagarth: This is the userspace-rcu and -devel that the CentOS slaves have installed: http://fpaste.org/204889/40141142/
18:29 hagarth ndevos: mirrors.fedoraproject.org and download.fedoraproject.org are reachable from my machine
18:31 ndevos hagarth: try a 'yum clean all' and see if it picks up a different IP for the mirror manager?
18:32 hagarth ndevos: no luck, failed with the same error
18:32 hagarth JustinClift: the epel.repo file looks identical to yours
18:32 JustinClift In that case, maybe download the rpms manually using your browser and yum install them directly
18:33 hagarth JustinClift: yeah, sounds like a better alternative
18:34 ndevos hagarth: you could try a hardcoded mirror: http://paste.fedoraproject.org/204892/74042614
18:34 JustinClift hagarth: The epel.repo shouldn't look identical to mine btw... as mine has Rackspace mirror locations in it (commented out tho) ;)
18:34 hagarth JustinClift: right, I just compared the mirrorlists :)
18:35 JustinClift :)
18:36 hagarth ndevos: interesting, a hardcoded mirror seems to be working
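(The fpaste link above has since expired; "hardcoding a mirror" for EPEL normally means editing /etc/yum.repos.d/epel.repo to comment out the mirrorlist line and point baseurl at a known-good mirror, roughly as below for EPEL 6; the mirror host is only an example.)

    [epel]
    name=Extra Packages for Enterprise Linux 6 - $basearch
    baseurl=http://dl.fedoraproject.org/pub/epel/6/$basearch
    #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6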
18:37 ndevos hagarth: well, maybe some outage in fedora land?
18:37 hagarth ndevos: possible, will do some digging around after I finish installing userspace-rcu-devel :)
18:39 hagarth yay! userspace-rcu done with!
18:39 ndevos hagarth: http://status.fedoraproject.org/ does not list any issue though
18:40 hagarth ndevos++, JustinClift++
18:40 glusterbot hagarth: ndevos's karma is now 103
18:40 glusterbot hagarth: JustinClift's karma is now 42
18:40 hagarth ndevos: I suspect the DNS resolution on that machine, will look for a root cause tomorrow
18:40 ndevos hagarth: ah, maybe some IPv6 issue?
18:41 ndevos IPv6++ except when it does not work
18:41 glusterbot ndevos: IPv6's karma is now 1
18:42 hagarth ndevos: quite possible, the first server being resolved to was an IPv6 address IIRC!
18:45 misc JustinClift: the test suite is using multiple core ?
18:45 misc ( as I had the impression it was quite linear )
18:46 dlambrig1 left #gluster-dev
18:49 ndevos misc: the tests themselves are mostly shell scripts running sequentially, but the glusterfs binaries are multithreaded
18:52 JustinClift misc: The VM's are dual core.  No idea if it's important. ;)
18:53 JustinClift misc: What are you considering? :)
18:57 misc JustinClift: well, wonder if we can reduce the VM size and so have more VM for the same price, and having more tests but not taking 4h to run
19:00 misc I am also wondering how much we would be using of the 768 cores of the centos CI. So far, I counted 5%, if we start to add centos 7 to the mix ( and moving the centos 6 there )
19:01 misc ( 5% with 20 slaves, so at peak commit rate for release )
19:02 JustinClift misc: I think the tests aren't that cpu intensive, so we could probably overcommit cpu resources quite a lot
19:02 JustinClift The regression tests do seem to need more than 1GB of ram though.
19:02 JustinClift That's kind of a ballpark figure
19:03 misc yeah, that's what I did see on munin too
19:03 JustinClift When initially setting them up in Rackspace, tried with 512MB ram instances
19:03 JustinClift Didn't work ;)
19:03 misc ( before munin got broken, and now waiting on salt )
19:03 misc JustinClift: technically, we could overcommit the ram as well
19:03 JustinClift 1GB instances ran to completion, but massive failure rate.
19:04 JustinClift Unsure if the failure rate with due to less memory, or less cpu (== slower, much race condition!)
19:04 JustinClift s/with/was/
19:04 misc I hope the failure are due to memory
19:04 JustinClift It might be interesting to try some bulk tests with 1GB instances now I think about it, just to see what happens
19:04 misc because race condition would be sad and something to fix :)
19:04 misc JustinClift: yep
19:04 JustinClift I think I'll do that, just to see. :)
19:05 JustinClift Hang on, I need to see which of the VM's failed early before I forget
19:05 misc or test with different tradeoff ram/cpu
19:05 JustinClift It sucks to check on them at the end, and find out 25% of them didn't actually start properly
19:05 JustinClift misc: Sure
19:06 JustinClift The "didn't start properly" cases are most often because the git master barfs when 20x VM's hit it at the same time and try to git clone
19:06 misc in fact, being able to say 1.5G per slave is something we could do for a custom VM :)
19:06 JustinClift Yeah
19:06 misc ( but we can't do in rackspace )
19:06 * JustinClift should add some sleep statements between his VM startups
19:06 JustinClift Yeah
19:07 misc a 25% saving of ram cost would sound nice :)
19:07 JustinClift :)
19:09 JustinClift Yep, exactly 1/2 of them wedged at the start on git breakage
19:09 * JustinClift cycles those ones and remakes them
19:10 JustinClift misc: It actually takes longer babying these things than the test runtime
19:10 * JustinClift adds some sleep statements to his scripts
19:10 JustinClift That should improve reliability
19:11 misc JustinClift: yep :/
19:12 misc that's why I looked at automating the setup
19:12 misc ( I should continue but the next weeks are gonna be quite busy )
19:12 JustinClift How's the slave you automated go?  Getting close?
19:13 JustinClift s/How's/How'd/
19:21 misc JustinClift: just missing the private key distribution
19:21 misc ( and a way to install after starting the VM, it should work technically, but it doesn't in practice and forgot why )
19:22 misc like, something kill salt during the first run, maybe a bug
19:22 misc I decided to wait on the release that should have happened last month
19:24 misc so I would say 1 day of work and 30 days of bugfix :)
19:37 JustinClift :)
19:38 JustinClift misc: Is there any chance the "kill salt" is really cloud-init rebooting the VM after updating the kernel?
19:42 misc JustinClift: nope, I think that's salt doing it or something
19:42 misc like me triggering a restart of salt or something stupid :)
19:47 hchiramm_ joined #gluster-dev
19:48 JustinClift ;)
19:49 lalatenduM joined #gluster-dev
19:56 JustinClift misc: Something is wrong with the Rackspace API endpoint
19:56 JustinClift It's returning 500 and 503 errors
19:56 JustinClift :(
19:56 misc JustinClift: what ?
19:59 JustinClift http://fpaste.org/204945/45448142/
19:59 JustinClift misc: ^
19:59 JustinClift First one there worked
19:59 JustinClift The ones after that (few minutes beween each) aren't :/
19:59 misc curious
20:00 misc I didn't see anything wrong using salt, so it might be a temporary failure ?
20:01 JustinClift Yeah.  I think it's this: "The Rackspace Open Cloud system engineers will perform priority maintenance to the control infrastructure of our Next Generation Cloud Servers and Cloud Networks regions per the following dates and times:"
20:01 JustinClift From status.rackspace.com
20:01 JustinClift The date for the ORD datacenter is today, and the timing is close to the 10pm they mention
20:04 misc JustinClift: use a different DC
20:04 misc I do my test in texas, so I can check what i need to kill :)
20:06 shyam joined #gluster-dev
20:08 JustinClift Ahhh yeah, I've been meaning to look at their API and see how to specify the DC.  I just defaulted to ORD until now.
20:11 misc JustinClift: look at the way I do for salt-cloud ( /etc/salt/cloud.providers.d/rackspace.conf , copy and change the DC )
20:12 misc ( and then run with the right command )
20:12 JustinClift "/etc/salt/cloud.providers.d/rackspace.conf" <-- which box ?
20:12 glusterbot JustinClift: <'s karma is now -6
20:13 JustinClift glusterbot: Yeah! Right on! screw that guy! :p
20:13 * JustinClift thinks glusterbot should have an exception for.  For things like <--, c++., etc
20:13 glusterbot JustinClift: c's karma is now 8
20:13 glusterbot JustinClift: <'s karma is now -7
20:13 JustinClift s/for./list/
20:17 misc JustinClift: salt-master
20:17 misc JustinClift: basically, the syntax is :
20:17 misc salt-cloud  -p $profil $name
20:17 misc like
20:17 misc salt-cloud  -p jenkins_2 slave99.cloud.gluster.org
20:18 misc it get jenkins_2 from /etc/salt/cloud.profiles.d/jenkins.conf
20:18 misc who reference a provider
20:18 misc provider: rackspace_gluster
20:19 misc who is the provider in /etc/salt/cloud.providers.d/rackspace.conf
20:19 misc ( ie, a zone )
20:19 misc JustinClift: just do not modify these 2 files, as they are salt managed, and so reverted if modified :)
20:20 misc oh yeah, I need also add the DNS integration :/
20:20 misc ( who requires code )
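(Putting misc's description together: the profile file maps a profile name to a provider plus VM parameters, and the provider file carries the Rackspace region and credentials. A rough sketch of the profile side only; the size and image values are invented here, and the provider-side keys vary with the salt-cloud version, so they are left out.)

    # /etc/salt/cloud.profiles.d/jenkins.conf (illustrative values)
    jenkins_2:
      provider: rackspace_gluster   # defined in /etc/salt/cloud.providers.d/rackspace.conf
      size: 2 GB Performance        # assumed Rackspace flavour name
      image: CentOS 6.5 PVHVM       # assumed image name

    # then, as shown above:
    # salt-cloud -p jenkins_2 slave99.cloud.gluster.org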
20:44 JustinClift misc: What's my username on salt-master?
20:44 JustinClift Tried jclift and jc without success
20:44 JustinClift Unless the pub keys were added to root?
20:44 JustinClift ... and I'm in
20:44 JustinClift Yeah, root :)
20:45 badone_ joined #gluster-dev
20:47 JustinClift misc: Looking at salt-master.  My individual user account is a member of "admins". That's not a general sudo admin, it's only allowed to run the one
20:47 JustinClift "deploy_salt" script.
20:47 JustinClift That's on purpose yeah?
20:49 * JustinClift is just making sure ;)
20:56 misc JustinClift: you need to connect as root directly
20:56 misc I distributed ssh keys
20:56 misc I didn't want to create too much local account for now, as it would be a mess later for freeipa
20:57 anrao joined #gluster-dev
21:57 JustinClift misc: No worries :)
22:16 obnox hi vor coverity issues, shoud bug 789278 be used?
22:16 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=789278 high, medium, ---, bugs, ASSIGNED , Issues reported by Coverity static analysis tool
22:21 obnox s/vor/for/
22:32 shyam joined #gluster-dev
23:57 JustinClift obnox: Guessing so.  That BZ still seems to be actively used.
23:57 JustinClift hagarth: ^ ?
