IRC log for #gluster-dev, 2015-03-03


All times are shown in UTC.

Time Nick Message
00:00 ndevos misc: hmm, well, really looks like sudo broke on the slaves... http://build.gluster.org/job/smoke/13635/console is another one
00:02 misc ndevos: yep, I broke it on every slave at once :)
00:02 misc it should have been fixed everywhere at the same time
00:08 ndevos misc: okay, thanks, I've now retriggered the failed test runs
00:18 vipulnayyar joined #gluster-dev
00:24 shyam joined #gluster-dev
00:30 _nixpanic joined #gluster-dev
00:30 _nixpanic joined #gluster-dev
01:23 bala joined #gluster-dev
02:08 hagarth joined #gluster-dev
02:33 bala joined #gluster-dev
03:33 hagarth joined #gluster-dev
03:41 topshare joined #gluster-dev
03:56 hagarth joined #gluster-dev
04:25 shubhendu joined #gluster-dev
04:33 hagarth joined #gluster-dev
04:37 anoopcs joined #gluster-dev
04:41 atinmu joined #gluster-dev
04:41 jiffin joined #gluster-dev
04:55 nkhare joined #gluster-dev
04:55 prasanth_ joined #gluster-dev
05:02 kanagaraj joined #gluster-dev
05:02 soumya_ joined #gluster-dev
05:04 spandit joined #gluster-dev
05:09 ppai joined #gluster-dev
05:09 ndarshan joined #gluster-dev
05:15 Apeksha joined #gluster-dev
05:34 lalatenduM joined #gluster-dev
05:36 vimal joined #gluster-dev
05:45 overclk joined #gluster-dev
05:55 soumya_ joined #gluster-dev
05:57 gem joined #gluster-dev
05:58 deepakcs joined #gluster-dev
06:03 topshare joined #gluster-dev
06:06 topshare joined #gluster-dev
06:06 kshlm joined #gluster-dev
06:12 hagarth joined #gluster-dev
06:15 rafi joined #gluster-dev
06:21 overclk joined #gluster-dev
06:24 raghu joined #gluster-dev
06:33 ppai joined #gluster-dev
06:42 bala joined #gluster-dev
06:44 topshare joined #gluster-dev
06:49 tigert ndevos: oh
06:49 tigert ndevos: I did notice the nongnu and deleted it and added it to my addressbook
06:50 * tigert grumbles at the email client
06:50 tigert oh wow 22 replies
06:50 * tigert reads
06:50 tigert what did I start? :-)
06:51 tigert oh wow, it was the original thread which totals 22 mails
06:52 * tigert grumbles at email clients some more
06:58 topshare joined #gluster-dev
07:15 tigert btw
07:16 tigert gluster-devel is not in my own addressbook
07:16 tigert or maybe it is in my "addresses I mailed to" list
07:16 tigert yeah
07:17 tigert now it is gone
07:17 tigert ndevos: thanks for pointing that out
07:21 rjoseph joined #gluster-dev
07:30 badone_ joined #gluster-dev
07:48 ppai joined #gluster-dev
08:11 aravindavk joined #gluster-dev
08:26 topshare_ joined #gluster-dev
08:34 pranithk joined #gluster-dev
08:55 shubhendu joined #gluster-dev
09:02 lalatenduM joined #gluster-dev
09:32 _shaps_ joined #gluster-dev
09:48 pranithk joined #gluster-dev
10:03 ppai joined #gluster-dev
10:09 Apeksha joined #gluster-dev
10:28 ira joined #gluster-dev
10:32 nkhare joined #gluster-dev
10:32 atinmu joined #gluster-dev
10:51 Apeksha joined #gluster-dev
11:03 firemanxbr joined #gluster-dev
11:21 Monster joined #gluster-dev
11:24 shubhendu joined #gluster-dev
11:30 pranithk joined #gluster-dev
11:38 hchiramm pranithk++
11:38 glusterbot hchiramm: pranithk's karma is now 9
11:38 ndevos raghu: oh, btw, hagarth and I will be travelling tomorrow during the meeting time, could you moderate it?
11:40 hagarth ndevos: JustinClift has been volunteered for that ;)
11:42 ndevos hagarth: ah, cool, let's hope he puts up a series of alarms :)
11:55 ndevos REMINDER: Gluster Community Bug triage meeting starts in 5 minutes in #gluster-meeting
12:15 shubhendu joined #gluster-dev
12:22 xavih joined #gluster-dev
12:35 xavih joined #gluster-dev
12:48 hchiramm hagarth++ thanks !
12:48 glusterbot hchiramm: hagarth's karma is now 42
13:00 anoopcs joined #gluster-dev
13:03 rjoseph joined #gluster-dev
13:10 topshare joined #gluster-dev
13:17 lpabon joined #gluster-dev
13:21 bala joined #gluster-dev
13:24 hchiramm tdasilva++ thanks!
13:24 glusterbot hchiramm: tdasilva's karma is now 1
13:26 _shaps_ joined #gluster-dev
13:29 lalatenduM shubhendu, glusterfs-hadoop-owner@fedoraproject.org
13:30 lalatenduM ndevos, we are trying to reach maintainers of https://koji.fedoraproject.org/koji/packageinfo?packageID=16935 , glusterfs-hadoop-owner@fedoraproject.org is the right email address
13:31 rjoseph joined #gluster-dev
13:31 lalatenduM shubhendu, http://pkgs.fedoraproject.org/cgit/glusterfs-hadoop.git/
13:34 shubhendu lalatenduM, also create a CentOS VM for me. Just to try out things
13:34 shubhendu lalatenduM++
13:34 glusterbot shubhendu: lalatenduM's karma is now 70
13:35 lalatenduM shubhendu, yes, I will create one for you thanks to you too shubhendu++
13:35 glusterbot lalatenduM: shubhendu's karma is now 1
13:46 bala joined #gluster-dev
13:48 vimal joined #gluster-dev
13:59 prasanth_ joined #gluster-dev
14:04 shaunm joined #gluster-dev
14:07 shyam joined #gluster-dev
14:15 soumya_ joined #gluster-dev
14:17 prasanth_ joined #gluster-dev
14:25 hagarth ndevos: looking into netgroups patches now
14:26 hagarth ndevos: hit a rebase problem with the first patch - parser in libglusterfs. would it be possible for you to refresh the patchset?
14:27 shyam joined #gluster-dev
14:29 nkhare joined #gluster-dev
14:34 dlambrig joined #gluster-dev
14:36 _Bryan_ joined #gluster-dev
14:47 hagarth ndevos: If you get a chance to refresh it, I might be able to review/merge later this evening. Have a long evening ahead of me ;)
14:56 soumya joined #gluster-dev
15:00 Apeksha joined #gluster-dev
15:14 lalatenduM joined #gluster-dev
15:19 gem joined #gluster-dev
15:31 shubhendu joined #gluster-dev
15:35 ndevos hagarth: yeah, I'll try to get that done
15:36 hagarth ndevos: cool, thanks!
15:56 shubhendu_ joined #gluster-dev
16:02 ndevos hagarth: if you're bored, http://review.gluster.org/8065 would be a safe candidate for merging ;)
16:02 gem joined #gluster-dev
16:03 lpabon joined #gluster-dev
16:04 shaunm joined #gluster-dev
16:09 shubhendu__ joined #gluster-dev
16:16 ndevos hagarth: rebased and re-posted the series - and fixed the test-case counter in one of the patches
16:16 * ndevos just hopes the regression tests will pass immediately
16:26 rjoseph joined #gluster-dev
16:29 kshlm joined #gluster-dev
16:39 bala joined #gluster-dev
16:40 misc JustinClift: http://thenextweb.com/insider/2015/03/03/gitlab-acquires-rival-gitorious-will-shut-june-1/
16:40 misc so we are migrating to gitlab :)
16:43 vipulnayyar joined #gluster-dev
16:54 JustinClift misc: Ugh
16:55 xavih joined #gluster-dev
16:56 JustinClift misc: Just emailed them, asking for official notification
16:56 shyam joined #gluster-dev
16:56 JustinClift misc: I guess this gets the Gluster Forge migration higher up the urgency list
16:57 misc JustinClift: https://about.gitlab.com/2015/03/03/gitlab-acquires-gitorious/
16:57 misc JustinClift: posted on -infra
16:58 JustinClift Cool
16:59 misc I guess the first step is to see what we do have in term of repo
17:04 shubhendu__ joined #gluster-dev
17:11 JustinClift misc: slave25 hasn't returned from reboot.  Want to look at it?
17:11 JustinClift Hmmm, slave24 has had its jenkins pw reset
17:12 * JustinClift resets it back
17:14 misc JustinClift: going to do community building with gnome people around a few drink, so gonna look later
17:15 JustinClift misc: np :)
17:15 JustinClift I'll leave it alone and won't rebuild it yet
17:15 bala joined #gluster-dev
17:16 rafi joined #gluster-dev
17:25 JustinClift misc: Something has killed the authorized_keys file in slave23:/root/.ssh/
17:25 JustinClift You may want to look into that later too ;)
17:33 shubhendu_ joined #gluster-dev
17:35 shyam joined #gluster-dev
17:46 JustinClift ndevos misc: Still getting the weird name lookup failures: http://build.gluster.org/job/reboot-vm/52/console
17:46 JustinClift eg where it doesn't get the IP address for the Rackspace endpoint server right
17:54 ndevos JustinClift: yeah, I'm not sure it's (only) a dns issue
17:54 JustinClift Yeah, no idea ;)
17:55 ndevos JustinClift: maybe there is some kind of transparent proxy somewhere... I suspect that because sometimes there are ssl certificate warnings
17:57 jobewan joined #gluster-dev
18:00 JustinClift Oh
18:00 JustinClift Well, we'll probably be migrated to something newer in 2-3 months
18:00 JustinClift All regression slave VM's except #25 are online and functional again
18:01 JustinClift We have 16 regressions running simultaneously
18:01 JustinClift That's not bad... except we still prob have massive failure rate
18:02 JustinClift I should write some scripting to analyse the failures better, instead of having to run separate instances up
18:02 JustinClift If only the regression tests could output json
18:02 JustinClift ;)
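
(A minimal sketch of the kind of failure-analysis scripting mentioned above, assuming the Jenkins console logs have been downloaded into a local directory and the regression runs emit TAP-style "not ok" lines; the ./console-logs/ path is a placeholder.)

    #!/bin/sh
    # Hypothetical sketch: tally failing assertions per downloaded console log
    # so the noisiest regression runs stand out.
    for log in ./console-logs/*.txt; do
        fails=$(grep -c '^not ok' "$log")
        printf '%5d not ok  %s\n' "$fails" "$log"
    done | sort -rn
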
18:16 vipulnayyar joined #gluster-dev
18:18 shyam1 joined #gluster-dev
18:50 shyam joined #gluster-dev
19:19 ira joined #gluster-dev
19:23 bala joined #gluster-dev
19:25 shyam joined #gluster-dev
19:42 lkoranda joined #gluster-dev
19:56 _shaps_ joined #gluster-dev
20:17 ndevos regression roulette is not my favorite game
20:28 hagarth joined #gluster-dev
20:33 JustinClift ndevos: We need developers to look into the regression failures
20:33 JustinClift It's not an infrastructure problem :(
20:34 JustinClift ndevos: Hmmm, I'll kick off another batch run of regression tests for master in a few minutes, so we can have some results in the morning on what's failing most
20:35 JustinClift ndevos: Do you reckon that might get some dev's to concentrate on the failures (for the ones in their area of specialty), or do you reckon it's better to wait until after 3.7 freeze?
20:35 JustinClift hagarth: ^ Any thoughts?
20:36 ndevos JustinClift: possibly some devs could find a little time to check those issues, if their work for 3.7 is waiting for review
20:36 JustinClift k, I'll kick some off :)
20:37 JustinClift It has a total cost of around ~$2, so not really a huge financial hit for us ;)
20:37 JustinClift Rackspace VM time that is
20:37 ndevos JustinClift: but, the problem would be that the changes need to be rebased before they can use the corrected test cases :-/
20:37 JustinClift True
20:37 JustinClift At least we'll know though
20:37 ndevos but, rather find the problems early :)
20:38 JustinClift And the dev's that are sick of failures would be motivated to rebase :)
20:39 ndevos actually it would be much nicer to not need to rebase, and have the regression job check out master and cherry-pick the change(s) on top
20:39 ndevos I think the old  regression test did it like that
20:39 JustinClift Yeah
20:40 JustinClift The triggered version doesn't, because it's built with someone's (the plugin author's) specific way of doing things in mind
20:40 JustinClift The old regression test does stuff onto master head I think
20:40 JustinClift ndevos: You're still welcome to do stuff through that manually if you want
20:40 hagarth JustinClift: yes, some devs might be able to pick up. At least we'll know the tests that can cause spurious failures.
20:41 JustinClift ndevos: Or make a new regression job that's both triggered, plus does stuff onto master + cherry-pick.  If you have the time/desire/motivation. ;)
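
(A rough sketch of the "check out master + cherry-pick" job step discussed above, assuming the Gerrit Trigger plugin exports GERRIT_REFSPEC for the change under test; the remote/branch names are the usual defaults and run-tests.sh is the project's existing regression driver.)

    #!/bin/sh
    # Hypothetical Jenkins build step: test the change on top of current master
    # instead of on the parent it was originally based on.
    set -e
    git fetch origin master
    git checkout -B regression-run origin/master
    # GERRIT_REFSPEC is assumed to be provided by the Gerrit Trigger plugin,
    # e.g. refs/changes/65/8065/3
    git fetch origin "$GERRIT_REFSPEC"
    git cherry-pick FETCH_HEAD
    ./run-tests.sh
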
20:42 JustinClift hagarth: Cool.  Master branch is best yeah?
20:42 hagarth JustinClift: yes
20:42 JustinClift Not much need for doing release-3.5, or release-3.6 branches too?
20:42 JustinClift Or is that useful as well?
20:42 hagarth we might want to manually provide a +1 verified if a patchset encounters a known regression failure
20:42 ndevos I think 3.5 does not have any issues
20:43 hagarth at least in the run up to feature freeze for 3.7
20:43 JustinClift ... says the maintainer of 3.5 branch ... ;)
20:43 ndevos 3.6 seems to have failures :)
20:44 JustinClift hagarth: Ahhh yeah.  With a list of known regression failures like that, we could probably do a manual +1 or something like that if it's the only failure
20:44 JustinClift That'd save on rerunning things until they pass
20:44 ndevos hagarth: manually Verified+1 would be an option, but we need to know which test cases we can accept as spurious
20:44 JustinClift ndevos: Yeah, 3.6 definitely does have regression failures still
20:44 JustinClift ~80% failure rate from memory, of the full runs
20:44 hagarth JustinClift, ndevos: yes, we need a list and it is going to be important in the days leading to March 9th :)
20:46 JustinClift Just kicked off 20x VM's to run master regression tst
20:46 JustinClift test
20:46 JustinClift Hmmm, we might be at our ram limit in Rackspace.  I'd better check.
20:47 JustinClift Nah, we're ok
20:47 JustinClift We're at 110 GB of 128 GB in ORD
20:48 JustinClift And we still have ~120GB available in other locations too
20:50 JustinClift misc: slave29 seems to have died - not coming back from reboot
20:50 JustinClift I'll leave that for you to look at as well :)
20:51 ndevos JustinClift: you start those VMs on demand and have them deleted when done?
20:56 JustinClift ndevos: I start the VM's on demand, wait about 1-2 hours to make sure they've finished, then collect the logs and coredumps from them, then mass delete them manually.
20:56 JustinClift ndevos: The initial setup and running of the regression tests is completely automatic.  1 command line
20:56 JustinClift It's the processing the results bit that takes time
20:56 hagarth JustinClift: while you are on these VMs, can you also look at the requests made by geo-replication developers?
20:57 JustinClift hagarth: Oh, from a while back... yeah, I'll take a look
20:57 JustinClift 1 min
20:58 ndevos JustinClift: awesome! could that be used as automation for starting a new vm for each regression test?
20:59 * ndevos was confident he had a US-EU power plug... but it seems to be a US-UK one
21:00 JustinClift ndevos: Yeah, that's how it started out.  It was before Rackspace had DNS available through the API though, and also (at the time) VM building wasn't always reliable.  Needed to manually log in to verify that VM's had actually picked up their config details ok and built properly
21:00 JustinClift ndevos: VM's seem to be building reliably now though, and the DNS changes could also be done via the API.  So, it could be fully automated
21:01 JustinClift Optimally, we'd adjust the regression runs to also output JSON or something as well, for automated processing
21:01 ndevos JustinClift: one thing at a time :)
21:01 JustinClift While a useful project... it's lower down the priority list than migrating Gerrit + Jenkins, and now the Forge ;)
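
(A minimal sketch of the "start VMs on demand" step described above, assuming the OpenStack nova client is configured with the Rackspace account; the image name, flavor name and VM naming scheme are placeholders.)

    #!/bin/sh
    # Hypothetical: boot a batch of throwaway regression VMs in parallel.
    for i in $(seq 1 20); do
        nova boot --image "regression-base" \
                  --flavor "2GB Standard Instance" \
                  "batch-regression-$i" &
    done
    wait
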
21:02 ndevos JustinClift: well, maybe 'prove' can convert the 'ok' and 'not ok' to json?
21:02 JustinClift Interesting.  Hadn't thought of that.
21:05 ndevos JustinClift: I think you can pass --formatter to prove and have something else as output - but you may need to write the TAP-formatter in perl if there is none for JSON
21:05 JustinClift Googling for TAP::Harness and JSON shows this:
21:05 JustinClift https://wiki.jenkins-ci.org/display/JENKINS/TAP+Plugin
21:06 JustinClift It kind of seems like the test harness could be run directly from Jenkins, instead of needing the prove command?
21:06 JustinClift Or maybe that's a processor for the TAP output (unsure)
21:06 JustinClift Meh, it's not grabbing me
21:06 JustinClift I'll forget about it :)
21:06 JustinClift (for now)
21:07 ndevos yeah, like http://search.cpan.org/~evostrov/TAP-Formatter-Jenkins/ ?
21:09 ndevos well, we seem to have something similar, just depends if you want it in Jenkins or in prove :)
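
(A minimal sketch of the prove-side idea, assuming the test scripts emit standard TAP; rather than a real TAP::Formatter module it just reduces the "ok"/"not ok" stream to a one-line JSON summary.)

    #!/bin/sh
    # Hypothetical: run the suite under prove in verbose mode and summarize
    # the raw TAP output as JSON with awk.
    prove -v tests/*.t 2>&1 | awk '
        /^ok /     { pass++ }
        /^not ok / { fail++ }
        END        { printf("{\"passed\": %d, \"failed\": %d}\n", pass, fail) }'
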
21:47 shaunm joined #gluster-dev
22:37 jobewan joined #gluster-dev
22:54 ira joined #gluster-dev
22:59 ira joined #gluster-dev
23:02 misc JustinClift: the passwd killer struck again
23:03 JustinClift misc:
23:03 misc JustinClift: on slave25
23:04 JustinClift misc: We should probably get that test fixed then
23:04 * JustinClift will hassle Pranith to do so
23:04 misc JustinClift: it is
23:04 JustinClift Oh
23:04 JustinClift It's already fixed, but it still happens?
23:04 misc well, it was committed :)
23:04 misc like today
23:04 JustinClift Ahh
23:05 JustinClift 1 sec, we should be able to see which CR's were run on slave25
23:05 JustinClift If something was run from a branch that doesn't have it applied... or from before it was applied...
23:05 JustinClift Yeah
23:05 misc so far, the file is present in every node I tested
23:09 misc JustinClift: so cp /etc/passwd- /etc/passwd to fix :)
23:12 JustinClift :)
23:12 JustinClift misc: This is the build that killed it: http://build.gluster.org/job/rackspace-regression-2GB-triggered/4817/consoleFull
23:12 JustinClift The ones after that have failures about unknown group and stuff
23:14 misc there are an awful lot of "not ok" lines, shouldn't it stop ASAP :) ?
23:31 JustinClift misc: Nah, it's a dumb script.  Runs to the end.
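
(A small sketch of applying misc's cp fix across the build slaves, assuming /etc/passwd- still holds the backup on each machine; the hostnames are placeholders.)

    #!/bin/sh
    # Hypothetical: restore /etc/passwd from its backup on any slave where
    # it has gone missing or been emptied.
    for host in slave23.example.org slave25.example.org; do
        ssh "root@$host" 'test -s /etc/passwd || cp /etc/passwd- /etc/passwd'
    done
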
23:33 wushudoin joined #gluster-dev
