
IRC log for #gluster-dev, 2014-06-06


All times shown according to UTC.

Time Nick Message
02:12 lpabon joined #gluster-dev
02:25 bharata-rao joined #gluster-dev
02:31 bharata_ joined #gluster-dev
02:50 hagarth joined #gluster-dev
02:54 kkeithley1 joined #gluster-dev
03:36 itisravi joined #gluster-dev
03:53 hagarth joined #gluster-dev
04:01 kanagaraj joined #gluster-dev
04:13 shubhendu joined #gluster-dev
04:15 kkeithley_ So.... what did we change to make regressions start working?
04:15 vpshastry joined #gluster-dev
04:16 hagarth kkeithley_: I still think that regression is in a fragile state
04:25 aravindavk joined #gluster-dev
04:33 ndarshan joined #gluster-dev
04:40 kkeithley_ hagarth: does that mean we didn't change anything?
04:41 hagarth kkeithley_: not that I am aware of
04:45 [o__o] joined #gluster-dev
04:45 kkeithley_ interesting
04:48 kdhananjay joined #gluster-dev
04:52 deepakcs joined #gluster-dev
04:59 ppai joined #gluster-dev
05:04 glusterbot` joined #gluster-dev
05:06 spandit joined #gluster-dev
05:14 atinmu joined #gluster-dev
05:21 lalatenduM joined #gluster-dev
05:26 glusterbot joined #gluster-dev
05:26 kdhananjay joined #gluster-dev
05:26 aravindavk joined #gluster-dev
05:26 kanagaraj joined #gluster-dev
05:26 hagarth joined #gluster-dev
05:26 kkeithley_ joined #gluster-dev
05:26 bharata_ joined #gluster-dev
05:26 systemonkey joined #gluster-dev
05:26 semiosis joined #gluster-dev
05:26 ndevos joined #gluster-dev
05:26 JoeJulian joined #gluster-dev
05:26 nixpanic_ joined #gluster-dev
05:26 foster joined #gluster-dev
05:26 rturk|afk joined #gluster-dev
05:28 johnmark joined #gluster-dev
05:30 kkeithley1 joined #gluster-dev
05:33 lalatenduM joined #gluster-dev
05:33 glusterbot joined #gluster-dev
05:33 kdhananjay joined #gluster-dev
05:33 aravindavk joined #gluster-dev
05:33 kanagaraj joined #gluster-dev
05:33 hagarth joined #gluster-dev
05:33 kkeithley_ joined #gluster-dev
05:33 bharata_ joined #gluster-dev
05:33 systemonkey joined #gluster-dev
05:33 rturk|afk joined #gluster-dev
05:33 foster joined #gluster-dev
05:33 nixpanic_ joined #gluster-dev
05:33 JoeJulian joined #gluster-dev
05:33 ndevos joined #gluster-dev
05:33 semiosis joined #gluster-dev
05:37 kkeithley1 joined #gluster-dev
05:41 vipulnayyar joined #gluster-dev
05:42 hchiramm_ joined #gluster-dev
05:50 shruti joined #gluster-dev
05:53 aravindavk joined #gluster-dev
05:55 hagarth joined #gluster-dev
06:00 raghu joined #gluster-dev
06:08 bala joined #gluster-dev
06:30 bala joined #gluster-dev
06:36 aravindavk joined #gluster-dev
06:37 hagarth joined #gluster-dev
06:50 kdhananjay joined #gluster-dev
06:56 bala joined #gluster-dev
07:02 rgustafs joined #gluster-dev
07:13 bharata-rao joined #gluster-dev
07:31 xavih I sent a couple of patches to gerrit yesterday and I've received a build failure notification pointing to an inaccessible url: http://rhs-client34.lab.eng.blr.redhat.com:8080/job/libgfapi-qemu/51/
07:31 xavih how can I see what went wrong ?
07:32 ndevos xavih: you're looking for Humble / hchiramm_ in that case
07:32 xavih ndevos: thanks :). I'll wait
07:33 hchiramm_ xavih, I am here :)
07:33 xavih hchiramm_: good :D
07:33 hchiramm_ yeah , I was trying to implement a new test :)
07:33 hchiramm_ unfortunately it made noise..
07:33 xavih should I do something ?
07:33 hchiramm_ xavih, which is ur patch ?
07:34 hchiramm_ http://review.gluster.org/7749 ?
07:34 xavih http://review.gluster.org/7749/ and http://review.gluster.org/7782/
07:34 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:34 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:35 hchiramm_ xavih, we should not be counting that failure for now..
07:35 hchiramm_ let me check with hagarth
07:35 xavih ok, thanks :)
07:36 hchiramm_ I will get back .. xavih  :)
07:36 hchiramm_ sorry :(
07:36 xavih no problem :)
07:39 hchiramm_ xavih, looks like buildsystem 1 didn't respond against ur patch set v17, right ?
07:40 xavih no, it seems not
07:40 xavih only response from GlusterBuildSystem2
07:41 hchiramm_ yeah. build1 was non responsive at that time..
07:42 hchiramm_ so may be we need to retrigger it on build1..
07:42 hchiramm_ build2 is disabled for now..
07:42 hchiramm_ so it will not make any noise .
07:42 xavih How do I do that? Do I resend the patch?
07:43 hchiramm_ iic, kkeithley was (re)triggering failed jobs on build1 yesterday
07:43 hchiramm_ definitely resending can trigger it..
07:43 hchiramm_ how-ever let me check with hagarth and kkeithley
07:44 xavih ok, then I'll wait. It's not urgent
07:44 hchiramm_ k :)
07:44 xavih thank you very much, hchiramm_ :)
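For readers following the retrigger discussion above: resending a patch to Gerrit simply means pushing a new patch set for the same change, which kicks the build jobs off again. A rough sketch of that workflow, where the remote name, branch, and repository path are assumptions and the Change-Id in the commit message must stay unchanged:

    # Rough sketch: re-push the same change so Gerrit records a new patch set
    # and the build jobs are triggered again. Remote/branch/path are assumptions.
    cd /path/to/glusterfs          # local clone of the repository
    git commit --amend --no-edit   # new patch set, same Change-Id, unchanged content
    git push origin HEAD:refs/for/master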
07:44 bala joined #gluster-dev
07:44 hchiramm_ Np !!
07:53 lalatenduM joined #gluster-dev
07:53 kkeithley1 joined #gluster-dev
08:25 hagarth joined #gluster-dev
08:25 pranithk joined #gluster-dev
08:36 hchiramm_ kkeithley_, ping
08:39 lalatenduM kkeithley, my FAS account name is "lalatendu"
08:52 hchiramm_ xavih, ping
08:53 pranithk hchiramm_: xavih is in meeting with lots of people, don't disturb him :-P. You can listen to him at #gluster-meeting :-). He is explaining erasure-code xlator he wrote
08:53 xavih hchiramm_: sorry I'm busy right now...
08:54 hchiramm_ ok ..
08:55 hchiramm_ pranithk, thanks .. I  missed it .
09:18 vipulnayyar joined #gluster-dev
09:58 kkeithley1 joined #gluster-dev
10:13 lalatenduM joined #gluster-dev
10:17 hagarth ndevos: ping
10:18 ndevos hagarth: pong
10:18 hagarth ndevos: should we disable drc by default till it works better?
10:19 ndevos hagarth: I've been wondering about that...
10:19 ndevos hagarth: I think with the recent patches, it should be more stable, but I'm also not sure what bugs would still be lurking
10:19 hagarth ndevos: I just observed nfs consuming 4G+ on the build machine now
10:20 hagarth and it got oom killed while running regression tests
10:20 ndevos hagarth: right, but what release is that? I'm not sure if the patches have been merged in 3.5 yet
10:21 hagarth ndevos: this is on master
10:23 ndevos hagarth: ah, http://review.gluster.org/7816 has not been merged yet, and it's obviously not the only change that is needed
10:23 glusterbot Title: Gerrit Code Review (at review.gluster.org)
10:25 hagarth ndevos: right
10:25 ndevos hagarth: in that case, disabling drc by default is probably the way to go
10:25 hagarth ndevos: +1
10:26 ndevos hagarth: I'll file a patch for that in a bit, santosh prepared one already
10:27 ndevos hagarth: care to file a bug and assign it to me?
10:27 hagarth ndevos: will do
10:27 ndevos hagarth: thanks :)
10:30 swebb joined #gluster-dev
10:32 hagarth ndevos: https://bugzilla.redhat.com/show_bug.cgi?id=1105524
10:32 glusterbot Bug 1105524: unspecified, unspecified, ---, ndevos, NEW , Disable nfs.drc by default
10:32 ndevos hagarth: ok!
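For context on the exchange above: until the default is changed in the code, the duplicate request cache can be switched off per volume from the CLI. A minimal sketch, assuming the volume option is nfs.drc and using a hypothetical volume name "myvol":

    # Minimal sketch: disable the NFS duplicate request cache for one volume.
    # "myvol" is a hypothetical volume name; nfs.drc is the option discussed above.
    gluster volume set myvol nfs.drc off
    gluster volume info myvol | grep nfs.drc   # should appear under "Options Reconfigured"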
11:25 hchiramm_ joined #gluster-dev
11:38 JustinClift ndevos hagarth: Any objection to me updating the rpms running www.gluster.org?
11:38 JustinClift It's a RHEL 6.5 box, and needs updating.  It's not badly out of date.
11:38 JustinClift eg it doesn't look like there are any potential issues
11:39 hagarth JustinClift: no objections from me
11:39 kkeithley1 joined #gluster-dev
11:41 JustinClift kkeithley_: I'm just about to update rpms on www.gluster.org.  It's a RHEL 6.5 box.
11:42 JustinClift Doesn't seem badly out of date, so not expecting issues.
11:42 JustinClift Apart from website unavailability while rebooting right after...
11:42 JustinClift kkeithley_: No objection?
11:44 JustinClift ... me gets it done
11:46 kkeithley_ no, I don't object to updating www.gluster.org. I believe johnmark was doing that
11:46 JustinClift np :)
11:46 kkeithley_ I only worry about build.gluster.org....
11:46 JustinClift Yeah.  Me too.
11:48 JustinClift k.  Rebooting it now.
11:49 JustinClift (www.gluster.org that is)
11:50 JustinClift And it's back and running
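The update/reboot cycle described above is a plain package refresh followed by a restart; a rough sketch for a RHEL 6 box such as www.gluster.org:

    # Rough sketch of the update/reboot cycle mentioned above (RHEL 6 box).
    yum update -y       # pull in all pending package updates
    shutdown -r now     # reboot so the updated kernel/libraries take effect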
11:51 edward1 joined #gluster-dev
12:01 itisravi_ joined #gluster-dev
12:02 vpshastry joined #gluster-dev
12:31 vikhyat joined #gluster-dev
12:36 vikhyat joined #gluster-dev
12:40 vikhyat joined #gluster-dev
12:40 JustinClift Updating the openssl rpms on review.gluster.org now...
12:49 JustinClift Updating download.gluster.org now...
12:52 JustinClift Rebooting download.gluster.org
12:52 JustinClift kkeithley: You're still logged into download.gluster.org.  Am I ok to reboot it?
13:05 shyam1 joined #gluster-dev
13:16 spandit joined #gluster-dev
13:28 JustinClift kkeithley: Ima just gunna reboot it
13:29 ndevos note: regression tests are failing again, pk and I are looking into it
13:30 JustinClift c
13:30 JustinClift k
13:31 JustinClift If it's a problem with stuff not being deletable from /build/install/var/run, then it's kind of weird that the sudo rm -rf /build/install in /opt/qa/build.sh isn't taking care of that
13:32 JustinClift Anyone know how to start Jenkins on review.gluster.org?
13:33 JustinClift Not seeing a startup method for it in /etc/init.d, nor in /etc/rc.local
13:33 JustinClift Non-urgent (it's running atm), but I'm wondering what happens when the box gets restarted. ;)
13:45 ndevos JustinClift: no the problem is that 'configure' complains about 'error: source directory already configured; run "make distclean" there first'
13:45 JustinClift Weird
13:45 ndevos JustinClift: but, we can not figure out (yet) where a "make distclean" would be missing...
13:46 JustinClift Any chance it's wrong permissions not letting the old artifacts not be removed?
13:46 ndevos and, it works on our systems...
13:46 JustinClift s/not be removed/be removed/
13:46 ndevos oh, maybe?
13:48 jobewan joined #gluster-dev
13:48 JustinClift Having the state dir /var be under /build/install/var/ seems to lead to all kinds of weird permissions problems.  It's why the git version of the build/smoke/regression tests has it kept out of /build/install.  Seems a bit separate from build artifacts not being removed though. :(
13:58 ndevos JustinClift: my current understanding is like this: each 1st failure after some success runs, seems to be followed by the configure error
13:59 ndevos JustinClift: that suggests that a failed regression run does not clean up (correctly) after itself
13:59 hagarth joined #gluster-dev
14:00 JustinClift ndevos: Looking at build.sh, that's totally feasible
14:00 JustinClift Should be easy to fix though
14:00 ndevos JustinClift: okay, how/where should the cleanup script be called then?
14:00 ndevos JustinClift: can you make that change?
14:00 JustinClift ndevos: Sure.  Gimme a sec
14:01 ndevos JustinClift: it's all yours!
14:02 JustinClift First thought is to do a git reset --hard HEAD instead of make distclean
14:03 JustinClift Actually, I think I'll do a rm -rf /full/path/to/git/repo/* first, then git reset.  More guaranteed of success.
14:05 JustinClift ndevos: k, should be good
14:05 ndevos JustinClift: what and where is the change you made?
14:06 JustinClift Adjusted build.sh
14:06 JustinClift Take a look, it's super obvious :)
14:06 ndevos JustinClift: okay, lets see
14:07 * JustinClift hopes HEAD is the right tag for this.  From memory it'll be appropriate for detached state checkouts too
14:07 ndevos JustinClift: wait, I don't think that's correct, Jenkins checks out the repo and applies some patches...
14:08 JustinClift Yeah.  I don't think HEAD will revert the patches though
14:08 JustinClift Actually, it's easy to double check.  I'll do it manually on slave2.
14:08 JustinClift 1 min
14:08 ndevos JustinClift: ah, yes, it should not
14:09 ndevos JustinClift: would it not be cleaner to do a full cleanup in http://build.gluster.org/job/regression/configure before build.sh is run?
14:10 wushudoin joined #gluster-dev
14:10 JustinClift ndevos: the git reset --hard HEAD works as intended
14:10 JustinClift But yeah, I'd prefer we didn't do sudo stuff inside build.sh
14:10 ndevos JustinClift: cool
14:11 JustinClift Want me to revert build.sh, and you can adjust the regression configure job?
14:11 * JustinClift kept a backup ;)
14:12 ndevos JustinClift: nah, that's an email to avati away, we need a fix very soon
14:13 ndevos JustinClift: we should send an email to avati, and mention the changes in build.sh so that he can (approve) move them to the jenkins script
14:13 JustinClift np
14:14 JustinClift Anyway, feel free to start jobs up again now.  Hopefully this fix will work as intended.
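The fix described above boils down to wiping the work tree and restoring the checked-out sources before configure runs again. A sketch of what that step in build.sh might look like; the repository path is an assumption, since the actual workspace location on the build slaves is not shown in this log:

    # Sketch of the cleanup step discussed above, as it might appear in build.sh.
    # /path/to/glusterfs is an assumed workspace path.
    cd /path/to/glusterfs
    rm -rf ./*              # drop anything a failed run left behind (.git survives the glob)
    git reset --hard HEAD   # restore tracked sources; Jenkins-applied patches are part of HEAD
    # ... the existing configure/build steps continue from here ...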
14:25 JustinClift Ugh.  Forge.gluster.org box _hasn't_ been updated after all.
14:25 * JustinClift will do it himself
14:26 johnmark *sigh*
14:26 johnmark gah
14:27 shubhendu joined #gluster-dev
14:32 JustinClift Hmmm, it's saying it's updating a bunch of packages from the "base" repo.
14:32 JustinClift Unsure how that could be correct...
14:32 JustinClift Fuck it.  As long as it comes back after the reboot, I don't care.
14:33 johnmark lol
14:33 johnmark JustinClift: awesome
14:34 JustinClift Actually, looking at some of the ones it's saying it's upgrading, there are double-ups.
14:34 JustinClift Now I'm wondering if rebooting is a bad idea.
14:35 JustinClift Like, it's saying it just installed pam, and rpm.
14:35 JustinClift That can't be right.
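When yum appears to reinstall core packages such as pam and rpm, one way to check whether the rpm database really contains duplicate entries is package-cleanup from yum-utils; a hedged sketch, assuming yum-utils is installed on the box:

    # Hedged sketch: look for duplicate entries in the rpm database (needs yum-utils).
    package-cleanup --dupes          # list packages installed more than once
    rpm -qa pam rpm                  # spot-check the packages mentioned above
    # package-cleanup --cleandupes   # optionally remove the older duplicates afterwards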
14:39 lpabon joined #gluster-dev
14:59 JustinClift lpabon: Are your jenkins slaves up-to-date with rpms?
14:59 lpabon no
14:59 lpabon JustinClift, no
15:00 JustinClift Heh.  Should they be updated?
15:00 ndevos JustinClift: just wondering, will the rackspace slaves get online soon again? they speed up regression testing quite a bit
15:06 JustinClift ndevos: I'm working through the issues to make the process of getting them online repeatable.
15:06 JustinClift ndevos: For the first two, I just did them manually.
15:06 JustinClift ndevos: I think we're pretty close now though: http://www.gluster.org/community/documentation/index.php/Jenkins_setup
15:06 glusterbot Title: Jenkins setup - GlusterDocumentation (at www.gluster.org)
15:07 lpabon JustinClift, nah, they are good
15:24 ndevos JustinClift: why is there a step " Set the Jenkins password " ? can the account not be locked down?
15:26 bala joined #gluster-dev
15:26 JustinClift ndevos: Because I haven't gotten up to using public ssh keys yet, and have been using an ssh password so far
15:26 JustinClift ndevos: That definitely needs adjusting... but it can come after the other bits are working properly
15:26 ndevos JustinClift: ah, is that how the master Jenkins instructs the slave?
15:26 JustinClift Yeah
15:27 ndevos right :)
15:27 JustinClift The master jenkins logs in remotely using ssh, as whatever user it's told to
15:27 JustinClift In the master jenkins "Manage Jenkins" page, there's a "Manage Credentials" page
15:27 JustinClift I've just got it using jenkins/[password] atm
15:28 JustinClift It's completely able to use public/private key though
15:28 JustinClift It's not even difficult, I just haven't set that up yet
15:28 JustinClift Can email you the existing pw if you want to try stuff out?
15:29 ndevos no worries, I was just wondering :)
15:29 JustinClift :)
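The key-based setup JustinClift describes as not yet done would roughly look like the following; the account name "jenkins" and the slave hostname are assumptions:

    # Rough sketch of key-based auth for the Jenkins master -> slave connection
    # (not yet in place at the time of this log). Account and host names are assumptions.
    ssh-keygen -t rsa -f ~/.ssh/jenkins_slave -N ''                      # on the Jenkins master
    ssh-copy-id -i ~/.ssh/jenkins_slave.pub jenkins@slave1.example.org   # install the public key on the slave
    passwd -l jenkins                                                    # on the slave: lock the password once keys work

The master's "Manage Credentials" entry would then reference the private key instead of a password.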
15:30 JustinClift I've added "Create a wiki page listing our infrastructure, and who's responsible for it" to my ToDo list
15:30 * JustinClift thinks it'll help
15:30 bala1 joined #gluster-dev
15:43 [o__o] joined #gluster-dev
16:11 bala joined #gluster-dev
16:23 ndk joined #gluster-dev
16:29 bala joined #gluster-dev
16:36 skoduri joined #gluster-dev
18:01 tdasilva joined #gluster-dev
18:20 shyam1 left #gluster-dev
18:20 shyam1 joined #gluster-dev
18:20 shyam1 left #gluster-dev
18:35 shyam joined #gluster-dev
20:02 systemonkey joined #gluster-dev
20:37 shyam left #gluster-dev
20:53 tdasilva left #gluster-dev
22:55 glusterbot joined #gluster-dev
23:45 awheeler joined #gluster-dev
