
IRC log for #gluster-dev, 2014-09-25


All times shown according to UTC.

Time Nick Message
00:21 bala joined #gluster-dev
00:34 shyam joined #gluster-dev
01:38 semiosis joined #gluster-dev
01:45 semiosis joined #gluster-dev
01:45 shyam joined #gluster-dev
01:45 semiosis_ joined #gluster-dev
01:47 semiosis_ joined #gluster-dev
01:48 semiosis joined #gluster-dev
02:36 suliba joined #gluster-dev
03:16 bharata-rao joined #gluster-dev
04:06 shubhendu joined #gluster-dev
04:12 hagarth joined #gluster-dev
04:23 kanagaraj joined #gluster-dev
04:39 rafi1 joined #gluster-dev
04:40 anoopcs joined #gluster-dev
04:43 spandit joined #gluster-dev
04:48 aviksil joined #gluster-dev
04:53 aravindavk joined #gluster-dev
04:55 nishanth joined #gluster-dev
05:01 ndarshan joined #gluster-dev
05:08 kdhananjay joined #gluster-dev
05:20 Gaurav__ joined #gluster-dev
05:27 hagarth joined #gluster-dev
05:31 kshlm joined #gluster-dev
05:31 kshlm joined #gluster-dev
05:40 deepakcs joined #gluster-dev
05:41 Humble I am not able to login to review.gluster.org ..
05:41 Humble anyone else face this issue ?
05:43 Humble aravindavk, anoopcs ^^
05:46 aravindavk Humble, same here.
05:48 Humble aravindavk++ kshlm++ thanks for confirming
05:48 glusterbot Humble: aravindavk's karma is now 1
05:48 glusterbot Humble: kshlm's karma is now 2
05:51 ppai joined #gluster-dev
05:53 raghu joined #gluster-dev
06:06 lalatenduM joined #gluster-dev
06:06 anoopcs Humble: same here
06:06 Humble yep
06:07 anoopcs Is this the error you got?
06:07 anoopcs The page you requested was not found, or you do not have permission to view this page.
06:11 atalur joined #gluster-dev
06:12 spandit joined #gluster-dev
06:12 RaSTar joined #gluster-dev
06:14 kdhananjay joined #gluster-dev
06:30 bala joined #gluster-dev
06:39 jiffin joined #gluster-dev
06:58 pranithk joined #gluster-dev
07:07 RaSTar joined #gluster-dev
07:23 deepakcs joined #gluster-dev
07:27 lalatenduM hagarth, Humble we haven't sent any announcement abt 3.6.0beta2, I think we should do that
07:27 Humble hagarth, can u please announce it ?
07:27 lalatenduM Humble +1
07:28 hagarth will do .. I was running some tests with beta2. Things do look good :).
07:30 lalatenduM nice!!
07:32 sickness is someone testing the ec xlator too? =_)
07:32 Humble nice :)
07:47 hagarth sickness: I intend to run my regular regression tests on an ec volume with beta2 :)
07:50 sickness oh, ok
07:57 bharata-rao joined #gluster-dev
08:15 ndevos hagarth: oh, I guess it's time for me to run my bug-checker script again and move some bugs to ON_QA for beta2?
08:16 aviksil_ joined #gluster-dev
08:17 Humble hagarth, Is it showing 'your incoming/outgoing/recently closed'.. properly once u login ?
08:19 Humble I mean "changed" tab under 'My' ?
08:19 Humble changed/changes
08:19 hagarth Humble: yes
08:19 aviksil_ joined #gluster-dev
08:19 hagarth ndevos: yes, please!
08:19 hagarth ndevos++
08:19 glusterbot hagarth: ndevos's karma is now 26
08:21 Humble k .. thanks for confirming.. It works for me now
08:21 Humble hagarth++
08:21 glusterbot Humble: hagarth's karma is now 10
08:27 ndevos hagarth: should be done now :)
08:28 hagarth awesome!
08:29 ndevos hagarth: not all bugs on the blocker have all their patches merged (contained in a tag) yet
08:30 ndevos https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.6.0&maxdepth=1&hide_resolved=1
08:30 hagarth ndevos: yes, I am working with a few owners to slot them in for a further beta release
08:31 ndevos hagarth: like bug 1122443 is actually for master, there needs to be a clone for 3.6 to get the patch backported :-/
08:31 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1122443 high, unspecified, ---, gluster-bugs, MODIFIED , Symlink mtime changes when rebalancing
08:31 * ndevos clones and corrects at least that one
08:33 hagarth right
08:56 hagarth lalatenduM: can we trigger a covscan run for beta2?
08:58 lalatenduM hagarth, yeah we can do that, do you want it in coverity appliance  or through Covscan website ?
09:00 hagarth lalatenduM: covscan
09:00 hagarth so that all of us can fix serious issues if any
09:01 lalatenduM hagarth, will do
09:02 hagarth lalatenduM: thanks!
09:03 lalatenduM hagarth, thats a good idea. I am sure we can avoid crashes and memory leaks if we fix Coverity issues for those
09:04 hagarth lalatenduM: right
09:15 vimal joined #gluster-dev
09:20 suliba joined #gluster-dev
09:33 ndarshan joined #gluster-dev
09:37 spandit joined #gluster-dev
10:06 lalatenduM hagarth, I did the run, did you get the report automatically from Covscan? will post this report to the ML
10:12 _Bryan_ joined #gluster-dev
10:30 hagarth lalatenduM: I did get that
10:30 hagarth lalatenduM: ++
10:30 glusterbot hagarth: lalatenduM's karma is now 29
10:45 kkeithley1 joined #gluster-dev
10:55 hagarth joined #gluster-dev
11:02 ppai joined #gluster-dev
11:14 shyam joined #gluster-dev
11:34 lalatenduM ndevos, regarding https://bugzilla.redhat.com/show_bug.cgi?id=1113543
11:34 glusterbot Bug 1113543: low, unspecified, 3.6.0, kkeithle, ON_QA , Spec %post server does not wait for the old glusterd to exit
11:38 lalatenduM ndevos, bug-zapper script moved it from ASSIGNED → ON_QA , which I think is wrong
11:39 ndevos hagarth: is there a bug for http://www.gluster.org/community/documentation/index.php/Features/data-classification ?
11:40 hagarth ndevos: not that I am aware of
11:40 ndevos hagarth: oh, thats a shame
11:40 hagarth ndevos: we have just started implementation, what about that?
11:41 ndevos hagarth: bug 763746 asks for rack awareness, and it would be nice to be able to block it on another bug
11:41 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=763746 low, medium, ---, kparthas, ASSIGNED , We need an easy way to alter client configs without breaking DVM
11:47 hchiramm_call joined #gluster-dev
11:55 pranithk left #gluster-dev
12:06 lalatenduM ndevos, check my latest comment in https://bugzilla.redhat.com/show_bug.cgi?id=1113543, any idea why the fix did not work in 3.4->3.5.2 (3.5.2 had the fix)
12:06 glusterbot Bug 1113543: low, unspecified, 3.6.0, kkeithle, ON_QA , Spec %post server does not wait for the old glusterd to exit
12:06 lalatenduM kkeithley, ^^
12:07 kkeithley_ I'll take a look
12:07 ndevos lalatenduM: ah, I started looking at that
12:09 ndevos lalatenduM: so, it has been fixed for you?
12:09 lalatenduM ndevos, yeah at least on 3.6.0beta
12:10 ndevos lalatenduM: hmm, could it be that the psmisc package is not installed everywhere? that would surely cause this patch to not work
12:11 ndevos I filed bug 1146426 about that earlier today
12:11 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1146426 unspecified, unspecified, ---, gluster-bugs, NEW , glusterfs-server and the regression tests require the 'killall' command
12:16 lalatenduM ndevos, interesting, the VM on which I tried these steps does not have psmisc package, "rpm -qa | grep psmisc" does not return anything
12:16 lalatenduM also I dent have killall command in that machine
12:17 lalatenduM s/dent/donot/
12:22 ndevos lalatenduM: aha! that means the old glusterd does not get killed, and therefore the new start of glusterd fails, but glusterd keeps running
12:24 lalatenduM ndevos, ohhh!! let me try it again, in that case the spec file should have a dependency on the psmisc package, shouldn't it
12:25 ndevos lalatenduM: yes, thats the bug I filed earlier
12:25 ndevos well, more or less, I did not think about killall usage in the .spec
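
The dependency fix being discussed above is essentially a one-line spec change; a minimal sketch (illustrative placement only; the real glusterfs.spec.in defines the -server subpackage with many more lines):

    # in the glusterfs-server subpackage stanza of glusterfs.spec.in:
    # psmisc provides /usr/bin/killall, which the %post server scriptlet relies on
    Requires:         psmisc
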
12:26 lalatenduM ndevos++
12:26 glusterbot lalatenduM: ndevos's karma is now 27
12:26 kkeithley_ how about I just fix that right now with the other glusterfs.spec{,.in} changes I'm making
12:27 lalatenduM kkeithley, I am planning to send a patch to master branch
12:27 kkeithley_ okay, your change is yours, my changes are mine. ;-)
12:28 Gaurav__ joined #gluster-dev
12:29 lalatenduM kkeithley, you are welcome to assign the bug https://bugzilla.redhat.com/show_bug.cgi?id=1146426 to yourself and send the patch to upstream too :)
12:29 glusterbot Bug 1146426: unspecified, unspecified, ---, lmohanty, ASSIGNED , glusterfs-server and the regression tests require the 'killall' command
12:30 kkeithley_ that's okay, you may do it. ;-)
12:32 lalatenduM kkeithley_, do you mean I can assign it to you?
12:33 kkeithley_ lol, sure, if you want.
12:34 kkeithley_ I'll just fix it in 1146423 and close 1146426 as a dupe.
12:34 lalatenduM kkeithley, done, the bug is assigned to you now :)  https://bugzilla.redhat.com/show_bug.cgi?id=1146426
12:34 glusterbot Bug 1146426: unspecified, unspecified, ---, kkeithle, ASSIGNED , glusterfs-server and the regression tests require the 'killall' command
12:35 ndevos noooo, you're closing my bug as a dupe :-/
12:35 kkeithley_ lol lol lol
12:35 kkeithley_ what's wrong with close as a dupe?
12:35 kkeithley_ whatever. I'll fix it in a separate patch then
12:36 ndevos I always feel stupid when I file dupes
12:36 ndevos and specially if I point it out myself...
12:36 kkeithley_ I feel stupid a lot, I never let it bother me. much ;-()
12:36 kkeithley_ ;-)
12:36 lalatenduM kkeithley, are you sure it is 1146423 as bz 1146423 is some RHEL 7 bug
12:36 kkeithley_ 1146523
12:36 ndevos hehe
12:40 ndevos lalatenduM: if you can still reproduce 1113543 with beta2, you should move it back to ASSIGNED
12:40 kkeithley_ being pedantic, it would be just a teeny bit weird, funny-weird, for 11464xx to be a dupe of a 11465xx bug.  I won't close it as a dupe. Is there a clone for release-3.6 branch?
12:41 * ndevos doesnt know about that, or he forgot
12:41 lalatenduM ndevos, kkeithley in https://bugzilla.redhat.com/show_bug.cgi?id=1113543, something weird is happening
12:41 glusterbot Bug 1113543: low, unspecified, 3.6.0, kkeithle, ON_QA , Spec %post server does not wait for the old glusterd to exit
12:41 ndevos lalatenduM: can you define the 'weird' part too?
12:41 lalatenduM because when I did not have the psmisc package, glusterd should not have restarted, right
12:42 lalatenduM but if you see my comment the pid has changed
12:42 lalatenduM how is that possible
12:42 ndevos lalatenduM: no, glusterd would be running, killall glusterd would fail, and starting glusterd should fail too - the original would still be running
12:43 ndevos lalatenduM: hmm, maybe systemctl restarts it at the end?
12:43 lalatenduM ndevos, yeah see comment 13, "systemctl status glusterd" output before update and after update
12:43 lalatenduM the pid has changed
12:43 kkeithley_ whenever something is wrong, blame systemd
12:45 kkeithley_ does -regression-tests really need psmisc? -regression-tests already has a Requires: -server. If server has a Requires: psmisc then everything should be okay.
12:46 lalatenduM kkeithley, agree
12:47 ndevos kkeithley_: oh, yes, in that case - and as long as we're not planning to drop that dependency
12:49 ndevos lalatenduM: isnt there a systemctl in the spec that restarts glusterd at the end of the transaction, some systemd macro?
12:52 lalatenduM ndevos, there are some e.g. "%define _init_restart() /bin/systemctl try-restart %1.service ;"  however I don't fully understand this.
12:53 kkeithley_ it's just a macro def. That's the one for systemd on fedora and RHEL7. The other is for init.d on RHEL[56]
12:55 kkeithley_ %1 expands to glusterd
12:57 kkeithley_ In theory that macro could be used to restart glusterfsd too, in our fedora koji builds where we ship glusterfsd.service and glusterfsd.init
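
Spelled out, the macro pair described here looks roughly like the sketch below; the systemd form is quoted above, while the conditional name and the exact SysV wording are assumptions:

    %if ( 0%{?_with_systemd:1} )
    # systemd (Fedora, RHEL 7): restart only if the service is currently running
    %define _init_restart() /bin/systemctl try-restart %1.service ;
    %else
    # SysV init (RHEL 5/6): conditional restart through the init script
    %define _init_restart() /sbin/service %1 condrestart &>/dev/null ;
    %endif

    # usage in a scriptlet; %1 expands to the service name
    %_init_restart glusterd
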
13:39 shyam joined #gluster-dev
14:00 kkeithley_ two regression tests in a row have failed in ec tests. One regression for for a nit change in the glusterfs.spec.in.   Did some test not clean up correctly that's causing ec tests to fail now?
14:00 kkeithley_ s/for for/is for/
14:08 ndevos kkeithley_: the psmisc dependency for bug 1113543 is not sufficient, glusterd will be stopped after the update :-/
14:08 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1113543 low, unspecified, 3.6.0, kkeithle, POST , Spec %post server does not wait for the old glusterd to exit
14:09 ndevos kkeithley_: lalatenduM tested with psmisc installed, and glusterd fails to start that way, maybe he still has logs from that somewhere?
14:12 lalatenduM ndevos, kkeithley yeah with psmisc, glusterd fails to start after update, which logs do you guys need?
14:12 ndevos lalatenduM: the glusterd logs from that update test would be helpful
14:12 ndevos (hopefully)
14:14 lalatenduM ndevos, but without psmisc, glusterd was restarted, do we know the root cause?
14:14 ndevos lalatenduM: _init_restart would have done that
14:15 lalatenduM ndevos, then what is the need of "killall --wait glusterd"? is it only required for non-systemd OSes?
14:15 lalatenduM kkeithley_, ^^
14:17 ndevos lalatenduM: it is about the next glusterd start, it has some special options for updating
14:17 ndevos lalatenduM: although I'm not sure we still need that in current versions
14:17 dlambrig_ joined #gluster-dev
14:30 xavih joined #gluster-dev
14:36 lalatenduM ndevos, kkeithley I have attached the glusterd logs to the bug
14:42 ndevos lalatenduM: was that from an update where glusterd was running, and after the update glusterd was not running anymore?
14:42 lalatenduM ndevos, yes
14:42 ndevos lalatenduM: and you did not start glusterd afterwards?
14:43 ndevos from the log, I would think glusterd is running... hmm
14:44 lalatenduM ndevos, nope, I have not started glusterd
14:45 ndevos lalatenduM: ah, there is another glusterd messing in the last 'Final graph' output
14:46 lalatenduM ndevos, I think you are talking about "[2014-09-25 14:11:19.263199] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down"
14:46 ndevos well, it could be the glusterd that is running with --xlator-option *.upgrade=on
14:46 glusterbot lalatenduM: ('s karma is now -6
14:46 ndevos lalatenduM: yes
14:47 ndevos lalatenduM: that is printed from a signal-handler (15 = SIGTERM), it would be nice to know who sends that to the process
14:50 ndevos lalatenduM: aha! found it, when "--xlator-option *.upgrade=on" is passed, glusterd_handle_upgrade_downgrade() is called and it does a "kill (getpid(), SIGTERM)"
14:53 lalatenduM ndevos, and it should not?
14:55 deepakcs joined #gluster-dev
14:57 ndevos lalatenduM: I think it is intentional, but that means that 'systemctl try-restart' or 'service cond-restart' will not work, glusterd will likely have exited already by the time those are executed
14:59 lalatenduM ndevos, looks like the source code is doing what rpm should do
14:59 ndevos lalatenduM: we need to 1. get the status of a (not) running glusterd, 2. stop a running glusterd, 3. run glusterd with *.upgrade=on, 4. start (or not) glusterd, depending on 1
15:00 ndevos lalatenduM: no, I think the *.upgrade=on case is special, and glusterd should exit after it updated its configuration files
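
In other words, the update ends up running a sequence like the sketch below, which is why a conditional restart alone leaves glusterd stopped (a sketch of the behaviour, not the actual %post contents):

    killall --wait glusterd                    # stop the old daemon and wait for it to exit
    glusterd --xlator-option *.upgrade=on -N   # migrates the config, then SIGTERMs itself
    systemctl try-restart glusterd.service     # no-op: nothing is running at this point,
                                               # so glusterd stays down after the update
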
15:04 * kkeithley_ reads the scrollback...
15:05 kkeithley_ hmm. IIRC, adding --wait to the killall was JoeJulian's idea, but even he said, again IIRC, it wasn't perfect.
15:07 ndevos kkeithley_: no, I came up with the --wait, and we need that for the next glusterd *upgrade=on run
15:08 ndevos lalatenduM: http://paste.fedoraproject.org/136437/14116576 should probably solve the issue
15:08 kkeithley_ oh, okay, clearly IDNRC. ;-)
15:08 ndevos :)
15:09 lalatenduM ndevos, do you think this will work for systemd and non systemd systems
15:09 ndevos the issue is that "glusterd --xlator-option *.upgrade=on -N" exits glusterd after it updated the config, even when glusterd was running in the first place
15:09 ndevos lalatenduM: if there is a %_init_start macro, yes :)
15:10 ndevos lalatenduM: I did not check if the macro is in the .spec, if not, you need one that does 'systemctl start ...' or 'service ... start'
15:10 lalatenduM ndevos, ok, but I can see you have used "killall glusterd"
15:10 lalatenduM I thought we need --wait
15:11 ndevos uh, yes, we need the --wait too, I only butchered a .spec.in that was in my current branch
15:11 lalatenduM ndevos, ok what about  1. get the status of a (not) running glusterd
15:12 ndevos lalatenduM: well, after looking at the spec, that is actually there already, its the 'if' statement just above the killall
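
Putting the pieces together, the %post server logic being proposed has roughly the shape below; this is a sketch from the discussion (the paste itself is not reproduced here), and %_init_start is assumed to exist or to be added, as noted above:

    %post server
    # 1. remember whether glusterd was running before the update
    #    (%% escapes the percent sign inside a spec scriptlet)
    pidof -c -o %%PPID -x glusterd &> /dev/null
    glusterd_was_running=$?
    # 2. stop the old daemon and wait for it to exit (killall comes from psmisc)
    killall --wait glusterd &> /dev/null
    # 3. one-shot config upgrade; this glusterd exits on its own when done
    glusterd --xlator-option *.upgrade=on -N
    # 4. start glusterd again only if step 1 found it running
    if [ $glusterd_was_running -eq 0 ]; then
        %_init_start glusterd
    fi
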
15:13 lalatenduM ndevos, checking the spec file
15:13 lalatenduM ndevos, right
15:14 lalatenduM ndevos, once you send a patch I can test this with a scratch build
15:14 kkeithley_ or something that checks using what's in /var/run/glusterd.pid?
15:14 ndevos lalatenduM: oh, I dont intend to send the patch, I'd like to pass that on to you :D
15:14 kkeithley_ using pid from /var/run/glusterd.pid
15:15 ndevos kkeithley_: that is possible too, but there can be a stale pid in there and you dont want to kill the wrong process
15:15 kkeithley_ I think that's me actually
15:15 lalatenduM ndevos, I would suggest you send the patch as you have come up with the solution, maybe if the patch gets any review comments I can work on them and resend :)
15:16 ndevos lalatenduM: kkeithley_ is assigned to the bug :D
15:16 lalatenduM :)
15:16 lalatenduM kkeithley, so it is you now :D
15:17 kkeithley_ yeah, I'll give Niels credit in the commit, how's that?
15:18 ndevos kkeithley_: oh, I'd love that :P
15:19 kkeithley_ heh
15:20 lalatenduM kkeithley, you can use as a tester :) for the change
15:20 kkeithley_ ???
15:21 * kkeithley_ can't parse that
15:21 lalatenduM s/use/use me/
15:21 lalatenduM :)
15:21 nishanth joined #gluster-dev
15:21 kkeithley_ :-)
15:26 JustinClift Hmmm, there seem to be a lot of failures on the regression test slave for ec over the last week or so
15:26 JustinClift Spurious failures that is
15:27 * JustinClift is pretty sure that's not a good sign
15:29 jobewan joined #gluster-dev
15:29 hagarth joined #gluster-dev
15:34 kkeithley_ lalatenduM: there you go
15:34 lalatenduM kkeithley_, the recent commit to the fedora dist git shows you have removed the change log for " - add psmisc for -server" ?
15:35 lalatenduM kkeithley_, I thought you are going to send a patch to upstream master?
15:35 kkeithley_ %changelog
15:35 kkeithley_ * Thu Sep 25 2014  Kaleb S. KEITHLEY <kkeithle[at]redhat.com>
15:35 kkeithley_ - add psmisc for -server
15:35 kkeithley_ - add smarter logic to restart glusterd in %%post server
15:35 kkeithley_ is what's in the file I committed
15:37 lalatenduM kkeithley_, np, I just read the mail and misinterpreted it
15:38 kkeithley_ yep, no worries
15:39 kkeithley_ that's what I figured
15:41 misc JustinClift: FYI : "salt * pkg.install bash refresh=True"
15:42 misc JustinClift: just how I patched 3 servers with salt :p
15:44 misc JustinClift: i copied your key on salt-master, so ssh root@salt-master.gluster.org should give the access
15:50 JustinClift misc: Thx.  I'll take a look at it later.  Buried under other things atm.
15:52 xavih joined #gluster-dev
16:05 nishanth joined #gluster-dev
16:05 JustinClift misc: When using salt like that, how does it show you when something goes wrong?
16:05 JustinClift eg when one of the boxes doesn't have the update available
16:05 misc JustinClift: nope
16:06 JustinClift So, needs manual verification afterwards too
16:06 JustinClift np
16:06 JustinClift Good to be aware of :)
16:06 misc because you ask for the latest
16:06 misc if there is no update, then the latest is the one you asked for
16:06 JustinClift Sure
16:06 misc I think you can ask for a precise version
16:06 misc or just use cmd.run and rpm -q bash
16:07 JustinClift Sure
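
For reference, the push-and-verify that misc describes can be done from the salt master along these lines (illustrative; minion targeting and package versions will differ):

    # push the updated bash package to every minion
    salt '*' pkg.install bash refresh=True
    # then confirm what actually got installed on each box
    salt '*' cmd.run 'rpm -q bash'
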
16:07 JustinClift The download.gluster.org box is refusing the show the update still
16:07 JustinClift Even though it's RHEL 6.5, and I've cleared the yum cache
16:07 * JustinClift hasn't investigated in depth
16:08 JustinClift Still doing stuff on other boxes
16:08 JustinClift And hoping you can look into it :)
16:08 misc sure, but not access it seems :(
16:09 * JustinClift looks
16:10 JustinClift Weird, you don't have an account on there yet
16:10 * JustinClift creates one
16:16 JustinClift misc: Check your email for details, and please verify it works. :)
16:16 misc JustinClift: yeah
16:16 misc it work
16:16 JustinClift :)
16:17 misc I might propose that we standardize on the server name somehow
16:19 ndevos kkeithley_: oh, and you may want to stack your change on top of http://review.gluster.org/8844
16:19 kkeithley_ see what I mean about feeling stupid?
16:20 kkeithley_ and how do I do that?
16:20 JustinClift misc: Standardised server names sounds like a win
16:20 kkeithley_ and which change are you referring to?
16:20 JustinClift We have things like "supercolony" "supercolony-gen1" (different servers), et
16:21 JustinClift It's weird
16:21 JustinClift etc
16:22 ndevos kkeithley_: something like "git fetch ssh://kkeithle@git.gluster.org/glusterfs refs/changes/44/8844/2 && git rebase FETCH_HEAD" while on your branch with that change
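
That is, to stack a local change on top of patch set 2 of change 8844 and resend it for review; the push uses the usual Gerrit refs/for/<branch> convention, and the target branch here is an assumption:

    # fetch patch set 2 of change 8844 and rebase the local branch onto it
    git fetch ssh://kkeithle@git.gluster.org/glusterfs refs/changes/44/8844/2
    git rebase FETCH_HEAD
    # push the stacked change back to Gerrit for review
    git push ssh://kkeithle@git.gluster.org/glusterfs HEAD:refs/for/master
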
16:25 kkeithley_ layer the synching dist-git changes?
16:25 xavih_ joined #gluster-dev
16:27 misc JustinClift: "historical" :)
16:32 lalatenduM kkeithley, I had done a scratch build with your changes in fedora dist git, just saw that your patch is changing in upstream :)
16:34 lalatenduM regarding the glusterd restart issue
16:34 Gaurav__ joined #gluster-dev
16:39 lalatenduM kkeithley_, ohh you have again committed into fedora dist git
16:39 lalatenduM cool
16:41 JustinClift The regression slaves are sure being put to a lot of use recently.  Glad we added more the other day
16:41 JoeJulian kkeithley_: You didn't completely misremember. The --wait that I had suggested was bug 1010068 though.
16:41 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1010068 unspecified, unspecified, ---, kparthas, NEW , enhancement: Add --wait switch to cause glusterd to stay in the foreground until child services are started
16:44 xavih joined #gluster-dev
16:45 kkeithley_ lalatenduM: yeah, sorry about the churn. Trying to keep too many balls in the air.
16:46 lalatenduM kkeithley, I understand
16:47 kkeithley_ now what's wrong with github, trying to create a clone in github of the ganesha repo that's on github. sigh
16:49 JustinClift status.github.com isn't showing problems
16:49 JustinClift ^ data point
17:03 ndevos kkeithley_: if you have more compile warnings that you can not fix today, let me know and I'll look at them tomorrow morning
17:03 kkeithley_ dunno, it just says there was an error pushing commits to GitHub. doesn't tell me what the error was. oh well, off to the dentist...
17:04 kkeithley_ they're all fixed. I just need to get them into a repo/branch where Frank can pull from
17:05 kkeithley_ gotta run
17:07 ndevos kkeithley_: if you cant get them pushed, you can mail the patches to me and I'll put them in a branch in my github repo
17:08 ndevos and have fun at the dentist!
17:11 hagarth hchiramm_call: you seem to have been on a call for a long while now ;)
17:13 JustinClift Nothing seems to have broken by turning off that old gluster-mirror2 server
17:13 * JustinClift is going to nuke it shortly
17:13 JustinClift misc: FYI ^
17:15 hagarth JustinClift: the debian one?
17:15 JustinClift Yeah
17:15 hagarth JustinClift: cool, one less thing to be bothered about!
17:15 JustinClift :)
17:16 misc JustinClift: ok for me
17:22 * JustinClift waves a magic wand at gluster-mirror2, quietly mouthing the ancient incantation "Die Fukker"
17:22 lalatenduM ndevos, kkeithley good news changes in fedora dist git i.e. http://review.gluster.org/#/c/8855 , fixes glusterd restart issue :), tested with scratch build http://koji.fedoraproject.org/koji/taskinfo?taskID=7694323
17:22 JustinClift k, it's gone
17:22 lalatenduM s/news/news,/
17:23 JustinClift lalatenduM: That'll go into 3.6.0 beta3?
17:25 lalatenduM JustinClift, yes, I would recommend it
17:25 hagarth JustinClift++
17:25 glusterbot hagarth: JustinClift's karma is now 25
17:25 an joined #gluster-dev
17:25 lalatenduM JustinClift, yeah it will go in beta3, changes are present in fedora dist git :)
17:26 JustinClift :)
17:26 lalatenduM kkeithley_, has committed them
17:27 lalatenduM hagarth, we need the ldconfig patch too :)
17:28 JustinClift misc: Have you had a chance to look at download.gluster.org yet, or figure out why it's not seeing the updated bash rpm?
17:29 hagarth lalatenduM: merging it now :)
17:29 lalatenduM hagarth++
17:29 glusterbot lalatenduM: hagarth's karma is now 11
17:39 misc JustinClift: I think that's a CDN issue
17:39 JustinClift misc: Sounds like it
17:40 misc because all seems correct
17:40 JustinClift misc: Should we just wait for a bit, or should we mention it on tech-list or something?
17:41 JustinClift kkeithley_: Your 8859 CR failed regression on pump.t.  It's just been retriggered to try again
17:42 * JustinClift has seen a few pump.t failures today, but not looked closely
17:42 misc JustinClift: I was looking on internal security chan as it was mentioned
17:42 misc JustinClift: one solution could be to download and install
17:44 JustinClift As long as that doesn't cause future problems, lets do that
17:45 JustinClift misc: You good to get that done?
17:47 misc JustinClift: yeah
17:47 JustinClift misc: Tx :)
17:48 misc but after that, I need to go fetch some food
17:48 JustinClift misc: ++
17:48 glusterbot JustinClift: misc's karma is now 7
17:48 JustinClift misc++
17:48 glusterbot JustinClift: misc's karma is now 8
17:48 JustinClift misc++
17:48 glusterbot JustinClift: misc's karma is now 9
17:48 JustinClift misc++
17:48 glusterbot JustinClift: misc's karma is now 10
17:48 JustinClift Interesting.  No rate limit
17:52 JustinClift That mgmt locks v3 spurious error is annoying
18:02 hagarth JustinClift: +1
18:02 misc JustinClift: updated
18:02 JustinClift misc: Tx
18:02 * JustinClift just sent out update
18:02 misc ( with some hurdles, since my other server also suffers from the same cdn issue )
18:03 JustinClift Awesome CDN we have there. ;)
18:04 JustinClift "RH announces critical remote security exploit.  RH CDN servers refuse to provide the update for it"
18:04 misc it is maybe the older cdn, not the new one
18:04 JustinClift Meanwhile, the CentOS mirrors had it yesterday.  Someone, somewhere is probably doing stats about that...
18:05 JustinClift old vs new CDN.  From customer perspective they wouldn't care. ;)
18:05 misc but i am a bit lost when it come to the difference between rhn and subscription manager
18:05 * JustinClift prefers subscription manager
18:05 misc JustinClift: well, if we said "use the new one to solve your problem" and people keep using the old one ...
18:05 JustinClift Ahhh
18:08 misc but i didn't follow :)
18:13 hchiramm hagarth, thanks for the notification :)
18:13 hagarth hchiramm: :)
18:14 hchiramm :)
18:16 * hagarth notices another significant memory leak with glustershd in beta2
18:16 hchiramm :(
18:17 hagarth hchiramm: testing a patch now
18:17 hchiramm oh..ok..
18:17 hchiramm kkeithley, u may edit (minor edit) the commit message in http://review.gluster.org/#/c/8857/ and retrigger the build  verification.
18:21 JustinClift We have way too many spurious regression failures happening
18:21 JustinClift It's like the bad old days a few months ago, where we needed to hunt them down and figure out wtf is going on
18:23 * JustinClift bets if he runs 20 regression runs on master overnight, 1/2 would fail spuriously
18:25 JustinClift k, food time
18:51 an joined #gluster-dev
18:58 lalatenduM joined #gluster-dev
19:16 xavih joined #gluster-dev
20:00 hchiramm joined #gluster-dev
20:39 xavih joined #gluster-dev
20:48 xavih joined #gluster-dev
21:34 xavih joined #gluster-dev
22:24 dlambrig_ left #gluster-dev
22:33 shyam joined #gluster-dev
