
IRC log for #gluster-dev, 2016-01-15


All times shown according to UTC.

Time Nick Message
00:04 zhangjn joined #gluster-dev
00:04 zhangjn joined #gluster-dev
00:17 zhangjn joined #gluster-dev
00:53 zhangjn joined #gluster-dev
00:56 shyam joined #gluster-dev
01:02 zhangjn joined #gluster-dev
01:07 EinstCrazy joined #gluster-dev
01:36 zhangjn joined #gluster-dev
01:40 kotreshhr joined #gluster-dev
01:46 zhangjn joined #gluster-dev
01:53 zhangjn joined #gluster-dev
02:05 shyam joined #gluster-dev
02:08 zhangjn joined #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:54 raghug joined #gluster-dev
03:18 gem joined #gluster-dev
04:17 zhangjn joined #gluster-dev
04:35 zhangjn joined #gluster-dev
05:11 sakshi joined #gluster-dev
05:35 zhangjn joined #gluster-dev
05:37 zhangjn joined #gluster-dev
05:38 gem joined #gluster-dev
05:39 zhangjn_ joined #gluster-dev
05:40 sankarshan_ joined #gluster-dev
05:41 zhangjn_ joined #gluster-dev
05:43 zhangjn joined #gluster-dev
05:47 zhangjn joined #gluster-dev
05:57 mchangir joined #gluster-dev
06:13 vimal joined #gluster-dev
06:17 aravindavk joined #gluster-dev
06:35 gem joined #gluster-dev
06:37 ashiq joined #gluster-dev
07:19 sankarshan_ joined #gluster-dev
07:40 sankarshan_ joined #gluster-dev
07:41 sankarshan joined #gluster-dev
07:41 sankarshan joined #gluster-dev
07:48 kotreshhr joined #gluster-dev
07:50 primusinterpares joined #gluster-dev
08:03 vimal joined #gluster-dev
08:49 skoduri joined #gluster-dev
09:04 zhangjn joined #gluster-dev
09:19 obnox kind of nasty. still lots of intermittent test failures due to coredumps
09:24 csim on HEAD ?
09:36 xavih I'm seeing some core dumps caused by a statedump being processed before fuse_init() and fuse_graph_sync() are executed
09:37 xavih the statedump functions assume that priv->active_subvol is set, which is false if fuse_graph_sync() has not been called
09:40 kotreshhr left #gluster-dev
09:40 obnox csim: yes
09:41 obnox csim: trying to get a patch for the ctdb hook script through regressions since many days now
09:41 obnox http://review.gluster.org/#/c/13170/
09:41 obnox here is the build with the core:
09:41 obnox https://build.gluster.org/job/rackspace-regression-2GB-triggered/17612/consoleFull
09:41 obnox core seems to come from  tests/performance/quick-read.t
09:44 obnox and here tests/basic/ec/ec-7-3.t failed but there was also a core as it seems:
09:44 obnox https://build.gluster.org/job/rackspace-regression-2GB-triggered/17610/consoleFull
09:44 obnox from same review request
09:45 obnox and so on
09:48 csim not sure what i can do :/
09:48 csim but I am surprised that something got merged that did trigger a core
09:51 obnox csim: yeah. spurious / intermittent failures
09:53 csim obnox: wouldn't something like electricfence or clang help ?
10:02 obnox csim: dunno. after all, we _do_ get cores sometimes, so these could be analyzed. Maybe I don't understand the suggestion really.. :-)
10:04 csim obnox: well, maybe I am wrong, but wouldn't electricfence detect errors sooner, so segfault more often when running the tests ?
10:07 obnox not sure
10:08 * obnox does not know electric fence
10:08 EinstCrazy joined #gluster-dev
10:09 obnox csim: what would help would be (maybe it already exists), if one would get backtraces in the logs when a core occurs, and one could see these logs in the console output in jenkins
10:09 obnox because if these things are spurious failures, it might be not so easy to repro
10:09 csim http://elinux.org/Electric_Fence
10:09 csim obnox: yeah, I think this could be done with some abrt magic, but only on linux
10:09 obnox csim: yeah, i googled. hence the answer above.
10:10 obnox csim: better than nothing ...
10:11 obnox in samba code we do these things (not integrated into a jenkins) but we have a panic function to dump core and print backtraces. this is triggered upon several things, e.g. segfault, etc
10:11 csim I guess a similar function could be added ?
10:11 obnox i guess so
10:12 obnox so even w/o actually having the core file, and w/o access to the very system you can do some amount of analysis
10:12 obnox also, running the code under valgrind helps a lot in spotting the segfaults
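
The samba-style panic function obnox describes boils down to something like the following (a minimal sketch using glibc's <execinfo.h>; gluster's own handler, gf_print_trace(), which comes up later in this log, is more elaborate):

    #include <execinfo.h>
    #include <signal.h>
    #include <unistd.h>

    /* Minimal panic handler: print a backtrace, then fall back to the
     * default signal action so the kernel still writes a core file. */
    static void panic_handler(int sig)
    {
        void *frames[64];
        int   count = backtrace(frames, 64);

        /* backtrace_symbols_fd() writes straight to a file descriptor
         * and avoids malloc(), unlike backtrace_symbols(). */
        backtrace_symbols_fd(frames, count, STDERR_FILENO);

        signal(sig, SIG_DFL);   /* restore the default action ...        */
        raise(sig);             /* ... and re-deliver: dumps a core file */
    }

    static void install_panic_handler(void)
    {
        signal(SIGSEGV, panic_handler);
        signal(SIGBUS,  panic_handler);
        signal(SIGABRT, panic_handler);
    }
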
10:13 obnox xavih: the problem you are referring to is in fuse_itable_dump() right?
10:13 xavih obnox: yes
10:14 xavih obnox: the other one you reported seems to be related with a bad access (probably to an unloaded module) during cleanup
10:14 obnox ok
10:15 xavih obnox: I tried to see if some recent change to fuse may have caused the problem, but I haven't seen anything interesting
10:15 xavih obnox: though I don't know much about fuse
10:16 obnox xavih: so, what would be your suggestion: fix fuse_itable_dump() to check active_subvol for NULL or fix callers?
10:16 obnox xavih: neither do I. I would rather fix fuse_itable_dump()
10:16 obnox just to be on the safe side
10:18 xavih obnox: that would fix this particular problem, but I'm not sure what really caused the problem in the first place (I have only seen this core recently). Maybe something is not working as expected and other nasty things could happen...
10:19 xavih obnox: I would like someone with deep fuse knowledge to analyze it
10:19 obnox xavih: sure
10:23 obnox xavih: ok, I looked around the code a bit. If I get it right, the statedump is triggered by a  USR1 signal to glusterfsd
10:24 xavih obnox: yes
10:24 obnox xavih: and this can be done from userspace at any time
10:24 obnox xavih: so i'd argue the callbacks in the fuse xlator should be safe to be called at any time
10:24 obnox even if active_subvol is NULL
10:25 xavih obnox: that's for sure, however this didn't happen some time ago, so I assume that the mount command didn't return until fuse was fully initialized
10:25 xavih obnox: if now mount returns too early, it's quite possible that other problems could appear
10:26 obnox ah
10:26 xavih obnox: anyway you are right. statedump functions should not assume that active_subvol is != NULL
10:27 xavih obnox: but maybe this is not a problem at all. I don't know...
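
The defensive version of fuse_itable_dump() being discussed would look roughly like this (a sketch only, assuming the usual fuse_private_t/active_subvol layout; not necessarily the patch that eventually landed):

    static int
    fuse_itable_dump(xlator_t *this)
    {
        fuse_private_t *priv = NULL;

        if (!this)
            return -1;

        priv = this->private;

        /* A statedump (SIGUSR1) can arrive before fuse_graph_sync()
         * has run, in which case active_subvol is still NULL. Skip
         * the dump instead of dereferencing it. */
        if (!priv || !priv->active_subvol || !priv->active_subvol->itable)
            return -1;

        gf_proc_dump_add_section("xlator.mount.fuse.itable");
        inode_table_dump(priv->active_subvol->itable,
                         "xlator.mount.fuse.itable");

        return 0;
    }
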
10:31 obnox xavih: well, if you saw a coredump from that codepath, then there is some problem there ...
10:31 obnox :-)
10:32 xavih obnox: I mean that 'mount' returns too early, not the segmentation fault. Of course it's a problem :P
10:32 obnox :-D
10:32 obnox ok, signal handlers and backtrace printers upon SIGSEGV do exist in gluster. (everything else would have puzzled me..)
10:48 csim so they are not enabled for ci ?
10:54 aravindavk joined #gluster-dev
11:05 ira joined #gluster-dev
11:12 obnox csim: well, i just looked in to that core. and it seems it called the print trace function
11:17 obnox csim: but I do not find anything that looks like it in the logs
11:20 csim obnox: stderr not logged ?
11:21 obnox csim: I had assumed that this func would write to a log file...
11:21 obnox maybe it does not
11:21 obnox I am still only beginning to understand how the gluster code works here
11:24 obnox uh, now I don't quite understand that:
11:24 obnox yesterday the netbsd regression succeeded on my patch, now I am getting this mail from jenkins:
11:24 obnox "Dropping Verified result, now is NetBSD-regression." ??
11:25 obnox not from jenkins but from gerrit
11:25 vmallika joined #gluster-dev
11:25 obnox http://review.gluster.org/#/c/13170/
11:25 obnox ah, did it simply change how it is presented in gerrit?
11:25 obnox it just basically changed labels
11:26 obnox from code-review (or so) to label  NetBSD-regression
11:26 obnox that makes sense
11:27 ndevos obnox: it didnt "simply change", its a manual thing
11:27 obnox ok
11:28 obnox ndevos: maybe you can tell me: when there is a coredump in a test run, and I download the build and logs from jenkins.
11:28 kotreshhr joined #gluster-dev
11:28 obnox ndevos: I see a gf_print_trace() in the backtrace
11:28 obnox ndevos: shouldn't I see the backtrace somewhere in the logs?
11:29 ndevos obnox: well, yes, but if it crashed in gf_print_trace() maybe the backtrace was not written before the process exited?
11:29 obnox ndevos: right.
11:30 obnox ndevos: but this was called upon a sig 11 handler
11:31 ndevos obnox: ah, gf_print_trace() for a statedump, not on a segmentation fault (is that gf_print_backtrace()?)
11:32 obnox print_trace, I think for a sig11 handler
11:32 obnox was my naive understanding
11:32 obnox i could have crashed _again_ in the handler of course
11:33 ndevos well, a segfault should cause the backtrace to be written to the process' log
11:33 obnox so if glusterfsd printed that in the quick-read.t test, i would assume some log in the glusterfsd log file?
11:33 obnox from the quick-read.tar ?
11:33 obnox ndevos: yeah, what would that process' log be?
11:34 ndevos well, depends, for fuse mounts it is /var/log/glusterfs/$MOUNT_PATH_WITH_DASH_INSTEAD_OF_SLASH.log
11:35 ndevos for bricks, it would be similar, but under /var/log/glusterfs/bricks/...
11:35 ndevos and glusterd has /var/log/glusterfs/etc-gluster-glusterd.log
11:35 obnox ok. but glusterfsd is printing this trace in the core
11:35 obnox so it would be in the mount log
11:35 obnox i.e. mnt-glusterfs-0.log
11:35 obnox or mnt-glusterfs-1.log
11:35 obnox in this case -- correct?
11:36 ndevos glusterfsd is only the loader of the xlators, the glusterd and glusterfsd binaries are hard- or sym-linked
11:36 ndevos as in, all binaries that are running are glusterfsd ones
11:37 ndevos but yes, if it is the mount log, mnt-glusterfs-0.log would be possible
11:37 obnox hm, ok.
11:38 obnox so. the test quick-read.t was the last test that was run. it passed. but afterwards test runs are aborted as failed because a core file was found
11:38 obnox does this mean that the core happened when running the quick-read.t test?
11:38 ndevos yes, I think so
11:38 obnox i.e. is the existence of cores checked after each test that has been run?
11:38 obnox ok
11:39 ndevos there was the occasional segfault when a process exited, but I have not seen those for quite a while
11:39 obnox so I would expect something like a backtrace in _some_ logs in the quick-read.tar that I can take from the big logs tarball
11:40 obnox but I can not find anything that looks like a backtrace to me
11:41 ndevos if I remember well, we had problems where part of the core functionality was cleaned-up already, and some pending log messages caused a segfault
11:41 ndevos this sounds pretty much like it, I think?
11:41 obnox and the core looks like this: http://paste.fedoraproject.org/311172/45285808/
11:42 obnox ndevos: I have no idea. just trying to get that hook script patch through regressions since a couple of days ;-)
11:42 ndevos do you have any other threads in that core?
11:42 obnox and i just thought I can try and see what Info I can get from the recorded data
11:42 ndevos ever heard of termbin.com ?
11:42 obnox no
11:43 ndevos termbin.com is a nice fpaste replacement, doesnt need fpaste to be installed, only netcat or nc
11:43 ndevos anyway, can you post "thread apply all bt"
11:43 obnox so what's the advantage if I have fpaste?
11:43 obnox ndevos: sure
11:44 ndevos I use throw away VMs for testing and development, they normally do not have access to a repo with fpaste (RHEL or CentOS)
11:46 obnox ndevos: ok. makes sense. but this is just logs and archived build data downloaded to my host
11:46 obnox it seems to be an intermittent failure anyways
11:47 obnox http://paste.fedoraproject.org/311173/58414145/
11:47 kotreshhr left #gluster-dev
11:48 ndevos obnox: oh, that actually is the nfs proces, so the backtrace should have landed in the nfs.log
11:48 obnox http://bholley.net/blog/2015/must-be-this-tall-to-write-multi-threaded-code.html
11:48 obnox ndevos: looking
11:50 obnox how do you tell it is 'the nfs process' ? it says it was generated by glusterfs
11:50 ndevos obnox: from the commandline
11:50 obnox from the volfile name?
11:51 obnox id
11:51 ndevos yes
11:51 obnox http://paste.fedoraproject.org/311175/85864114/
11:51 obnox that's nfs.log
11:51 obnox gluster logs are still a mystery to me, but I can't spot a backtrace in that log
11:52 ndevos obnox: thread 5 is in cleanup_and_exit, I think that only gets called on an intentional process exit
11:52 ndevos nah, there is nothing useful in that log :-/
11:53 gem joined #gluster-dev
11:53 skoduri joined #gluster-dev
11:54 obnox hm. ok.
11:54 obnox but it is slightly disquieting that there are so many intermittent coredumps in the regression runs
11:55 kotreshhr joined #gluster-dev
11:55 ndevos yeah, we were pretty good for a while, but it seems to get worse again
11:58 obnox ndevos: so for this one, it does not seem we can do any more analysis, right?
11:59 ndevos obnox: not easily no...
12:03 ndevos obnox: seems very similar to bug 1293594, and for whatever reason it is marked as a gcc bug...
12:03 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1293594 medium, unspecified, rc, jakub, NEW , Segmentation fault in '_Unwind_Backtrace ()'
12:28 EinstCrazy joined #gluster-dev
12:33 kkeithley where did the 'abandon' button go in gerrit?
12:34 obnox kkeithley: right next to the rebase and cherry-pick buttons?
12:34 obnox kkeithley: I think it vanishes once the patch has been merged
12:34 obnox (reasonably so)
12:34 kkeithley hmm
12:34 obnox it is nice and red for me
12:35 obnox kkeithley: e.g. I see one here:
12:35 obnox http://review.gluster.org/#/c/13170/
12:35 kkeithley yeah, it's because I already merged it
12:35 kkeithley false alarm
12:35 obnox :-)
12:36 ndevos you can also only abandon changes that you sent yourself, unless you're an admin mayve
12:36 ndevos *maybe
12:36 kkeithley yes, it was my own change
12:37 kkeithley gerrit is smarter than I realized
12:40 obnox ;-)
12:40 obnox ndevos: that bug 1293594 you quoted above, this looks very much the same indeed
12:40 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1293594 medium, unspecified, rc, jakub, NEW , Segmentation fault in '_Unwind_Backtrace ()'
12:41 ndevos obnox: maybe we can persuade kkeithley to fix it, looks like something he might like
12:42 obnox ndevos:  but I don't understand yet, whether really the _Unwind_Backtrace function is segfaulting _again_ ?
12:42 obnox ndevos: because it is actually called from a handler for SIG 11
12:42 skoduri joined #gluster-dev
12:42 obnox so I am kind of confused
12:44 kkeithley that's why they call me "the fixer".   Oh wait, that's something else. never mind. ;-)
12:44 mchangir joined #gluster-dev
12:44 obnox kkeithley: heh
12:45 * obnox will from now on associate that nice metallica song with kkeithley ;-)
12:47 kkeithley hmm, I'm not fully up on metallica lyrics, except maybe Enter Sandman. Which song?
12:47 obnox http://www.metrolyrics.com/fixxxer-lyrics-metallica.html
12:47 obnox https://www.youtube.com/watch?v=MnpjyNgzJ5Q
12:48 obnox Tell me, can you heal what father's done? Or cut this rope and let us run? Can you heal the broken worlds within? ...
12:49 obnox song called fixxxer
12:50 kkeithley yup, playing now
12:51 ndevos obnox: yeah, thats what I think happens, cleanup_and_exit causes a segfault, causes backtrace() to be called, causes an other segfault in _Unwind_Backtrace()
12:51 obnox nasty
12:52 ndevos well, maybe cleanup_and_exit() does not cause the segfault itself, but causes an other thread to segfault
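
Faulting a second time inside the crash handler, while _Unwind_Backtrace() walks an already damaged stack, is a known hazard: backtrace() is not async-signal-safe. A common mitigation (a sketch of the general pattern, not gluster's actual gf_print_trace()) is to disarm the handler before attempting the backtrace, so the second fault takes the default action and still leaves a core:

    #include <execinfo.h>
    #include <signal.h>
    #include <unistd.h>

    static void crash_handler(int sig)
    {
        void *frames[64];

        /* Disarm first: if backtrace()/_Unwind_Backtrace() faults on a
         * corrupted stack, the second signal now takes the default
         * action (core dump) instead of recursing into this handler. */
        signal(SIGSEGV, SIG_DFL);
        signal(SIGBUS, SIG_DFL);
        signal(SIGABRT, SIG_DFL);

        backtrace_symbols_fd(frames, backtrace(frames, 64), STDERR_FILENO);

        raise(sig);   /* default handler runs and writes the core file */
    }
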
13:06 obnox may I repeat: http://bholley.net/blog/2015/must-be-this-tall-to-write-multi-threaded-code.html
13:07 obnox that would actually be a good topic for this channel ;-)
13:11 ndevos kkeithley is the tallest of us, would he be tall enough to fix it?
13:11 kkeithley I'm only taller because my cowboy boots have 3cm heels
13:13 * kkeithley wonders what's going to happen when he tries to build nfs-ganesha in Ubuntu Launchpad, which needs libntirpc from the same PPA in Launchpad.
13:14 lpabon joined #gluster-dev
13:14 * ndevos wonders what pyxattr version he should put in the CentOS Storage SIG for glusterfs-3.7 support
13:15 kkeithley ???  what version is in epel-7?
13:15 kkeithley why not the same version as what's in epel-7?
13:16 * kkeithley wonders, if obnox is a metallica fan, is he also a GnR fan?
13:16 ndevos pyxattr-0.5.0-1.el6 in epel, pyxattr-0.5.3-6.fc24 for rawhide, not sure whats in rhel7
13:17 ndevos pyxattr isnt in epel-7, because it is part of rhel7
13:17 kkeithley it's not in epel-7 at all.
13:17 kkeithley snap
13:18 ndevos I only need to add it for CentOS-6...
13:19 kkeithley I vote for pyxattr-0.5.0-1.el6
13:19 ndevos ok, its all the same to me
13:20 kkeithley on the basis that if somehow something doesn't work, we don't need to figure out whether it's because of different versions of pyxattr.
13:21 kkeithley but whatever
13:22 kkeithley why is jenkins set to shut down?
13:22 ndevos I would like it to reload its configuration, in the hope all smoke tests can use the new Smoke label afterwards
13:23 kkeithley so.....   are we waiting for existing jobs to finish before this happens, or what?
13:28 ndevos yes, waiting for a loong time already
13:29 ndevos I'm not sure, but didnt regression tests take 2:20 hours just a few weeks ago, how did that jump 1+ hour?
13:29 kkeithley is that necessary?  Or how many slaves are still running? Can we just kill those jobs? (And then restart them?)
13:30 kkeithley I can remember when regression jobs only took an hour.   This is just one part of the reason why I want to shift focus from adding regression tests to adding unit tests.
13:30 ndevos seems 3 out of 4 should be finished within the next 5 minutes, the one netbsd seems stuck at the start for about 3+ hours now
13:31 * ndevos abandons that one
13:39 ndevos hmm, from 15secs and 1:15 and such, times are now N/A
13:51 obnox whooo! after so many tries, my patch is finally fully regression-verified!
13:51 obnox :-)
13:51 obnox ndevos: if you feel like it, would you bother to merge http://review.gluster.org/#/c/13170/ ? :-)
13:51 obnox (you did the first reviews and I addressed your requests)
13:53 ndevos obnox: done!
13:58 obnox ndevos++ thanks
13:58 glusterbot obnox: ndevos's karma is now 234
13:58 kkeithley bahhh..   centos regressions are dropping cores from an assert in cluster/ec code!   ???? huh?
13:59 ndevos oh, I think there is a patch for that already
13:59 kkeithley oh?
14:00 ndevos at least there was an email on the -devel list about it
14:00 kkeithley http://review.gluster.org/13238  ?
14:02 ndevos http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/13567/focus=13569 ?
14:03 kkeithley http://review.gluster.org/#/c/13039
14:04 kkeithley is jenkins restarted yet?
14:05 kkeithley oh, restarting now
14:06 kkeithley let's get that sucker merged and then I can rebase everything.  sigh
14:06 ndevos its merged already, in both master and 3.7
14:06 ndevos and yes, looks like Jenkins is restarting now, how long does that take?
14:09 kkeithley it is? I don't see it in the log after I rebased one of my patches?
14:09 ndevos Gerrit says it is?
14:10 kkeithley oh, maybe this commit 3882408103973eac6983c2efdd5af8b1d51f272c
14:12 kkeithley jenkins is still not back?  That doesn't seem right
14:13 ndevos still "Please wait while Jenkins is restarting..." for me too
14:13 ndevos hi csim: do you know how long it takes for Jenkins to restart itself?
14:16 csim ndevos: no idea, i did see the message as well but I was wondering why it was so long
14:16 csim I can take a look
14:17 ndevos csim: yeah, please check, it's 'only' a webui, right? java-- maybe
14:17 glusterbot ndevos: java's karma is now -2
14:17 csim ndevos: everything is bundled in 1 process
14:17 csim 1 process to rule them all :)
14:17 ndevos heh, yeah, I know its not *that* simple
14:22 kkeithley that commit was back on 21 December.  I know I've rebased my patches since then. And I'm still getting an assert?  Or you think that change is what is now causing the assert?
14:22 kkeithley and jenkins is back
14:25 shyam joined #gluster-dev
14:25 csim yeah, i did restart it
14:27 kkeithley but slaves are off-line?
14:28 csim they are started on demand, no ?
14:29 kkeithley https://build.gluster.org/view/gluster%20only/job/rackspace-regression-2GB-triggered/ job sitting in pending saying no nodes are on-line
14:30 ndevos thanks, csim
14:30 csim kkeithley: seems to have started
14:30 ndevos kkeithley: maybe just click the 'launch slave' button?
14:32 csim https://build.gluster.org/view/gluster%20only/job/smoke_test_fedora/9/console nice
14:33 ndevos csim: hah, that is something rastar and kkeithley were checking out :)
14:33 ndevos or, do you mean the French language in there?
14:33 kkeithley ???
14:33 csim ndevos: nope, the build error
14:33 csim didn't even see the french
14:34 csim guess that env var leaking on jenkins :/
14:34 kkeithley oh, yes, the FORTIFY_SOURCE, but the French too.  ;-)
14:35 ndevos uh, isnt Janson the really HUGE lecture room at FOSDEM?
14:35 mchangir joined #gluster-dev
14:37 csim can I restart again jenkins now, or that too disruptive ?
14:38 kkeithley if you're going to do it, do it and get it over with
14:39 kkeithley http://www.ulb.ac.be/ulb/orchestre/plan_janson.htm
14:40 ndevos https://fosdem.org/2016/schedule/event/gluster_roadmap/
14:42 kkeithley are you wearing a suit in that picture?
14:45 kkeithley uh, we build the smoke test with --enable-debug?
14:45 kkeithley is that new?
14:45 ndevos uh, yes, and I hope I can find an other picture... this one is pretty old
14:45 kkeithley I meant is building smoke test with --enable-debug new?
14:46 ndevos I do not think smoke testing has changed, only the gerrit labels have different names
14:46 kkeithley or you meant yes you are wearing a suit
14:46 csim maybe he is wearing a --dbug
14:47 ndevos yes the suit, no the --enable-debug
14:47 kkeithley or maybe smoke test on fedora 22 is new? Did that change? A newer compiler or newer python now doesn't like -D_FORTIFY_SOURCE ?
14:48 ndevos I dont know where the fedora smoke tester is coming from...
14:51 kkeithley csim: did you restart jenkins?
14:51 csim kkeithley: I did
14:51 csim ndevos: the one I installed a while ago
14:51 csim that's still a manual test
14:52 kkeithley okay, that's probably why my regression failed.
14:52 kkeithley I guess
14:55 ndevos which test failed?
14:57 kkeithley the two I started (regression and netbsd-regression)
14:57 kkeithley actually the netbsd looks like it failed to clone the source
14:58 csim there is it seems some github issue, people complain about it here
14:59 csim but maybe it clone from gerrit ?
14:59 kkeithley how do I make jenkins clone from gerrit instead of github?
15:00 csim no idea (not even sure if that's not already the case :/)
15:00 kkeithley or are you proposing a general config change for where the source is cloned?
15:00 csim no, I am just wondering if that could be the problem
15:01 csim but not sure where does it get the source
15:01 kkeithley looking....
15:01 kkeithley Cloning repository git://review.gluster.org/glusterfs.git
15:01 kkeithley ERROR: Error fetching remote repo 'origin'
15:02 kkeithley https://build.gluster.org/view/gluster%20only/job/rackspace-netbsd7-regression-triggered/13449/console
15:02 csim mhhh ok, let's take a look
15:05 csim stderr: fatal: Couldn't find remote ref efs/changes/31/13031/10
15:05 csim was it removed or changed ?
15:06 kkeithley haha, no, that's my fault.
15:06 csim we will still blame jenkins and gerrit
15:06 kkeithley it was supposed to be refs/changes/31/13031/10, not efs/changes/31/13031/10
15:06 kkeithley (10:05:27 AM) csim: was it removed or changed ?
15:07 csim I would count that as jenkins fault, the interface should be better :)
15:07 kkeithley pffft.  fat fingers
15:07 kkeithley cut-and-paste error
15:09 kkeithley one of these days computers will guess (accurately) that when I enter efs/changes/31/13031 that I meant refs/changes/31/13031/10, and when I type suod, I meant sudo.
15:20 nbalacha joined #gluster-dev
15:55 hagarth joined #gluster-dev
16:35 shaunm joined #gluster-dev
16:38 zhangjn joined #gluster-dev
16:40 josferna joined #gluster-dev
17:07 josferna joined #gluster-dev
18:00 josferna joined #gluster-dev
18:48 purpleidea joined #gluster-dev
19:10 purpleidea joined #gluster-dev
