
IRC log for #gluster, 2013-03-08


All times shown according to UTC.

Time Nick Message
00:01 en0x i made some performance tweaks to my volume. is there a way to reset them?
00:02 en0x ok found it
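
(For anyone reading the log later: the tweak-reset en0x found is presumably the gluster CLI's "volume reset" subcommand. A minimal sketch, assuming the 3.3-era syntax and a volume named "myvol"; the volume and option names are only illustrative.)

    # list options that have been changed from their defaults
    gluster volume info myvol            # see the "Options Reconfigured" section
    # clear one reconfigured option
    gluster volume reset myvol performance.cache-size
    # or clear all reconfigured options on the volume
    gluster volume reset myvol
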
00:22 y4m4 joined #gluster
00:30 lge joined #gluster
00:33 dowillia joined #gluster
00:39 tg2 joined #gluster
00:39 Nagilum joined #gluster
00:39 purpleidea joined #gluster
00:39 ndevos joined #gluster
00:39 neofob joined #gluster
00:39 jiffe1 joined #gluster
00:39 kkeithley joined #gluster
00:39 a2 joined #gluster
00:39 bfoster joined #gluster
00:45 yinyin joined #gluster
00:48 Shdwdrgn joined #gluster
00:55 akshay joined #gluster
01:09 yinyin joined #gluster
01:09 jdarcy joined #gluster
01:28 clag_ joined #gluster
01:34 akshay joined #gluster
01:39 glusterbot New news from newglusterbugs: [Bug 919286] Efficiency of system calls by posix translator needs review <http://goo.gl/vvY2U>
01:46 _pol_ joined #gluster
02:06 _pol joined #gluster
02:28 hagarth joined #gluster
02:38 dowillia joined #gluster
02:40 kevein joined #gluster
02:57 jdarcy joined #gluster
03:02 pipopopo_ joined #gluster
03:04 bulde joined #gluster
03:06 shireesh joined #gluster
03:08 pipopopo joined #gluster
03:10 disarone joined #gluster
03:10 nemish joined #gluster
03:25 dblack joined #gluster
03:26 dowillia joined #gluster
03:43 lala joined #gluster
03:43 yinyin joined #gluster
03:45 pranithk joined #gluster
03:46 pranithk tjstansell: ping
03:49 pranithk JoeJulian: ping
03:51 _pol_ joined #gluster
03:54 _pol joined #gluster
03:56 bala joined #gluster
03:57 JoeJulian Hello
03:57 glusterbot JoeJulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:58 JoeJulian Gah! I got myself!
04:02 JoeJulian pranithk: I haven't tried duplicating the results myself yet.
04:03 pranithk JoeJulian: Which bug are you talking about?
04:04 JoeJulian The timestamp one
04:04 pranithk JoeJulian: Cool
04:05 pranithk JoeJulian: I just updated the bug with the test-framework scripts I used
04:05 JoeJulian Is the framework in the source tree? I haven't tried using the test tools yet.
04:06 pranithk JoeJulian: It would be great if we can come up with a test case that hits the issue on one machine
04:06 bala joined #gluster
04:06 pranithk JoeJulian: yes the one in the source tree
04:06 vshankar joined #gluster
04:07 JoeJulian Ok, I'll try to break it. :)
04:08 pranithk JoeJulian: If we come up with a case that uses just one machine, we can add it to the regression suite and we will never break it again ;-)
04:08 pranithk pranithk: Every commit is tested against these scripts
04:08 JoeJulian Now if I could just figure out how the root gfid symlink gets created as a directory sometimes....
04:09 pranithk JoeJulian: Yes, I am dying to figure that one out. It is unbelievable I tell you
04:09 shylesh joined #gluster
04:10 JoeJulian Unfortunately, for everyone it's happened to, it happened on their production servers, so they were more interested in fixing it than figuring out why it happened.
04:10 pranithk JoeJulian: I get it..
04:14 pranithk JoeJulian: I am still making some more improvements to the scripts for the timestamps bug. I will post the final results to the bug once I am done
04:15 sripathi joined #gluster
04:17 raghu joined #gluster
04:18 pai joined #gluster
04:23 JoeJulian pranithk: btw... the trace logs /look/ like they're doing the right thing, don't they?
04:23 tjstansell hi folks.
04:24 pranithk pranithk: hey todd!
04:24 JoeJulian o/
04:24 tjstansell saw your bug update ...
04:24 pranithk tjstansell: There are 2 bugs in the script... I just fixed them...
04:24 pranithk tjstansell: I am gonna post them in 5-10 minutes...
04:24 tjstansell so were you able to reproduce my results?
04:25 pranithk tjstansell: Do you think you can re-create the problem in local setup with replica count 2?
04:25 pranithk tjstansell: unfortunately no... :-(
04:25 tjstansell of course. i haven't been able to *not* reproduce these results. :)
04:25 JoeJulian Me neither yet.
04:26 pranithk tjstansell: Do one favour for me. Try to recreate the issue with replica count 2 on a local machine and give me the list of commands you executed... let me see if I can recreate the bug...
04:26 vpshastry joined #gluster
04:27 tjstansell ok.
04:27 JoeJulian Ok, dinner's arrived. I'll come back to this after everyone goes to bed.
04:27 pranithk tjstansell: The reason I am asking you to re-create it on a single machine is we have this test framework which keeps running test cases on a single machine. We can add this test case to that and this bug will never be introduced again
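
(A rough idea of what such a test case looks like in the source tree's tests/ directory: the .t scripts source tests/include.rc, which provides the TEST/EXPECT helpers and the $CLI, $H0, $B0, $V0, $M0 variables. The body below is a generic skeleton, not the actual script pranithk attached to the bug.)

    #!/bin/bash
    . $(dirname $0)/../include.rc

    cleanup;

    TEST glusterd
    TEST pidof glusterd
    # two-brick replica volume on this one machine
    TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}0 $H0:$B0/${V0}1
    TEST $CLI volume start $V0
    TEST glusterfs --volfile-server=$H0 --volfile-id=$V0 $M0
    # ... steps that reproduce the bug go here, with EXPECT checks on the result ...
    cleanup;
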
04:27 rastar joined #gluster
04:27 tjstansell yep. that would be great.
04:28 pranithk tjstansell: I will be waiting for your update on the bug then. Have a happy dinner :-)
04:28 tjstansell i only did it with 3 nodes to try to see if the behavior really was related to the first brick or if it was more random.
04:28 tjstansell 3 bricks, rather.
04:29 pranithk tjstansell: Okay. I tried to do it with 2 bricks. There are some bugs with replica 3 on 3.3.x releases which Jeff fixed recently. So try it with 2 first, then we shall go to replica 3 if we can't recreate it with replica 2
04:29 tjstansell oh, and i happen to currently be using 3.4.0alpha.
04:30 tjstansell just because that was the last version i tried.
04:30 pranithk tjstansell: I need to check if that bug is fixed in there...
04:31 tjstansell is there a clean way to actually erase volumes that will wipe the bricks too?
04:31 tjstansell seems to require manually cleaning the bricks.  is that right?
04:31 pranithk tjstansell: yes that is right
04:32 tjstansell ok.
04:32 tjstansell no worries.
04:32 pranithk tjstansell: the bug of replica 3 is fixed in alpha release.
04:32 pranithk tjstansell: It is fine even if you give the test case with replica 3
04:33 pranithk just give 'history' command output which contains the command you executed. Even the execution log is fine
04:33 pranithk tjstansell: commands*
04:34 tjstansell well, that's basically what I had posted in comment 4 of the bug
04:34 tjstansell oh, nevermind. that was using 2 hosts.
04:34 tjstansell i'll update the bug shortly with the new test.
04:35 pranithk tjstansell: cool dude, thanks a lot for your time...
04:35 tjstansell np. i'm *very* interested to get this resolved.
04:36 pranithk tjstansell: You give me the test case, I will give you the fix :-)
04:36 tjstansell sounds like a deal.
04:36 pranithk tjstansell: :-)
04:37 dowillia joined #gluster
04:44 raghu joined #gluster
04:45 pranithk joined #gluster
04:50 satheesh joined #gluster
04:51 yinyin joined #gluster
04:51 tjstansell pranithk: just updated the bug
04:59 tjstansell pranithk: interestingly, i tried it again with cluster.self-heal set to off for the volume and it failed in a different way.  it created a stub file with the correct timestamp, but 0 bytes.
05:05 tjstansell hm... tried it again and it didn't restore the file and my mount didn't show the file existing.  i remounted the client mount and the file was then there and it restored the file, but the timestamp was broken.
05:13 aravindavk joined #gluster
05:13 pranithk tjstansell: ping
05:13 pranithk tjstansell: I just looked at the steps...
05:13 tjstansell i'm here
05:14 sahina joined #gluster
05:14 pranithk tjstansell: If I understand the steps correctly.. you are removing only the link file and not the actual file? is that correct?
05:14 tjstansell i'm removing them both.
05:15 pranithk tjstansell: In the steps I see only one rm -f of the gfid link file...
05:15 rastar1 joined #gluster
05:16 tjstansell both are being removed with the one rm
05:16 tjstansell there's a space in there :)
05:16 pranithk tjstansell: Ah! sorry sorry, my bad. Let me do the steps on my machine....
05:16 pranithk tjstansell: will update in a while...
05:16 tjstansell ok
05:17 tjstansell and i did the same thing with self-heal disabled as well ... and after stat'ing the file, the same behavior happened...
05:17 tjstansell i noticed you had cluster.self-heal-daemon off so i thought i'd try that too.
05:17 tjstansell same result for me.
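
(For context, the reproduction being traded back and forth is roughly the following; the volume name, brick path and mount point here are made up, and the exact, authoritative steps are the ones attached to the bug report.)

    # on ONE brick of the replica pair, find the file's gfid
    getfattr -n trusted.gfid -e hex /bricks/test/afile
    #   -> trusted.gfid=0x<32-hex-digit-gfid>
    # remove both the file and its .glusterfs hard link with a single rm
    rm -f /bricks/test/afile /bricks/test/.glusterfs/<first-2-hex>/<next-2-hex>/<full-gfid>
    # from a client mount, stat the file so self-heal rebuilds the missing copy
    stat /mnt/test/afile
    # compare the healed copy's timestamps against the untouched replica
    stat /bricks/test/afile
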
05:18 yinyin joined #gluster
05:19 vpshastry1 joined #gluster
05:24 satheesh1 joined #gluster
05:25 pranithk tjstansell: I am writing a test case using the steps given in the bug update.. will update with my results
05:27 dowillia joined #gluster
05:32 sripathi joined #gluster
05:32 phase5 joined #gluster
05:32 sahina joined #gluster
05:33 pai_ joined #gluster
05:44 dowillia joined #gluster
05:46 sripathi joined #gluster
05:52 ramkrsna joined #gluster
05:52 vshankar joined #gluster
05:55 vshankar joined #gluster
06:01 pranithk tjstansell: I recreated the issue :-). I will now start debugging the issue.
06:01 pranithk tjstansell: Thanks a lot for the test case. Really appreciate your time
06:04 tjstansell yay!
06:04 tjstansell then my work here is done.  and i'm heading to bed. ;)
06:04 tjstansell good luck!
06:05 Ryan_Lane joined #gluster
06:06 satheesh joined #gluster
06:25 sripathi joined #gluster
06:26 JoeJulian pranithk, tjstansell: Interestingly, the Access time doesn't change.
06:30 ngoswami joined #gluster
06:33 shireesh joined #gluster
06:34 yinyin joined #gluster
06:53 yinyin joined #gluster
06:56 pranithk JoeJulian: I hear you.. I am looking into the code path.. will have an update soon
06:58 vimal joined #gluster
06:59 puebele1 joined #gluster
07:04 sgowda joined #gluster
07:05 mooperd joined #gluster
07:08 rgustafs joined #gluster
07:09 Nevan joined #gluster
07:15 jtux joined #gluster
07:17 jag3773 joined #gluster
07:19 puebele joined #gluster
07:20 ThatGraemeGuy joined #gluster
07:29 mooperd joined #gluster
07:40 sgowda joined #gluster
07:42 sripathi joined #gluster
07:44 rotbeard joined #gluster
07:45 sripathi1 joined #gluster
07:50 rastar joined #gluster
07:57 ekuric joined #gluster
07:59 ctria joined #gluster
08:02 jtux joined #gluster
08:04 guigui joined #gluster
08:14 bala joined #gluster
08:14 tjikkun_work joined #gluster
08:16 vpshastry joined #gluster
08:16 yinyin joined #gluster
08:17 tryggvil joined #gluster
08:27 Staples84 joined #gluster
08:28 lge joined #gluster
08:30 masterzen joined #gluster
08:33 rastar joined #gluster
08:36 tjikkun_work joined #gluster
08:38 masterzen joined #gluster
08:38 sgowda joined #gluster
08:42 dobber_ joined #gluster
08:50 vpshastry joined #gluster
08:52 tryggvil joined #gluster
08:54 rastar joined #gluster
09:02 mooperd joined #gluster
09:02 ninkotech_ joined #gluster
09:05 sgowda joined #gluster
09:05 samu60 joined #gluster
09:06 samu60 hi all
09:06 samu60 is there anyone around that can help with quite a weird situation?
09:07 samu60 we manually recovered a split brain situation
09:07 samu60 in a 3.3.0 striped 8-node environment
09:07 samu60 we've checked flags with getfattr and all flags are set to 0
09:08 samu60 but in the gluster clients, we still get the split brain log
09:08 samu60 is it required to remount gluster on clients?
09:10 glusterbot New news from newglusterbugs: [Bug 919352] glusterd segfaults/core dumps on "gluster volume status ... detail" <http://goo.gl/i23kf>
09:12 samppah samu60: no it should not be necessary.. how did you recover split brain and is it possible that there is still something else that is causing it?
09:13 samu60 i removed both the file and the .gluster gfid file
09:14 samu60 i've checked replicas and both are identical (using hexdump the output is the same)
09:14 samu60 flags seem to be ok:
09:14 samu60 trusted.afr.storage-client-6=0x000000000000000000000000
09:14 samu60 trusted.afr.storage-client-7=0x000000000000000000000000
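
(The flags samu60 pasted come from the brick-side extended attributes; a minimal sketch of how to read them and, during manual recovery, zero them, assuming a brick at /bricks/storage; the path is illustrative.)

    # dump the AFR changelog xattrs for the file as stored on one brick
    getfattr -d -m trusted.afr -e hex /bricks/storage/path/to/file
    # all-zero values (as above) mean no pending operations toward either replica
    # manual split-brain recovery sometimes zeroes them explicitly on the stale copy:
    setfattr -n trusted.afr.storage-client-6 -v 0x000000000000000000000000 /bricks/storage/path/to/file
    setfattr -n trusted.afr.storage-client-7 -v 0x000000000000000000000000 /bricks/storage/path/to/file
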
09:16 manik joined #gluster
09:17 yinyin joined #gluster
09:17 yinyin joined #gluster
09:20 slacko16215 joined #gluster
09:21 tryggvil joined #gluster
09:25 rastar joined #gluster
09:28 samu60 i've checked http://review.gluster.com/#change,3583
09:28 glusterbot Title: Gerrit Code Review (at review.gluster.com)
09:28 samu60 but i'm not able to reset the split brain scenario
09:34 sripathi joined #gluster
09:41 satheesh joined #gluster
09:46 tryggvil_ joined #gluster
09:50 tryggvil joined #gluster
09:55 maxiepax joined #gluster
10:02 mooperd joined #gluster
10:12 ctria joined #gluster
10:12 aravindavk joined #gluster
10:22 torbjorn1_ semiosis: do you think your PPA debs will work on Debian Squeeze?
10:23 torbjorn1_ I don't know a lot about PPA or cross-compatibility between Debian and Ubuntu, I'm afraid
10:24 torbjorn1_ on http://www.gluster.org/download/, the preview link goes to the first alpha, not http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha2/
10:24 glusterbot <http://goo.gl/0CjjJ> (at download.gluster.org)
10:30 ctria joined #gluster
10:35 mgebbe_ joined #gluster
10:35 sgowda joined #gluster
10:43 tryggvil joined #gluster
10:53 yinyin joined #gluster
10:55 vpshastry joined #gluster
11:03 raghug joined #gluster
11:07 sgowda joined #gluster
11:10 samu60 i got the answer in the mailing list
11:10 samu60 thanks a lot for everything
11:12 andrei joined #gluster
11:20 tryggvil joined #gluster
11:28 vpshastry joined #gluster
11:36 inodb joined #gluster
11:37 sahina joined #gluster
11:39 jclift joined #gluster
11:42 vpshastry1 joined #gluster
11:44 edward1 joined #gluster
11:50 tryggvil joined #gluster
11:52 manik joined #gluster
11:53 sripathi1 joined #gluster
12:00 lala joined #gluster
12:01 Staples84 joined #gluster
12:12 nemish joined #gluster
12:20 manik joined #gluster
12:21 kevein joined #gluster
12:34 jdarcy joined #gluster
12:36 lh joined #gluster
12:36 torbjorn1_ semiosis: I'm trying to build a Debian package for Squeeze, using the alpha2 sources and our glusterfs-debian repo
12:37 torbjorn1_ semiosis: thanks for the packaging, by the way, but I'm wondering about the build-dep for cdbs (>= 0.4.90~) .. is that required ?
12:38 torbjorn1_ semiosis: latest in squeeze is 0.4.89 .. I've adjusted debian/control, and I'm now trying to build it with 0.4.89
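
(For anyone attempting the same backport, the rough shape of it is sketched below; the file names and the exact cdbs version string are assumptions, and the real packaging lives in semiosis' glusterfs-debian repo.)

    # unpack the alpha2 tarball and drop in the debian/ directory from the packaging repo
    tar xf glusterfs-3.4.0alpha2.tar.gz && cd glusterfs-3.4.0alpha2
    cp -a ../glusterfs-debian/debian .
    # relax the cdbs build-dependency so squeeze's 0.4.89 satisfies it
    sed -i 's/cdbs (>= 0.4.90~)/cdbs (>= 0.4.89)/' debian/control
    # install the remaining build-deps (mk-build-deps is in devscripts), then build unsigned
    mk-build-deps -i debian/control
    dpkg-buildpackage -us -uc
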
12:38 verdurin joined #gluster
12:39 dowillia joined #gluster
12:41 mooperd joined #gluster
12:43 tryggvil joined #gluster
12:54 tjikkun_ joined #gluster
13:02 zwu joined #gluster
13:08 sripathi joined #gluster
13:08 bennyturns joined #gluster
13:12 dowillia joined #gluster
13:15 nemish joined #gluster
13:18 hagarth joined #gluster
13:23 ProT-0-TypE joined #gluster
13:23 tryggvil joined #gluster
13:28 lpabon joined #gluster
13:30 fleducquede joined #gluster
13:31 balunasj joined #gluster
13:43 plarsen joined #gluster
13:51 Staples84 joined #gluster
13:52 nemish joined #gluster
13:55 mooperd joined #gluster
13:56 aliguori joined #gluster
13:59 mooperd joined #gluster
14:03 mooperd joined #gluster
14:03 dustint joined #gluster
14:04 mooperd_ joined #gluster
14:10 hagarth joined #gluster
14:16 ctria joined #gluster
14:23 clag_ joined #gluster
14:28 hagarth joined #gluster
14:39 semiosis torbjorn1_: i think you'll be fine with an older cdbs
14:41 torbjorn1_ semiosis: yup, they seem to have done the job
14:42 lalatenduM joined #gluster
14:44 mooperd joined #gluster
14:53 semiosis JoeJulian: intel integrated graphics FTW.
14:55 semiosis torbjorn1_: great
14:57 theguidry joined #gluster
14:59 mooperd joined #gluster
15:05 Staples84 joined #gluster
15:05 jdarcy joined #gluster
15:06 stopbit joined #gluster
15:16 ctria joined #gluster
15:19 dowillia joined #gluster
15:25 tryggvil joined #gluster
15:33 dbruhn joined #gluster
15:34 dbruhn any issues I should look out for on RedHat 6.4?
15:34 semiosis ~ext4 | dbruhn
15:34 glusterbot dbruhn: Read about the ext4 problem at http://goo.gl/PEBQU
15:34 dbruhn Was planning on xfs
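
(xfs is the usual recommendation for bricks, and it sidesteps the ext4 readdir issue linked above; the typical brick-formatting commands from the docs of that era look roughly like this, with the device and mount point made up.)

    # 512-byte inodes leave room for gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount -t xfs -o inode64,noatime /dev/sdb1 /bricks/brick1
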
15:36 tryggvil joined #gluster
15:40 Staples84 joined #gluster
15:44 jbrooks joined #gluster
15:47 chouchins joined #gluster
15:48 nemish joined #gluster
15:49 tryggvil joined #gluster
15:55 ctria joined #gluster
16:04 kr4d10 joined #gluster
16:05 flakrat joined #gluster
16:05 flakrat joined #gluster
16:11 bugs_ joined #gluster
16:14 jrossi joined #gluster
16:15 daMaestro joined #gluster
16:16 lh joined #gluster
16:16 lh joined #gluster
16:19 jrossi Given gluster volume " gluster volume create gfs-04 replica 2 host-01:/d-01/gfs-4 host-2:/d-01/gfs-04 host-1:/d-02/gfs-04 host-2:/d-02/gfs-04 " will the replication of a file ever be on the same host?   Basically, does gluster make sure each replica is never on the same host?  Thank you
16:19 phase5 joined #gluster
16:21 semiosis replication is between bricks grouped into replica sets in the order they appear
16:21 semiosis in your case, with replica 2, the first two bricks form a replication pair, the last two bricks form a different replication pair.  files will be distributed over the two replica pairs
16:22 jrossi semiosis: ok so I have it set up correctly. Thank you
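
(Restating semiosis' point as a command: with replica 2, consecutive bricks form a pair, so ordering the bricks host-by-host within each pair keeps both copies of any file on different hosts. Host and path names below are placeholders.)

    #   pair 1: hostA:/d-01/gfs-04  hostB:/d-01/gfs-04
    #   pair 2: hostA:/d-02/gfs-04  hostB:/d-02/gfs-04
    gluster volume create gfs-04 replica 2 \
        hostA:/d-01/gfs-04 hostB:/d-01/gfs-04 \
        hostA:/d-02/gfs-04 hostB:/d-02/gfs-04
    # files are distributed across the two pairs; each pair spans both hosts
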
16:22 semiosis yw
16:24 ctria joined #gluster
16:25 weplsjmas joined #gluster
16:28 phase5 joined #gluster
16:37 bala joined #gluster
16:38 _pol joined #gluster
16:54 manik joined #gluster
16:56 flrichar joined #gluster
16:57 Gilbs3 joined #gluster
17:10 clag_ joined #gluster
17:21 disarone joined #gluster
17:30 cw joined #gluster
17:30 ctria joined #gluster
17:38 jclift Anyone know who admins the review.gluster.org server?
17:43 lalatenduM joined #gluster
17:55 vshankar joined #gluster
17:56 hattenator joined #gluster
17:57 sjoeboo_ joined #gluster
17:57 Mo___ joined #gluster
18:06 hagarth joined #gluster
18:06 y4m4 joined #gluster
18:09 _pol joined #gluster
18:30 Ryan_Lane joined #gluster
18:33 hagarth joined #gluster
18:42 satheesh joined #gluster
18:45 balunasj joined #gluster
18:46 balunasj joined #gluster
18:47 balunasj joined #gluster
18:48 balunasj joined #gluster
18:52 duffrecords joined #gluster
18:54 duffrecords semiosis: yesterday you suggested enabling quorum to avoid split-brain scenarios in a replica 2 setup.  did you set it to "auto" or "fixed"?
19:05 __Bryan__ joined #gluster
19:08 mooperd joined #gluster
19:16 semiosis auto
19:18 ninkotech_ joined #gluster
19:20 chouchin_ joined #gluster
19:24 _pol joined #gluster
19:30 dbruhn__ joined #gluster
19:31 duffrecords thanks
19:32 dmojoryder joined #gluster
19:33 semiosis yw
19:37 Ryan_Lane semiosis: I was told that wouldn't help when using replica=2
19:38 Ryan_Lane does auto work for that when using replica=2?
19:38 semiosis i did a most basic test yesterday (3.3.1) and it seemed to do what i expected, which is go read-only when one brick is down, and return to read-write once all bricks were back in business
19:38 Ryan_Lane oh. cool.
19:38 * Ryan_Lane goes to enable that
19:40 robos joined #gluster
19:41 Gilbs3 Anyone see this error in geo-replication?  I'd like to see the error log for the slave, but it looks like it's having permission issues.
19:41 Gilbs3 IOError: [Errno 13] Permission denied: '/var/log/glusterfs/geo-replication-slaves/5c34ffb8-ef52-4cad-8adb-a88c07b7a3a0:gluster%3A%2F%2F127.0.0.1%3Ageo2-volume.log'
19:41 duffrecords yeah, I wasn't sure how a quorum would be possible with two servers, since you can't have a majority vote if it's one server vs. another.  but if it forces the volume read-only, I'd prefer that to a split-brain
19:44 semiosis think of it from the client's point of view... can the client see a majority of replicas?  if yes, then allow writes, if not then read-only
19:45 semiosis network partitions could affect some clients differently from others, but that rule should always apply
19:45 Gilbs3 Do I need to set anything else besides SSH (Mountbroker, IP controls) for geo-rep?
19:52 duffrecords semiosis: I guess in my case that won't help because I'm not using the GlusterFS client.  I wish I was, because in my experience it performs much better than mounting the volume via NFS.  but we're using ESXi, which unfortunately only supports NFS as an alternative to its own proprietary filesystem
19:53 semiosis duffrecords: in that case it's the POV of the nfs server machine the client is connected to
19:54 semiosis duffrecords: consider replica 3, and a network partition that disconnects one of the three servers from the other two.  any nfs clients mounting from the lone server will be read-only.  any nfs clients mounting from the pair will be r/w
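
(The option under discussion is the client-side quorum setting; a minimal sketch, with the volume name made up.)

    # go read-only whenever a client can see only a minority of a replica set
    gluster volume set myvol cluster.quorum-type auto
    # the alternative is a fixed brick count:
    #   gluster volume set myvol cluster.quorum-type fixed
    #   gluster volume set myvol cluster.quorum-count 2
    gluster volume info myvol    # the change shows up under "Options Reconfigured"
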
19:55 duffrecords that POV would be one of the Gluster boxes, then.  the volume is mounted locally via the GlusterFS client, and that same server exports it via NFS
19:55 dbruhn__ joined #gluster
19:56 Gilbs3 Ah, looks like there is something in Bug 893960, anyone have full details (not auth to access)?
19:56 glusterbot Bug http://goo.gl/XnsKF is not accessible.
19:56 duffrecords brb. lunch
19:58 Gilbs3 semiosis: any good info on that bug if you have access?
19:58 semiosis i have no access, i am a ,,(volunteer)
19:58 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
19:58 semiosis i'd expect anyone who could access it wouldn't tell us anything about it tho
19:59 Gilbs3 nuts... that's pretty much my error.
20:10 JoeJulian Gilbs3: Are you running as root? ssh'ing to root? eperm creating the log file sounds like you don't have permissions to create the log file. Since you said selinux is disabled (I assume at both ends) then not being root is the only other thing I can think of.
20:30 neofob left #gluster
20:30 rotbeard joined #gluster
20:45 Gilbs3 joejulian:  Negative, running as local account, i'll test as root.
20:46 joehoyle- joined #gluster
20:47 JoeJulian Gilbs3: It's designed to run as root. It can't set the trusted.* attributes, nor can it set the owner/group if it's not. This will cause a lot of unnecessary sync attempts since it won't match the master.
20:47 Gilbs3 JoeJulian:  Gotcha, i'll set up the ssh key and test again, thanks.
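
(The root-ssh setup for geo-replication on these releases looks roughly like the following; the key path comes from the admin guide of that era and the slave address is a placeholder, so treat it as a sketch rather than a verified recipe.)

    # on the master: generate the key geo-rep's gsyncd will use over ssh
    ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem -N ''
    # install the public half for root on the slave
    ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub root@slave.example.com
    # start and check the session against the slave volume
    gluster volume geo-replication geo-volume root@slave.example.com::geo2-volume start
    gluster volume geo-replication geo-volume root@slave.example.com::geo2-volume status
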
20:49 jclift Interesting.  The gluster code defines TMP_MAX, which is already defined system wide on OSX.  Compiling spews a _shedload_ of warnings due to it.
20:49 jclift Surely this has been discussed before?
20:50 plarsen joined #gluster
20:54 Gilbs3 MASTER               SLAVE                                              STATUS
20:54 Gilbs3 --------------------------------------------------------------------------------
20:54 Gilbs3 geo-volume           root@x.x.x.x::geo2-volume                          OK
20:55 Gilbs3 JoeJulian:  I owe you a very very big dinner
20:55 JoeJulian Oooh, after all the beers people have offered, the dinner will be nice to keep me from feeling too drunk. ;)
20:58 Gilbs3 eat, THEN drink == happy
21:02 Gilbs3 JoeJulian: One thing I did differently for CentOS: i added the PATH to gsyncd -- this was based on the Red Hat doco. is this also needed for ubuntu's root alias account?
21:03 * jclift hopes someone gets review.gluster.org working sometime in the next few days
21:03 jclift Patches to submit, and can't do it. :/
21:13 polkadotpin-up joined #gluster
21:28 lpabon joined #gluster
22:17 phase5 joined #gluster
22:20 phase5 joined #gluster
22:22 phase5 left #gluster
22:38 Gilbs3 left #gluster
22:43 Gilbs3 joined #gluster
22:43 glusterbot New news from newglusterbugs: [Bug 903396] Tracker for gluster-swift refactoring work (PDQ.2) <http://goo.gl/wiUbE>
23:10 sjoeboo_ joined #gluster
23:17 dowillia joined #gluster
23:27 Gilbs3 left #gluster
23:34 gbrand_ joined #gluster
23:51 sjoeboo_ joined #gluster
23:56 hagarth joined #gluster
23:59 yinyin joined #gluster
