
IRC log for #gluster, 2013-02-22


All times shown according to UTC.

Time Nick Message
00:00 Humble joined #gluster
00:10 sjoeboo joined #gluster
00:14 JoeJulian @ppa
00:14 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
00:30 sjoeboo joined #gluster
00:34 yinyin joined #gluster
00:44 luckybambu joined #gluster
00:44 yinyin_ joined #gluster
00:46 yinyin joined #gluster
00:47 _pol joined #gluster
00:50 yinyin joined #gluster
00:56 yinyin_ joined #gluster
01:01 yinyin joined #gluster
01:06 yinyin joined #gluster
01:09 yinyin_ joined #gluster
01:11 yinyin joined #gluster
01:12 hagarth joined #gluster
01:14 yinyin_ joined #gluster
01:14 _pol joined #gluster
01:15 _pol joined #gluster
01:19 yinyin joined #gluster
01:22 atrius joined #gluster
01:22 bala1 joined #gluster
01:24 yinyin joined #gluster
01:28 yinyin_ joined #gluster
01:29 VSpike joined #gluster
01:34 yinyin joined #gluster
01:38 yinyin_ joined #gluster
01:38 Humble joined #gluster
01:41 yinyin joined #gluster
01:44 yinyin joined #gluster
01:49 kevein joined #gluster
01:51 sjoeboo joined #gluster
01:51 yinyin joined #gluster
01:56 yinyin joined #gluster
02:00 yinyin joined #gluster
02:05 yinyin joined #gluster
02:10 yinyin joined #gluster
02:15 yinyin joined #gluster
02:19 yinyin joined #gluster
02:25 yinyin joined #gluster
02:29 yinyin joined #gluster
02:32 yinyin_ joined #gluster
02:33 yinyin joined #gluster
02:39 raven-np joined #gluster
02:40 yinyin joined #gluster
02:43 raven-np1 joined #gluster
02:45 yinyin joined #gluster
02:45 hagarth joined #gluster
02:49 _pol joined #gluster
02:49 yinyin joined #gluster
02:52 yinyin joined #gluster
02:53 vshankar joined #gluster
02:56 yinyin joined #gluster
03:01 saraa joined #gluster
03:01 saraa i'm back
03:01 saraa avati: hi again :) maybe u can help me?
03:01 yinyin joined #gluster
03:02 pipopopo_ joined #gluster
03:05 yinyin_ joined #gluster
03:09 pipopopo joined #gluster
03:13 saraa anyone know how to fix a problem with glusterfs using 100% CPU?
03:15 hagarth joined #gluster
03:15 yinyin joined #gluster
03:18 yinyin_ joined #gluster
03:18 JoeJulian Get a faster cpu?
03:18 JoeJulian saraa: Does it actually stop anything from working?
03:21 JoeJulian penglish: Check the glusterd log on the other hosts as well.
03:22 saraa hmm
03:22 saraa this is
03:22 saraa 4 cores 8 threads 2.8GHz xeon
03:23 saraa and openvz containers, the 100% is just one process maxing out a single cpu
03:23 saraa but it makes all the pages (webpages) stored on gluster work terribly
03:23 saraa :(
03:24 saraa the log doesn't have any interesting information
03:25 JoeJulian saraa: Perhaps you're running ,,(php) apps and this link might be useful:
03:25 glusterbot saraa: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
03:25 JoeJulian Still wouldn't expect 100% though.
03:25 JoeJulian Gotta run. I'll be back tomorrow.
03:26 yinyin joined #gluster
03:28 bulde joined #gluster
03:35 anmol joined #gluster
03:35 yinyin joined #gluster
03:35 saraa JoeJulian> sorry, i only now saw your information
03:36 saraa yes php files
03:36 saraa do you know how i can easily fix this problem?
03:37 saraa http://goo.gl/uDFgg - i read this, but it didn't help, or i don't know how to fix it
03:37 glusterbot Title: Optimizing web performance with GlusterFS (at goo.gl)
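A rough sketch of the usual mitigations for the php stat() issue discussed above; the volume name "webvol", the server name, and the paths are hypothetical, and the mount timeouts are standard glusterfs FUSE client options whose values depend on how often the code changes:

    ; php.ini / apc.ini -- stop APC from stat()ing every include on each request
    apc.stat = 0
    realpath_cache_size = 256k
    realpath_cache_ttl = 600

    # glusterfs client mount with longer attribute/entry caching (hypothetical volume)
    mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 \
        server1:/webvol /var/www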
03:38 yinyin joined #gluster
03:42 sripathi joined #gluster
03:42 yinyin joined #gluster
03:47 yinyin joined #gluster
03:51 yinyin joined #gluster
03:55 saraa ?????? :) please
04:16 sgowda joined #gluster
04:21 deepakcs joined #gluster
04:25 yinyin joined #gluster
04:31 pai joined #gluster
04:33 khanh_coltech joined #gluster
04:36 shylesh joined #gluster
04:38 vpshastry joined #gluster
04:39 bulde a2: ping
04:43 sahina joined #gluster
04:47 Humble joined #gluster
04:57 bala joined #gluster
05:05 kevein joined #gluster
05:12 yinyin joined #gluster
05:13 lala joined #gluster
05:14 flrichar joined #gluster
05:14 sas joined #gluster
05:17 yinyin_ joined #gluster
05:18 lala_ joined #gluster
05:22 rastar joined #gluster
05:25 pai_ joined #gluster
05:29 avati bulde: pong
05:47 bala joined #gluster
05:51 overclk joined #gluster
06:06 raghu joined #gluster
06:14 rwheeler joined #gluster
06:17 hagarth joined #gluster
06:25 mohankumar joined #gluster
06:25 _pol joined #gluster
06:37 ramkrsna joined #gluster
06:44 bulde avati: still around?
06:51 hagarth joined #gluster
06:52 timothy joined #gluster
06:55 deepakcs joined #gluster
06:56 mooperd joined #gluster
07:01 Nevan joined #gluster
07:01 ctria joined #gluster
07:08 sripathi1 joined #gluster
07:10 Humble joined #gluster
07:17 guigui1 joined #gluster
07:18 ngoswami joined #gluster
07:19 rgustafs joined #gluster
07:20 ThatGraemeGuy joined #gluster
07:21 jtux joined #gluster
07:31 andreask joined #gluster
07:32 Humble joined #gluster
07:33 gbrand_ joined #gluster
07:39 sripathi joined #gluster
07:41 dobber_ joined #gluster
07:48 glusterbot New news from resolvedglusterbugs: [Bug 764966] gerrit integration fixes <http://goo.gl/AZDsh> || [Bug 765473] [glusterfs-3.2.5qa1] glusterfs client process crashed <http://goo.gl/4fZUW>
07:50 ekuric joined #gluster
07:55 puebele joined #gluster
07:56 jtux joined #gluster
08:01 puebele2 joined #gluster
08:05 rwheeler joined #gluster
08:17 deepakcs What does "VM image storage improvements – not related to QEMU integration; related to performance improvements" mean  ?
08:17 deepakcs I see that @ http://www.gluster.org/2013/02/new-release-glusterfs-3-4alpha/
08:17 glusterbot <http://goo.gl/GuQav> (at www.gluster.org)
08:18 mooperd joined #gluster
08:20 DWSR joined #gluster
08:26 samppah deepakcs: i'd like to hear about that also
08:27 samppah i've noticed that fuse glusterfs 3.4 seems somewhat faster
08:27 hagarth joined #gluster
08:29 rotbeard joined #gluster
08:32 tjikkun_work joined #gluster
08:37 gbrand_ joined #gluster
08:38 gbrand_ joined #gluster
08:39 gbrand__ joined #gluster
08:39 gbrand_ joined #gluster
08:40 ctria joined #gluster
08:40 rastar joined #gluster
08:44 Humble joined #gluster
08:50 GabrieleV joined #gluster
08:51 WildPikachu joined #gluster
08:56 mooperd joined #gluster
09:07 sripathi joined #gluster
09:21 fleducquede joined #gluster
09:33 tomsve joined #gluster
09:41 NuxRo any of you guys use glusterfs with Onapp? any pointers, gotchas etc?
09:49 glusterbot New news from resolvedglusterbugs: [Bug 871987] Split-brain logging is confusing <http://goo.gl/DG74F>
09:51 shireesh joined #gluster
09:52 joeto joined #gluster
09:56 Staples84 joined #gluster
10:27 hagarth joined #gluster
10:41 duerF joined #gluster
10:49 glusterbot New news from resolvedglusterbugs: [Bug 765514] [Red Hat SSA-3.2.4] when one of the replicate pair goes down and comes back up dbench fails <http://goo.gl/TqNNR> || [Bug 785568] Marking source with lowest UID fails <http://goo.gl/Q1J0R>
10:55 bulde joined #gluster
10:55 sahina joined #gluster
11:01 stigchri_ joined #gluster
11:17 sjoeboo joined #gluster
11:19 glusterbot New news from resolvedglusterbugs: [Bug 800803] All clients are marked fools due to "No space left on device" <http://goo.gl/YVImH> || [Bug 800291] [347b4d48cba3cc1e00d40ec50e62497d65a27c84] - crash in inodelk when .glusterfs is removed from the back-end. <http://goo.gl/hksJr>
11:24 sjoeboo joined #gluster
11:24 shireesh joined #gluster
11:27 sgowda joined #gluster
11:29 puebele joined #gluster
11:38 ThatGraemeGuy noob question, a couple of days ago semiosis left me a message via glusterbot, i got the messages when i came online. how can i do that?
11:39 rastar left #gluster
11:49 glusterbot New news from resolvedglusterbugs: [Bug 836101] Reoccuring unhealable split-brain <http://goo.gl/FRmIs>
11:50 puebele joined #gluster
11:50 ndevos @later tell ThatGraemeGuy glusterbot creates private messages when you write "@later tell $USER ..."
11:50 glusterbot ndevos: The operation succeeded.
11:53 ThatGraemeGuy ndevos, thanks! :)
11:55 ndevos :)
11:58 sjoeboo joined #gluster
12:00 rastar joined #gluster
12:00 hagarth joined #gluster
12:02 ctria joined #gluster
12:04 gbrand__ joined #gluster
12:05 duerF joined #gluster
12:07 gbrand___ joined #gluster
12:08 theron joined #gluster
12:15 sjoeboo joined #gluster
12:28 andreask joined #gluster
12:29 bulde joined #gluster
12:33 raven-np joined #gluster
12:43 mooperd joined #gluster
13:07 aliguori joined #gluster
13:25 plarsen joined #gluster
13:27 sjoeboo joined #gluster
13:28 bala joined #gluster
13:32 ThatGraemeGuy @later tell semiosis hey, I see that you managed to get mounting to work by using '127.0.0.1' in fstab, unfortunately that didn't help much for me, as I still get a DNS resolution error (!!?). I've used your other idea and put the sleep in the upstart config, but even that seems to fail around 20% of the time, with an 8s sleep :(
13:32 glusterbot ThatGraemeGuy: The operation succeeded.
13:32 semiosis ThatGraemeGuy: hey i'm just here for a few mins before i embark on a road trip
13:33 semiosis ThatGraemeGuy: i pushed an updated package, precise8, to the ppa wednesday
13:33 ThatGraemeGuy i installed that earlier, afraid it didn't help
13:33 semiosis ThatGraemeGuy: it holds glusterd from starting until net-device-up IF=lo
13:33 semiosis wow thats strange
13:33 ThatGraemeGuy i see you mention your tests were on 12.04 and 12.04.1.... my boxes are 12.04.2, not sure if there may be additional changes that affect me that you aren't affected by
13:34 overclk joined #gluster
13:34 semiosis i haven't yet got to the bottom of why that DNS lookup fails (or even occurs when you use an IP)
13:34 semiosis i just did a blackbox analysis and compared to the nfs mount procedure, and that seemed to improve mounting success on my test vms both local kvm & ec2
13:35 semiosis the ip addr & the new upstart event condition both seemed to help
13:35 ThatGraemeGuy yeah, that i don't really understand. i took a shot in the dark and tried to enclose the IP in [], which lots of software will take to mean "don't do any DNS stuff with this", but gluster doesn't seem to understand that
13:35 semiosis hah interesting
13:35 semiosis 12.04.2?
13:35 ThatGraemeGuy yes
13:36 semiosis i thought the latest was 12.04.1, in any case my ec2 instance is whatever the latest precise is
13:36 edward1 joined #gluster
13:36 ThatGraemeGuy Description:    Ubuntu 12.04.2 LTS
13:36 semiosis actually both, i launched ec2 instances from both the original april '12 image and the latest
13:36 semiosis cool
13:37 ThatGraemeGuy maybe amazon is a little delayed, ubuntu wiki puts the 12.04.2 release on 14 feb
13:37 semiosis my local test vm was an original precise install but i've updated the packages several times, it's probably somewhere between 12.04.1 and 12.04.2
13:39 ThatGraemeGuy i actually have an original 12.04 iso here, maybe i should bring up 2 VMs with that and see what my results are
13:39 semiosis i doubt the ec2 images are delayed, the canonical AMIs are first-class releases done alongside isos
13:39 ThatGraemeGuy possible it's something environmental, although i can't imagine what
13:39 semiosis but i will check when i have a chance
13:39 semiosis ThatGraemeGuy: would be a great test if you could try the original april '12 image!
13:40 ThatGraemeGuy cool, i still have little bit more than an hour before my weekend starts, i'll give that a try and let you know
13:41 semiosis ok great, please use glusterbot to leave me a message :)
13:41 ThatGraemeGuy hrm actually, i have that iso here with me, the environment i have the issue in has a 12.04.1 iso
13:41 ThatGraemeGuy but still it's not .2 so may still be worth a try
13:42 ThatGraemeGuy will see what i get done in the time i have and let you know either way
13:42 ThatGraemeGuy enjoy the road trip :)
13:42 semiosis ok thank you
13:43 semiosis fyi, this issue is really important to me, i intend to get to the bottom of it.  thank you very much for your help reporting & troubleshooting :)
13:44 ThatGraemeGuy no worries, thank you for all your efforts too :)
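Since this thread may be hard to follow later, here is a rough sketch of the workaround being discussed, assuming a hypothetical volume "myvol"; the upstart override file name depends on which job the installed package ships, so treat it as an assumption:

    # /etc/fstab -- mount from the local glusterd; _netdev/nobootwait keep boot from hanging
    127.0.0.1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,nobootwait  0  0

    # /etc/init/mounting-glusterfs.override (job name is an assumption, check your package)
    # crude delay before the mount is attempted, as described above
    pre-start script
        sleep 8
    end script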
13:45 rwheeler joined #gluster
13:47 lala joined #gluster
13:48 vpshastry joined #gluster
13:51 overclk joined #gluster
13:52 bala joined #gluster
13:55 mooperd joined #gluster
13:57 mooperd_ joined #gluster
13:59 lala joined #gluster
14:04 glusterbot New news from newglusterbugs: [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9>
14:08 andreask joined #gluster
14:14 bit4man joined #gluster
14:15 dustint joined #gluster
14:19 sjoeboo joined #gluster
14:21 joeto1 joined #gluster
14:35 sjoeboo joined #gluster
14:41 bala joined #gluster
14:46 tomsve joined #gluster
14:46 vpshastry joined #gluster
14:49 stopbit joined #gluster
14:54 lala joined #gluster
15:10 lpabon joined #gluster
15:18 gbrand_ joined #gluster
15:19 rubbs joined #gluster
15:23 _pol joined #gluster
15:28 hagarth joined #gluster
15:29 Staples84 joined #gluster
15:38 sjoeboo_ joined #gluster
15:41 bugs_ joined #gluster
15:50 rubbs I have a test volume I just created using 3.3.1 and I can do `echo "Testing" >> /srv/replicated/test.txt` just fine, but trying to ls in the directory just hangs. I even let it try to ls overnight. There are no more than 3 files in the volume. Any idea on where I can start troubleshooting/
15:51 rubbs s/\//?/
15:51 glusterbot rubbs: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:52 jruggiero Weird. Haven't experienced anything like that myself.
15:53 flrichar is there a verbosity setting?
15:53 flrichar for logs
15:53 jruggiero you didn't do anything like destroy and recreate the same volume or anything?
15:53 rubbs jruggiero: nope, just created it and first test died on an ls
15:54 jruggiero =/ I've only been playing with gluster for a week with only a couple volumes and I've been automating gluster builds with puppet and haven't seen that
15:55 rubbs oh son of a... I may have found the issue
15:55 rubbs somehow a bunch of 32bit binaries were installed instead of the 64bit versions of a lot of packages
15:55 jruggiero :)
15:56 * rubbs eyes his coworker's system building scripts with suspicion
15:56 rubbs I'm going to have to manually pull all of these out.
15:56 rubbs this could be terrible.
15:56 jruggiero configuration automation ftw
15:57 jruggiero if it's a test box can't you just rebuild?
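In case someone hits the same mixed-architecture surprise, a quick way to audit it on an RPM-based box (assuming yum; review the list before removing anything):

    # list installed 32-bit packages on an x86_64 host
    rpm -qa --queryformat '%{NAME}.%{ARCH}\n' | grep -E '\.i[3-6]86$' | sort

    # remove them, keeping the x86_64 copies (check the list first!)
    yum remove '*.i686' '*.i386'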
16:03 lpabon_ joined #gluster
16:06 rubbs I'm looking into more automation here. Until this last month, I was doing this all on my own, so no time to improve.
16:06 jruggiero there are some decent gluster puppet modules out there, if that's your thing
16:10 sjoeboo_ joined #gluster
16:18 rubbs jruggiero: yeah, I'm not even in a position to get puppet in the works yet. Soon, very soon, but just not there yet
16:19 _pol joined #gluster
16:22 lpabon joined #gluster
16:39 rubbs Ripped out all i*86 packages and rebooted, and I'm still getting the same problem.
16:40 rubbs http://pastebin.com/h036MGJB
16:40 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
16:40 rubbs ah. I'll usse something else.
16:40 rubbs http://dpaste.org/2Vd2O/
16:40 glusterbot Title: dpaste.de: Snippet #219823 (at dpaste.org)
16:41 rubbs http://dpaste.org/ZAojc/
16:41 glusterbot Title: dpaste.de: Snippet #219824 (at dpaste.org)
16:41 rubbs the first is my commands, the second is my log file
16:48 dustint_ joined #gluster
16:56 Humble joined #gluster
17:01 rotbeard joined #gluster
17:11 dbruhn__ joined #gluster
17:12 _pol joined #gluster
17:13 m0zes rubbs: easy fix. don't mount the client over the bricks.
17:14 m0zes whoops, misread
17:21 zaitcev joined #gluster
17:33 vpshastry joined #gluster
17:46 nueces joined #gluster
17:49 timothy joined #gluster
17:52 _pol joined #gluster
17:57 rubbs m0zes: yeah, they're not :(
17:58 bennyturns joined #gluster
18:05 doubleb joined #gluster
18:05 deepakcs joined #gluster
18:11 doubleb Will someone please help me solve a "Rebalance on volume is already started" problem on a 3.3.1 system with an 8x2 volume, even though no node is in the in-progress state? https://gist.github.com/anonymous/5015412
18:11 glusterbot Title: gist:5015412 (at gist.github.com)
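For reference, a minimal sketch of the commands usually involved in clearing that state, assuming a hypothetical volume "myvol"; a single peer holding a stale rebalance entry is enough to trigger the error:

    # see what each node thinks the rebalance state is
    gluster volume rebalance myvol status

    # explicitly stop the old rebalance, then start a fresh one
    gluster volume rebalance myvol stop
    gluster volume rebalance myvol start

    # last resort if one peer keeps reporting a stale state
    service glusterd restart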
18:28 tbrown747 joined #gluster
18:29 tbrown747 hi there, does anybody have any interest in helping me with an issue occurring during self-heal on gluster v3.3.1?
18:30 tbrown747 ok, i will describe it anyway and if anyone has any insight please let me know
18:32 tbrown747 i had a partition with ~6T of data on server 1 and created a gluster volume with that brick and another empty one on server 2
18:32 tbrown747 i kicked off the self-heal to replicate the data over
18:33 tbrown747 it is dutifully replicating data to the second server, but during this operation i can't ls or generally access files in only particular directories of the gluster mount
18:33 tbrown747 about half of them are no problem
18:33 tbrown747 it has remained the same directories since the start of the self heal (which is ~25% complete at this point) so I don't think it's a concurrent access problem
18:35 JoeJulian rubbs: Oh, that is weird. Check the brick logs.
18:36 JoeJulian tbrown747: It sounds like it's self-healing the directory, which will include that directory's files. I suspect the background self-heal queue is full and it'll block until it finishes that directory.
18:36 JoeJulian Or at least until the last of that directory's files is in the background queue.
18:38 JoeJulian ok, gotta run. be back later.
18:38 tbrown747 joejulian: yeah actually that does seem to be the case, is there a way to safely stop or slow the self-heal to allow normal access during self-heal?
18:40 tbrown747 or since he's out, does anybody else have anything on the topic of performance during large self-heals?
18:40 tbrown747 i've been checking and I see a lot of people complaining about replication performance on previous versions but nothing since 3.3
18:48 66MAAGB6J joined #gluster
18:51 wN joined #gluster
18:52 lpabon joined #gluster
18:58 gbrand_ joined #gluster
19:02 tqrst tbrown747: I've never run into access issues during self heals, but can confirm that it is superduperslow (but not as slow as rebalancing)
19:03 tqrst superduperslow as in two days for a ~1.7T brick
19:04 bennyturns joined #gluster
19:06 _pol_ joined #gluster
19:10 bennyturns joined #gluster
19:29 lpabon_ joined #gluster
19:34 tbrown747 tqrst: yeah, this is different - i would expect degraded performance and I would expect it to take a long time to self-heal, but it is blocking on directory operations
19:34 tbrown747 i have about half a dozen zombie ls processes that i can't kill
19:35 tbrown747 also, the brick logs are reporting setting xattr errors even though both bricks are mounted user_xattr
19:36 tbrown747 that aside, i think joejulian is correct in that the self-heal is consuming all of the available resources and is blocking, so i am looking for a way to alter the performance of the self heal operation
19:36 tbrown747 obviously would prefer a slower replicate with reasonable (<20second) directory access times
19:41 rubbs JoeJulian: sorry I'm re-installing. all a test right now anyway. if I run into again I'll do that though.
19:53 glusterbot New news from resolvedglusterbugs: [Bug 884381] Implement observer feature to make quorum useful for replica 2 volumes <http://goo.gl/rsyR6>
19:55 elyograg superseded by bug 914804 ... I wonder if that will be included in 3.4, and when 3.4 is coming.
19:55 glusterbot Bug http://goo.gl/NWJpx medium, unspecified, ---, jdarcy, POST , [FEAT] Implement volume-specific quorum
19:56 elyograg I'd ask jdarcy, but he's not here. :)
19:58 lpabon joined #gluster
19:58 pipopopo joined #gluster
20:19 JoeJulian tbrown747: is there any way not to do directory listings? if you don't try to trigger the heal of every file simultaneously, other file operations are non-blocking.
20:20 JoeJulian You can also adjust the number of background self-heal operations, but raising it too far can cause some pretty severe utilization numbers.
20:21 JoeJulian elyograg: If a patch has been submitted to gerrit, it will be noted in bz.
20:21 doubleb JoeJulian: where can set the number of background self-heal processes?
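On doubleb's question, the tunable JoeJulian appears to be referring to is the cluster.background-self-heal-count volume option; a minimal sketch, with "myvol" as a hypothetical volume name:

    # show current settings (unset options fall back to their defaults)
    gluster volume info myvol

    # allow more files to heal in the background before lookups start blocking
    # (higher values raise disk/cpu utilization during heals, as noted above)
    gluster volume set myvol cluster.background-self-heal-count 32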
20:22 tbrown747 joejulian: ls is just a symptom of the real problem, which is that accessing files in the directory directly also hangs
20:23 tbrown747 joejulian: it also doesn't seem to be related to directories that are still replicating- i found one on the brick mount that is complete but still hangs
20:23 a2 JoeJulian, only after a patch gets reviewed and merged into glusterfs.git, bz will be notified -- not when it is still submitted for review (though it would be nice to report that too)
20:24 tbrown747 joejulian: i also just found an archived forum post of yours about a bug in ext4, which is what these bricks are formatted with, that can cause ls hangs; is that still a valid issue?
20:25 JoeJulian it sure is.
20:26 JoeJulian a2: Yes, it really would. Could we get that? I think it would help the review process.
20:26 tbrown747 joejulian: i see that this guy is running rhel 6.3 and i am running centos 6.3, guess i should try xfs
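For anyone following along, a sketch of the commonly recommended brick format when moving to xfs; the device and mount point are hypothetical:

    # 512-byte inodes leave room for gluster's extended attributes inside the inode
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount -t xfs /dev/sdb1 /export/brick1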
20:27 a2 i will check if gerrit allows for such hooks
20:27 a2 the bz updation after merge is a git-hook
20:27 tomsve joined #gluster
20:33 lpabon_ joined #gluster
20:38 lpabon joined #gluster
20:43 soukihei joined #gluster
20:44 soukihei is the amount of available disk space limited to the smallest node in gluster or is it some combination of all the available nodes?
20:46 rwheeler joined #gluster
20:52 JoeJulian soukihei: depends on how it's configured, but generally you're going to want your bricks to be the same size. Many partition bigger drives and present them as multiple bricks to match the size of the smaller ones.
20:53 66MAAGB6J can you change which bricks are replicas later?
20:53 soukihei JoeJulian, thanks. I've found the v3.2 Administrator guide which is describing the different type of Server Volumes in GlusterFS
20:54 soukihei that seems to be pointing me in the right direction
20:55 dbruhn on the replication change, I am assuming no, but thought I would ask
20:56 soukihei and it looks like what I'm going to want to set up is the distributed replicated volume type
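A sketch of the distributed-replicated layout being described, with hypothetical server and brick names; bricks are grouped into replica sets in the order they are listed, so each pair below replicates and the pairs are distributed:

    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start myvol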
20:56 disarone joined #gluster
21:00 mnaser joined #gluster
21:02 balunasj joined #gluster
21:28 tqrst JoeJulian: sounds like a recipe for hard drive trashing
21:29 tqrst JoeJulian: (multiple bricks backed by the same hard drive)
21:39 larsks joined #gluster
21:43 JoeJulian tqrst: Why?
22:00 rubbs JoeJulian: jruggiero: FYI, I reinstalled the boxes and re-created the volume and everything works fine. Going to have to change up our config scripts. (or better yet finally move to puppet or similar). Thanks for all your help
22:11 elyograg JoeJulian: there is a link to a patch in gerrit in the new bug.  my bug got closed because it's being handled by the new one.
22:14 gbrand_ joined #gluster
22:14 tryggvil joined #gluster
22:34 tqrst JoeJulian: that hard drive is going to see twice (or however many times you split it) as much io as all the others.
22:35 _pol joined #gluster
22:35 bala joined #gluster
23:06 Humble joined #gluster
23:37 Humble joined #gluster
23:45 hattenator joined #gluster
23:56 jclift_ joined #gluster
