IRC log for #gluster, 2013-01-10


All times shown according to UTC.

Time Nick Message
00:22 sjoeboo_ joined #gluster
00:27 andreask joined #gluster
00:37 jiffe1 joined #gluster
01:01 yinyin joined #gluster
01:16 sjoeboo_ joined #gluster
01:30 GLHMarmot joined #gluster
01:32 aliguori_ joined #gluster
01:32 dhsmith joined #gluster
01:38 m0zes joined #gluster
01:57 mohankumar joined #gluster
02:40 sjoeboo_ joined #gluster
03:03 theron joined #gluster
03:08 bharata joined #gluster
03:10 glusterbot New news from newglusterbugs: [Bug 893851] Probing an invalid host causes glusterd to crash <http://goo.gl/gbILP>
03:41 ramkrsna joined #gluster
03:47 bharata joined #gluster
04:12 hagarth joined #gluster
04:13 badone_ joined #gluster
04:17 lanning for the performance settings in a volume, is there a way to map "foo.bar.baz" in "gluster volume set vol01 foo.bar.baz xyz" ?
04:17 lanning is it something like performance.io-threads.io-thread-count ?
04:17 sgowda joined #gluster
04:33 lanning err... thread-count in performance/io-threads = performance.io-threads.thread-count?
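For what it's worth, the "gluster volume set" keys are flat "group.option" names rather than nested translator paths; a hedged example for the io-threads translator (option name as of the 3.3-era CLI, volume name taken from the question above):

    # show what is currently set on the volume
    gluster volume info vol01
    # the io-threads thread count uses the flat key performance.io-thread-count
    gluster volume set vol01 performance.io-thread-count 32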
04:34 y4m4 joined #gluster
04:40 glusterbot New news from newglusterbugs: [Bug 893778] Gluster 3.3.1 NFS service died after <http://goo.gl/NLoE3>
04:47 yinyin joined #gluster
04:49 bala joined #gluster
04:53 hagarth joined #gluster
05:05 raghu joined #gluster
05:06 vpshastry joined #gluster
05:18 sripathi joined #gluster
05:23 yinyin joined #gluster
05:27 hateya joined #gluster
05:29 FyreFoX hi semiosis, built 3.3.1 + patches just fine using your instructions. tried to do 3.4.0qa6 but it fails on format-security. Compiling 3.4.0qa6 manually succeeds. Any ideas? http://pastie.org/5659545
05:29 glusterbot Title: #5659545 - Pastie (at pastie.org)
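A hedged sketch of one common workaround for that kind of packaging failure, assuming the build is tripping over the -Werror=format-security flag added by Debian hardening (the exact hook depends on how the packaging invokes dpkg-buildflags):

    # in debian/rules: relax the "format" hardening feature so that
    # -Werror=format-security is not appended to CFLAGS
    export DEB_BUILD_MAINT_OPTIONS = hardening=-format

    # or, for a one-off manual build, demote the error to a warning
    ./configure CFLAGS="-g -O2 -Wno-error=format-security"
    make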
05:29 sunus joined #gluster
05:30 hagarth joined #gluster
05:45 lala joined #gluster
05:56 kevein joined #gluster
05:59 shylesh joined #gluster
06:15 frakt joined #gluster
06:15 bulde joined #gluster
06:18 rastar joined #gluster
06:19 snarkyboojum joined #gluster
06:23 cyberbootje joined #gluster
06:27 raven-np joined #gluster
06:28 hagarth joined #gluster
06:35 edong23 joined #gluster
06:43 theron joined #gluster
06:45 ngoswami joined #gluster
06:45 cyberbootje joined #gluster
06:51 vikumar joined #gluster
06:59 cyr_ joined #gluster
07:01 hagarth joined #gluster
07:03 overclk joined #gluster
07:10 glusterbot New news from newglusterbugs: [Bug 893779] Gluster 3.3.1 NFS service died after <http://goo.gl/RBJPS>
07:25 jtux joined #gluster
07:33 yinyin joined #gluster
07:49 dobber joined #gluster
07:54 ekuric joined #gluster
07:55 hateya joined #gluster
07:55 15SAATIU9 joined #gluster
07:58 theron joined #gluster
08:05 Azrael808 joined #gluster
08:06 ctria joined #gluster
08:07 Nevan joined #gluster
08:28 bulde joined #gluster
08:37 tjikkun_work joined #gluster
08:43 bala joined #gluster
08:48 sipane joined #gluster
08:51 hateya joined #gluster
08:57 gbrand_ joined #gluster
09:01 sgowda joined #gluster
09:04 shireesh joined #gluster
09:15 vikumar__ joined #gluster
09:15 ngoswami_ joined #gluster
09:15 ramkrsna_ joined #gluster
09:17 ngoswami__ joined #gluster
09:17 ramkrsna__ joined #gluster
09:18 vimal joined #gluster
09:19 khushildep joined #gluster
09:26 yinyin joined #gluster
09:29 deepakcs joined #gluster
09:29 shireesh joined #gluster
09:32 bala joined #gluster
09:34 khushildep joined #gluster
09:36 khushildep_ joined #gluster
09:49 x4rlos Er, is 3.4 available now?
09:53 yinyin joined #gluster
10:00 shireesh_ joined #gluster
10:25 fubada joined #gluster
10:25 hagarth joined #gluster
10:35 fubada joined #gluster
10:37 glusterbot New news from resolvedglusterbugs: [Bug 764966] gerrit integration fixes <http://goo.gl/AZDsh>
10:57 yinyin joined #gluster
10:59 ramkrsna__ joined #gluster
11:05 shireesh_ joined #gluster
11:06 sgowda joined #gluster
11:16 yinyin joined #gluster
11:16 bulde joined #gluster
11:28 bauruine joined #gluster
11:32 nullck joined #gluster
12:04 manik joined #gluster
12:14 edward1 joined #gluster
12:14 cyr_ joined #gluster
12:18 ctria joined #gluster
12:18 yinyin joined #gluster
12:21 balunasj joined #gluster
12:23 kkeithley1 joined #gluster
12:26 raven-np1 joined #gluster
12:35 andreask joined #gluster
12:38 greylurk joined #gluster
12:44 raven-np joined #gluster
12:45 gbrand_ joined #gluster
12:50 raven-np1 joined #gluster
12:54 aliguori joined #gluster
13:04 sjoeboo joined #gluster
13:05 gbrand__ joined #gluster
13:09 spn joined #gluster
13:12 lala joined #gluster
13:17 shylesh joined #gluster
13:19 yinyin joined #gluster
13:36 akenney_ joined #gluster
13:36 hurdman hi, do you check your gluster replica 2 with nagios ? if yes, how ?
13:37 vpshastry left #gluster
13:42 DataBeaver It seems I'm unable to mount glusterfs volumes in a virtual machine because the glusterd management volume doesn't allow insecure connections.  How should I fix this?
13:43 dustint joined #gluster
13:46 jdarcy gluster volume set $VOLUME allow-insecure on
13:46 gbrand_ joined #gluster
13:46 DataBeaver And what is $VOLUME for the management volume?  Neither "glusterd" nor "management" are recognized.
13:55 hagarth joined #gluster
13:56 jtux joined #gluster
13:56 DataBeaver Alternatively, is there some way to circumvent the management volume and access the desired volume directly?
14:00 nueces joined #gluster
14:03 DataBeaver This isn't made any easier by the fact that the client provides no useful info in the log and the server doesn't flush its logs after each line...
14:05 jdarcy I think you can set allow-insecure separately for each volume, but it's not clear how that affects glusterd.  It's possible that you'll need to set it manually in /etc/glusterfs/glusterd.vol to have that happen.
14:05 jdarcy Your complaint about the logs is valid and familiar.
14:05 sripathi joined #gluster
14:08 DataBeaver Finally got it.  I indeed needed to add the option manually to that file, but the name is slightly different there: option rpc-auth-allow-insecure on
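To recap the two knobs involved here, as a sketch of what DataBeaver describes (volume name and paths are examples; glusterd needs a restart to re-read its volfile):

    # per-volume option, set through the CLI
    gluster volume set myvol server.allow-insecure on

    # glusterd's own volfile has to be edited by hand
    # /etc/glusterfs/glusterd.vol
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option rpc-auth-allow-insecure on
    end-volume

    # then restart the management daemon
    service glusterd restart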
14:13 __Bryan__ joined #gluster
14:20 torbjorn__ joined #gluster
14:20 yinyin joined #gluster
14:21 obryan joined #gluster
14:26 theron joined #gluster
14:27 JoeJulian @locate
14:28 JoeJulian @whatis locate
14:28 glusterbot JoeJulian: Error: No factoid matches that key.
14:28 JoeJulian @whatis location
14:28 glusterbot JoeJulian: Error: No factoid matches that key.
14:28 JoeJulian @which brick
14:28 glusterbot JoeJulian: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount.
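A quick usage example of that factoid (mount point and file are placeholders; run it against the client mount, not the brick):

    getfattr -n trusted.glusterfs.pathinfo /mnt/myvol/path/to/file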
14:32 x4rlos i just downloaded 3.4 from experimental repo - is 3.4 stable? Only I thought 3.3 was still the stable release :-./
14:33 x4rlos hurdman: Did you get your nagios answer?
14:33 hurdman x4rlos: nop
14:33 hurdman x4rlos: i have written something very bad with rsync -n
14:33 hurdman ^^"
14:38 ramkrsna joined #gluster
14:40 JoeJulian x4rlos: ... not sure what to say.... experimental...
14:40 rcheleguini joined #gluster
14:41 JoeJulian Sort-of experimental by definition...
14:41 raven-np joined #gluster
14:42 rwheeler joined #gluster
14:42 x4rlos hurdman: Why not use ssh to connect to the gluster servers, and then use the gluster volume status? Or alternatively, look at exporting a manual snmp command to expose to nagios? Dont think there is anything native ? :-s
14:43 x4rlos JoeJulian: Yeah, hmmm...
14:43 hurdman x4rlos: gluster volume status doesn't say if there's a split brain or if some files are not on both bricks :/
14:44 dbruhn joined #gluster
14:49 x4rlos hurdman: Well, run the appropriate command - same principle :-)
14:49 hurdman x4rlos: it seems not to have an appropriate command ^^"
14:50 stopbit joined #gluster
14:53 bugs_ joined #gluster
14:58 obryan left #gluster
14:58 x4rlos hurdman: JoeJulian: can explain much better than me i'm sure. But could you not use something like: gluster volume heal database-archive info split-brain | grep entries | awk '{print $4}'
14:58 x4rlos where database-archive is your volume of course.
15:01 hurdman i didn't know about heal, it's from 3.3
15:01 hurdman so i have to migrate all my infra from the last 3.2.7 to 3.3 :/
15:01 hurdman thx
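A minimal sketch of the kind of Nagios check being asked about, built on the 3.3 heal-info command from the one-liner above (volume name and plugin name are invented; requires 3.3+):

    #!/bin/bash
    # check_gluster_splitbrain -- hypothetical Nagios plugin sketch
    VOL=${1:-database-archive}
    # sum the "Number of entries:" counts reported per brick
    ENTRIES=$(gluster volume heal "$VOL" info split-brain \
        | awk '/Number of entries:/ {sum += $4} END {print sum+0}')
    if [ "$ENTRIES" -gt 0 ]; then
        echo "CRITICAL: $ENTRIES split-brain entries on volume $VOL"
        exit 2
    fi
    echo "OK: no split-brain entries on volume $VOL"
    exit 0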
15:06 x4rlos oh. Yeah, that's a bit pants. And i don't know how easy it is to upgrade.
15:07 hurdman it seems i have to stop all my gluster to upgrade :/
15:07 x4rlos when you run into mount problems when you upgrade, JoeJulian: has a great page on how to fix them :-)
15:07 hurdman ok thanks
15:09 z00dax hi guys, just wondering if there was a test suite for glusterfs, a sort of acceptance / functional / integration testing  setup
15:10 jrossi joined #gluster
15:10 ndevos z00dax: yeah, see the tests/README in the git repo
15:11 * z00dax goes to find the git repo
15:11 ndevos @repo
15:11 glusterbot ndevos: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo', 'repos'
15:11 z00dax ok, github is easy
15:11 ndevos @git repo
15:11 glusterbot ndevos: https://github.com/gluster/glusterfs
15:12 z00dax ndevos: excellent.
15:12 z00dax thanks
15:12 ndevos you're welcome :)
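For anyone following along, getting at that test suite looks roughly like this (the wrapper script name is how it appears in the 3.4-era tree, so treat it as an assumption):

    git clone https://github.com/gluster/glusterfs.git
    cd glusterfs
    less tests/README      # explains how the regression tests are structured
    ./run-tests.sh         # runs the whole suite (needs root and a scratch setup)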
15:14 jrossi I have a gluster host that is no longer able to run commands or see the status of anything, and after a reboot the server is not able to come back online.  Looking at the logs I see a HUGE amount of "0-rpc-service: Auth too weak" (here is one example of the repeating messages: https://gist.github.com/4502775 -- this message block repeats forever, hundreds of megs).  I don't even know where to look for where this problem is happening.  Thanks
15:14 glusterbot Title: gist:4502775 (at gist.github.com)
15:21 gbrand__ joined #gluster
15:28 x4rlos Anyone know about a gluster gui?
15:28 jdarcy jrossi: That looks really weird.
15:28 johnmark x4rlos: that would be at ovirt.org
15:29 x4rlos Doubt i'd use one, just saw reference to it.
15:30 spn joined #gluster
15:31 jrossi jdarcy: Found the problem in the IRC logs
15:32 jrossi jdarcy: Problem was two version of gluster installed from two different ppa.  Removed everything and added the supported lauchpad ppa back in and reinstalled
15:32 spn joined #gluster
15:37 BobbyD_FL joined #gluster
15:37 jdarcy jrossi: I was wondering whether it might be a version thing, but hadn't gotten quite that far in the code yet.
15:39 dustint_ joined #gluster
15:39 BobbyD_FL left #gluster
15:43 lh joined #gluster
15:45 lhawthor_ joined #gluster
15:47 lh joined #gluster
15:53 aliguori joined #gluster
15:54 sjoeboo so, is there a list + descriptions of all the performance tweaks available for 3.3? i mostly find docs for 3.2, so it's not clear what's available and what isn't...
15:55 sjoeboo i found the translators area on gluster.org, but some of the pages are not really indexed/linked to; i can get to them via google, but nothing on the site links to them, so it's not clear how "right" they are..
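One way to get a list straight from the installed version, if the 3.3 CLI supports the help form as I recall:

    # prints every settable option with its default value and a short description
    gluster volume set help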
15:56 neofob joined #gluster
15:57 gbrand_ joined #gluster
16:02 lh joined #gluster
16:03 plarsen joined #gluster
16:07 daMaestro joined #gluster
16:13 deckid joined #gluster
16:14 NashTrash joined #gluster
16:14 NashTrash Hello Gluster'ers
16:15 deckid left #gluster
16:15 jdarcy I've never glusted in my life.
16:16 NashTrash ha
16:16 NashTrash Have a little problem I hope you all can help me with.
16:16 jdarcy All we can do is try.
16:16 * jdarcy <- the anti-Yoda
16:16 NashTrash My clients all connect using NFS via a load balancer.  They can mount fine, and write and read.  But they hang when trying to umount.
16:17 NashTrash I am new to gluster and am not certain where to look to troubleshoot.
16:18 kkeithley1 Darth Yoda?
16:19 ndevos NashTrash: it can be that data is written to the nfs-server, unmounting forces a sync - do you know if there is any network I/O happening?
16:20 jdarcy NashTrash: My first thought is that it sounds like a problem with the client or (more likely) the load balancer, but I guess you could check on the GlusterFS servers to see if there's anything interesting in their logs.
16:20 NashTrash ndevos: Yes, I assume there is lots of network traffic.  We just finished up a 12GB copy to the Gluster cluster.
16:21 NashTrash jdarcy: I see lots of stuff like this in the gluster/nfs.log:
16:21 ndevos NashTrash: well, the copy needs to be finished first before the umount succeeds
16:21 NashTrash [2013-01-10 10:05:23.868048] W [client3_1-fops.c:327:client3_1_mkdir_cbk] 0-vf_data-client-34: remote operation failed: File exists. Path: <gfid:d47f352e-2637-4717-8ecf-2527176f14bf>/66b2aa3f77 (00000000-0000-0000-0000-000000000000)
16:21 NashTrash ndevos: The copy says that it completed quite a bit ago.
16:21 NashTrash But maybe things are still happening in the background
16:22 ndevos NashTrash: yeah, the nfs-client can cache too, the caches need to be written
16:22 NashTrash So, I just wait?  It has been about 20min since we tried to initiate the umount
16:22 ndevos NashTrash: check the command rpcdebug, that may help to verify if nfs is still doing something
16:23 NashTrash Do I run that on the client or the gluster server?
16:23 ndevos on the nfs-client
16:23 NashTrash Ok.  One moment...
16:25 ndevos NashTrash: I'd try something like "rpcdebug -m nfs -s mount -s client -s pagecache -s proc"
16:25 ndevos NashTrash: that will cause some logging in "dmesg"
16:26 ndevos NashTrash: but, be careful if your nfs-client does a lot of NFS work, logs may be overwhelming
16:31 NashTrash Ok.  How do I turn off the logging when I am done?
16:32 NashTrash Nothing is showing up in dmseg
16:33 tc00per joined #gluster
16:35 neofob joined #gluster
16:37 jbrooks joined #gluster
16:37 NashTrash Ok.  Nothing shows up.  Is there a way to force an unmount?
16:37 NashTrash Maybe I can get to where I can reboot the clients
16:48 ndevos NashTrash: "rpcdebug -m nfs -c" turns debugging off
16:50 ndevos NashTrash: you can not force an umount when using NFS
16:52 JoeJulian Sure you can! reboot -f
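Pulling the rpcdebug exchange together as a sketch (client-side commands; the lazy-umount line is a general NFS trick, not something specific to Gluster):

    # turn NFS client debugging on -- output lands in the kernel log
    rpcdebug -m nfs -s mount -s client -s pagecache -s proc
    dmesg | tail -50
    # turn it back off when done
    rpcdebug -m nfs -c

    # a lazy unmount detaches the mount point even while NFS is stuck;
    # the filesystem is only fully released once nothing references it
    umount -l /mnt/myvol
    # ...and the sledgehammer mentioned above
    reboot -f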
17:02 lh joined #gluster
17:08 theron_ joined #gluster
17:20 tryggvil joined #gluster
17:22 NashTrash Well...
17:22 NashTrash We do this crazy mount, then chroot into the mount, then fuse mount parts of the mount into the chroot.  And in all of that the NFS+Gluster combo seems to get lost.
17:27 erik48 for nodes dedicated to gluster, are there any guidelines for how much RAM or CPU is needed (or even if gluster is say, generally CPU heavy and requires little RAM)
17:29 semiosis erik48: all depends on what you're doing on that gluster volume... how many clients, how active they are... no one size fits all recommendation, sorry
17:29 dhsmith joined #gluster
17:30 erik48 okay, thanks
17:30 * m0zes finds that his gluster servers rarely use more than 1-core, even when sending/receiving data at >500MB/s
17:33 tc00per joined #gluster
17:33 rags_ joined #gluster
17:34 rags_ left #gluster
17:34 Mo__ joined #gluster
17:37 x4rlos How would one go about detaching a peer volume, and then adding another afterwards (actually will be the same one after a cleanup).
17:37 x4rlos ?
17:38 x4rlos i have so far stopped the volumes, and want to do a "gluster peer detach client2"
17:38 x4rlos but i still have bricks: "Brick(s) with the peer client2 exist in cluster "
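A hedged sketch of the usual order of operations when a peer still holds bricks (volume, brick path, and hostname are placeholders; a replicated volume also needs a "replica N" argument when shrinking):

    # either pull that peer's bricks out of the volume first...
    gluster volume remove-brick myvol client2:/export/brick1 start
    gluster volume remove-brick myvol client2:/export/brick1 status
    gluster volume remove-brick myvol client2:/export/brick1 commit
    # ...or, since the volumes here are already stopped, delete them outright
    gluster volume delete myvol
    # then the peer can be detached and re-probed after the cleanup
    gluster peer detach client2
    gluster peer probe client2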
17:38 zaitcev joined #gluster
17:42 jrossi left #gluster
17:49 bauruine joined #gluster
17:52 hagarth joined #gluster
18:03 _ilbot joined #gluster
18:03 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
18:06 rags_ joined #gluster
18:08 andrewbogott joined #gluster
18:09 andrewbogott joined #gluster
18:09 lh joined #gluster
18:09 lh joined #gluster
18:17 andrewbogott joined #gluster
18:18 andrewbogott joined #gluster
18:18 andrewbogott joined #gluster
18:33 manik joined #gluster
18:36 H___ joined #gluster
18:36 JuanBre joined #gluster
18:37 JuanBre I am trying to use dbench to measure the performance of gluster....however every time I try "dbench --directory=<gluster mount point> 1" I get "[fuse-bridge.c:2025:fuse_writev_cbk] 0-glusterfs-fuse: 56: WRITE => -1 (Invalid argument)"
18:38 GLHMarmot joined #gluster
18:40 Teknix joined #gluster
18:40 chirino joined #gluster
18:40 theron joined #gluster
18:40 sjoeboo so, for those exposing gluster via CIFS to users, do you run samba on each gluster node, serving a local glusterfs mount of the volume? or do you have a cifs/samba frontend box mounting the glusterfs and re-exporting?
18:42 JuanBre sjoeboo: I use gluster native client to mount the volume in a server, and then samba to share it. So users access gluster via samba
18:42 JuanBre sjoeboo: I tried stopping the volume and then starting it again with no luck
18:43 hateya joined #gluster
18:43 JuanBre sjoeboo: gluster version 3.3.1
18:43 JuanBre sjoeboo: os: debian squeeze
18:46 sjoeboo JuanBre: and you do this on a server that is NOT part of your gluster volume ?
18:47 sjoeboo just trying to find the best performance for exposing this...i like the idea of running samba on all my brick servers and using RRDNS to spread everyone out, but the performance of a server being a client itself is pretty poor
18:48 sjoeboo wondering if having a pool of frontend-cifs servers re-sharing the gluster volume would be better
18:53 JuanBre sjoeboo: No, the server where samba is running is one of the gluster clients. I am not worrying about performance yet...my problem is that fuse error that doesn't let the test run...
18:53 JuanBre sjoeboo: I mean, the server where samba is running is also one of the gluster bricks that conform the volume
18:54 sjoeboo right
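For reference, a minimal sketch of the "samba on top of a native client mount" layout JuanBre describes (share name, host, and paths are invented):

    # on the samba host: mount the volume with the native (FUSE) client
    mount -t glusterfs server1:/myvol /mnt/myvol

    # /etc/samba/smb.conf -- a minimal share of that mount
    [myvol]
        path = /mnt/myvol
        read only = no
        browseable = yes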
19:19 jbautista joined #gluster
19:25 manik joined #gluster
19:29 gbrand_ joined #gluster
19:31 andreask joined #gluster
19:47 rags_ joined #gluster
19:54 edong23 joined #gluster
20:02 NashTrash Hello again
20:03 NashTrash In an attempt to clear up my issue (hanging client umount) I want to try to restart my entire gluster cluster.  Do I just do a service glusterfs stop on all servers and then start them again?
20:14 kkeithley1 pick one server and do a `gluster volume stop $volname`. That should stop the glusterfs (and glusterfsd) on all the servers.
20:14 kkeithley1 Then do a `gluster volume start $volname` to restart them.
20:25 NashTrash kkeithley: Great.  Thanks.  Does that also clear all file locks?
20:31 kkeithley1 It ought to. Off the top of my head, locks are cleared when the process holding them exits, so once the glusterfsd exits the locks are cleared.
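Putting that into concrete commands (volume name is a placeholder; stop/start acts on the brick and NFS processes, while the glusterd management daemon is handled separately by the init script, whose name varies by distribution):

    gluster volume stop myvol     # stops glusterfsd/NFS processes for the volume on all servers
    gluster volume start myvol    # brings them back up
    # if the management daemon itself needs a restart:
    service glusterd restart      # "glusterfs-server" on Debian/Ubuntu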
20:40 rags_ joined #gluster
20:57 badone joined #gluster
21:00 sjoeboo joined #gluster
21:02 olri joined #gluster
21:04 jvyas joined #gluster
21:05 jvyas does stopping a volume lose all the data?  thats odd if yes.
21:06 johnmark jvyas: what gave you that idea?
21:06 nik_ joined #gluster
21:08 nik_ hallo
21:12 H__ joined #gluster
21:23 semiosis jvyas: stopping volume only cuts off access to data (kills brick export daemon ,,(processes)) but the data still remains on bricks
21:23 glusterbot information.
21:23 glusterbot jvyas: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more
21:23 semiosis the bartender asks what can I get ya?
21:23 sjoeboo_ joined #gluster
21:24 semiosis a tachyon walks into a bar
21:24 semiosis did anyone else just see glusterbot do that?
21:26 elyograg last word came first.
21:27 andreask joined #gluster
21:27 elyograg hmm.  trying to install fedora 18 on an old server.  first it comes up and asks me what kind of cdrom boot type to use.  three options, all blank.  choosing the first option lets me continue, but then it seems that anaconda never starts.  next step is to try it in text mode, if i get that option.
21:28 ndevos semiosis: do what? http://fpaste.org/LYA5/
21:28 glusterbot Title: Viewing Paste #266174 (at fpaste.org)
21:28 semiosis ndevos: glusterbot broke a long message into two parts and the second part (information) arrived before the first (jvyas: the GlusterFS core...)
21:29 johnmark semiosis: ha!
21:29 johnmark I love that joke
21:30 semiosis someone in here told it not too long ago... was it you?  or jdarcy?
21:30 ndevos semiosis: right, now I see - I'm getting slow and must probably just leave it for today
21:33 badone joined #gluster
21:41 nik_ anyone know if glusterfs support for freebsd 9.1 is coming any time soon?
21:41 johnmark just saw an advisory of a major security flaw in Java 7 r10
21:46 johnmark http://www.kb.cert.org/vuls/id/625617
21:46 glusterbot Title: US-CERT Vulnerability Note VU#625617 - Java 7 fails to restrict access to privileged code (at www.kb.cert.org)
21:49 NashTrash joined #gluster
21:50 NashTrash left #gluster
22:00 badone joined #gluster
22:10 balunasj joined #gluster
22:11 rwheeler joined #gluster
22:12 andrewbogott joined #gluster
22:13 andrewbogott joined #gluster
22:13 andrewbogott joined #gluster
22:37 duerF joined #gluster
22:39 andrewbogott left #gluster
22:45 andreask joined #gluster
22:51 johnmark nik_: sorry, didn't see your question before.
22:51 johnmark nik_: unfortunately, in order to have a port for FreeBSD that works, we need significant help from the FreeBSD community
22:51 johnmark nik_: do you know anyone who could assist in that effort
22:51 johnmark ?
22:52 johnmark jvyas: hey - did you go back?
23:05 schmidmt1 joined #gluster
23:09 aliguori joined #gluster
23:23 schmidmt1 Quick question: I'm planning on running 3 machines with 4TB each with 2 bricks and set up so its distributed-replicated. Any recommendations on the cpu and memory?
23:28 layer3switch joined #gluster
23:29 badone joined #gluster
23:38 jvyas johnmark, hey
23:38 jvyas im here
23:38 elyograg I know this isn't a gluster question, but it's going to come up in relation to gluster.  A network bonding config that works perfectly in centos 6 isn't working in fedora 18.  According to what I can find for fedora 17, it should work for F17.  I can't seem to find any guide specific to F18.
23:49 haidz elyograg, can you pastebin your config?
23:50 haidz doing lacp or gratuitous arp?
23:50 haidz which mode bonding
23:50 elyograg only bonding for redundancy.  I don't have access to the machine at the moment - its network doesn't work. :) I'll copy the config from the other machine.
23:53 elyograg when i put it on the F18 system, I updated HWADDR and UUID values.  http://www.fpaste.org/bamH/
23:53 glusterbot Title: Viewing bonding config from centos by elyograg (at www.fpaste.org)
23:54 elyograg I tried removing NM_CONTROLLED from all the files, because that's not in the ones built by F18 itself.  I also tried changing bonding.conf from netdev-bond0 to bond0 (as mentioned in the guide for F17)
23:56 elyograg I also updated IPADDR PREFIX and GATEWAY to what F18 did - the same thing with 0 appended.
23:56 elyograg none of those changes (each done separately with a reboot) made it work.
23:57 elyograg it doesn't create the bond0 interface, so it's not an ARP problem.
23:59 rags__ joined #gluster
