IRC log for #gluster, 2013-07-30


All times shown according to UTC.

Time Nick Message
00:10 \_pol joined #gluster
00:52 bala joined #gluster
00:53 jebba I have gluster working fine. I want to use it for users' /home/ directories. I'm kind of stuck on the permissions though. I do see a (new?) root-squash option that works. Ideally I would serve each workstation just its one home directory, and not give access to the other directories.
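
A hedged aside on the question above: the usual knobs for this are the root-squash option jebba mentions plus the NFS subdirectory export options. A rough sketch follows; the volume name "homevol" and the directory "/alice" are made up, and option names should be checked against "gluster volume set help" for your release. Note that the native FUSE client of that era always mounts the whole volume, so per-directory exports only apply to the built-in NFS server.

    gluster volume set homevol server.root-squash on      # map root to the anonymous uid on the bricks
    gluster volume set homevol nfs.export-dirs on         # export only the listed subdirectories over NFS
    gluster volume set homevol nfs.export-dir "/alice"    # e.g. expose just one home directory
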
00:59 raghug joined #gluster
01:11 satheesh joined #gluster
01:15 vpshastry joined #gluster
01:27 kevein joined #gluster
01:31 sprachgenerator joined #gluster
01:33 asias joined #gluster
01:57 badone_ joined #gluster
02:19 harish joined #gluster
02:23 raghug joined #gluster
02:25 Chombly joined #gluster
02:28 Chombly Hi, I'm wondering if someone can point me to some examples/info for testing qemu 1.3 libgfapi integration with Gluster. Thanks
02:29 \_pol joined #gluster
02:32 hagarth Chombly: this might help - http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
02:32 glusterbot <http://goo.gl/f2MhH> (at raobharata.wordpress.com)
02:33 hagarth @learn qemu-libgfapi as "http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/"
02:33 glusterbot hagarth: The operation succeeded.
02:33 hagarth @qemu-libgfapi
02:33 glusterbot hagarth: http://goo.gl/f2MhH
02:38 Chombly Hagarth: Thank you,
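
For reference, the post hagarth links is about qemu's gluster:// block-driver URI, new in qemu 1.3. A rough sketch, with made-up host, volume, and image names:

    qemu-img create gluster://server1/testvol/vm1.img 10G
    qemu-system-x86_64 -drive file=gluster://server1/testvol/vm1.img,if=virtio,cache=none ...
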
02:57 kshlm joined #gluster
03:05 sgowda joined #gluster
03:05 bharata joined #gluster
03:30 dhsmith_ joined #gluster
03:38 shubhendu joined #gluster
03:44 dhsmith joined #gluster
03:45 \_pol_ joined #gluster
03:56 aravindavk joined #gluster
04:00 zombiejebus joined #gluster
04:00 vpshastry joined #gluster
04:01 itisravi joined #gluster
04:03 dhsmith joined #gluster
04:04 dhsmith joined #gluster
04:05 bigolbear joined #gluster
04:09 \_pol joined #gluster
04:11 dusmant joined #gluster
04:13 bigolbear Anyone out there? I am new to Gluster and am trying to put together hardware configurations. Can anyone recommend a SAS/SATA HBA? I am thinking about a 24-disk SuperMicro chassis. Anyone have experience with SuperMicro backplanes and SAS expanders?
04:17 bigolbear I will be configuring several volumes. 20TB for home directories, 300+TB for sensor data, and 60TB in an HPC cluster for post processing.
04:19 bigolbear I am using 40Gb InfiniBand for the backend and would like to see read speeds into the tens of Gbps.
04:24 bigolbear I have done a PoC with several VM brick servers and several clients and I like what I see so far but I am concerned with bandwidth.
04:26 saurabh joined #gluster
04:26 bigolbear What is the optimum brick layout? Am I better off with 2, 3, or 4 bricks per server or one big brick per server? These will likely be SATA drives maybe SAS 10k drives in the HPC cluster.
04:28 ngoswami joined #gluster
04:28 bigolbear =-O
04:30 satheesh joined #gluster
04:33 shruti joined #gluster
04:36 bigolbear left #gluster
04:54 CheRi_ joined #gluster
04:58 lalatenduM joined #gluster
05:00 shylesh joined #gluster
05:04 kanagaraj joined #gluster
05:04 ababu joined #gluster
05:06 raghu joined #gluster
05:07 bulde1 joined #gluster
05:18 lalatenduM joined #gluster
05:19 codex joined #gluster
05:23 shireesh joined #gluster
05:37 vijaykumar joined #gluster
05:37 bala joined #gluster
05:38 raghug joined #gluster
05:41 Humble joined #gluster
05:43 shireesh joined #gluster
05:46 rastar joined #gluster
06:01 wushudoin joined #gluster
06:01 mohankumar joined #gluster
06:11 satheesh joined #gluster
06:14 ndarshan joined #gluster
06:14 rjoseph joined #gluster
06:17 dhsmith joined #gluster
06:27 guigui1 joined #gluster
06:31 raghug joined #gluster
06:34 thomaslee joined #gluster
06:35 mooperd joined #gluster
06:37 psharma joined #gluster
06:50 Recruiter joined #gluster
06:56 ekuric joined #gluster
07:03 ctria joined #gluster
07:05 vshankar joined #gluster
07:13 sgowda joined #gluster
07:18 jebba joined #gluster
07:29 flowouffff joined #gluster
07:30 badone_ joined #gluster
07:40 piap joined #gluster
07:41 piap need help on a replicate cluster with 2 servers (SERVER01 and SERVER02)
07:42 ipalaus joined #gluster
07:42 ipalaus joined #gluster
07:42 piap on SERVER01 : peer status
07:42 piap give me : State: Probe Sent to Peer (Connected)
07:42 piap on SERVER02 : peer status
07:42 piap give me : State: Sent and Received peer request (Connected)
07:43 piap normally peer status should show on both: State: Peer in Cluster (Connected)
07:46 ppacory joined #gluster
07:53 ppacory does someone have an idea about my problem?
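
A hedged aside on the stuck probe above: assuming no volumes reference the peer yet, the usual way out is to detach and re-probe after restarting glusterd. Roughly:

    # on SERVER01
    gluster peer detach SERVER02          # add 'force' if it refuses
    service glusterd restart              # run on both servers
    gluster peer probe SERVER02
    gluster peer status                   # both sides should now show: State: Peer in Cluster (Connected)
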
07:56 recidive joined #gluster
08:03 mohankumar joined #gluster
08:09 harish joined #gluster
08:31 psharma joined #gluster
08:33 puebele joined #gluster
08:36 atrius joined #gluster
08:41 shireesh joined #gluster
08:51 puebele1 joined #gluster
08:52 harish joined #gluster
08:53 ndarshan joined #gluster
08:55 itisravi_ joined #gluster
09:01 itisravi joined #gluster
09:02 bharata joined #gluster
09:07 deepakcs joined #gluster
09:12 ricky-ticky joined #gluster
09:15 zombiejebus_ joined #gluster
09:16 guigui5 joined #gluster
09:19 SynchroM joined #gluster
09:21 vincent_vdk joined #gluster
09:21 shylesh joined #gluster
09:23 ipalaus joined #gluster
09:23 ipalaus joined #gluster
09:23 atrius joined #gluster
09:27 pkoro joined #gluster
09:30 spider_fingers joined #gluster
09:48 Humble joined #gluster
09:56 kkeithley1 joined #gluster
10:11 sgowda joined #gluster
10:14 kshlm joined #gluster
10:19 kkeithley joined #gluster
10:20 kkeithley joined #gluster
10:20 chirino joined #gluster
10:23 kkeithley joined #gluster
10:25 Humble joined #gluster
10:27 edward1 joined #gluster
10:28 kkeithley joined #gluster
10:56 Humble joined #gluster
10:58 ipalaus joined #gluster
10:58 ipalaus joined #gluster
10:59 aliguori joined #gluster
11:01 sgowda joined #gluster
11:12 rwheeler joined #gluster
11:17 sas joined #gluster
11:19 chirino joined #gluster
11:20 ndarshan joined #gluster
11:21 CheRi_ joined #gluster
11:22 raghug joined #gluster
11:31 sgowda joined #gluster
11:31 ipalaus joined #gluster
11:31 ipalaus joined #gluster
11:40 CheRi_ joined #gluster
12:10 piotrektt joined #gluster
12:10 piotrektt joined #gluster
12:17 theron joined #gluster
12:19 ngoswami joined #gluster
12:30 _BuBU joined #gluster
12:31 _BuBU seems I have an issue with glusterfs 3.3.2 under Ubuntu:
12:31 _BuBU [2013-07-30 12:28:26.235547] A [mem-pool.h:65:__gf_default_malloc] (-->/usr/lib/libglusterfs.so.0(synctask_wrap+0x12) [0x7f65349bd152] (-->/usr/sbin/glusterfs(glusterfs_handle_translator_op+0x16f) [0x7f6534e1b35f] (-->/usr/lib/libglusterfs.so.0(dict_unserialize+0x21e) [0x7f653498784e]))) : no memory available for size (18446744072683297280)
12:31 _BuBU I have 2 gluster servers with a replica volume
12:32 _BuBU and it seems the self-heal daemon always crashes after some time
12:32 _BuBU both boxes have 16G RAM
12:33 _BuBU http://pastebin.com/kM5sc1MK
12:33 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
12:35 _BuBU https://dpaste.de/mGfp7/
12:35 glusterbot Title: dpaste.de: Snippet #235651 (at dpaste.de)
12:36 kkeithley Yikes, no memory available for size (18446744072683297280)
12:44 ngoswami joined #gluster
12:46 plarsen joined #gluster
12:49 jmalm joined #gluster
12:52 recidive joined #gluster
12:55 chouchins joined #gluster
12:59 ndevos 18446744072683297280 is a lot of bytes!
12:59 duerF joined #gluster
12:59 _BuBU yep :(
12:59 _BuBU some sort of memory leak?
13:01 kkeithley in dict_unserialize, I'd guess something didn't get byte-swapped on the wire. Do you have a core file?
13:01 _BuBU kkeithley: nope
13:01 _BuBU this is a production server so I did not activate any debug/core dumps.
13:01 kkeithley :-(
13:02 bennyturns joined #gluster
13:06 kkeithley either something didn't get byte-swapped or a 32-bit/64-bit mismatch. 18446744072683297280 is 0xFFFFFFFFC2D49A00.  Looks suspicious
13:07 _BuBU this is on 64bits ubuntu 12.04 LTS
13:07 _BuBU both boxes..
13:07 _BuBU and for now I've no issues with the first box
13:10 aliguori joined #gluster
13:10 rastar joined #gluster
13:11 kkeithley I meant a 32-bit/64-bit mismatch in the code
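
The arithmetic behind that suspicion is easy to check with bash's 64-bit signed arithmetic; the decimal size from the log is exactly 0xFFFFFFFFC2D49A00, consistent with a negative 32-bit quantity being sign-extended to 64 bits and then read back as unsigned:

    $ echo $(( 0xFFFFFFFFC2D49A00 ))
    -1026254336
    $ printf '%x\n' $(( 0xFFFFFFFFC2D49A00 & 0xFFFFFFFF ))
    c2d49a00
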
13:19 rcheleguini joined #gluster
13:19 kkeithley joined #gluster
13:30 chirino joined #gluster
13:35 sgowda joined #gluster
13:35 vpshastry joined #gluster
13:40 lalatenduM joined #gluster
13:46 zaitcev joined #gluster
13:50 bennyturns joined #gluster
13:50 jruggiero joined #gluster
13:52 sprachgenerator left #gluster
13:55 aliguori_ joined #gluster
13:58 Technicool joined #gluster
14:02 sprachgenerator joined #gluster
14:07 vpshastry1 joined #gluster
14:16 manik joined #gluster
14:22 lalatenduM joined #gluster
14:23 satheesh joined #gluster
14:30 \_pol joined #gluster
14:30 wushudoin joined #gluster
14:37 kkeithley joined #gluster
14:49 tqrst joined #gluster
14:52 chirino joined #gluster
15:00 ekuric left #gluster
15:02 lpabon joined #gluster
15:09 raghug joined #gluster
15:09 aliguori joined #gluster
15:16 daMaestro joined #gluster
15:18 \_pol joined #gluster
15:18 bugs_ joined #gluster
15:20 vpshastry1 left #gluster
15:22 spider_fingers left #gluster
15:25 jclift joined #gluster
15:25 chirino joined #gluster
15:27 jag3773 joined #gluster
15:40 muhh joined #gluster
15:41 chouchins joined #gluster
15:51 guigui5 left #gluster
15:52 semiosis :O
15:53 hagarth :O
15:54 jclift ?
15:54 manik joined #gluster
15:54 samppah :O
15:55 hagarth jclift: git grep ':O'
15:58 dbruhn jclift, I got all that IB hardware ordered and it's on its way.
15:58 dbruhn Have the cards and everything in hand
15:58 jclift Sweet :)
15:59 dbruhn So hopefully I'll be ready for some testing soon.
15:59 jclift dbruhn: They should "just work" with the drivers that come with the "Infiniband Support" yum group in RHEL/CentOS 6.x, and any recent Fedora.
15:59 jmalm joined #gluster
15:59 jclift dbruhn: Yeah, I've been working on getting a mutli-node testing framework up and running.
16:00 jclift dbruhn: So far, I've got exact autotest kind of working _single node_
16:01 jclift dbruhn: Hmmm, to be clear, I had to learn how to get autotest running at all (badly documented), then write a bunch of Gluster specific utility code to be called from tests. (eg wiping gluster install between tests, etc).
16:02 jclift dbruhn: Got that "done" (to a point I'm ok with) yesterday.
16:02 devopsman Is there a gluster command to display the attributes which are configured with volume set ?
16:02 jclift dbruhn: My next trick is to now start extending it to be multi-node.  At a guess, it'll probably take a few days.  No clear idea yet. ;)
16:03 hagarth jclift: do you have this code hosted somewhere?
16:03 sas devopsman, gluster volume info command should help
16:03 jclift hagarth: Yep: https://github.com/justinclift/autotest-client-tests/tree/gluster_basic
16:03 glusterbot <http://goo.gl/9Z7L5r> (at github.com)
16:04 hagarth jclift: neat, will check this over.
16:04 jclift hagarth: I'll need to write docs on how people can use it (simply) when the multi-node stuff is done.  At the moment it's all just "in my head"
16:04 devopsman thanks
16:04 sas devopsman, welcome :)
16:05 hagarth jclift: makes sense for now :)
16:05 jclift hagarth: I comes down to people needing to check out the main autotest repo (use git clone --recursive, so it grabs the main tests)
16:05 jclift s/I comes/It comes/
16:05 glusterbot jclift: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
16:05 jclift hagarth: Then pretty much copying the gluster* directories from my repo across to the checked out autotest/client/tests/ dir.
16:06 jclift hagarth: The utility code is mostly in the glusterd_start/ directory of my repo.
16:06 jclift To run the tests themselves, cd to the autotest/client/ directory as root, then: rm -rf tmp results; ./autotest-local tests/gluster_volume_tests/control
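
Pulling jclift's steps together into one rough sequence (the repository URLs and branch are taken from the links above; the exact paths are assumed):

    git clone --recursive https://github.com/autotest/autotest.git
    git clone -b gluster_basic https://github.com/justinclift/autotest-client-tests.git
    cp -r autotest-client-tests/gluster* autotest/client/tests/
    cd autotest/client        # as root from here on
    rm -rf tmp results
    ./autotest-local tests/gluster_volume_tests/control
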
16:07 hagarth jclift: ok
16:07 \_pol joined #gluster
16:07 chouchins joined #gluster
16:08 chouchins joined #gluster
16:09 jclift hagarth: If all goes well, this is what the test run looks like in the terminal: http://fpaste.org/29038/
16:09 glusterbot Title: #29038 Fedora Project Pastebin (at fpaste.org)
16:09 jclift You'll probably need to turn off the "wrapping" on fpaste.org, for that to look ok
16:10 hagarth jclift: neat
16:10 jclift hagarth: This is the web based report it generates: http://justinclift.fedorapeople.org/gluster_autotest/201307301710/default/job_report.html
16:11 glusterbot <http://goo.gl/4qBecx> (at justinclift.fedorapeople.org)
16:11 jclift Just uploaded that, from a test run yesterday.
16:11 jclift I need to run for an airport now though, hopefully back online in a few hours. (not sure, depends on wifi access, etc)
16:11 bala joined #gluster
16:11 jclift hagarth: Note, the above tests are pretty much a port of gluster/tests/basic/volume.t, as I needed something reasonable to get going with. ;)
16:12 hagarth jclift: that's some start, nevertheless!
16:12 jclift hagarth: Hopefully it works out ok. :)
16:12 jclift Anyway, got to run. :)
16:12 hagarth jclift: ttyl
16:17 ngoswami joined #gluster
16:18 wushudoin left #gluster
16:20 recidive joined #gluster
16:29 Gilbs1 I'm doing some log checking and am seeing these on my client servers; is this something I should be worried about?
16:29 Gilbs1 volume-dht: found anomalies in /path/path/path. holes=1 overlaps=1
16:33 xavih joined #gluster
16:40 jebba joined #gluster
16:42 ngoswami joined #gluster
16:53 \_pol joined #gluster
16:54 \_pol joined #gluster
17:07 Technicool joined #gluster
17:10 bulde joined #gluster
17:18 Mo_ joined #gluster
17:19 thomaslee joined #gluster
17:26 mooperd joined #gluster
17:27 chirino joined #gluster
17:35 hchiramm_ joined #gluster
17:47 recidive joined #gluster
17:56 ipalaus joined #gluster
17:56 ipalaus joined #gluster
17:58 chirino joined #gluster
18:07 glusterbot New news from resolvedglusterbugs: [Bug 814534] [FEAT] Need cleaner way to do handshake during 'peer probe' <http://goo.gl/q06JH2> || [Bug 819130] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/GfSUw>
18:19 JoeJulian Gilbs1: Take a look at http://joejulian.name/blog/dht-misses-are-expensive/ wrt how dht hashes work. A hole is where there's a gap that's not covered by the masks. An overlap... well, that's obvious now. :D
18:19 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
18:20 JoeJulian The fix is probably a rebalance, at least a fix-layout
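
For reference, the rebalance JoeJulian suggests looks like this (the volume name "myvol" is made up):

    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol status
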
18:21 NuxRo JoeJulian: can you advise on a ports issue, which ports do i need open to allow clients to mount a volume, while not letting them issue "peer" commands?
18:22 JoeJulian NuxRo: The real question is why do you care if they can issue a "peer" command?
18:24 JoeJulian Short answer, though, you can't. The clients pick up their configuration through 24007 and that's the same port used for the management communication.
18:25 JoeJulian The longer answer is, "so what" (ok, that's pretty short)? It's called a "trusted peer group" because the peers trust each other. Once the peer group is established, only a member of that trusted peer group can add another peer.
18:25 * JoeJulian throws up a big asterisk... *
18:26 JoeJulian Not entirely true, since you can also use the cli's --remote-host option to make another management daemon do the dirty work.
18:27 JoeJulian 3.4 has the ability to configure ssl. I'm not sure if that helps though.
18:28 JoeJulian btw... I've already filed a security bug report against the remote-host thing.
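
A rough firewall sketch of what a client of that era needs to reach; as JoeJulian notes, mounts and management share 24007, so a firewall cannot separate the two. Brick ports start at 24009 in 3.3 and move to 49152+ in 3.4; widen the range to cover one port per brick on the server:

    iptables -A INPUT -p tcp --dport 24007 -j ACCEPT          # glusterd: volfile fetch and management
    iptables -A INPUT -p tcp --dport 24009:24011 -j ACCEPT    # brick ports (3.3 defaults, one per brick)
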
18:28 Recruiter joined #gluster
18:29 dhsmith joined #gluster
18:33 dscastro joined #gluster
18:34 JoeJulian hmm, bz seems to be unhealthy today.
18:35 Mick271 joined #gluster
18:36 Mick271 guys, I understand it is best to have a brick on its own partition, but is it mandatory ?
18:36 JoeJulian Nope
18:36 Mick271 ok
18:36 Mick271 thought so
18:37 JoeJulian You just don't want logs and brick data filling up your root partition and the havoc that comes with that.
18:37 tqrst JoeJulian: has it ever been healthy? It's always been slow as molasses for me
18:38 JoeJulian Well, always slow but doesn't /usually/ take 5 minutes for my search that shows which bugs my name's attached to.
18:38 tqrst ah right, it's usually more like 30 seconds
18:39 * tqrst wonders if bugzilla runs on an old forgotten 386 in a closet somewhere
18:39 JoeJulian I don't think bz was meant to scale to the level it's being used by RH.
18:40 NuxRo JoeJulian: if they can do "peer probe existing-server", can't they then issue commands on "gluster volume"?
18:40 JoeJulian Hmm, that's weird. I could have sworn that I filed a bug listing remote-host as a security issue.
18:41 JoeJulian NuxRo: Nope. The existing server will refuse them.
18:41 tqrst there's https://bugzilla.redhat.com/show_bug.cgi?id=880241
18:41 glusterbot <http://goo.gl/rOZ3P> (at bugzilla.redhat.com)
18:41 glusterbot Bug 880241: high, high, ---, kparthas, NEW , Basic security for glusterd
18:41 NuxRo hm, okay, didn't know this; so far I only have 2 peers in my setup and to set it up all I needed to do was just "peer probe" on one of them
18:42 NuxRo hence my confusion
18:42 JoeJulian Right, but once they've established a "trusted pool", nobody else can join that way. Only an existing peer can probe a new server.
18:43 NuxRo ok, good to know
18:43 NuxRo thanks
18:44 NuxRo so far I'm only exporting my volumes via NFS to some customers, I'm too afraid to let them use the native protocol directly
18:44 NuxRo as the bug indicates, gluster needs serious auth/acl
18:52 chirino joined #gluster
18:58 jag3773 joined #gluster
19:10 Gilbs1 JoeJulian:  thanks!
19:13 JoeJulian You're welcome
19:17 Gilbs1 That explains the ---------T. 2 nobody nobody      0 Jul 26 14:07 file I found.
19:18 JoeJulian yep :)
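
A hedged way to inspect one of those zero-byte mode-T entries on a brick (the path is made up): they are DHT link files, and the xattr names the subvolume that actually holds the data.

    getfattr -m . -d -e hex /export/brick1/path/to/file    # look for trusted.glusterfs.dht.linkto
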
19:18 JoeJulian semiosis: http://lists.gnu.org/archive/html/gluster-devel/2013-07/msg00137.html
19:18 glusterbot <http://goo.gl/M4UtBF> (at lists.gnu.org)
19:21 semiosis JoeJulian: why rfc3164 instead of the much better iso8601?
19:22 JoeJulian Because I did the search on my phone...
19:22 JoeJulian dammit
19:22 JoeJulian I can never remember which one is which...
19:22 semiosis lol
19:23 semiosis iso8601 is 2013-01-01T13:02:12
19:23 semiosis sorts naturally
19:24 semiosis idk how any computer person could ever write any other date
19:24 semiosis it's so obvs
19:24 JoeJulian I just searched logstash timestamp and that's the one that came up... <sigh>
19:25 semiosis thats odd
19:26 JoeJulian Anyway.... json logs... :)
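
A quick illustration of the ISO 8601 point (GNU date): lexicographic order equals chronological order for this format, which is what makes plain sorting of log lines work, unlike RFC 3164's "Jul 30 19:23:00" form.

    date -u +%Y-%m-%dT%H:%M:%S     # e.g. 2013-07-30T19:23:00
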
19:31 zaitcev joined #gluster
19:36 jag3773 joined #gluster
19:41 plarsen joined #gluster
19:50 theron joined #gluster
19:57 bulde joined #gluster
20:17 karthik joined #gluster
20:27 terje joined #gluster
20:50 cfeller joined #gluster
20:58 jag3773 joined #gluster
21:01 chirino joined #gluster
21:04 mooperd joined #gluster
21:06 mooperd_ joined #gluster
21:17 \_pol joined #gluster
21:18 plarsen joined #gluster
21:19 nightwalk joined #gluster
21:24 \_pol_ joined #gluster
21:34 mooperd joined #gluster
21:39 mooperd_ joined #gluster
21:40 sprachgenerator joined #gluster
21:41 theron joined #gluster
21:43 \_pol joined #gluster
21:47 \_pol_ joined #gluster
21:51 jag3773 joined #gluster
21:57 neofob left #gluster
22:14 \_pol joined #gluster
22:15 mrfsl joined #gluster
22:20 y4m4 joined #gluster
22:24 y4m4 joined #gluster
22:25 \_pol joined #gluster
22:33 y4m4 joined #gluster
22:34 y4m4 joined #gluster
22:36 zombiejebus joined #gluster
22:37 y4m4 joined #gluster
22:38 mooperd joined #gluster
22:39 mrfsl Hello all. I have a gluster environment with 3 gluster nodes, replica 2, and a total of 12 bricks. How do you benchmark this? I tried dd, but that is only really writing to one brick and its replica, which isn't really accurate, right?
22:39 mrfsl Suggestions?
22:39 JoeJulian identify how you want it to be used in production and design tests to emulate that.
22:39 semiosis NEEDS MOAR DD
22:39 JoeJulian hehe
22:40 mrfsl more dd processes.... makes sense.
22:40 JoeJulian Or just throw it in production and see if anyone complains. ;)
22:40 JoeJulian @joe's performance metric
22:40 glusterbot JoeJulian: nobody complains.
22:40 semiosis yeah, dd is "less than ideal" but if you run a whole bunch of them on several client machines you may be able to approximate an actual workload
22:41 mrfsl well I have tried to emulate a workload with bonnie++ --- like this:
22:41 mrfsl bonnie++ -d ./ -n 64:5242880:16384:250
22:42 mrfsl and I could run a few of these.... but the real problem is that I never seem to be able to saturate the network (1 Gb)
22:42 mrfsl ultimately we are trying to determine whether there is a benefit to a 10GbE network upgrade or not
22:42 y4m4 mrfsl: iozone or IOR are better alternatives than bonnie++
22:42 y4m4 mrfsl: http://sourceforge.net/projects/ior-sio/ - IOR
22:43 glusterbot Title: IOR HPC Benchmark | Free System Administration software downloads at SourceForge.net (at sourceforge.net)
22:43 \_pol_ joined #gluster
22:44 semiosis mrfsl: large block size.  character & small chunk tests are going to perform poorly.  dd bs=1M for example
22:44 y4m4 mrfsl: bonnie++ small block i/o is a bad fit for benchmarking, since the GlusterFS client (if you are using 3.3.x) doesn't do caching, so saturating a link is out of the question
22:44 semiosis mrfsl: a bunch of dd with bs=1M should be able to saturate a single glusterfs client machine's nic
22:45 y4m4 mrfsl: what semiosis said ^^
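
A rough sketch of the "bunch of dd with bs=1M" approach; the mount point, file count, and sizes are made up, and it should be run from several clients at once to load all of the bricks:

    for i in $(seq 1 8); do
      dd if=/dev/zero of=/mnt/gluster/stream.$i bs=1M count=4096 conv=fsync &
    done
    wait
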
22:46 semiosis how's it going y4m4?
22:46 semiosis and thanks for that ,,(undocumented options) article!
22:47 glusterbot Undocumented options for 3.4: http://goo.gl/Lkekw
22:47 mrfsl and so... if I have an application which writes millions of 16KiB files? Then gluster is a bad idea? Or do I have it improperly tuned?
22:48 mrfsl Or use nfs?
22:49 semiosis mrfsl: replication is sensitive to latency. the penalty is amortized over larger data ops, but with tiny data ops just checking if the replica metadata is in sync dominates the operation
22:50 semiosis best way to reduce that penalty (without giving up replication altogether) is to reduce latency... SSD + infiniband
22:50 semiosis imho
22:50 semiosis gotta run
22:50 semiosis ttyl
22:50 mrfsl thank you
22:50 semiosis yw
22:52 daMaestro joined #gluster
22:52 daMaestro joined #gluster
23:07 mrfsl left #gluster
23:07 raghug joined #gluster
23:20 fidevo joined #gluster
23:21 daMaestro joined #gluster
