IRC log for #gluster, 2014-05-28


All times shown according to UTC.

Time Nick Message
00:01 jbd1 glusterfs code review bug: https://bugzilla.redhat.com/show_bug.cgi?id=1086460
00:01 glusterbot Bug 1086460: high, high, 3.4.4, kkeithle, MODIFIED , Ubuntu code audit results (blocking inclusion in Ubuntu Main repo)
00:04 mattappe_ joined #gluster
00:17 jbd1 http://www.gluster.org/community/documentation/index.php/Getting_started_install#For_Ubuntu
00:17 glusterbot Title: Getting started install - GlusterDocumentation (at www.gluster.org)
00:17 jbd1 (I updated the ubuntu install instructions)
00:23 theron joined #gluster
00:24 Ark joined #gluster
00:42 yinyin_ joined #gluster
00:48 recidive joined #gluster
00:48 yinyin- joined #gluster
00:55 lyang0 joined #gluster
01:05 badone joined #gluster
01:06 mattappe_ joined #gluster
01:09 mattapp__ joined #gluster
01:14 theron joined #gluster
01:14 bala joined #gluster
01:15 liammcdermott joined #gluster
01:16 theron_ joined #gluster
01:20 harish joined #gluster
01:20 theron joined #gluster
01:27 mattapperson joined #gluster
01:28 gtobon joined #gluster
01:29 gtobon I have the following issue: gluster volume geo-replication gv0_shares 10.1.163.170::/gv0_shares_dr start
01:29 gtobon One or more nodes do not support the required op version.
01:29 gtobon geo-replication command failed
01:29 gtobon any Ideas?
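The "op version" failure above usually means at least one peer in the pool is still running an older glusterfs than geo-replication requires. A minimal check, assuming every node uses the default glusterd state directory (no node names are known here, so run it on each peer):

    # installed version and the cluster operating version, per node
    glusterfs --version | head -1
    grep operating-version /var/lib/glusterd/glusterd.info

Once every peer reports the same, new enough operating-version, the geo-replication start command should go through.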
01:37 mattappe_ joined #gluster
01:54 theron joined #gluster
01:54 sjm joined #gluster
01:56 jobewan joined #gluster
02:04 mjsmith2 joined #gluster
02:13 harish joined #gluster
02:29 vpshastry joined #gluster
02:30 Ark_ joined #gluster
02:31 overclk_ joined #gluster
02:33 k3rmat joined #gluster
02:41 rjoseph joined #gluster
02:45 mattappe_ joined #gluster
02:51 bharata-rao joined #gluster
02:58 sjm joined #gluster
02:59 haomaiwang joined #gluster
03:09 sjm left #gluster
03:37 itisravi joined #gluster
03:42 plarsen joined #gluster
03:42 d-fence joined #gluster
03:46 nishanth joined #gluster
03:47 shubhendu joined #gluster
03:51 kanagaraj joined #gluster
03:54 ppai joined #gluster
03:54 shubhendu_ joined #gluster
04:04 Pupeno_ joined #gluster
04:08 latha joined #gluster
04:11 sputnik13 joined #gluster
04:22 psharma joined #gluster
04:36 sputnik13 joined #gluster
04:36 ndarshan joined #gluster
04:43 hagarth joined #gluster
04:48 dusmant joined #gluster
04:50 RameshN joined #gluster
04:53 spandit joined #gluster
04:53 sputnik13 joined #gluster
05:00 kdhananjay joined #gluster
05:01 bala joined #gluster
05:02 jiku joined #gluster
05:05 ceddybu joined #gluster
05:06 kshlm joined #gluster
05:06 ceddybu Hello! Can a server be both a GlusterFS server and client?
05:06 ceddybu Using a dedicated raid array just for the bricks
05:10 RameshN joined #gluster
05:12 aravindavk joined #gluster
05:25 ceddybu anybody around?
05:26 sjm joined #gluster
05:27 lalatenduM joined #gluster
05:31 davinder11 joined #gluster
05:35 askb joined #gluster
05:38 vpshastry joined #gluster
05:40 baojg joined #gluster
05:42 davinder13 joined #gluster
05:42 purpleidea ceddybu: you can be a server and a client, yes. i g2g though
05:43 vpshastr1 joined #gluster
05:44 vpshastr1 left #gluster
05:44 vpshastr1 joined #gluster
05:45 vpshastr1 left #gluster
05:45 vpshastr1 joined #gluster
05:46 ceddybu purpleidea: thanks!
05:46 vpshastry left #gluster
05:46 purpleidea ceddybu: if you can explain your question better it might help get a better answer
05:47 raghu joined #gluster
05:47 ceddybu you answered it pretty much. currently trying to replace lsyncd
05:47 baojg_ joined #gluster
05:50 purpleidea ceddybu: just don't read/touch the data directly in the brick dir. mount to itself is okay, get it?
05:50 baojg_ joined #gluster
05:50 ceddybu ya i plan on using the native gluster fuse client
05:50 purpleidea +1 :)
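A minimal sketch of the self-mount purpleidea describes, with placeholder names (volume gv0, brick under /data/brick1, mount point /mnt/gv0): the brick directory is never touched directly, and the application only ever sees the FUSE mount.

    # the host serves a brick from its RAID array, e.g. /data/brick1/gv0
    # and mounts the volume it serves through the native FUSE client
    mount -t glusterfs localhost:/gv0 /mnt/gv0
    # or persistently, in /etc/fstab:
    # localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev  0  0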
05:52 prasanthp joined #gluster
05:52 dusmant joined #gluster
05:52 ceddybu is it common to set up gluster on web servers? I only have 1GB network so I figure some requests will just hit the local machines
05:54 rjoseph joined #gluster
05:55 hagarth joined #gluster
05:58 kumar joined #gluster
05:58 sjm left #gluster
06:01 morfair joined #gluster
06:05 morfair Hi all! Will this system work? I want read performance with two servers. http://media.tvoynet.ru/upload/img/2014-05/28/q9zmpoeh63cf0a5xj19c9xl43.jpg
06:05 morfair # gluster volume create gv0 replica 2 transport tcp server1:/brick1 server2:/brick3 server2:/brick4 server1:/brick2
06:09 karnan joined #gluster
06:18 ricky-ticky joined #gluster
06:20 davinder13 joined #gluster
06:21 ceddybu morfair: like this? http://i.imgur.com/xpww8DL.png
06:21 vimal joined #gluster
06:22 karimb joined #gluster
06:22 ceddybu do you care about data loss ?
06:23 ceddybu i would failover test that fo sho
06:24 morfair ceddybu, yes, like this.
06:25 morfair in this order of bricks (in the create volume command) data should not be lost
06:25 morfair no?
06:26 morfair every brick will have its replica on another server
06:26 ceddybu i think you can lose one of the brick, but in a real scenario, the entire server would probably fail
06:27 morfair my image is not the same as your image
06:27 ceddybu 4 bricks on 2 servers right ?
06:27 ceddybu my image is with 4 servers correct
06:27 morfair yes. replica bricks on other node
06:28 morfair http://media.tvoynet.ru/upload/img/2014-05/28/q9zmpoeh63cf0a5xj19c9xl43.jpg
06:28 morfair replica 2 bricks 1 3 4 2
06:30 ceddybu if you are going for read performance over data integrity then i think it will work, do you have a test environment?
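For what it is worth, "replica 2" pairs bricks in the order they are listed, so the command morfair quotes makes brick1/brick3 one replica pair and brick4/brick2 the other; the interleaving is what keeps each pair spread across both servers. Re-stated as a sketch with the same names:

    # replica pair 1 = server1:/brick1 + server2:/brick3
    # replica pair 2 = server2:/brick4 + server1:/brick2
    gluster volume create gv0 replica 2 transport tcp \
        server1:/brick1 server2:/brick3 \
        server2:/brick4 server1:/brick2

Listing two bricks of the same server next to each other would instead keep both copies of some files on one machine, which is the data-loss scenario ceddybu is warning about.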
06:31 hagarth joined #gluster
06:31 morfair no(
06:31 morfair i'm just planning
06:31 morfair i have not 4 nodes
06:33 rjoseph joined #gluster
06:33 Philambdo joined #gluster
06:33 ceddybu hopefully someone that knows more can help
06:33 dusmant joined #gluster
06:34 morfair ceddybu, thanks
06:34 vpshastr1 joined #gluster
06:46 kdhananjay joined #gluster
06:58 rahulcs joined #gluster
06:59 ctria joined #gluster
07:02 vpshastry joined #gluster
07:02 vpshastry left #gluster
07:02 vpshastr1 joined #gluster
07:10 vpshastry1 joined #gluster
07:10 kaushal_ joined #gluster
07:11 vpshastry1 left #gluster
07:12 vpshastry2 joined #gluster
07:13 vpshastry1 joined #gluster
07:14 warci joined #gluster
07:16 eseyman joined #gluster
07:26 mbukatov joined #gluster
07:26 keytab joined #gluster
07:28 haomaiwa_ joined #gluster
07:31 ProT-0-TypE joined #gluster
07:33 fsimonce joined #gluster
07:37 haomaiw__ joined #gluster
07:42 kdhananjay joined #gluster
07:44 nshaikh joined #gluster
07:45 coredumb i have huge memory leaks on gluster 3.5... is that a known issue ?
07:46 coredumb using NFS to rsync 40GB, i have 16GB of memory + 2GB of swap completely hammered by glusterfs processes
07:46 coredumb :/
07:46 kshlm joined #gluster
07:47 ngoswami joined #gluster
07:48 coredumb my nfs log is filled with "[rpc-drc.c:499:rpcsvc_add_op_to_cache] 0-rpc-service: DRC failed to detect duplicates" not sure if it's linked
07:48 Sunghost joined #gluster
07:50 coredumb any idea?
07:51 baojg joined #gluster
07:51 coredumb ok seems like i'm not alone
07:51 coredumb digging this out
07:52 Sunghost hi, i setup a new server with distributed volume - i copied the first 7TB of data from old one and now i see my root is full with vol1.log of over 5GB - is this normal?
07:52 Sunghost in log i see "txattr failed on key trusted.glusterfs.dht.linkto (No data available)" <- what does this mean? the vol1 actually exists of only one brick
07:54 Sunghost system is debian wheezy with glusterfs 3.5
07:56 vpshastry joined #gluster
07:59 Sunghost filesystem is xfs
08:03 monotek joined #gluster
08:03 vpshastry1 joined #gluster
08:05 haomaiwa_ joined #gluster
08:07 haomaiw__ joined #gluster
08:07 liquidat joined #gluster
08:11 Ark joined #gluster
08:13 meghanam joined #gluster
08:13 meghanam_ joined #gluster
08:21 monotek left #gluster
08:21 rgustafs joined #gluster
08:22 haomaiwang joined #gluster
08:24 rahulcs joined #gluster
08:26 Sunghost no help``
08:34 ndevos coredumb: I'm not aware of anyone that filed a bug for that, if you found one, please let me know
08:35 coredumb ndevos: http://gluster.org/pipermail/gluster-devel/2014-March/028712.html
08:35 glusterbot Title: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak (at gluster.org)
08:35 coredumb i've set nfs.drc off and restarted my gluster daemon and reloaded my rsync
08:35 coredumb will see if it's better
08:35 ndevos coredumb: oh, right, I've disabled nfs.drc on my setup too
08:35 coredumb :D
08:36 coredumb what's DRC ?
08:36 ndevos coredumb: Duplicate Request Cache
08:36 coredumb ok
08:37 ndevos coredumb: it caches calls/replies and on a call that was cached already, the same reply gets send
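For reference, the workaround coredumb applied is a single volume option; a sketch with a placeholder volume name (he also restarted the gluster daemon afterwards, as noted above):

    gluster volume set myvol nfs.drc off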
08:37 coredumb any chance to have better throughput than ~30MB/s - two node two bricks replica 2 over Gb link ?
08:37 haomaiw__ joined #gluster
08:37 coredumb ndevos: ok
08:38 ndevos better throughput should be possible, but I dont know where the bottleneck would be
08:39 coredumb yikes rebuilding rsync file list gets painfully slow ^^
08:46 glusterbot New news from newglusterbugs: [Bug 1101942] Unable to peer probe 2nd node on distributed volume (3.5.1-0.1.beta1) <https://bugzilla.redhat.com/show_bug.cgi?id=1101942>
08:47 coredumb ndevos: not sure you saw my quorum question yesterday
08:47 coredumb if i understood correctly, i can add a node to the peer list that doesn't have to host bricks and will only be used as a "whitness" to prevent split brains right ?
08:49 rastar joined #gluster
08:52 nishanth joined #gluster
08:52 dusmant joined #gluster
08:53 Thilam Hi, I've a new question regarding quota hagarth, if you have a min
08:53 Thilam the first time I tried to enable it on a volume I got the error message: quota: Could not start quota auxiliary mount
08:53 Paul-C joined #gluster
08:53 Thilam I've looked at log and find that /var/run/gluster folder wasn't created
08:54 Thilam So I created it and all worked fine
08:54 Thilam today I've rebooted all my servers
08:54 Thilam and surprise, I've got the same error message
08:54 Thilam all the /var/run/gluster folders have disappeared !
08:55 Thilam again, it works if I manually create them, but that doesn't seem to be the right behaviour
08:56 Thilam have you an idea on it?
08:57 Paul-C left #gluster
08:58 haomaiwa_ joined #gluster
09:00 DV joined #gluster
09:00 coredumb Thilam: which distro ?
09:01 harish joined #gluster
09:01 hagarth Thilam: is /var/ or /var/run a tmpfs partition which gets wiped upon reboot?
09:02 coredumb that was my guess
09:02 haomaiw__ joined #gluster
09:03 Thilam ./var/run/gluster
09:03 Sunghost Hi, i must ask again - i have lots of "txattr failed on key trusted.glusterfs.dht.linkto (No data available)" message in node logfile over 5GB <- what does it mean
09:04 Thilam hum, you think all content of /var/run is wiped after each reboot?
09:04 Thilam debian wheezy for coredumb
09:04 rahulcs joined #gluster
09:05 Thilam so if yes, the problem is gluster doesn't create this directory at start
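On Debian wheezy /var/run lives on tmpfs, so the directory Thilam created by hand disappears at every reboot; until gluster creates it itself, it has to be recreated before glusterd starts. A hedged sketch, assuming the stock sysvinit glusterfs-server init script is what starts glusterd:

    # added near the top of the start) case in /etc/init.d/glusterfs-server
    mkdir -p /var/run/gluster

Any early-boot hook that runs before glusterd would do equally well; the point is only that the directory must exist before the quota auxiliary mount is attempted.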
09:06 hagarth Sunghost: is it getxattr at the beginning of the log message?
09:06 getup- joined #gluster
09:07 ndevos coredumb: for the quorum question, your "whitness" is more commonly called "arbiter"
09:07 ndevos coredumb: and yes, you can configure quorum to use an arbiter (brickless) node
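A sketch of what ndevos describes, with placeholder names: the third node only joins the trusted pool and votes in server-side quorum, it never hosts a brick.

    # from an existing node, add the brickless arbiter to the pool
    gluster peer probe arbiter1
    # enforce server-side quorum for the volume
    gluster volume set myvol cluster.server-quorum-type server
    # with 3 peers and the default >50% ratio, any 2 surviving peers keep bricks up

These are the server-quorum options of the 3.4/3.5 era; the dedicated "replica 3 arbiter 1" volume type arrived in later releases.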
09:11 karimb hi guys, how can i copy permissions from a windows share to a gluster volume ?
09:12 Sunghost hagarth - sorry, time was important so i deleted the file; since then there is no new log file <- i think it will be recreated on service restart, right?
09:13 Sunghost @karimb - i would preferred samba to access the files on a vol for windows clients
09:14 Sunghost @karimb - perhabs with openldap in large networks <- but its not tested by me
09:14 Sunghost forgett the last samba can integrated in active directory so it must work all over samba and ad
09:16 karimb Sunghost, it does
09:16 karimb but permissions aren't carried over
09:16 Sunghost @hagarth - i monitor my disk on root and since i started copying the files the disk is getting fuller
09:16 glusterbot New news from newglusterbugs: [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594> || [Bug 1086743] Add documentation for the Feature: RDMA-connection manager (RDMA-CM) <https://bugzilla.redhat.com/show_bug.cgi?id=1086743>
09:17 haomaiwang joined #gluster
09:19 vpshastry joined #gluster
09:21 hagarth Sunghost: are you running your bricks with log level DEBUG?
09:21 RameshN joined #gluster
09:22 nishanth joined #gluster
09:22 Sunghost just default after installation
09:22 Sunghost please help to check this
09:23 coredumb ndevos: that's neat
09:24 coredumb ndevos: are there any particular requirements for this node ? like CPU/RAM, or would any low-resources node added to the trusted pool do ? Just peer probing it, right ?
09:25 Sunghost as far as i can read in the documentation its by default set to info <- right?
09:26 Sunghost gluster volume info shows no extra configuration
09:28 Sunghost docu is for 3.2 and as i can see in release info for 3.5 there are many changes for logging too <- right
09:31 rahulcs joined #gluster
09:32 Slashman joined #gluster
09:34 hagarth Sunghost: yes, by default log level is set to info
09:34 Sunghost ok
09:35 Sunghost but what could be the problem? i installed fresh with 3.5 set only allow for local network and copied files from another client via fuse mount
09:35 haomaiwang joined #gluster
09:35 [o__o] joined #gluster
09:36 coredumb where can i find details about client side file encryption ?
09:38 hagarth Sunghost: cannot see any obvious problems. Can you fpaste the logs if you happen to observe it again?
09:41 Sunghost yes i can - if the copy job ends i can restart glusterfs service so the file should recreated <- right? then i will watch at it while i copy new files
09:42 vpshastry joined #gluster
09:42 hagarth Sunghost: yes, once the glusterfs service is restarted, the log files would be recreated
09:43 karnan joined #gluster
09:44 Sunghost ok thanks hagarth
09:46 glusterbot New news from newglusterbugs: [Bug 1086755] Add documentation for the Feature: readdir-ahead <https://bugzilla.redhat.com/show_bug.cgi?id=1086755>
09:47 vpshastry joined #gluster
09:50 dusmant joined #gluster
09:53 rahulcs joined #gluster
09:57 rahulcs joined #gluster
10:06 rahulcs_ joined #gluster
10:07 jiku joined #gluster
10:12 rahulcs_ joined #gluster
10:16 glusterbot New news from newglusterbugs: [Bug 1101993] [SNAPSHOT]: While rebalance is in progress as part of remove-brick the snapshot creation fails with prevalidation <https://bugzilla.redhat.com/show_bug.cgi?id=1101993>
10:28 vimal joined #gluster
10:53 diegows joined #gluster
10:54 ProT-0-TypE joined #gluster
11:00 tdasilva joined #gluster
11:01 andreask joined #gluster
11:02 ppai joined #gluster
11:03 rtalur_ joined #gluster
11:09 vimal joined #gluster
11:10 baojg joined #gluster
11:12 B21956 joined #gluster
11:12 B21956 joined #gluster
11:13 getup- joined #gluster
11:17 nbalachandran joined #gluster
11:17 ricky-ticky1 joined #gluster
11:19 [o__o] joined #gluster
11:22 hagarth joined #gluster
11:28 glusterbot New news from resolvedglusterbugs: [Bug 1086759] Add documentation for the Feature: Improved block device translator <https://bugzilla.redhat.com/show_bug.cgi?id=1086759>
11:29 Philambdo joined #gluster
11:29 mattapperson joined #gluster
11:37 ricky-ticky joined #gluster
11:42 ira joined #gluster
11:54 baojg joined #gluster
11:55 mattappe_ joined #gluster
11:58 ricky-ticky joined #gluster
12:01 ricky-ticky1 joined #gluster
12:02 itisravi joined #gluster
12:07 ndevos coredumb: little resources should be ok, just depends if you also want to use it as an nfs server (enabled by default)
12:08 mattapperson joined #gluster
12:12 coredumb ndevos: ok, don't plan to use it as NFS, just as an arbiter
12:14 coredumb a VIP between the other two for NFS will be sufficient :)
12:16 vincent_vdk joined #gluster
12:17 vincent_vdk joined #gluster
12:19 rahulcs joined #gluster
12:24 mattapperson joined #gluster
12:28 edward1 joined #gluster
12:31 tdasilva joined #gluster
12:35 jag3773 joined #gluster
12:44 tdasilva left #gluster
12:46 tdasilva joined #gluster
12:46 theron joined #gluster
12:48 chirino joined #gluster
12:59 [o__o] joined #gluster
13:00 japuzzo joined #gluster
13:03 sroy_ joined #gluster
13:03 monotek joined #gluster
13:07 mattapperson joined #gluster
13:08 Ark joined #gluster
13:11 haomaiwa_ joined #gluster
13:14 rwheeler joined #gluster
13:17 sjm joined #gluster
13:21 mattapperson joined #gluster
13:25 [o__o] joined #gluster
13:26 plarsen joined #gluster
13:28 haomaiw__ joined #gluster
13:32 rahulcs joined #gluster
13:35 mjsmith2 joined #gluster
13:37 mattapperson joined #gluster
13:44 sjoeboo joined #gluster
13:51 hagarth joined #gluster
13:56 bennyturns joined #gluster
13:59 zaitcev joined #gluster
14:01 itisravi joined #gluster
14:07 marbu joined #gluster
14:15 kshlm joined #gluster
14:21 wushudoin joined #gluster
14:26 _dist joined #gluster
14:33 mattapperson joined #gluster
14:33 harish joined #gluster
14:33 davinder13 joined #gluster
14:38 mattappe_ joined #gluster
14:41 chirino_m joined #gluster
14:42 karimb hello buddies
14:42 karimb to copy permissions from a windows share to a gluster volume, do i need some special permissions on the user mouting the share ?
14:43 jag3773 joined #gluster
14:44 RameshN joined #gluster
14:44 Ulrar_ joined #gluster
14:45 Ulrar_ Hi, I used http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-ubuntu-11.10-automatic-file-replication-across-two-storage-servers-p2 but I don't understand. In the fstab, if I use server1, when server1 is down I will not be able to mount the storage, am I wrong ?
14:45 glusterbot Title: High-Availability Storage With GlusterFS 3.2.x On Ubuntu 11.10 - Automatic File Replication Across Two Storage Servers - Page 2 | HowtoForge - Linux Howtos and Tutorials (at www.howtoforge.com)
14:45 Ulrar_ Shouldn't there be an IP shared by the cluster ?
14:54 mbukatov joined #gluster
14:55 _dist Ulrar: mount point does need to be up at the time your fstab mounts it, after that (if you're using the gluster client) it won't matter which one stays up
14:58 Ulrar_ _dist: Well if server1 is down, the mount just fails
14:58 Norky there used to be a way to specify an alternate "volfile" server for gluster-FUSE mounts
14:58 Ulrar_ since it's the server specified in the fstab
14:58 Norky does that still exist in 3.4/3.5?
14:59 Norky mount -t glusterfs -o backupvolfile-server=server2 server1:volume /mnt/volume    for example
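The same option works from /etc/fstab, which is what Ulrar_'s question is really about; a sketch with placeholder names:

    # /etc/fstab
    server1:/volume  /mnt/volume  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0

The named server is only used to fetch the volume definition at mount time; once mounted, the client talks to all brick servers directly, so losing server1 afterwards does not take the mount down.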
15:00 JustinClift *** Gluster Community Meeting time is NOW: #gluster-meeting on irc.freenode.net ***
15:00 sprachgenerator joined #gluster
15:01 harold joined #gluster
15:07 Ulrar_ Looks better with a volfile
15:07 Ulrar_ using this http://superrb.com/blog/2011/10/14/high-availability-file-system-for-load-balanced-webservers-with-glusterfs-and-ubuntu
15:07 Ulrar_ But it still freezes if the storage server is disconnected while it was mounted
15:08 Ulrar_ Ho no I'm wrong
15:08 Ulrar_ it switches to the next server, just takes a minute
15:08 Ulrar_ Cool
15:10 lmickh joined #gluster
15:11 dusmant joined #gluster
15:12 jiku joined #gluster
15:13 baojg joined #gluster
15:16 rotbeard joined #gluster
15:29 mattapperson joined #gluster
15:31 _dist I was wondering if anyone has done any work with gluster storage for vms, the heal time seems quite long, they are all always healing so it's "tricky" to tell what's healed and what isn't etc.
15:31 nbalachandran joined #gluster
15:40 daMaestro joined #gluster
15:42 recidive joined #gluster
15:43 jbd1 joined #gluster
15:53 sjm joined #gluster
15:56 mattapperson joined #gluster
16:18 mattapperson joined #gluster
16:20 vimal joined #gluster
16:22 ProT-O-TypE joined #gluster
16:24 ProT-0-TypE joined #gluster
16:25 churnd left #gluster
16:25 Mo__ joined #gluster
16:25 churnd joined #gluster
16:29 ProT-O-TypE joined #gluster
16:29 sputnik13 joined #gluster
16:30 ProT-0-TypE joined #gluster
16:34 mattappe_ joined #gluster
16:35 mattappe_ joined #gluster
16:40 vpshastry joined #gluster
16:42 chirino joined #gluster
16:44 vpshastry1 joined #gluster
16:56 DanishMan joined #gluster
16:59 hchiramm_ joined #gluster
16:59 lpabon joined #gluster
17:02 sjusthome joined #gluster
17:07 diegows joined #gluster
17:08 lkoranda joined #gluster
17:09 zaitcev joined #gluster
17:09 firemanxbr joined #gluster
17:13 jag3773 joined #gluster
17:16 sjm joined #gluster
17:18 glusterbot New news from newglusterbugs: [Bug 1102293] NFS subdir authentication doesn't correctly handle multi-(homed,protocol,etc) network addresses <https://bugzilla.redhat.com/show_bug.cgi?id=1102293>
17:25 plarsen joined #gluster
17:31 _dist Is there any way I can walk through the steps of how gluster is reaching the conclusion that something needs to be healed? I really want to get to the bottom of why my VM images heal all day long.
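One way to watch that decision being made is to inspect the AFR changelog extended attributes on each brick's copy of a file; a sketch, assuming bricks under /data/brick1 and a VM image path (both placeholders):

    # run on each replica server, against the file's path inside the brick
    getfattr -m . -d -e hex /data/brick1/images/vm1.img
    # non-zero trusted.afr.<volume>-client-N values mean this copy is holding
    # pending changes for the copy held by client-N, i.e. a heal is still owed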
17:33 hchiramm_ joined #gluster
17:37 kmai007 joined #gluster
17:37 kmai007 can two gluster-fuse mounts from different gluster pools on the same client affect each other?
17:39 recidive joined #gluster
17:41 _dist kmai007: can you rephrase the question? Maybe a bit more detail?
17:41 kmai007 i have a client that has 2 gluster-fuse mounts mounted to it
17:41 kmai007 from different gluster pools
17:41 kmai007 so 1 volume is from a 8 -node gluster pool
17:42 kmai007 and another volume is on a 4 node gluster pool
17:42 kmai007 mounted to the server on different mount points
17:42 kmai007 i experienced a degrade in performance, and from the gluster log it shows disconnected
17:42 _dist so your client machine has mounted two different volumes. Unless something is setup wrong they should be indepedent.
17:42 kmai007 at the same time frame,
17:43 _dist wrong/weird (could be intetional)
17:43 kmai007 correct
17:43 kkeithley Two mount points, two glusterfs client processes. Writes to one mount point go through fuse to one glusterfs process, and then to the associated server.  Writes to the other mount point go through fuse to the other glusterfs process, and then to its associated server.
17:44 kmai007 i'm not sure what is the commonality other then the clients that mount them
17:44 kkeithley It's hard for me to imagine how they could affect each other. Unless there's a bug in FUSE.
17:44 kmai007 exactly!
17:44 kmai007 i'm going looney
17:47 kmai007 ok here is a left field question
17:48 kmai007 i'm running glusterfs 3.4.2 , i had profiling turned on the gluster storage pools,
17:48 kmai007 i turned them off across all the storage servers
17:48 kmai007 would that have caused a disconnect domino effect across my clients?
17:49 kmai007 i know when there is a cli execution there is a vol change that is sent out to all the clients?
17:50 semiosis simplest explanation is your client machine experienced a network problem
17:51 kmai007 i would agree, but would that cause a domino effect on outstanding operations, that would cause the clients to drop and reconnect across all storage nodes?
17:52 kmai007 only the logs will tell, i know, but i found it so hard to see how
17:52 kmai007 i guess its all the same clients,
17:52 kmai007 on the same network
17:52 kmai007 thats the only way it would show degraded performance across silo'd gluster pools
17:54 _dist So I know I'm nagging, but one of my replica bricks was down for about 10 min, and it's been self healing for 5 hours, heal info still shows GFIDs (I've noticed they resolve to file names when things are closer to being done). Anything I can do?
18:00 glusterbot New news from resolvedglusterbugs: [Bug 918917] 3.4 Alpha3 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=918917> || [Bug 951549] license: xlators/protocol/server dual license GPLv2 and LGPLv3+ <https://bugzilla.redhat.com/show_bug.cgi?id=951549> || [Bug 951551] license: xlators/protocol/server dual license GPLv2 and LGPLv3+ <https://bugzilla.redhat.com/show_bug.cgi?id=951551>
18:00 kmai007 can making vol file changes on the fly cause disconnects for the clients?
18:01 lpabon joined #gluster
18:01 _dist kmai007: It can, depending on what you're changing, I've noticed certain options changes cause disconnect
18:04 lpabon joined #gluster
18:04 kmai007 yikes, _dist do you know if its "profiling" fits that bill?
18:04 semiosis _dist: can you give steps to reproduce that?  sounds like a bug
18:04 kmai007 i know i've seen a bugzilla for 3.4.2 that vol file changes disconnect gNFS
18:05 _dist semiosis: if you make changes to optimize for libgfapi it'll drop connection for a sec
18:05 semiosis what changes, exactly?
18:06 _dist these are the ones I have set https://dpaste.de/3UCp
18:06 glusterbot Title: dpaste.de: Snippet #269644 (at dpaste.de)
18:06 [o__o] joined #gluster
18:06 _dist but iirc the eager lock and read ahead did it
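Since paste links rot, for context these are the kind of per-volume options usually discussed for VM-image workloads at the time (a sketch only, with a placeholder volume name; the exact set _dist applied is in the dpaste above):

    gluster volume set vmstore cluster.eager-lock enable
    gluster volume set vmstore performance.read-ahead off
    gluster volume set vmstore performance.quick-read off
    gluster volume set vmstore performance.stat-prefetch off
    gluster volume set vmstore performance.io-cache off

As _dist notes, applying some of these (eager-lock, read-ahead) to a live volume caused a brief client disconnect.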
18:07 qdk_ joined #gluster
18:09 prasanthp joined #gluster
18:14 diegows joined #gluster
18:56 Ark joined #gluster
19:01 recidive joined #gluster
19:02 tdasilva joined #gluster
19:21 mortuar joined #gluster
19:22 mortuar hey guys - I have a question about auth for gluster for new hosts - any way to add them dynamically ?
19:22 JoeJulian @puppet
19:22 glusterbot JoeJulian: https://github.com/purpleidea/puppet-gluster
19:23 kmai007 JoeJulian: have you seen where a client mounting fuse will disconnect due to a vol file change? and recover cleanly?  on gluster3.4.2
19:27 zerick joined #gluster
19:28 JoeJulian You mean you make a change, "gluster volume set..." and the client log shows a disconnect and reconnect?
19:30 _dist JoeJulian: Glad you're here, when you finish helping kmai007 could you let me know if anything has changed positively in the VM images always healing no easy way to know who's healthy etc please? :)
19:30 kmai007 correct
19:30 JoeJulian _dist: I haven't seen anything toward that end, no.
19:30 kmai007 but this time i typed "gluster volume profile <vol> stop"
19:31 JoeJulian kmai007: yep. That's expected.
19:31 kmai007 but is it risky?  would there be impact to the clients?
19:31 kmai007 i'm trying to find a pattern whre I experienced a production issue
19:31 JoeJulian It should not be risky.
19:32 MugginsM joined #gluster
19:32 kmai007 where all my clients dropped from gluster nodes and reconnected, and rewind action, but i cannot pinpoint why
19:32 kmai007 strangest part is i have 2 isolated gluster pools, that the clients mount up volumes from and they all saw the problem the same time
19:33 kmai007 now i'm trying to parse these logs by time
19:33 kmai007 to find out any bad actors
19:33 JoeJulian That's where log aggregation saves the day, see #logstash.
19:34 kmai007 i recall you mentioned that, didn't get a chance to dig into it yet
19:35 _dist JoeJulian: Ok thanks, but you still see it too right? I've spoken with others as well, I just want to make sure I'm not unqiue. There's no data integrity issue or anything.
19:35 borreman_123 joined #gluster
19:36 JoeJulian _dist: Pranith said he was able to duplicate it. I'm pretty sure figuring that out will be one of my top priorities starting next week.
19:36 _dist That's awesome, let me know what you want on that pizza.
19:37 tdasilva joined #gluster
19:38 JoeJulian Have you ever noticed that a veggie pizza has no vegetables?
19:38 kmai007 is it all fruits?
19:39 _dist I put broccoli on mine :)
19:39 _dist but yeah, almost never now that you mention it
19:39 jmarley joined #gluster
19:49 ernetas left #gluster
19:52 diegows joined #gluster
19:56 jag3773 joined #gluster
20:10 _dist One other thing, is there someway I can tell the gluster heal daemon to use more than 10Mbits of my 20,000Mbits? Options like window size etc ?
20:11 JoeJulian I know heals run at a lower priority than other ops. I'm not aware of any bandwidth limitation though.
20:12 _dist ah, looks like it might be cpu based, I've got 24 cores and it looks like it's using about 2-3. I'm only seeing about 1-3Mbytes sending in iftop
20:13 _dist maybe the block size should be different for vm disk files
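The heal behaviour being poked at here is governed by a handful of volume options in 3.4/3.5; a sketch with a placeholder volume name and illustrative values, not recommendations:

    # diff transfers only changed blocks, full copies whole files
    gluster volume set vmstore cluster.data-self-heal-algorithm diff
    # blocks per file healed at a time
    gluster volume set vmstore cluster.self-heal-window-size 16
    # heals allowed to run in the background in parallel
    gluster volume set vmstore cluster.background-self-heal-count 8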
20:17 kmai007 has anybody had experience with this message "fdctx not valid" in the gluster client logs?
20:19 _dist kmai007: which log, I can check our file server
20:19 kmai007 i found it in /var/log/glusterfs/<vol>.log
20:20 kmai007 here is a snippet of where it is
20:20 kmai007 http://fpaste.org/105495/08422140/
20:20 glusterbot Title: #105495 Fedora Project Pastebin (at fpaste.org)
20:20 _dist yeap, I have an entry of that from today
20:21 _dist no idea what it means :)
20:21 kmai007 intersting, just wanting to decipher it
20:21 kmai007 but the "Connection refused" in my log was when i killed all gluster* on my storage to restart it
20:21 _dist says "[2014-05-28 15:09:39.652794] W [client-lk.c:367:delete_granted_locks_owner] 0-datashare_volume-client-0: fdctx not valid"
20:21 kmai007 I got tired of this disconnecting/Connected cycle on my client
20:22 _dist we run a debian server that serves up smb shares via gluster client, authenticated by AD so all our window shares/files are actually on a gluster replicate
20:22 kmai007 yep we do that too
20:22 kmai007 but on rhel
20:22 _dist that's where the log error was, in the client that does that
20:23 kmai007 gotcha
20:23 kmai007 except my client here is an apache server
20:23 _dist it's pretty much the only fuse client we have actually, all our vm stuff is over libgfapi
20:23 kmai007 but the vol is mounted in alot of places
20:24 _dist ah, for us just in one place.
20:24 _dist kmai007: we used pbis for our AD authentication, what did you use?
20:25 kmai007 its security = domain
20:25 kmai007 i'm not familiar
20:25 kmai007 the smb is soupy here
20:26 kmai007 we have 6 DC's and it can use any to authenticate
20:28 jbrooks joined #gluster
20:43 mmorsi left #gluster
20:52 Pupeno joined #gluster
20:53 ceddybu joined #gluster
20:54 ceddybu Anybody use glusterfs for large magento environments?
20:56 jruggiero left #gluster
20:59 MugginsO joined #gluster
21:02 yinyin_ joined #gluster
21:04 tdasilva left #gluster
21:10 kmai007 when the fuse client log message contains "reading from socket failed." there is always disaster for me it seems.
21:13 n0de I hate seeing those messages
21:13 n0de Even if they are safe to ignore, such as NFS related messages. I do not use NFS.
21:14 ryant joined #gluster
21:14 n0de Lately I've been using the Gluster Fuse mount to write the data, and nginx try_files to read.
21:14 n0de Much faster that way.
21:15 kmai007 yeh i'm writing to gluster FUSE and apache serves them
21:15 ceddybu so if I mount 'gluster01:/gvol0' and gluster01 goes down completely, is there an automatic failover? another node takes over as master?
21:15 JoeJulian @mount server
21:15 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
21:19 ceddybu @rrdns
21:19 glusterbot ceddybu: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
21:19 ceddybu @mount
21:19 glusterbot ceddybu: I do not know about 'mount', but I do know about these similar topics: 'If the mount server goes down will the cluster still be accessible?', 'mount server'
21:20 ceddybu thanks JoeJulian
21:20 JoeJulian You're welcome
21:21 ceddybu I wonder if using /etc/hosts would work in the same manner
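/etc/hosts is a weak substitute: the resolver typically just hands back the first matching entry, so there is no rotation and no failover for the initial mount. The rrdns approach in Joe's tutorial amounts to multiple A records under one name; a sketch in BIND zone syntax with placeholder addresses:

    ; clients mount gluster.example.com:/gvol0
    gluster  IN  A  192.168.0.11
    gluster  IN  A  192.168.0.12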
21:24 kmai007 JoeJulian: is the network.ping-timeout really a ping?
21:24 kmai007 to the storage nodes/
21:28 jbd1 kmai007: IME, it's certainly something similar.  Had a brick server kernel panic, but keep responding to pings, and GlusterFS happily kept it in the cluster.  The result was a site outage :(
21:29 kmai007 jdb1 so to catch you up
21:29 kmai007 today i had a domino effect of gluster volumes disconnecting and reconnecting to my apache clients
21:29 kmai007 nothing apparent stands out, still gathering logs
21:29 jbd1 kmai007: nasty
21:29 kmai007 but to my in house monitors it was come and go
21:30 kmai007 and sporadic, where the 'df' would take a long time
21:30 kmai007 and web pages would crawl to be served
21:30 jbd1 I imagine that a flapping glusterfs would be really bad for clients
21:30 kmai007 trying to analyze it all on a conf. bridge, we ended up waiting it out and it all got better at 12:01PM
21:31 kmai007 so for 1.5 hours it was come and go, not knowing what the bad "cookie/actor" was
21:31 kmai007 yeh the intermittent stuff sucks
21:31 kmai007 i did not have to "recycle" glusterd on my storage nodes,
21:31 jbd1 kmai007: could there have been someone working on the network until 12:01 PM/
21:31 jbd1 ?
21:31 kmai007 it just magically said "Connected"
21:32 kmai007 man, i am having my network team look, but they never will admit to it
21:32 jbd1 kmai007: haha, I know the feeling.
21:32 kmai007 i have monitoring for node down on my hosts, but those never tripped to my operations center
21:32 ceddybu any errors on the network interfaces? ethtool -S
21:33 kmai007 ceddybu: any particular thing i should grep from that output?
21:33 ceddybu dmesg might show interface flapping
21:33 glusterbot ceddybu: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
21:33 kmai007 its massive
21:33 ceddybu kmai007: i mostly grep 'error'
21:33 jbd1 @glusterbot, you made a mistake
21:33 ceddybu should be all 0
21:34 ceddybu doesnt work on some cloud servers it seems, not sure what kinda hardware you have
21:34 kmai007 ceddybu: you input is appreciated http://fpaste.org/105515/13128481/
21:34 glusterbot Title: #105515 Fedora Project Pastebin (at fpaste.org)
21:34 kmai007 we house all our storage servers on physical boxes
21:34 jbd1 glusterbot: your bad parsing isn't helping
21:34 jbd1 hmm, flapping
21:34 glusterbot jbd1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
21:35 jbd1 helping
21:35 glusterbot jbd1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
21:35 jbd1 bwahaha
21:35 kmai007 put some clothes on
21:35 jbd1 derping
21:35 glusterbot jbd1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
21:35 jbd1 regex needs a bit of work there :)
21:36 ceddybu kmai007: looks good, i would check each machine involved, gluster servers and your web servers
21:36 ceddybu errors/drops - and grep dmesg for eth0/1 for flaps
21:37 ceddybu network guys can check for similar issues on the net devices
21:37 kmai007 thanks ceddybu
21:37 kmai007 will do
21:37 kmai007 ping
21:37 glusterbot kmai007: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
21:37 kmai007 p1ng
21:39 jbd1 pong?
21:39 kmai007 i wanted to restart glusterd so bad across all my storage nodes
21:40 jbd1 kmai007: 9/10 times that's not a good idea you know
21:40 kmai007 what would be the appropriate approach?
21:40 jbd1 glusterfs is so fragile, it craps if you breathe on it wrong
21:40 kmai007 *agreed*
21:40 social kmai007: what setup you have
21:40 social kmai007: distributed-replica?
21:41 social jbd1: bugreport or gtfo
21:41 jbd1 social: I have my share of bug reports
21:41 social mind to share them?
21:42 kmai007 1. https://bugzilla.redhat.com/show_bug.cgi?id=787365
21:42 glusterbot Bug 787365: urgent, high, ---, vagarwal, CLOSED CURRENTRELEASE, nfs:fdctx not valid in nfs.log
21:42 Ark joined #gluster
21:42 kmai007 opps wrong one
21:42 kmai007 https://bugzilla.redhat.com/show_bug.cgi?id=1041109
21:42 glusterbot Bug 1041109: urgent, unspecified, ---, csaba, NEW , structure needs cleaning
21:42 jbd1 my current "favorite" gluster bug is https://bugzilla.redhat.com/show_bug.cgi?id=985957
21:42 glusterbot Bug 985957: high, unspecified, ---, nsathyan, NEW , Rebalance memory leak
21:43 kmai007 social: distr-rep 2 x 4 = 8
21:43 social both fixed in 3.4.3 as I remember
21:43 kmai007 i have 6 volumes
21:43 kmai007 i was debating if i should hold out for 3.5.1
21:44 kmai007 not opposed to upgrading but i suppose i need to get off 3.4.2 ?
21:45 social kmai007: anyway if you really do need to restart always keep one of the replica pair up and give them reasonable time to sync (good thing is to check gluster volume heal <vol> info) before restarting second of the pair
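social's procedure as a sketch, for one replica pair at a time (volume and service names are placeholders; the service name differs between distributions):

    # restart gluster on the first server of the pair
    service glusterd restart        # e.g. glusterfs-server on Debian/Ubuntu
    # wait until nothing is left to heal before touching its partner
    watch -n 30 'gluster volume heal myvol info'
    # only then repeat on the second server of the pair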
21:45 Ark joined #gluster
21:45 sjm left #gluster
21:46 social I wonder why people report like I have issue with 3.4.1 and it's still there as gluster has 3.4 as major release and the minor numbers are bugfixes so obviously if there is a patch it'll go to 3.4.n+1 >.>
21:47 jbd1 kmai007: I'm about to go 3.3.2 -> 3.4.3, then probably will wait for 3.5.2 or 3.5.3
21:48 jbd1 it's a bit stressful because I have 4 rebalances in my near future
21:49 kmai007 yikes,
21:49 kmai007 let me know when 3.4.3 goes live for you jbd1
21:49 jbd1 (well, given that the last rebalance took 5 weeks, it'll probably be December before the last one is done)
21:49 jbd1 kmai007: will do
21:49 kmai007 social: i'm not sitting put, im in a transition period
21:49 kmai007 and i'm trying to understand what problems i have now before i go to the next release
21:49 ceddybu 3.5 is not considered stable yet ?
21:50 social kmai007: always read the logs btw, usually from all the nodes
21:50 social I used to use splunk for that >.>
21:51 JoeJulian kmai007, jbd1: No, it's not an ICMP ping. ping-timeout has to do with being able to get responses from the server to RPC calls.
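For reference, the timeout JoeJulian is describing is an ordinary volume option (42 seconds by default); a sketch with a placeholder volume name:

    gluster volume set myvol network.ping-timeout 42

It is measured against gluster's own RPC traffic, so plain ICMP reachability says nothing about whether it will fire.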
21:51 jbd1 JoeJulian: interesting.  I wonder why my paniced host didn't drop out of the cluster (I waited 10 minutes)
21:52 kmai007 JoeJulian: why don't u get that naked ping notice?
21:53 jbd1 ceddybu: Red Hat says 3.5 is the one to use, but the 3.5.1 beta changelog says 31 bugs are "expected to be fixed" in that version.
21:54 jbd1 ceddybu: so those of us with big production setups tend to wait a while before upgrading
21:55 kmai007 funny thing is, I NEVER SEE THIS IN DEV/TEST
21:55 kmai007 hilarious
21:55 ceddybu i see, does anyone know if "RHS
21:55 ceddybu red hat storage, is just gluster ?
21:56 jbd1 I am pleased to see there will be a 3.4.4
21:57 JoeJulian Yeah, I suspect I'll be sticking with 3.4 for at least another 6 months.
22:01 JoeJulian Panicked you say? Now I have to look at that code again...
22:03 jbd1 ceddybu: RHS is gluster but packaged and supported (enterprise-style) by Red Hat.
22:04 jbd1 kmai007: the ping has to be at the end of your statement, like: stopping
22:04 glusterbot jbd1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
22:05 kmai007 hilarious
22:05 kmai007 pooping
22:05 glusterbot kmai007: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
22:06 jbd1 haha naked pooping
22:06 glusterbot jbd1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
22:07 * glusterbot warms up his ban hammer...
22:07 jbd1 sigh.
22:08 JoeJulian @meh
22:08 glusterbot JoeJulian: I'm not happy about it either
22:13 yinyin joined #gluster
22:13 theron joined #gluster
22:14 * jbd1 just verified that https://bugzilla.redhat.com/show_bug.cgi?id=1090298 shows a workable fix for the post-upgrade peer probe issue
22:14 glusterbot Bug 1090298: high, unspecified, 3.4.4, ravishankar, CLOSED WONTFIX, Addition of new server after upgrade from 3.3 results in peer rejected
22:22 ceddybu do you guys mount gluster volumes directly as /var/www/media/ or symlink the web directory to something like /mnt/gvol0
22:23 JoeJulian Interesting, but I would have preferred a documentation fix rather than some obscure bugzilla entry.
22:24 kmai007 we mount it to where we want on the client, and have apache know where the /DocumentRoot is
22:24 JoeJulian ceddybu: I typically just point the web configuration at /mnt/gluster/$volume
22:24 jbd1 JoeJulian: I agree.  If you can point me to a 3.3-3.4 upgrade guide on the wiki, I'll gladly add it.  The only thing I've seen is Vijay Bellur's guide though, and I'm not fond of what he wrote there
22:25 ceddybu thanks. another option is bind mounts
22:25 JoeJulian Yeah, just Vijay's wordpress guide. Probably should copy that into a wiki page.
22:26 ceddybu just doing 'glusterfs01:/gvol0 /var/www/' for now
22:26 ceddybu on the client, on the server…
22:26 ceddybu ./dev/vgglus1/gbrick1/gluster xfs inode64,nobarrier 0 0
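A sketch of how this commonly ends up in /etc/fstab on a box that is both server and client (device, volume and mount-point names here are placeholders, not ceddybu's exact setup):

    # server side: the XFS brick
    /dev/vgglus1/gbrick1  /gluster   xfs        inode64,nobarrier  0 0
    # client side: the volume mounted where the web server expects it
    glusterfs01:/gvol0    /var/www   glusterfs  defaults,_netdev   0 0

Many setups instead mount the volume somewhere neutral such as /mnt/gvol0 and point DocumentRoot (or a bind mount) at it, as kmai007 and JoeJulian describe above.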
22:27 jbd1 JoeJulian: from what I've seen on the -users list, an active volume will always have data to be self-healed, which is why I take exception with Vijay's blog post.  If I have to quiet the volume to do the upgrade, that's the same thing as scheduling a downtime, which means that both of his solutions are actually the same thing
22:28 Ark joined #gluster
22:28 JoeJulian jbd1: ... plus, there's bug 1089758 which I think is a lot more severe than the repro suggests.
22:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1089758 high, unspecified, ---, pkarampu, ASSIGNED , KVM+Qemu + libgfapi: problem dealing with failover of replica bricks causing disk corruption and vm failure.
22:31 jbd1 JoeJulian: yeah, based on that it sounds like it is best to do a volume stop before any upgrades
22:33 JoeJulian Well, I start my new job on Monday and I suspect one of my first tasks will be to properly document that bug so we can get it fixed.
22:33 kmai007 congrats
22:34 jbd1 JoeJulian: new job at RedHat or new job at Paul's company?
22:34 JoeJulian IO
22:35 jbd1 right on
22:36 JoeJulian By the end of the year we /should/ have the largest installation of GlusterFS.
22:36 jbd1 wow
22:37 semiosis JoeJulian++
22:38 * jbd1 is kind of waiting for all this scary stuff to settle down before putting vm images on GlusterFS.  Until then, local storage ftw
22:39 JoeJulian I'm sure I'll be blogging more, too.
22:40 JoeJulian I've kind-of been doing stuff lately that doesn't seem blog worthy.
22:41 ceddybu is there any read performance gain by having multiple bricks on the same server in a replica-distributed config?
22:42 JoeJulian there /can/ be, depending on use
22:42 kmai007 JoeJulian: what action can i take on a client when the gluster-fuse logs show the volume "disconnecting/Connecting" though IP from NIC->switch is up/up
22:42 semiosis ceddybu: quite often
22:42 JoeJulian kmai007: wireshark
22:43 kmai007 i wish i could have tcpdump'd it then
22:43 ceddybu cool, i think a basic 2 brick replica on two servers will be fine for my case
22:43 kmai007 would it warrent a force umount/mount ?
22:44 JoeJulian kmai007: Find the network tech and offer him pie to tell you what they did to the network overnight.
22:44 kmai007 lol
22:44 semiosis that encourages more outages!
22:44 JoeJulian kmai007: If there's no additional errors, I wouldn't bother.
22:44 JoeJulian semiosis: hmm... good point.
22:45 semiosis break network, receive pie
22:45 in joined #gluster
22:46 JoeJulian threaten to require different qos rules based on RPC call?
22:48 recidive joined #gluster
22:53 ceddybu kmai007: did the volumes self heal or did it take some heroic manual recovery ?
22:54 chirino joined #gluster
22:57 kmai007 ceddybu: where can i find timestamps of the healing ?
22:59 sjm joined #gluster
23:00 ceddybu im not sure, i just started messing with gluster and the logs are pretty cryptic to me atm
23:01 ceddybu i can ifdown one of my nodes and see
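On the timestamp question: the self-heal daemon logs its activity, with timestamps, on each brick server, and the outstanding queue can be polled from the CLI; a quick sketch with a placeholder volume name:

    # what the self-heal daemon has done and is doing
    less /var/log/glusterfs/glustershd.log
    # what still needs healing right now
    gluster volume heal myvol info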
23:37 kmai007 JoeJulian: if i were wanted to find the "why" a client or storage server disconnected is that captured in a particular log?
23:37 kmai007 before he says no response for X amount of time i'm disconnecting...?
23:37 kmai007 besides wireshark
23:38 JoeJulian Not likely.
23:38 kmai007 k
23:38 JoeJulian if there is anything to see, I would compare with the brick log.
23:39 kmai007 so here is the spookyness
23:39 kmai007 i have 6 volumes in my glusterpool
23:39 kmai007 only 3 of  them appeared to trip out
23:40 kmai007 and disconnect/reconnect
23:40 kmai007 nevermind
23:40 kmai007 i didn't get to them yet
23:40 kmai007 ALL volumes logged the same behavior
23:40 kmai007 and times
23:41 JoeJulian So what component is the single point of failure common to all of them?
23:41 kmai007 they all mount the same gluster pool of 8 servers
23:41 kmai007 same network
23:42 kmai007 so here is something that doesn't make sense to me
23:42 JoeJulian So it MUST be the network.
23:42 kmai007 http://fpaste.org/105540/20553140/
23:42 glusterbot Title: #105540 Fedora Project Pastebin (at fpaste.org)
23:43 kmai007 is it just semantics....that it says hey i got a new vol file, and then followed by no change in vol file continuing.....
23:43 kmai007 i guess i'm trying to wrap my head around what to expect when there is a CLI change....
23:43 JoeJulian The first one is the option change that affected the client. The remaining two changes did not.
23:44 JoeJulian That may have been the result of a single volume set command.
23:44 kmai007 all i did was turn off profling to a volume
23:44 JoeJulian bingo
23:44 kmai007 right, but my expectation is that there shouldn't be a critical disconnect
23:45 JoeJulian That doesn't show that there was one.
23:45 kmai007 let me get the parsed logs of "disconneting| Connected"
23:46 kmai007 http://fpaste.org/105541/20758140/
23:46 glusterbot Title: #105541 Fedora Project Pastebin (at fpaste.org)
23:46 recidive joined #gluster
23:47 JoeJulian Looks like the client spawned a new thread for the new graph.
23:47 kmai007 so within a matter of seconds there would be a chunk of disconnecting, accepted, disconnecting again
23:48 kmai007 thread = PID ?
23:48 JoeJulian yes
23:49 kmai007 when i ps -ef|grep gluster, i have old time stamps for glusterfsd, the only new thing is root     17495     1  0 09:51 ?        00:00:01 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
23:49 kmai007 so a new NFS socket
23:49 kmai007 but i'm not using NFS anywhere anymore
23:50 kmai007 lsof|grep delete doesn't show any zombies
23:50 sjm joined #gluster
23:50 kmai007 the flutter across my environment was from 09:50AM - 12:01PM
23:51 kmai007 i do see alot of "unwinds" across the bricks
23:53 JoeJulian unwinds happen after any failed rpc call, I think.
23:53 kmai007 from observation yes,
