
IRC log for #gluster, 2013-12-23


All times shown according to UTC.

Time Nick Message
00:04 askb joined #gluster
00:51 mattappe_ joined #gluster
00:58 Alex joined #gluster
01:15 hflai joined #gluster
01:27 edoceo I've got one Gluster with about 20TB used, like a single-server NFS setup. I'm wanting to make it replicated.
01:27 mattappe_ joined #gluster
01:27 edoceo Should I rsync my data over to the 2nd system before trying to add it as a replicate brick?
01:27 edoceo Or should I add it and then rely on the `find -exec stat` trick to bring the data in sync
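The `find -exec stat` trick edoceo mentions is the usual way to force AFR to look at every file after adding a replica brick: walk the client mount (not the brick directory) and stat each entry, which makes the client pull in and heal each file. A minimal sketch, assuming the volume is FUSE-mounted at /mnt/gluster (the path is a placeholder):

    # stat every file through the client mount so AFR checks and heals it
    find /mnt/gluster -noleaf -print0 | xargs -0 stat >/dev/null 2>&1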
01:28 askb joined #gluster
01:37 primechuck joined #gluster
02:07 psyl0n joined #gluster
02:23 kdhananjay joined #gluster
02:52 bharata-rao joined #gluster
03:25 kshlm joined #gluster
03:30 shubhendu joined #gluster
03:38 primechuck joined #gluster
03:41 psharma joined #gluster
03:58 bala joined #gluster
04:11 ndarshan joined #gluster
04:21 itisravi joined #gluster
04:23 Alex joined #gluster
04:27 mattapperson joined #gluster
04:28 spechal joined #gluster
04:29 itisravi_ joined #gluster
04:34 RameshN joined #gluster
04:34 itisravi joined #gluster
04:48 nshaikh joined #gluster
04:50 MiteshShah joined #gluster
04:54 ababu joined #gluster
04:58 ppai joined #gluster
05:01 CLDSupportSystem joined #gluster
05:21 MiteshShah joined #gluster
05:24 CheRi joined #gluster
05:28 dusmant joined #gluster
05:34 hagarth joined #gluster
05:39 primechuck joined #gluster
05:41 vpshastry joined #gluster
05:43 bharata-rao joined #gluster
05:43 glusterbot New news from newglusterbugs: [Bug 1023974] Moving a directory with content, into a directory where there is no quota left, succeeds <https://bugzilla.redhat.com/show_bug.cgi?id=1023974>
05:43 spandit joined #gluster
05:59 shylesh joined #gluster
05:59 saurabh joined #gluster
06:18 morsik hm.
06:18 morsik gluster supports quota? ;o
06:20 prasanth joined #gluster
06:23 mohankumar joined #gluster
06:24 mohankumar joined #gluster
06:32 zeittunnel joined #gluster
06:33 lalatenduM joined #gluster
06:44 psyl0n joined #gluster
06:50 anands joined #gluster
06:56 ngoswami joined #gluster
06:58 satheesh1 joined #gluster
07:03 ricky-ti1 joined #gluster
07:07 Dave2 joined #gluster
07:08 Alex_ joined #gluster
07:08 kanagaraj joined #gluster
07:13 glusterbot New news from newglusterbugs: [Bug 1045992] [RFE] CTDB - GlusterFS NFS Monitor Script <https://bugzilla.redhat.com/show_bug.cgi?id=1045992>
07:20 bharata-rao joined #gluster
07:24 vimal joined #gluster
07:26 mohankumar joined #gluster
07:26 ababu joined #gluster
07:40 primechuck joined #gluster
07:51 bala1 joined #gluster
08:14 meghanam joined #gluster
08:16 bala1 joined #gluster
08:24 ctria joined #gluster
08:26 ekuric joined #gluster
08:30 morse joined #gluster
08:33 prasanth_ joined #gluster
08:39 mgebbe_ joined #gluster
08:45 spandit joined #gluster
08:45 shubhendu joined #gluster
08:53 hagarth joined #gluster
08:57 pk joined #gluster
08:58 aravindavk joined #gluster
08:59 pk mohankumar: ping
09:00 mohankumar pk: if one of the bricks fails in a replicated volume (2 bricks for replication)
09:00 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <https://bugzilla.redhat.com/show_bug.cgi?id=962619>
09:00 mohankumar how does glusterd choose the source brick to heal?
09:00 pk mohankumar: it's not based on the brick
09:00 pk mohankumar: it's per file/dir
09:01 mohankumar pk, how is the source file selected?
09:01 pk mohankumar: There are some extended attributes based on which afr comes to conclusions about which is the correct file and then does heal
09:06 vpshastry joined #gluster
09:07 soumya joined #gluster
09:09 soumya sobhan
09:13 mkzero has anybody had any trouble with newly created directories having absolutely no permissions? I got a great deal of dirs showing up as 'd---------' in ls, but not always. When I do multiple ls -al's on the same dir I sometimes get this and sometimes the correct permissions.. :/
09:15 dneary joined #gluster
09:18 mohankumar thanks pk
09:23 kshlm joined #gluster
09:24 vpshastry mohankumar: AFAIK, glusterd doesn't choose the source brick. AFR chooses it based on the xattrs (trusted.afr.<volume-name>-client-<client-number>) it stores on the backend.
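Those changelog attributes can be inspected directly on a brick with getfattr; a sketch, with the brick path and volume name as placeholders. A non-zero counter in trusted.afr.<volume>-client-N means pending operations against that replica, which is what AFR uses to pick the heal source:

    # dump all extended attributes of a file on the brick, hex-encoded
    getfattr -d -m . -e hex /data/brick1/some/file
    # illustrative output:
    # trusted.afr.myvol-client-0=0x000000000000000000000000
    # trusted.afr.myvol-client-1=0x000000010000000000000000   <- pending ops for the other replica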
09:24 mohankumar thanks vpshastry
09:29 soumya joined #gluster
09:29 mohankumar joined #gluster
09:41 primechuck joined #gluster
09:42 vikhyat joined #gluster
09:59 mohankumar joined #gluster
10:00 satheesh_ joined #gluster
10:05 spechal joined #gluster
10:07 stopbit joined #gluster
10:07 JonathanD joined #gluster
10:07 juhaj joined #gluster
10:07 Amanda joined #gluster
10:23 hagarth joined #gluster
10:23 shubhendu joined #gluster
10:26 zapotah joined #gluster
10:29 spandit joined #gluster
10:41 bala1 joined #gluster
11:02 ccha4 I have replica 2 and got file with 0x000000010000000000000000 flag
11:03 ccha4 on 1 server I deleted the file. The file healed but still has the same flag
11:04 ccha4 the file on both servers has the same md5sum
11:04 samppah ccha4: did you delete file from .glusterfs directory aswell?
11:04 ccha4 nope
11:05 ccha4 I should ?
11:06 samppah yes, it's hardlinked to the same file, and if you want to heal the file from the good node it's necessary to delete that hardlink too
11:07 ccha4 so do I just need to delete the hardlink, or both the hardlink and the file, on 1 server?
11:07 ccha4 is it better to delete or use setfattr to change the flag ?
11:07 samppah @split-brain
11:07 glusterbot samppah: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
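The procedure samppah and the linked post describe boils down to removing the bad copy and its gfid hardlink under .glusterfs on the affected brick, then re-triggering the heal through the mount. A sketch, with every path and the gfid as placeholders:

    # on the server holding the bad copy:
    BRICK=/data/brick1
    BADFILE=path/to/file
    GFID=0123abcd-ef01-2345-6789-0123456789ab             # gfid of the file (illustrative)
    rm "$BRICK/$BADFILE"
    rm "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"  # the hardlink samppah mentions
    # then, from a client mount, make AFR copy the good replica back:
    stat /mnt/gluster/$BADFILE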
11:08 samppah ccha4: i can't say.. i have not tried changing the flags
11:08 anands joined #gluster
11:08 ccha4 and split-brain status shows Number of entries: 0
11:09 samppah btw is the file shown in heal info list?
11:09 samppah or heal-failed
11:09 ccha4 both
11:09 samppah is that a large file?
11:09 ccha4 heal info heal list only
11:10 ccha4 heal list and healed list
11:10 ccha4 small file
11:10 ccha4 <1mb
11:13 vpshastry joined #gluster
11:18 ccha4 samppah: deleted both files and new flag is fine now
11:18 samppah ccha4: good :
11:18 samppah :)
11:26 DV__ joined #gluster
11:28 ababu joined #gluster
11:33 ccha4 there are a lot of these messages in glustershd.log
11:33 ccha4 [2013-12-23 11:18:18.401546] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-DATA-client-0: remote operation failed: No such file or directory. Path: <gfid:9e99e845-a9ca-4d20-ad8e-4e2ddfe9d705> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
11:34 satheesh joined #gluster
11:38 ccha4 does it mean there is a file without a .glusterfs hardlink?
11:41 primechuck joined #gluster
11:44 glusterbot New news from newglusterbugs: [Bug 1028672] BD xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1028672>
11:46 pk ccha4: This can happen when heal info and unlink of that file happens in parallel.
11:47 bala1 joined #gluster
11:47 ccha4 so I don't need to do anything... just let it be?
11:47 pk ccha4: yep
11:48 shubhendu joined #gluster
11:49 ccha4 any way to avoid these false heal info entries?
11:49 aravindavk joined #gluster
11:50 ccha4 I set up monitoring based on heal info
11:51 diegows joined #gluster
12:03 psyl0n joined #gluster
12:05 pk ccha4: It seems to have already been fixed, bug 861015
12:05 pk ccha4: Are you using 3.3?
12:06 ccha4 3.32
12:06 ccha4 3.3.2
12:06 pk ccha4: It may not be present in 3.3
12:07 pk ccha4: I see the patch in 3.4.0
12:11 satheesh joined #gluster
12:14 mohankumar joined #gluster
12:16 CheRi joined #gluster
12:20 rotbeard joined #gluster
12:22 ppai joined #gluster
12:44 ccha4 http://review.gluster.org/#/c/5392/ since which version is this fix applied?
12:44 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:45 ccha4 today I got a client with 3.4.1 which got hit by the OOM killer, and another client is using 2 GB of RAM now
12:46 pk ccha4: The patch fixes a bug which was introduced recently. I don't think it existed in 3.3.2
12:46 ccha4 even on 3.4 ?
12:46 pk ccha4: Could you collect statedump of that client?
12:47 pk ccha4: that should help us debug it better. Does your application use so many getxattrs?
12:47 ccha4 client 3.4.1 and server 3.3.2
12:47 pk ccha4: let me check
12:49 pk ccha4: 3.4 doesn't seem to have that code base
12:50 pk ccha4: I mean the one that introduced the leak
12:50 pk ccha4: wait
12:52 pk ccha4: There are leaks when system.posix_acl_access/selinux based xattrs are used in getxattr by applications
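A quick way to check whether the workload actually issues those xattr lookups is to exercise and trace them on the mount; a sketch, with the mount path and pid as placeholders:

    # does anything on the gluster mount carry a POSIX ACL?
    getfattr -n system.posix_acl_access /mnt/myvol/somefile
    # watch which xattr calls the application (e.g. apache) really makes:
    strace -f -e trace=getxattr,lgetxattr,fgetxattr -p <apache_pid>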
12:56 ccha4 pk | ccha4: Could you collect statedump of that client? <-- I mean glusterfs client... you can statedump on a glusterfs client ?
12:58 itisravi joined #gluster
12:59 pk ccha4: You need to do kill -USR1 on the pid of that glusterfs client
12:59 pk ccha4: Do that when the load is low
13:02 ccha4 pk: what does USR1 do? Should it free memory?
13:03 shubhendu joined #gluster
13:04 pk ccha4: It prints the statedump to a file
13:05 hagarth joined #gluster
13:09 pk ccha4: What applications run on the mount? I would like to know what kind of load is being run on the mount point
13:10 ccha4 apache and some cron script
13:11 ccha4 I can't find where the statedump is... not in /tmp and not in /var/log/glusterfs
13:12 pk /var/run/gluster
13:19 ccha4 hmm, not found
13:23 anands joined #gluster
13:24 ndevos ccha4: in case /var/run/gluster (or /usr/local/var/run/gluster) does not exist, you may need to create that directory first
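Putting pk's and ndevos's hints together, a sketch of taking a statedump of a FUSE client; the volume name is a placeholder:

    mkdir -p /var/run/gluster              # create the target directory if the packaging did not
    PID=$(pgrep -f 'glusterfs.*myvol')     # pid of the glusterfs client process for that mount
    kill -USR1 "$PID"                      # ask the client to write a statedump
    ls -l /var/run/gluster/                # look for a freshly written glusterdump.<pid>.* file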
13:25 pk ndevos: ah, good catch ndevos. But this should be a usability bug. Wonder if it is fixed after 3.4
13:25 ndevos pk: it should, at least the rpms should create that dir now
13:26 pk ndevos: yeah, just saw the code.
13:26 ccha4 deb
13:26 pk ccha4: ah!
13:26 pk ndevos: Who maintains the deb spec for us?
13:27 ndevos pk: semiosis does the majority of the .deb packaging
13:27 pk ndevos: We should tell him about this
13:28 pk semiosis: he is on IRC, not sure if he is 'online'
13:28 ndevos @later tell semiosis Can you make sure that the .deb packages create the /var/run/gluster directory? It is used for generating state-dumps (client + server).
13:28 glusterbot ndevos: The operation succeeded.
13:28 pk ndevos: wow!, what does this command do?
13:29 ndevos @later tell pk_ It sends private messages, and buffers them when you are not online.
13:29 glusterbot ndevos: The operation succeeded.
13:29 ndevos pk: if you change your nick to pk_, you should get that message :)
13:30 pk ah!
13:30 pk ccha4: Any luck with statedump?
13:31 pk ndevos: Do you use docker?
13:31 ndevos pk: never tried it
13:31 pk ndevos: I have a feeling you may love it
13:32 ccha4 I got the statedump; what info do you need?
13:32 pk ndevos: I just started learning it yesterday. Should have a reasonable working setup in a week
13:33 pk ndevos: How do we get files that need to be analyzed? fpaste?
13:33 ndevos pk: it looks interesting, but I'm not sure if it fits my needs :)
13:33 zeittunnel joined #gluster
13:33 ndevos ~paste | ccha4
13:33 glusterbot ccha4: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
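Both tools also accept a file argument, so a statedump (the filename here is illustrative) can be shared directly:

    fpaste /var/run/gluster/glusterdump.12345.dump        # RPM-based distros
    pastebinit /var/run/gluster/glusterdump.12345.dump    # Debian/Ubuntu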
13:33 pk ndevos: Not sure. It is perfect for setting up clusters. Vijay is thinking of bringing it into our test framework.
13:34 ndevos ccha4: or just go to fpaste.org and copy/paste the file there
13:34 pk ndevos: It could be huge.
13:35 ndevos pk: then *you* need to say what details you need ;)
13:35 pk ndevos: I need everything. Leak can be happening anywhere :-(
13:36 ndevos pk: for a lot of my testing, I need specific kernel versions and all, containers don't work for that - but yes, it fits the test-framework use-case
13:37 pk ndevos: yes sir!
13:38 pk ccha4: I gotta leave now. Leave the url here. I will pick it up from the chat logs.
13:40 ndarshan joined #gluster
13:40 pk left #gluster
13:51 edward2 joined #gluster
13:52 KORG joined #gluster
14:03 mattapperson joined #gluster
14:04 itisravi joined #gluster
14:06 primechuck joined #gluster
14:08 plarsen joined #gluster
14:09 dusmant joined #gluster
14:10 diegows joined #gluster
14:11 harish joined #gluster
14:14 dbruhn joined #gluster
14:18 vpshastry joined #gluster
14:20 vpshastry left #gluster
14:22 ccha4 pk : here the statedump http://pastebin.com/Scpi1is8
14:22 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:23 dbruhn So, is there a trick to managing a rebalance in 3.3.2?
14:25 sroy_ joined #gluster
14:30 dbruhn Also, is there any way to take an existing RDMA volume and make it TCP and RDMA?
14:30 psyl0n joined #gluster
14:30 psyl0n joined #gluster
14:32 rwheeler joined #gluster
14:38 theron joined #gluster
14:41 jobewan joined #gluster
14:44 mattappe_ joined #gluster
14:54 mattappe_ joined #gluster
14:56 CLDSupportSystem joined #gluster
14:57 mattapperson joined #gluster
14:59 mattapperson joined #gluster
15:02 mattappe_ joined #gluster
15:04 mattapp__ joined #gluster
15:07 mattappe_ joined #gluster
15:10 mattap___ joined #gluster
15:11 social happy Xmas and thank you for the great work you people do.
15:13 gmcwhistler joined #gluster
15:13 abyss^ why, when I moved glusterfs via this method: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server is the size of the data different? There are no files in .glusterfs that have only one link...
15:13 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
15:18 wushudoin| joined #gluster
15:28 hagarth social: thanks and merry Xmas to you too!
15:32 pk1 joined #gluster
15:34 vpshastry joined #gluster
15:36 pk1 ccha4: ping
15:38 pk1 ccha4: I don't see any accounted information that could lead to leaks. So it must be some un-accounted memory that is increasing. Do you think there is a way to simulate apache load on a mount point so that we can probably re-create the issue?
15:39 aixsyd joined #gluster
15:40 jag3773 joined #gluster
15:42 ccha4 but on this client, right now glusterfs is using 2 GB of RAM
15:43 ccha4 on the other client, which hit the OOM killer, the volume was remounted, and when the OOM happened there wasn't a lot of load
15:46 pk1 ccha4: hmm... :-(. The problem is that for me to debug this, I need to know the I/O pattern to see where the memory leaks are happening
15:47 pk1 ccha4: It's extremely difficult to figure out what the leak is without such a test case.... :-(
15:48 jbrooks joined #gluster
15:49 ndevos pk1: I guess running a client under valgrind isn't straightforward?
15:50 pk1 ndevos: we can run, but for what test case?....
15:50 ndevos pk1: not we, but ccha4
15:50 sroy_ joined #gluster
15:50 pk1 ndevos: oh you are suggesting ccha4 to run it in valgrind?
15:51 ndevos pk1: only if that is relatively easy to do, and helps you with debugging
15:51 pk1 ndevos: it most probably does.
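If the workload can be reproduced on a test mount, one way to run the client under valgrind is to start glusterfs in the foreground; server, volume, and mount point below are placeholders, and -N (--no-daemon) keeps the process attached so valgrind can follow it:

    valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
        glusterfs -N --volfile-server=server1 --volfile-id=myvol /mnt/myvol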
15:55 bala joined #gluster
16:01 mattappe_ joined #gluster
16:03 glusterbot New news from resolvedglusterbugs: [Bug 839595] Implement a server side quorum in glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=839595>
16:04 mattap___ joined #gluster
16:09 dusmant joined #gluster
16:22 vpshastry left #gluster
16:26 sroy_ joined #gluster
16:28 flrichar joined #gluster
16:32 zerick joined #gluster
16:35 ErikEngerd joined #gluster
16:35 ccha4 pk1: thanks for the help; the client is a production server and the problem doesn't happen often. I'll check if it happens again.
16:36 ccha4 or upgrade the client to a newer version
16:36 ccha4 3.4.2 or 3.5
16:36 ErikEngerd Hi, I am experimenting a bit with gluster 3.4.1 on CentOS 6.5 (replication setup with 2 servers) and have a question about self-healing.
16:37 ErikEngerd I have set up replication successfully between two servers. Now I shut down server2, and while server2 is down, I add a new file on server1.
16:38 ErikEngerd Then I start up server2 again, and there the new file shows up with size 0. Also the self-heal info command (gluster volume heal axfs info) shows that the new file requires healing.
16:38 ErikEngerd This is not that bad per se, but when I cat the new file on server2, it always outputs an empty file first.
16:38 ErikEngerd The second time it outputs the correct contents. This is weird.
16:39 ErikEngerd Automatic self healing also works but then I have to wait until the self healing is triggered. What worries me is the inconsistent data that I get before the self healing is done. I would have expected to see the correct data, especially because gluster apparently knows that the file requires healing.
16:40 ErikEngerd Is this a configuration option somewhere?
16:41 ErikEngerd It looks like the healing is triggered asynchronously by the first cat command. Nevertheless, I would expect to see the correct contents even the first time I cat the file.
16:42 ndevos ErikEngerd: you should not access the files on the bricks directly, but through a mount-point; the contents of the file should always be correct that way, and it should be healed quicker that way too (when a stat() is done)
16:42 johnbot11 joined #gluster
16:43 ErikEngerd I am accessing the files though a glusterfs mount
16:43 ErikEngerd I am not using the bricks directly.
16:43 pk1 ErikEngerd: Strange!
16:43 ErikEngerd Basically server 1 and server 2 together provide a replicated volume named axfs.
16:43 ErikEngerd Then I mount the volume on both servers using the following entry in fstab.
16:44 ErikEngerd localhost:/axfs         /mnt/axfs               glusterfs   _netdev,defaults 0 0
16:44 ndevos hmm, that looks good
16:45 ErikEngerd I have setup the servers (virtual machines) today by using the gluster RPM repository:
16:45 ErikEngerd wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
16:45 ErikEngerd Followed by: yum install glusterfs-server
16:45 ErikEngerd Running on Centos 6.5
16:46 ndevos that should be fine, what mountpoint are you using for the bricks?
16:46 ErikEngerd /dev/vg/axfs-brick0  /data/gluster/axfs/brick0 ext4 defaults 0 0
16:47 jbrooks joined #gluster
16:47 ndevos okay, no issues with that either...
16:47 ErikEngerd Then I am using the 'brick' subdirectory of that directory as brick (as was recommended)
16:47 ErikEngerd gluster volume info
16:47 ErikEngerd Volume Name: axfs
16:47 ErikEngerd Type: Replicate
16:47 ErikEngerd Volume ID: a91d6461-9b1d-4695-a432-417e7e4d28a9
16:47 ErikEngerd Status: Started
16:47 ErikEngerd Number of Bricks: 1 x 2 = 2
16:47 ErikEngerd Transport-type: tcp
16:47 ErikEngerd Bricks:
16:47 ErikEngerd Brick1: dev100:/data/gluster/axfs/brick0/brick
16:47 ErikEngerd Brick2: dev101:/data/gluster/axfs/brick0/brick
16:47 ErikEngerd Options Reconfigured:
16:47 ErikEngerd cluster.heal-timeout: 60
16:48 semiosis ErikEngerd: use pastie.org or similar
16:48 ErikEngerd Ok.
16:48 pk1 ErikEngerd: Things look ok. Do you take down 101 or 100 in this setup?
16:48 ndevos ErikEngerd: you configured cluster.heal-timeout before or after this behaviour?
16:48 semiosis ndevos: i'll add /var/run/glusterd to the debs, thx for the tip!
16:48 ErikEngerd I am taking down 101 in this case. I haven't tried taking down 100
16:49 pk1 semiosis:  not glusterd
16:49 ndevos semiosis: /var/run/gluster
16:49 semiosis got it
16:49 ndevos pk1: you have good eyes ;)
16:49 ErikEngerd I configured the two options after I observed this behavior.
16:49 pk1 semiosis: /var/run/gluster
16:49 semiosis pk1: got it, thx
16:49 pk1 ndevos: :-)
16:50 ndevos ErikEngerd: then I don't have any further ideas at the moment...
16:50 ErikEngerd The reduced heal timeout is a workaround, more or less. The ping timeout is there to reduce the time that the client hangs on the dev100 server while dev101 is shutting down.
16:50 * ndevos is moving into holiday mode now...
16:50 ErikEngerd (that is another issue by the way, I would really like it if dev100 would not hang at all while dev101 is shutting down).
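For reference, the two knobs ErikEngerd mentions are per-volume options; a sketch with illustrative values, not a recommendation:

    gluster volume set axfs cluster.heal-timeout 60     # how often the self-heal daemon re-checks
    gluster volume set axfs network.ping-timeout 10     # how long clients wait for an unresponsive brick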
16:51 pk1 ErikEngerd: I will try this on some of my VMs tomorrow and get back to you. It is a bit late here...
16:51 pk1 cya folks
16:53 ErikEngerd ok, thanks
16:53 ErikEngerd joined #gluster
16:54 pk1 left #gluster
16:55 harish_ joined #gluster
16:57 ErikEngerd Interestingly, the behavior is identical if I shut down dev100, then create a new non-empty file on dev101, and then start up dev100 again
17:06 mattappe_ joined #gluster
17:10 sroy_ joined #gluster
17:14 Mo_ joined #gluster
17:18 CheRi joined #gluster
17:18 harish_ joined #gluster
17:21 mattappe_ joined #gluster
17:22 zwu joined #gluster
17:27 ErikEngerd left #gluster
17:29 edoceo joined #gluster
17:29 thogue joined #gluster
17:47 diegows joined #gluster
17:51 SpeeR joined #gluster
18:05 skullone joined #gluster
18:12 techminer1 joined #gluster
18:24 zaitcev joined #gluster
18:33 calum_ joined #gluster
18:47 skullone have people had really good success running gluster as a backend for OpenStack storage?
18:54 edoceo I've got one Gluster with about 20TB used, like a single-server NFS setup. I'm wanting to make it replicated.
18:54 edoceo Should I rsync my data over to the 2nd system before trying to add it as a replicate brick?
18:54 edoceo Or should I add it and then rely on the `find -exec stat` trick to bring the data in sync
18:54 psyl0n joined #gluster
18:56 mattapperson joined #gluster
19:14 gmcwhistler joined #gluster
19:18 mtanner_ joined #gluster
19:20 wgao_ joined #gluster
19:20 simon_ joined #gluster
19:20 jbautista- joined #gluster
19:22 jiqiren_ joined #gluster
19:25 mattapperson joined #gluster
19:25 social_ joined #gluster
19:36 lyang0 joined #gluster
19:45 davinder joined #gluster
19:49 ErikEngerd joined #gluster
19:50 diegows joined #gluster
19:51 ErikEngerd I have figured out what my problem was. It turns out that 'gluster volume heal axfs info' was always showing the root entry './' as needing healing. Probably this was caused by the fact that during setup I had my firewall configured wrongly. I fixed the replication by removing all data from one of the bricks and doing a full heal.
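Roughly the recovery ErikEngerd describes, as commands; the brick path is taken from his earlier paste and should be treated as illustrative:

    # on the bad server: empty the brick, including the .glusterfs metadata
    rm -rf /data/gluster/axfs/brick0/brick/* /data/gluster/axfs/brick0/brick/.glusterfs
    # crawl the whole volume and rebuild this brick from the good replica
    gluster volume heal axfs full
    # confirm nothing is left pending
    gluster volume heal axfs info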
19:54 zwu joined #gluster
19:59 Mo_ joined #gluster
20:00 diegows joined #gluster
20:00 MacWinner joined #gluster
20:12 sroy_ joined #gluster
20:16 diegows joined #gluster
20:17 theron joined #gluster
20:23 RicardoSSP joined #gluster
20:23 RicardoSSP joined #gluster
20:23 gdubreui joined #gluster
20:34 diegows joined #gluster
20:53 rotbeard joined #gluster
21:20 diegows joined #gluster
21:35 daMaestro joined #gluster
21:54 ErikEngerd joined #gluster
21:58 andreask joined #gluster
21:59 qdk joined #gluster
22:49 gmcwhistler joined #gluster
22:53 neofob joined #gluster
22:56 ErikEngerd joined #gluster
23:07 dbruhn left #gluster
23:19 qdk joined #gluster
23:25 mattappe_ joined #gluster
