
IRC log for #gluster, 2014-01-09


All times shown according to UTC.

Time Nick Message
00:03 psyl0n joined #gluster
00:10 erik49_ joined #gluster
00:12 JoeJulian purpleidea, Unfortunately, I have no idea what the numeric states mean.
00:17 KaZeR_ joined #gluster
00:19 purpleidea JoeJulian: do you know what the "Accepted peer request" means? it corresponds to state == 4, but they are equivanet
00:19 purpleidea s/equivanet/equivalent/
00:19 glusterbot What purpleidea meant to say was: JoeJulian: do you know what the "Accepted peer request" means? it corresponds to state == 4, but they are equivalent
00:20 JoeJulian I don't know any of the handshaking statuses. Would be a good thing to figure out and document though.
00:20 purpleidea JoeJulian: fair enough. thanks! I think I found a bug where a peer get stuck in that state, even though everything is fine :P
00:21 MacWinner joined #gluster
00:23 JoeJulian Good! That's been a bug for a while but it's never been isolated for repro.
00:24 purpleidea JoeJulian: you know of this bug? I can reproduce it 100% of the time it seems. Although it looks like the host it happens for changes... Any idea if there is an open tracker?
00:24 purpleidea s/open tracker/open bug id/
00:24 glusterbot What purpleidea meant to say was: JoeJulian: you know of this bug? I can reproduce it 100% of the time it seems. Although it looks like the host it happens for changes... Any idea if there is an open bug id?
00:24 JoeJulian It's not been reported afaik. Most people have just restarted glusterd.
00:25 purpleidea In fact, i had to add a workaround to puppet-gluster because it was causing my builds to fail which pissed me off of course :P
00:25 purpleidea @teach joejulianisthenewbugtracker
00:28 purpleidea JoeJulian: btw, i'd like to teach glusterbot some useful things tonight. this okay? what is the syntax please?
00:28 JoeJulian @learn
00:28 glusterbot JoeJulian: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
00:28 purpleidea JoeJulian: thanks! i've got the puppet-gluster+vagrant stuff all done... just working on a blog post...
00:29 JoeJulian excellent
00:29 purpleidea i hope you give it a try and use it to reproduce bugs, for example :)
00:31 JoeJulian Very strong probability if I can get this damned VPNaaS to work in openstack.... I have 3 blog posts I need to write as well, but this is taking up all my working hours.
00:36 purpleidea JoeJulian: i swear that VPNaas is not a word, but i get what you're saying. Feel free to ping me if I can help with something although my openstack skills are kind of minimal atm.
00:37 JoeJulian Just trying to figure out why, when I have it all configured, it doesn't appear to even be trying to configure openswan.
00:43 wildfire joined #gluster
00:44 purpleidea JMWbot: @remind git.gluster.org does not have a valid https certificate
00:44 JMWbot purpleidea: Okay, I'll remind johnmark when I see him. [id: 8]
00:45 purpleidea JoeJulian: yuck! i think i know the openswan guy. I met him irl in Toronto
00:45 JoeJulian cool
01:02 badone joined #gluster
01:18 KaZeR joined #gluster
01:21 _pol_ joined #gluster
01:23 aixsyd joined #gluster
01:23 aixsyd Hey guys - anyone know why a 10gig IB card would iperf at 1.5gbps?
01:24 aixsyd i have a pair that run at about 7.5gbps, and a second pair at 1.5 - all 4 nodes have identical cards
01:30 robo joined #gluster
01:33 aixsyd god damnit
01:33 aixsyd its up to 1.84gbps now..
01:34 aixsyd this makes no sense - its 4 identical servers, identical hardware.
01:37 aixsyd dbruhn: you alive? :P
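
The numbers aixsyd quotes come from plain point-to-point iperf runs over IPoIB. A minimal sketch of such a test, with the address as a placeholder rather than a value from this log:

    # on the receiving node
    iperf -s
    # on the sending node, aimed at the receiver's IPoIB address
    iperf -c 10.0.0.3 -t 30 -P 4
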
01:39 robo joined #gluster
01:56 harish joined #gluster
02:03 bala joined #gluster
02:05 johnbot1_ joined #gluster
02:14 foster_ joined #gluster
02:14 rcaskey joined #gluster
02:14 codex_ joined #gluster
02:15 jiffe98 joined #gluster
02:15 klaas joined #gluster
02:15 Slasheri joined #gluster
02:15 Slasheri joined #gluster
02:15 tziOm joined #gluster
02:17 kkeithley joined #gluster
02:18 KaZeR_ joined #gluster
02:20 Alex joined #gluster
02:24 mattappe_ joined #gluster
02:24 badone joined #gluster
02:25 mattappe_ joined #gluster
02:28 mattappe_ joined #gluster
02:54 failshell joined #gluster
02:57 bharata-rao joined #gluster
03:16 johnbot11 joined #gluster
03:20 kshlm joined #gluster
03:22 shubhendu joined #gluster
03:22 dbruhn joined #gluster
03:26 dbruhn aixsyd, sorry I wasn't around earlier. Figure it out?
03:29 dylan_ joined #gluster
03:31 davinder joined #gluster
03:45 kanagaraj joined #gluster
03:47 robo joined #gluster
03:50 kdhananjay joined #gluster
03:58 shyam joined #gluster
04:05 purpleidea @learn
04:05 glusterbot purpleidea: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
04:05 itisravi joined #gluster
04:06 purpleidea @vagrant
04:07 purpleidea @learn vagrant as https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
04:07 glusterbot purpleidea: The operation succeeded.
04:07 purpleidea @vagrant
04:07 glusterbot purpleidea: https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
04:07 purpleidea @learn forget 1
04:07 glusterbot purpleidea: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
04:07 purpleidea @learn forget vagrant 1
04:07 glusterbot purpleidea: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
04:07 purpleidea @forget
04:07 glusterbot purpleidea: (forget [<channel>] <key> [<number>|*]) -- Removes a key-fact relationship for key <key> from the factoids database. If there is more than one such relationship for this key, a number is necessary to determine which one should be removed. A * can be used to remove all relationships for <key>. If as a result, the key (factoid) remains without any relationships to a factoid (key),
04:07 glusterbot purpleidea: it shall be removed from the database. <channel> is only necessary if the message isn't sent in the channel itself.
04:08 purpleidea @forget vagrant 1
04:08 glusterbot purpleidea: The operation succeeded.
04:08 purpleidea @vagrant
04:08 purpleidea @learn vagrant as Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
04:08 glusterbot purpleidea: The operation succeeded.
04:08 purpleidea @vagrant
04:08 glusterbot purpleidea: Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
04:08 purpleidea @learn vagrant as Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/
04:08 glusterbot purpleidea: The operation succeeded.
04:08 purpleidea @vagrant
04:09 glusterbot purpleidea: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/
04:09 purpleidea @learn vagrant as Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/
04:09 glusterbot purpleidea: The operation succeeded.
04:09 purpleidea @learn vagrant as Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
04:09 glusterbot purpleidea: The operation succeeded.
04:09 purpleidea @vagrant
04:09 glusterbot purpleidea: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
04:11 purpleidea JoeJulian: all done! if you want to refer n00bs to an easy way to deploy glusterfs, feel free to tell them about (,,vagrant) let me know if you find any issues or if there is something i should make clearer/easier.
04:11 purpleidea JoeJulian: all done! if you want to refer n00bs to an easy way to deploy glusterfs, feel free to tell them about ,,(vagrant) let me know if you find any issues or if there is something i should make clearer/easier.
04:11 glusterbot JoeJulian: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
04:11 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
04:19 KaZeR joined #gluster
04:27 MiteshShah joined #gluster
04:28 _pol joined #gluster
04:31 meghanam joined #gluster
04:31 meghanam_ joined #gluster
04:32 saurabh joined #gluster
04:38 vpshastry joined #gluster
04:40 shylesh joined #gluster
04:49 quillo joined #gluster
04:49 wgao_ joined #gluster
04:50 zapotah_ joined #gluster
04:50 zapotah_ joined #gluster
04:50 vpshastry1 joined #gluster
04:51 theron joined #gluster
04:53 chirino_m joined #gluster
04:56 codex joined #gluster
04:56 dylan_ joined #gluster
04:57 harish joined #gluster
04:57 _pol joined #gluster
04:58 kanagaraj_ joined #gluster
05:02 tziOm joined #gluster
05:04 mattappe_ joined #gluster
05:04 deepakcs joined #gluster
05:07 zaitcev joined #gluster
05:14 aravindavk joined #gluster
05:16 kdhananjay joined #gluster
05:17 aixsyd dbruhn: nope.
05:19 spandit joined #gluster
05:20 KaZeR_ joined #gluster
05:25 prasanth joined #gluster
05:28 psharma joined #gluster
05:30 MiteshShah joined #gluster
05:30 bala joined #gluster
05:30 CheRi joined #gluster
05:31 TonySplitBrain joined #gluster
05:35 kanagaraj joined #gluster
05:37 ppai joined #gluster
05:40 nshaikh joined #gluster
05:50 shubhendu joined #gluster
05:54 aravindavk joined #gluster
06:04 lalatenduM joined #gluster
06:04 raghu joined #gluster
06:04 satheesh joined #gluster
06:06 kdhananjay joined #gluster
06:08 overclk joined #gluster
06:15 tor joined #gluster
06:15 Philambdo joined #gluster
06:18 benjamin_______ joined #gluster
06:20 KaZeR joined #gluster
06:26 jporterfield joined #gluster
06:34 dneary joined #gluster
06:40 _pol joined #gluster
06:41 Cenbe joined #gluster
06:42 nshaikh joined #gluster
07:09 XATRIX joined #gluster
07:12 FarbrorLeon joined #gluster
07:14 aravindavk joined #gluster
07:15 hagarth joined #gluster
07:17 KORG|2 joined #gluster
07:17 jtux joined #gluster
07:17 vimal joined #gluster
07:21 KaZeR joined #gluster
07:22 _pol joined #gluster
07:31 kdhananjay joined #gluster
07:42 kanagaraj joined #gluster
07:42 vpshastry1 joined #gluster
07:43 aravindavk joined #gluster
07:50 ekuric joined #gluster
07:51 ctria joined #gluster
08:03 wildfire1 joined #gluster
08:04 vpshastry1 joined #gluster
08:08 satheesh joined #gluster
08:10 eseyman joined #gluster
08:14 deepakcs joined #gluster
08:18 blook joined #gluster
08:20 shubhendu joined #gluster
08:21 KaZeR joined #gluster
08:25 KaZeR__ joined #gluster
08:26 ndarshan joined #gluster
08:36 davinder joined #gluster
08:42 yosafbridge joined #gluster
08:42 pk joined #gluster
08:44 pk cfeller: ping
08:47 peem left #gluster
08:48 nshaikh joined #gluster
08:49 ProT-0-TypE joined #gluster
08:49 hagarth joined #gluster
08:50 Oneiroi joined #gluster
08:50 Oneiroi joined #gluster
09:03 13WABPNKM joined #gluster
09:05 16WAABDSY joined #gluster
09:06 mohankumar joined #gluster
09:10 andreask joined #gluster
09:17 jclift joined #gluster
09:17 pkoro joined #gluster
09:17 tryggvil joined #gluster
09:34 shyam joined #gluster
09:39 hagarth joined #gluster
09:42 navid__ joined #gluster
09:51 blook joined #gluster
09:52 _pol joined #gluster
09:54 dusmant joined #gluster
10:06 blook joined #gluster
10:08 Alpinist joined #gluster
10:13 bgpepi joined #gluster
10:14 d-fence joined #gluster
10:23 dusmant joined #gluster
10:24 jclift left #gluster
10:28 bala joined #gluster
10:37 tryggvil joined #gluster
10:39 shyam joined #gluster
10:40 Oneiroi joined #gluster
10:40 Oneiroi joined #gluster
10:44 shylesh joined #gluster
10:55 prasanth joined #gluster
10:57 harish joined #gluster
11:00 satheesh joined #gluster
11:00 erik49__ joined #gluster
11:02 KaZeR_ joined #gluster
11:03 hagarth1 joined #gluster
11:04 dusmant joined #gluster
11:06 psyl0n joined #gluster
11:06 psyl0n joined #gluster
11:06 itisravi joined #gluster
11:07 hagarth joined #gluster
11:07 badone joined #gluster
11:09 shylesh joined #gluster
11:11 qdk joined #gluster
11:12 dneary joined #gluster
11:12 hagarth1 joined #gluster
11:18 shylesh joined #gluster
11:30 mohankumar__ joined #gluster
11:38 TonySplitBrain joined #gluster
11:38 bala joined #gluster
11:41 jclift joined #gluster
11:48 hagarth joined #gluster
11:53 _pol joined #gluster
11:57 badone joined #gluster
11:57 psyl0n joined #gluster
12:04 tryggvil joined #gluster
12:12 kkeithley1 joined #gluster
12:13 CheRi joined #gluster
12:22 ppai joined #gluster
12:25 kdhananjay joined #gluster
12:27 pdrakewe_ joined #gluster
12:33 tryggvil joined #gluster
12:35 theron joined #gluster
12:35 edward1 joined #gluster
12:38 hagarth joined #gluster
12:39 pk cfeller: ping
12:47 DV joined #gluster
12:53 davinder2 joined #gluster
12:53 asku joined #gluster
12:54 prasanth joined #gluster
13:01 bulde joined #gluster
13:02 getup- joined #gluster
13:07 ira joined #gluster
13:07 ira joined #gluster
13:12 chirino joined #gluster
13:14 dbruhn joined #gluster
13:18 cfeller pk: ping
13:20 vpshastry1 left #gluster
13:21 pk cfeller: pong
13:21 cfeller pk: nice
13:22 pk cfeller: not really :-(
13:22 pk cfeller: I am almost leaving office....
13:22 cfeller pk: ah, I need to get up earlier then.
13:22 pk cfeller: oh wait
13:22 pk cfeller: what is the time now for you?
13:23 cfeller 5:22am
13:23 pk cfeller: sorry sorry
13:23 cfeller I'm on PST
13:23 pk cfeller: got it
13:23 glusterbot New news from newglusterbugs: [Bug 1049616] Error: Package: glusterfs-ufo-3.4.0-8.fc19.noarch <https://bugzilla.redhat.com/show_bug.cgi?id=1049616>
13:24 pk cfeller: Here is the deal, tonight I need to send a patch so will have to stay late. For some 5 hours more. I will come online in 3 hours, we can resume then>
13:24 badone joined #gluster
13:24 pk cfeller: sounds okay?
13:24 cfeller pk: sure, I'll be here.
13:24 pk cfeller: great
13:24 pk cfeller: cya then
13:27 pk left #gluster
13:28 KaZeR joined #gluster
13:28 sroy_ joined #gluster
13:34 _1_FabioR joined #gluster
13:52 koehlerm joined #gluster
13:52 koehlerm quit
13:54 vpshastry joined #gluster
13:56 vpshastry left #gluster
13:57 grisu joined #gluster
14:09 _ilbot joined #gluster
14:09 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
14:09 primechuck joined #gluster
14:09 bennyturns joined #gluster
14:14 Zylon joined #gluster
14:19 kkeithley_ grisu: a quick look at the source suggests to me that there's an edge condition that's not handled properly. Please file a defect report.
14:19 kkeithley_ @bug
14:19 glusterbot kkeithley_: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
14:19 kkeithley_ @file a bug
14:19 glusterbot kkeithley_: I do not know about 'file a bug', but I do know about these similar topics: 'fileabug'
14:19 kkeithley_ @fileabug
14:19 glusterbot kkeithley_: Please file a bug at http://goo.gl/UUuCq
14:20 aixsyd joined #gluster
14:20 robo joined #gluster
14:20 TonySplitBrain grisu: same bug probably: http://www.spinics.net/lists/gluster-devel/msg10547.html
14:20 glusterbot Title: [Gluster-users] Repeative log messages, please help me interpret... Gluster Development (at www.spinics.net)
14:20 aixsyd dbruhn: hola, boyo - so my infiniband problems become more interesting
14:21 aixsyd nodes 1 and 2 communicate at 7.4gbps. nodes 3 and 4 at 1.5gbps. nodes 1 and 3 at 6gbps, and 1 and 4 at 6 gbps.
14:21 aixsyd o_O
14:22 jclift joined #gluster
14:22 kkeithley_ grisu: ^^^
14:22 kkeithley_ sigh, I wish glusterbot didn't insist on shortening urls
14:22 aixsyd nodes 3 and 4, when one cable is pulled, dont fail over. even though theyre bonded with active-backup
14:22 B21956 joined #gluster
14:22 dbruhn what's it doing?
14:23 dbruhn How are you seeing the communication rates?
14:23 aixsyd iperf
14:23 aixsyd crap - morning meeting. be back in 30
14:23 B21956 joined #gluster
14:24 grisu @TonySplitBrain, yes the link points out the exact same problem. At the moment I fixed it by turning off the quota system, but this is not the best solution for my case.
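
For reference, turning the quota feature off for a volume, as grisu describes, is a single CLI call (volume name is a placeholder):

    gluster volume quota myvol disable
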
14:24 lalatenduM joined #gluster
14:25 marcoceppi joined #gluster
14:27 dneary joined #gluster
14:35 diegows joined #gluster
14:36 dylan_ joined #gluster
14:42 qdk joined #gluster
14:50 jclift joined #gluster
14:55 blook joined #gluster
14:56 jruggiero joined #gluster
14:58 marcoceppi joined #gluster
14:58 marcoceppi joined #gluster
14:59 jag3773 joined #gluster
15:00 aixsyd dbruhn: back
15:01 aixsyd it seems like theres a multitude of isses with node 3 and 4. slow speed, not failing over...
15:01 aixsyd im going to make sure nodes 1 and 2 fail over - i'm using the same configs and same hardware on all four nodes
15:04 aixsyd okay great - neither pair is failing over
15:08 aixsyd UGH. so pair1 (nodes 1 and 2) only have one of two ports w/ link lights on. pair two has both port sets with link lights - neither fail over.
15:10 aixsyd okay, opensm restart fixed pair1's problem. it fails over.
15:11 aixsyd okay, same thing, pair 2 now fails over again. lets try speeds
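
A rough sketch of the checks behind the last few steps, assuming the usual infiniband-diags tools and init-script service management (command names vary slightly by distro):

    # per-port state and rate on the local HCA; ports stuck in INIT usually mean no subnet manager
    ibstat
    # restart the subnet manager
    service opensm restart
    # per-link view of the fabric
    iblinkinfo
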
15:13 aixsyd dbruhn: FUCKITY FUCK. http://fpaste.org/67036/80385138/
15:13 glusterbot Title: #67036 Fedora Project Pastebin (at fpaste.org)
15:13 spechal joined #gluster
15:14 spechal Is it not possible to mount a directory from within a volume?  i.e. My volume is /gluster ... I am trying to mount /gluster/stuff ... when I do this I get an error stating: failed to get the 'volume file' from server
15:18 aixsyd_ joined #gluster
15:18 aixsyd back
15:19 jobewan joined #gluster
15:19 wushudoin joined #gluster
15:26 daMaestro joined #gluster
15:26 daMaestro joined #gluster
15:27 zaitcev joined #gluster
15:27 daMaestro joined #gluster
15:30 tjikkun_work joined #gluster
15:35 TonySplitBrain spechal: maybe `mount --bind $srcDir $targetDir` will solve your problem?
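
A minimal sketch of the bind-mount approach TonySplitBrain suggests, assuming the volume is already mounted with the native client (paths and names are placeholders):

    # mount the whole volume once
    mount -t glusterfs gluster0:/gv0 /mnt/gluster/gv0
    # expose just one subdirectory somewhere else
    mount --bind /mnt/gluster/gv0/stuff /srv/stuff

The /etc/fstab equivalent uses the filesystem type "bind", as spechal and cfeller note later in this log.
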
15:35 dbruhn aixsyd, what do you mean by failing over?
15:35 skered- We have gluster behind a round robin set of ctdb managed ips.  How does gluster handle requests for data that's being copied or hasn't been copied yet?  It appears gluster is pretty fast to create empty files and ~3 seconds to copy a 4GB DVD ISO.
15:35 aixsyd active-backup bonding
15:36 P0w3r3d joined #gluster
15:36 bugs_ joined #gluster
15:36 skered- Is there some gluster magic that's happening to make it look like that file has been fully copied?
15:38 failshell joined #gluster
15:38 jclift joined #gluster
15:39 TonySplitBrain skered-: what OS do you use? i have problems setting up CTDB on Gluster on Debian..
15:39 skered- TonySplitBrain: RHLE 6.4
15:39 skered- RHEL rather
15:40 TonySplitBrain skered-: can I ask, what version of Gluster you use?
15:41 skered- 3.4.1
15:42 dbruhn aixsyd, have you tried taking th bonds out of the picture so make sure all of your links are good?
15:42 TonySplitBrain skered-: same here
15:42 TonySplitBrain skered-: can you please test your gluster with this? https://wiki.samba.org/index.php/Ping_pong
15:43 glusterbot Title: Ping pong - SambaWiki (at wiki.samba.org)
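
ping_pong is normally started at the same time on every node against one file on the shared mount; a sketch, with the path as a placeholder and the count conventionally set to the number of nodes plus one:

    # run on each CTDB/Gluster node against the same file
    ping_pong /mnt/gluster/ctdb/ping.dat 3
    # the IO-coherence variant TonySplitBrain refers to below
    ping_pong -rw /mnt/gluster/ctdb/ping.dat 3
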
15:43 zerick joined #gluster
15:43 aixsyd dbruhn: http://fpaste.org/67043/89281235/ - http://fpaste.org/67045/81391138/
15:43 glusterbot Title: #67043 Fedora Project Pastebin (at fpaste.org)
15:44 TonySplitBrain skered-: about your problem: is it a problem actually?
15:46 aixsyd dbruhn: heres a really shitty problem - if I connect nodes 1 and 3 , iperf is at about 6gpbs. same with 1 and 4.
15:46 TonySplitBrain skered-: somewere in gluster news/talks i saw something about fast creation of zero-filled files, but probably it was a feature request or plans, not somethimg implemented already
15:46 aravindavk joined #gluster
15:48 dbruhn aixsyd, little confused, 4 "nodes" servers? clients?, also, are you trying to use multiple nice for failover purposes, or for speed improvements? lastly, is your volume distributed, or replicated, or both? or striped?
15:48 ira TonySplitBrain: Are you finding that gluster is passing that ping_pong?
15:48 aixsyd dbruhn: I have two pairs of two nodes. all four nodes are identical hardware. both PAIRS have identical configs. I dont even have gluster installed yet on any of them.
15:49 TonySplitBrain ira: my gluster cannot pass this test (starting from "Testing IO coherence")
15:49 diegows joined #gluster
15:50 ira TonySplitBrain: Ok, that test was reworked and added to the test suite in samba... Right now I expect gluster to fail that ping_pong, and pass the one in the main testsuite.
15:50 dbruhn aixsyd, what's the intention here?
15:51 ira aixsyd: What type of bonding are you using?
15:51 aixsyd ira: active-backup
15:51 TonySplitBrain ira: main testsuite of what?
15:51 aixsyd dbruhn: two pairs of replicated clusters - two different data uses
15:51 ira TonySplitBrain: Samba itself.  I'd have to look up the correct incantation.
15:52 spechal TonySplitBrain: Thanks
15:52 TonySplitBrain ira: i also found /usr/bin/ping_pong in Debian package ctdb
15:54 dbruhn aixsyd, have you seen this? http://www.openfabrics.org/downloads/OFED/ofed-1.4/OFED-1.4-docs/ib-bonding.txt
15:54 TonySplitBrain ira: i just upgraded ctdb to 2.5, will try it now
15:54 blook joined #gluster
15:54 aixsyd dbruhn: yep. i have one pair working fine at the right speeds.
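
For context, the active-backup bond under discussion is normally built with the standard Linux bonding driver on top of the IPoIB interfaces. A sketch of RHEL-style config files, with addresses and interface names purely illustrative (see the ib-bonding.txt link above for the IB-specific caveats):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.0.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100 primary=ib0"

    # /etc/sysconfig/network-scripts/ifcfg-ib0 (and similarly for ifcfg-ib1)
    DEVICE=ib0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
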
15:57 JonnyNomad joined #gluster
15:57 dbruhn hmm, have you tried swapping your cables out? just to make sure you don't have an issue there? or maybe swap your cards around to make sure all of them are working properly?
15:59 benjamin__ joined #gluster
16:00 aixsyd dbruhn: yep, swapped cables. I didnt swap card because attaching node 1 and 3 and 1 and 4 work fine. so the cards capable - its something between 3 and 4 when connected
16:01 dbruhn No switch right?
16:02 aixsyd no switch
16:02 dbruhn maybe competing subnet managers?
16:03 spechal TonySplitBrain: If I use the mount --bind option, would I specify the fstype as glisters or the underlying ifs?
16:03 spechal xfs*
16:03 kl4m joined #gluster
16:03 spechal glusterfs and xfs (stupi autocorrect)
16:04 aixsyd dbruhn: should opensm be installed on all 4 nodes?
16:04 spechal never mind, I now see I should use the fstype bind
16:04 dbruhn You only need one subnet manager per network I believe.
16:05 aixsyd lemme see what happens
16:06 aixsyd no change
16:08 dbruhn ok so you tested 1/2, 2/3, 3/4?
16:11 aixsyd i tested 1/2, 3/4, 1/3, 1/4
16:11 aixsyd all is good except 3/4
16:12 dbruhn So can you just run the 1/3, 2/4 as pairs?
16:12 jbrooks joined #gluster
16:12 aixsyd hm
16:12 dbruhn I know it's not a solution, but it's a solution, lol.
16:12 aixsyd i suppose I *could* -
16:13 aixsyd this is just mind boggling. i got somsone else in another room saying, "that does indeed qualify as fucking odd"
16:13 ira TonySplitBrain: I don't know if that one is correct.  I'd have to look into it. :/
16:14 ira aixsyd: Have you swapped the switch ports for 3 or 4 with 1 or 2?
16:15 dbruhn aixsyd, yeah it's weird
16:16 aixsyd ira: no switch
16:17 dbruhn I hate to say this... but run your perf tests while wiggling cables?
16:18 dbruhn maybe there is something with a cold solder at a joint, or maybe one of the transceivers is wonky
16:18 dbruhn as I am assuming you haven't been racking and unracking servers
16:19 ira aixsyd: Cross over cabling everything?
16:20 TonySplitBrain aixsyd: is your issue gluster-specific?
16:22 aixsyd TonySplitBrain: not yet - but I'm using IB *for* Gluster. not gonna attempt to set up a gluster cluster without IB working properly first
16:23 aixsyd ira: explain? i have port 1 to port 1, port 2 to port 2 on each pair
16:23 aixsyd dbruhn: i just set up nodes 1 and 3, and 2 and 4, and now all of them iperf at 6gbps. thats a loss of 1.5gbps from when 1 and 2 were iperfing at 7.41gbps
16:24 skered Is there a command that will return zero/non-zero if a volume self-heal is running?  I know I can grep the output of 'gluster volume heal $name info' for 'Number of entries: ' but I thought maybe there could be a better way
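
A small sketch of the grep-style check skered describes, suitable for a monitoring hook (volume name is a placeholder; exits 0 when nothing is pending heal):

    #!/bin/sh
    vol=myvol
    pending=$(gluster volume heal "$vol" info | awk '/^Number of entries:/ {sum += $4} END {print sum + 0}')
    # zero entries pending -> exit 0, otherwise exit 1
    [ "$pending" -eq 0 ]
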
16:28 TonySplitBrain skered: maybe there are some another channel(s) with more IB knowledge in them? no offence..
16:29 TonySplitBrain skered: i saved some links on gluster health monitoring..
16:29 TonySplitBrain skered: http://serverFault.com/questions/296383/
16:29 glusterbot Title: monitoring - How to monitor glusterfs volumes - Server Fault (at serverFault.com)
16:30 aixsyd TonySplitBrain: i'm trying #DRBD, #proxmox, and #reddit-sysadmin - everyone is at a loss at this point - as am i
16:31 qdk joined #gluster
16:33 JoeJulian Too bad there's no active #infiniband channel...
16:33 aixsyd JoeJulian: tell me about it
16:33 JoeJulian Despite my abhorrence of vendor support, I think you're at that point.
16:33 TonySplitBrain aixsyd: there also other cluster FSs, like #ceph (on irc.oftc.net)
16:33 skered TonySplitBrain: Thanks
16:35 herdani joined #gluster
16:37 dbruhn aixsyd, I would start swapping cards around, you might have two bad cards/cables between 3/4, and the combination is probably rearing it's head is it's worst form between those two.
16:37 dbruhn And +1 to vendor support, Mellenox has been decent to work with for me.
16:37 kaptk2 joined #gluster
16:37 aixsyd dbruhn: card swapping is next. thankfully I have a spare
16:37 herdani Hi, testing gluster with 2 nodes, replica 2, one clusterfs client mount. If I shutdown a node, create a new file, get the sleeping node up again, he sees the new file, but it is empty. Do I miss something ?
16:37 JoeJulian As for concern that this is off topic, I don't mind infiniband talk in here as long as someone can learn from it. Diagnostic methods, architecture, even some brand feedback can be useful for gluster users in the future.
16:38 JoeJulian Just make sure that on-topic doesn't get ignored.
16:39 aravindavk joined #gluster
16:40 JoeJulian herdani: "he sees the new file..." Is "he" the brick or the client mount?
16:40 herdani JoeJulian: the brick
16:41 JoeJulian self-heal is a process. Check again in 10 minutes. If it's still empty, then I would start looking in logs.
16:42 JoeJulian You can force it by accessing the file through the client.
16:44 herdani JoeJulian: it's been a while already, and indeed I can touch the file through the client, but it's not really an option to go for if this happens in production ... which logs should I consult ?
16:46 aixsyd JoeJulian: in herdani's situation - is it *possible* that a node goes down for say.. 4 hours.. and in those 4 hours, new data is written to the up node... what happens when the other node comes back and that new data is then not touched at all for say, a year.
16:47 aixsyd would it not self-heal until its accessed again?
16:49 JoeJulian That's what the self-heal daemon is for. When a file is created/deleted/changed an entry is added in .glusterfs/indices (or landfill for deletions iirc). When the missing brick returns to service, glustershd reconnects. That re-connection triggers a walk of that tree. Each item is then self-healed.
16:49 JoeJulian You can get a list of items in that tree with "gluster volume heal $vol info"
16:50 JoeJulian Why is his item empty though... Not sure. That should be in one of the server's /var/log/glusterfs/glustershd.log files.
16:50 herdani interesting detail: I've shutdown the second node (which had the content of the file), and magic: the other got the file content right away .... I don't get it :)
16:50 herdani but at the end I'm quite happy: my content looks safe
16:51 dbruhn JoeJulian, maybe help me better my understanding here, when a node goes away in AFR, is the client responsible for keeping the data for files that need to be healed?
16:51 JoeJulian I swear there's some caching thing happening on the brick end. I see this every so often from other people and it always ends up that the data's actually there.
16:53 JoeJulian The client marks the ,,(extended attributes) as operations are performed and finished. When it can't be finished (a brick is missing) those attributes are not cleared. I haven't looked at the process for *how* the indices entries are added and removed, but I suspect that's also part of afr on the client. The heal, however, is not performed by the client unless the client *needs* that heal to be performed for a file access.
16:53 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
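
As an illustration of what the factoid's getfattr command shows on a replica volume's bricks, the AFR changelog attributes look roughly like this (volume name, path, and values are made up; the linked hekafs article explains how to interpret them):

    # getfattr -m . -d -e hex /bricks/gv0/somefile
    # file: bricks/gv0/somefile
    trusted.afr.gv0-client-0=0x000000000000000000000000
    trusted.afr.gv0-client-1=0x000000020000000100000000
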
16:54 flakrat joined #gluster
16:55 flakrat left #gluster
17:01 johnbot11 joined #gluster
17:03 Oneiroi joined #gluster
17:06 herdani seems to be a latency issue indeed, looks ok. Thanks JoeJulian !
17:08 JoeJulian You're welcome.
17:09 neofob joined #gluster
17:12 bala joined #gluster
17:13 KaZeR joined #gluster
17:17 aixsyd joined #gluster
17:18 aixsyd dbruhn: if I wanted to directly connect *three* nodes together - and all three nodes have two ports on their IB cards - is this possible without a switch?
17:23 dbruhn :/
17:23 dbruhn i guess you could get crazy with ip tables if you really wanted, but you would be better off finding a switch
17:25 Mo_ joined #gluster
17:26 skered If I run 'gluster volume heal volname info' on a volume that's under heavy user/large amount of files the command just hangs and don't produce output.
17:26 ira I missed the IB part.  ok.
17:29 davinder joined #gluster
17:34 jclift joined #gluster
17:35 jclift Jeez I hate this Empathy IRC program.  Definitely going to learn irssi tomorrow
17:35 JoeJulian I use Xchat
17:35 divbell irssi rocks
17:35 divbell client of the future
17:36 jclift If irssi doesn't work out for me, I'll look at xchat
17:36 divbell hexchat now, isn't it
17:36 jclift JoeJulian: There was a user around having Infiniband troubles?
17:36 psyl0n joined #gluster
17:36 divbell coming from ircii->bitchx->irssi, (he)xchat felt really clunkly and bad to me
17:36 jclift Empathy screwed up and disconnected me during the conversation, and hasn't logged it since.
17:36 jclift JoeJulian: Is the user still in channel?
17:37 JoeJulian jclift: aixsyd is.
17:37 jclift aixsyd: I used to use Infiniband a lot, and still remember a useful amount.  What's the situation/problem you're having with it?
17:37 jclift JoeJulian: Thanks btw. :)
17:38 aixsyd jclift: i'll send a PM
17:38 jclift If you want.  Some of the ppl here might find the troubleshooting process/steps useful tho.
17:39 aixsyd i pasted it all up above already - just wanna keep the non-glusterspam down
17:39 jclift np.
17:39 LoudNoises joined #gluster
17:40 jclift Yeah, chuck it in a pm then.  Have can't see any of IRC history at all.  Fucking Empathy (crap IRC client. avoid)
17:40 diegows joined #gluster
17:41 jclift aixsyd: Note, I have to leave in abt 15 mins.
17:42 aixsyd understood - get my PMs?
17:42 jclift aixsyd: Nope
17:42 redbeard joined #gluster
17:44 redbeard joined #gluster
17:47 lalatenduM joined #gluster
17:49 bala joined #gluster
17:51 _pol joined #gluster
17:52 badone joined #gluster
17:53 _BryanHM_ joined #gluster
17:56 _pol joined #gluster
17:57 failshel_ joined #gluster
18:03 failshell joined #gluster
18:12 dbruhn I am using xchat too, crashes from time to time, but seems to have gotten much better over the last couple updates.
18:13 semiosis check out konversation, it's great
18:14 KaZeR joined #gluster
18:18 skered Ok my trust with gluster is complete... it does the right thing during self-heal/sync/replication
18:19 pk2 joined #gluster
18:19 pk2 cfeller: ping
18:26 cfeller pk: pong
18:26 cfeller pk: midnight for you
18:28 pk2 cfeller: yes sir!
18:29 cfeller pk2: well if you are up for some debugging at this hour, lets go for it.
18:29 pk2 cfeller: lets complete this
18:30 pk2 cfeller: Damn, I must have closed the tab with the bug, could you please give me the bug id please
18:30 cfeller RHBZ # 1028582
18:31 mattapperson joined #gluster
18:32 pk2 cfeller: I understood this mount: gluster0.my.local.domain:gv0    /mnt/gluster/gv0 glusterfs defaults,_netdev,fetch-attempts=3 0 0
18:32 JoeJulian bug 1028582
18:32 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1028582 unspecified, unspecified, ---, pkarampu, ASSIGNED , GlusterFS files missing randomly - the miss triggers a self heal, then missing files appear.
18:33 pk2 cfeller: I did not understand this mount: /mnt/gluster/gv0/debian /var/www/mirror/debian  bind    defaults,bind   0 0
18:33 cfeller pk2: right, that line is in my /etc/fstab on the client. where <gluster0.my.local.domain> is the actual server name.
18:34 semiosis bind mount is a trick to make a subdir of a volume available but not the whole volume
18:34 cfeller correct
18:35 pk2 semiosis: interesting!, This may be a dumb question but I wonder if the files/dirs that go to that mount point actually go through gluster or not
18:36 semiosis they should
18:36 cfeller They should.  On my server I actually have this:
18:36 cfeller <gluster_server>:gv0    /mnt/gluster/gv0 glusterfs defaults,_netdev,fetch-attempts=3 0 0
18:36 cfeller /mnt/gluster/gv0/debian /var/www/mirror/debian  bind    defaults,bind   0 0
18:36 cfeller /mnt/gluster/gv0/fedora /var/www/mirror/fedora  bind    defaults,bind   0 0
18:36 pk2 semiosis: according to manpage they seem so!
18:36 cfeller /mnt/gluster/gv0/fedora-epel /var/www/mirror/fedora-epel bind defaults,bind 0 0
18:37 cfeller and there are no issues with the fedora bind mounts... so I don't think bind is the issue.
18:38 pk2 cfeller: learned something new today :-). Reading the bug description again...
18:42 pk2 cfeller: Can you please get getfattr -d -m. -e hex output of both "i18n/Translation-en.bz2" and "i18n" please
18:44 cfeller # file: i18n/
18:44 cfeller security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000
18:44 cfeller # file: i18n/Translation-en.bz2
18:44 cfeller security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000
18:47 Technicool joined #gluster
18:48 sroy_ joined #gluster
18:51 glusterbot New news from resolvedglusterbugs: [Bug 971805] nfs: "rm -rf" throws "E [client3_1-fops.c:5214:client3_1_inodelk]" Assertion failed <https://bugzilla.redhat.com/show_bug.cgi?id=971805>
18:51 pk2 cfeller: looks a lot like one of the bug I looked at
18:51 pk2 cfeller, just look at 971805
18:52 pk2 cfeller: http://review.gluster.org/5178 is the fix
18:52 glusterbot Title: Gerrit Code Review (at review.gluster.org)
18:53 cfeller pk2: the BZ report said it was fixed in 3.4.0... is this a regression, or? as I'm running 3.4.2.
18:53 cfeller pk2: was running 3.4.1 before that, skipped 3.4.0
18:53 cfeller pk2: and the bug wasn't present in 3.3.2
18:54 pk2 cfeller: Exactly what I am trying to figure out, it could be wrong info in the fixed-in-version ? just checking wait...
18:54 cfeller pk2: it was only after the 3.3.2 to 3.4.1 upgrade that this popped up.
18:55 pk2 cfeller: pk@localhost - ~/workspace/gerrit-repo (3.5)
18:55 pk2 00:24:40 :) ⚡ git log origin/release-3.4 | grep "http://review.gluster.org/5178"
18:55 pk2 pk@localhost - ~/workspace/gerrit-repo (3.5)
18:55 pk2 00:24:55 :( ⚡ git log origin/release-3.5 | grep "http://review.gluster.org/5178"
18:55 pk2 Reviewed-on: http://review.gluster.org/5178
18:55 pk2 pk@localhost - ~/workspace/gerrit-repo (3.5)
18:55 pk2 00:24:59 :) ⚡ git log origin/master | grep "http://review.gluster.org/5178"
18:55 pk2 Reviewed-on: http://review.gluster.org/5178
18:55 glusterbot Title: Gerrit Code Review (at review.gluster.org)
18:55 lalatenduM joined #gluster
18:55 pk2 cfeller: patch never went in 3.4....
18:55 pk2 cfeller: it is present in 3.5 and above....
18:55 Hoggins joined #gluster
18:56 pk2 cfeller: I sent this patch, some 7 months back.... according to the patch timestamps
18:57 hagarth pk2, cfeller: sounds like a good candidate for inclusion in 3.4.3
18:57 pk2 hagarth: I thought you were asleep :-P
18:57 hagarth pk2: your debugging kept me awake :P
18:58 cfeller pk2: since 3.5 isn't stable yet, do we want to try a test build of 3.4.3?
18:58 pk2 hagarth: what is the timeline for 3.4.3?
18:58 Hoggins hello everybody ! I have a two-node replicated GlusterFS storage, and one of the bricks has been down for a few days due to connectivity issues. As the remaining brick also had difficulties, I *know* that, when the offline brick is going to be online again, there is gonna be a split-brain condition.  Question : is there a way to tell Gluster that I want to consider the latest files as the valid ones anyway, so that a choice will be made when the split-brain condition is detected, avoiding to "lock" the files until the condition is restored ?
18:58 hagarth pk2: no timeline as yet.. we are trying to determine what all we need to pull into 3.4.3
18:58 kkeithley_ 3.4.3 wouldn't be much different than 3.4.2 right now
19:00 pk2 hagarth: could you paste the wiki link so that cfellar can add this patch for inclusion in 3.4.3?
19:00 hagarth http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
19:00 glusterbot Title: Backport Wishlist - GlusterDocumentation (at www.gluster.org)
19:01 pk2 cfeller: Add the patch there. I will post a 3.4 release backport tomorrow,
19:02 pk2 hagarth: Can we provide him with the testbuild so that he can confirm?
19:02 pk2 hagarth: that the fix works....
19:02 hagarth pk2: sure, let us figure out a way
19:03 lpabon joined #gluster
19:03 pk2 hagarth: thanks
19:03 hagarth i will be off now
19:03 pk2 cfeller: I will be off now?...
19:05 blook joined #gluster
19:05 cfeller pk2: sure.  I'm in the process of adding the backport request.  if I  have any issues, I'll bug one of these guys. =)
19:06 pk2 cfeller: fyi, this is comment which gave away the problem: https://bugzilla.redhat.com/show_bug.cgi?id=1028582#c11
19:06 glusterbot Bug 1028582: unspecified, unspecified, ---, pkarampu, ASSIGNED , GlusterFS files missing randomly - the miss triggers a self heal, then missing files appear.
19:06 pk2 cfeller: have a nice day, cya
19:13 lpabon joined #gluster
19:14 cfeller Since there was no reference (yet) of that fix in my bug report, I put the review.gluster.org/5178 into the requested backport... I assume that is OK?
19:14 KaZeR_ joined #gluster
19:17 marcoceppi joined #gluster
19:17 marcoceppi joined #gluster
19:17 jclift left #gluster
19:17 cfeller It seems like it, I see a couple others that only reference the review # and not a BZ #
19:19 Technicool joined #gluster
19:20 sroy_ joined #gluster
19:23 Staples84 joined #gluster
19:24 17WAARZDF joined #gluster
19:25 jruggiero left #gluster
19:26 jclift joined #gluster
19:26 jclift aixsyd: Found it.  http://moca.espresso.gr.jp/wiki/wiki.cgi?page=InfiniBand
19:26 glusterbot Title: InfiniBand - Esprersso FreeStyleWiki (at moca.espresso.gr.jp)
19:26 jclift aixsyd: Search on "SFS-HCA-E2T7-A1" there.
19:37 jclift left #gluster
19:42 tryggvil joined #gluster
19:49 TonySplitBrain https://wiki.samba.org/index.php/CTDB_Setup#GlusterFS_filesystem points to http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=159 , which is probably on RH Bugzilla now
19:49 glusterbot Bug 159: high, high, ---, dkl, CLOSED WORKSFORME, login problem on tty1; other ttys problem with enter key
19:49 TonySplitBrain how to find this bug page?
19:52 blook joined #gluster
19:59 japuzzo_ joined #gluster
20:06 social joined #gluster
20:08 RedShift joined #gluster
20:09 sbabbaro_ joined #gluster
20:11 __ash joined #gluster
20:14 sroy_ joined #gluster
20:15 KaZeR_ joined #gluster
20:17 sbabbaro_ hi, i'm new to gluster, and i'm not sure i really need it ;) i have 2 physical servers and in the next two weeks i need to store 300gb on them (not a lot of disk space, not a lot of files). I'll create these files on one server. On both of them i have an nginx that will serve that files. I need a way to easily replicate these files (without forgetting about rsyncing), and gluster seems to be easy. The problem is that i only have /boot and / on these servers, no dedicated disks for gluster. I can't find doc about the issues of having bricks on / . Should i just go with rsync or can i use gluster?
20:18 sbabbaro_ (/ is ext4, ubuntu 13.04/13.10)
20:19 flrichar my bricks are on / , /export/bricks :oD
20:19 NeatBasis_ joined #gluster
20:21 sbabbaro_ flrichar: mmmh.. joking?
20:21 flrichar yea, I'm kinda new too
20:21 flrichar I don't see where it would be an issue
20:21 flrichar unless you're exporting the entire / filesystem...
20:22 JoeJulian oldbug 159
20:22 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=761891 medium, low, ---, vikas, CLOSED CURRENTRELEASE, Client hangs when coherent byte range locks is attempted in replicate setup
20:22 JoeJulian TonySplitBrain: ^
20:22 aixsyd oh my god. I'm gonna scream. and throw these infiniband cards in a grinder.
20:23 flrichar call them infiniblonde.  they love that
20:23 JoeJulian sbabbaro_: It only really matters if your disk gets full.
20:24 aixsyd server A card A and server B card B = 7.41gbps. server C card C and server D card D = 1.5gps. Server A card A and Server C card C = 6gbps. Server A Card A and Server D card D = 6gbps. Server A card A and server D card B = 5.5gbps.
20:24 JoeJulian If you're concerned about that, use a loopback image for a brick.
20:24 aixsyd why do my speeds vary SO GREATLY  D: D: D:
20:25 flrichar some of the groups are lazy?
20:25 flrichar ;oD
20:25 aixsyd this defys all logic
20:26 aixsyd wow - WHUT?! Server A card A and server D card B - if i iperf with S-A as the server and S-D as the client, i get 5.5gbps. if i reverse it, i get 1gbps.
20:26 sbabbaro_ JoeJulian: how much full? i mean, i have 2TB disks, and sensu sends me warning emails at 80%. Safe enough?
20:26 JoeJulian yep
20:27 flrichar 2tb disks for 300gb of files?
20:28 sbabbaro_ JoeJulian: thanks :)
20:29 sbabbaro_ flrichar: not just for that :)
20:29 JoeJulian sbabbaro_: Just to make sure you're clear, however, gluster is not a filesystem replicating tool. GlusterFS is a filesystem that uses your backend filesystem for its own storage. You'll have replicated files, but you'll access them through the client.
20:29 flrichar hehe
20:29 sbabbaro_ JoeJulian: yes, i know, thanks
20:30 JoeJulian :)
20:30 flrichar its too bad there wasn't a cgroup or selinux setup which only allowed the gluster process to write to the backend bricks
20:30 flrichar makes me want to put a file there called "dont_write_here_dummy!"
20:31 sbabbaro_ an alternative seems to be https://github.com/calmh/syncthing , maybe too bleeding edge
20:31 glusterbot Title: calmh/syncthing · GitHub (at github.com)
20:31 JoeJulian Make your bricks in /data/gluster/brick and make /data/gluster mode 700.
20:31 flrichar ahh cool
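
A minimal sketch of that layout, with hostnames and the volume name as placeholders:

    # on each server
    mkdir -p /data/gluster/brick
    chmod 700 /data/gluster
    # from one server, create and start a 2-way replicated volume on those bricks
    gluster volume create gv0 replica 2 server1:/data/gluster/brick server2:/data/gluster/brick
    gluster volume start gv0
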
20:39 tryggvil joined #gluster
20:43 diegows joined #gluster
20:44 jag3773 joined #gluster
20:44 qdk joined #gluster
20:52 badone joined #gluster
20:52 TonySplitBrain JoeJulian: thank you!
20:57 theron joined #gluster
21:07 plarsen joined #gluster
21:15 KaZeR_ joined #gluster
21:27 Mo_ When I try to create gluster volume I get "Operation failed" message
21:28 Mo_ I checked that dir exists and everything looks good
21:28 bstr left #gluster
21:29 Mo_ Is there a way to find out why it might be failing? Logs doesn't show much
21:46 MacWinner joined #gluster
21:50 diegows joined #gluster
22:04 dbruhn are all of your peers showing as up?
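
For anyone hitting the same "Operation failed" on volume create, the usual first checks are the peer state and the management daemon's log on the node where the command was run (the log path below is the common default and may differ):

    gluster peer status
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
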
22:06 mattappe_ joined #gluster
22:08 diegows joined #gluster
22:16 KaZeR_ joined #gluster
22:18 gmcwhistler joined #gluster
23:00 bala joined #gluster
23:14 psyl0n joined #gluster
23:17 KaZeR_ joined #gluster
23:22 jag3773 joined #gluster
23:24 psyl0n joined #gluster
23:36 robo joined #gluster
23:39 psyl0n joined #gluster
23:40 DV joined #gluster
23:42 nueces joined #gluster
23:55 khushildep joined #gluster
