
IRC log for #gluster, 2014-02-22


All times shown according to UTC.

Time Nick Message
00:13 theron joined #gluster
00:21 overclk joined #gluster
00:26 vpshastry joined #gluster
00:28 jporterfield joined #gluster
00:41 b0e1 joined #gluster
00:46 jag3773 joined #gluster
00:46 cjanbanan joined #gluster
00:56 rpowell joined #gluster
00:58 vpshastry joined #gluster
01:19 JonnyNomad joined #gluster
01:27 cjanbanan joined #gluster
01:33 davinder joined #gluster
01:56 cjanbanan joined #gluster
01:57 jporterfield joined #gluster
02:04 B21956 joined #gluster
02:22 jporterfield joined #gluster
02:45 cjanbanan joined #gluster
02:56 cjanbanan joined #gluster
03:01 nightwalk joined #gluster
03:07 sprachgenerator joined #gluster
03:11 harish_ joined #gluster
03:25 bala joined #gluster
03:34 cjanbanan joined #gluster
03:43 shyam joined #gluster
04:03 cjanbanan joined #gluster
04:05 jporterfield joined #gluster
04:10 jporterfield joined #gluster
04:49 cjanbanan joined #gluster
04:51 shyam joined #gluster
05:17 hagarth joined #gluster
05:27 cjanbanan joined #gluster
05:38 prasanth joined #gluster
05:42 hagarth joined #gluster
05:43 mohankumar joined #gluster
05:49 shyam joined #gluster
05:55 shyam1 joined #gluster
05:58 benjamin_ joined #gluster
05:58 cjanbanan joined #gluster
06:09 mohankumar joined #gluster
06:17 davinder2 joined #gluster
06:21 davinder joined #gluster
06:24 hagarth joined #gluster
06:30 cjanbanan joined #gluster
06:32 jporterfield joined #gluster
06:57 ctria joined #gluster
06:57 sputnik13 joined #gluster
07:01 cjanbanan joined #gluster
07:01 shyam joined #gluster
07:08 mohankumar joined #gluster
07:12 ktosiek joined #gluster
07:24 harish_ joined #gluster
07:26 cjanbanan joined #gluster
07:39 jporterfield joined #gluster
07:44 RedShift joined #gluster
07:54 prasanth joined #gluster
07:56 davinder2 joined #gluster
07:57 aquagreen joined #gluster
07:57 cjanbanan joined #gluster
08:02 mohankumar joined #gluster
08:21 glusterbot New news from resolvedglusterbugs: [Bug 796656] running ping_pong on a file from a gluster mount crashed the gluster client <https://bugzilla.redhat.com/show_bug.cgi?id=796656>
08:21 vpshastry joined #gluster
08:24 mohankumar joined #gluster
08:26 cjanbanan joined #gluster
08:30 benjamin_ joined #gluster
08:48 bala joined #gluster
08:54 mohankumar joined #gluster
08:56 harish joined #gluster
08:58 rossi_ joined #gluster
09:01 cjanbanan joined #gluster
09:21 mohankumar joined #gluster
09:29 mohankumar joined #gluster
09:31 cjanbanan joined #gluster
10:12 cjanbanan joined #gluster
10:21 vpshastry left #gluster
10:34 mohankumar joined #gluster
10:41 jporterfield joined #gluster
10:45 cjanbanan joined #gluster
10:46 mohankumar joined #gluster
10:57 cjanbanan joined #gluster
11:30 cjanbanan joined #gluster
11:49 ctria joined #gluster
11:56 mohankumar joined #gluster
12:06 cjanbanan joined #gluster
12:22 rfortier joined #gluster
12:25 mohankumar joined #gluster
12:30 rfortier1 joined #gluster
12:30 askb joined #gluster
12:32 rossi_ joined #gluster
12:40 bala1 joined #gluster
12:42 rossi_ joined #gluster
12:43 mohankumar joined #gluster
12:52 haomaiwang joined #gluster
13:00 cjanbanan joined #gluster
13:06 haomai___ joined #gluster
13:19 neurodrone__ joined #gluster
13:28 hagarth joined #gluster
13:43 jporterfield joined #gluster
13:44 cjanbanan joined #gluster
14:07 satheesh2 joined #gluster
14:16 redbeard joined #gluster
14:18 cjanbanan joined #gluster
14:19 shapemaker joined #gluster
14:20 pdrakeweb joined #gluster
14:20 ^^rcaskey joined #gluster
14:22 haomaiwang joined #gluster
14:22 mtanner joined #gluster
14:23 johnmark joined #gluster
14:23 rwheeler_ joined #gluster
14:24 JonnyNomad_ joined #gluster
14:24 d-fence_ joined #gluster
14:25 aurigus_ joined #gluster
14:25 klaas joined #gluster
14:26 ktosiek_ joined #gluster
14:26 bazzer joined #gluster
14:27 [o__o] joined #gluster
14:28 tdasilva joined #gluster
14:30 Dave2_ joined #gluster
14:32 wcchandler joined #gluster
14:33 crazifyngers joined #gluster
14:37 ultrabizweb joined #gluster
14:37 eryc joined #gluster
14:39 16WAAS17A joined #gluster
14:39 16WAAS14A joined #gluster
14:39 wrale joined #gluster
14:39 16WAAR7KB joined #gluster
14:43 redbeard joined #gluster
14:49 hchiramm_ joined #gluster
14:49 jporterfield joined #gluster
14:50 sputnik13 joined #gluster
14:50 ron-slc joined #gluster
14:51 mohankumar joined #gluster
14:51 the-me joined #gluster
14:51 DV joined #gluster
14:51 PatNarciso joined #gluster
14:51 georgeh|workstat joined #gluster
14:51 primusinterpares joined #gluster
14:52 ThatGraemeGuy joined #gluster
14:52 neurodrone joined #gluster
14:52 NuxRo joined #gluster
14:52 glusterbot New news from resolvedglusterbugs: [Bug 764964] deadlock related to transparent hugepage migration in kernels >= 2.6.32 <https://bugzilla.redhat.com/show_bug.cgi?id=764964>
14:53 wgao_ joined #gluster
14:53 atrius` joined #gluster
14:53 atrius joined #gluster
15:04 ilbot3 joined #gluster
15:04 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
15:04 NuxRo joined #gluster
15:05 abyss^ joined #gluster
15:05 wgao_ joined #gluster
15:05 nikk joined #gluster
15:05 lyang0 joined #gluster
15:06 tru_tru joined #gluster
15:06 zoldar joined #gluster
15:06 georgeh|workstat joined #gluster
15:06 codex joined #gluster
15:06 REdOG joined #gluster
15:06 xymox joined #gluster
15:07 DV joined #gluster
15:07 samppah joined #gluster
15:07 sticky_afk joined #gluster
15:07 sulky joined #gluster
15:08 the-me_ joined #gluster
15:08 mkzero_ joined #gluster
15:08 mohankumar__ joined #gluster
15:08 jurrien__ joined #gluster
15:08 Cenbe_ joined #gluster
15:08 uebera|| joined #gluster
15:08 purpleidea joined #gluster
15:08 divbell_ joined #gluster
15:08 tjikkun_work joined #gluster
15:08 jiffe98 joined #gluster
15:09 jclift joined #gluster
15:10 orion7643 joined #gluster
15:13 16WAAS17A joined #gluster
15:13 kkeithley joined #gluster
15:13 romero joined #gluster
15:13 hchiramm__ joined #gluster
15:13 askb joined #gluster
15:13 msvbhat joined #gluster
15:13 sulky joined #gluster
15:13 jclift joined #gluster
15:13 mohankumar__ joined #gluster
15:14 social joined #gluster
15:18 Rydekull joined #gluster
15:21 ilbot3 joined #gluster
15:21 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
15:23 james joined #gluster
15:23 jurrien joined #gluster
15:24 inodb_ joined #gluster
15:24 twx joined #gluster
15:24 hchiramm__ joined #gluster
15:24 DV joined #gluster
15:27 sulky joined #gluster
15:27 Kins joined #gluster
15:27 junaid joined #gluster
15:27 Oneiroi joined #gluster
15:30 ilbot3 joined #gluster
15:30 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
15:30 askb joined #gluster
15:30 redbeard joined #gluster
15:30 borreman_dk joined #gluster
15:30 Cenbe joined #gluster
15:30 sputnik13 joined #gluster
15:31 VerboEse joined #gluster
15:31 JonathanD joined #gluster
15:31 l0uis joined #gluster
15:31 cyberbootje joined #gluster
15:31 l0uis joined #gluster
15:31 NuxRo joined #gluster
15:31 social joined #gluster
15:31 harish_ joined #gluster
15:33 overclk joined #gluster
15:33 lyang0 joined #gluster
15:34 cjanbanan joined #gluster
15:35 REdOG joined #gluster
15:36 mkzero joined #gluster
15:37 XpineX joined #gluster
15:37 atrius` joined #gluster
15:37 Gugge joined #gluster
15:37 RobertLaptop joined #gluster
15:37 eastz0r_ joined #gluster
15:37 asku joined #gluster
15:37 crazifyngers joined #gluster
15:37 Slasheri_ joined #gluster
15:37 17SAAIAG2 joined #gluster
15:37 eshy joined #gluster
15:38 mohankumar__ joined #gluster
15:38 orion7643 joined #gluster
16:00 ilbot3 joined #gluster
16:01 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
16:02 edong23 joined #gluster
16:02 ulimit joined #gluster
16:03 Gugge joined #gluster
16:07 DV joined #gluster
16:08 Gugge_ joined #gluster
17:34 ilbot3 joined #gluster
17:34 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
17:35 borreman_dk joined #gluster
17:35 nikk joined #gluster
17:42 ilbot3 joined #gluster
17:42 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
17:42 jiqiren joined #gluster
17:48 hchiramm__ joined #gluster
17:48 Faed joined #gluster
17:48 REdOG joined #gluster
17:49 lanning joined #gluster
17:49 aquagreen joined #gluster
17:49 harish_ joined #gluster
17:52 sulky joined #gluster
18:16 ilbot3 joined #gluster
18:16 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:16 askb joined #gluster
18:17 Rydekull joined #gluster
19:59 ilbot3 joined #gluster
19:59 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
20:22 ilbot3 joined #gluster
20:22 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
20:23 ThatGraemeGuy joined #gluster
21:13 ilbot3 joined #gluster
21:13 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
21:13 samppah https://gist.github.com/avati/af04f1030dcf52e16535#file-plan-md
21:13 glusterbot Title: GlusterFS 4.0 (at gist.github.com)
21:33 puddles Nice, thank you.  That was pretty interesting.  It seems like they might be shortcutting the problem, which is that replica X really only needs >= X bricks, not a multiple of X bricks.
21:36 puddles Which seems to be problem of doing distribution on top of replication.
21:37 dbruhn joined #gluster
21:37 puddles And actually being able to do distribution on replication on top of distribution would effectively fix my problem.
21:38 puddles [ 88gb ] replicated [ ( 2tb ) distributed ( 2tb ) distributed ( 360mb )  ]
21:38 puddles as opposed to
21:39 puddles rather like this
21:40 puddles ( [ 88gb ] replicated [ 2tb ] ) distributed ( [ 2tb ] replicated [ 360mb ] )
21:42 puddles Ha, actually the proposed changes will soften control even further.
21:50 dbruhn puddles, sorry missed most of what you said, what are you trying to accomplish?
21:51 puddles I just have disk sizes that don't work well with replica.
21:52 dbruhn What are your disk sizes and what are you trying to accomplish?
21:53 puddles I was trying to do a replica 2, but across a 250gb, 880gb, 1tb and another 1tb disk.
21:54 dbruhn 6 servers?
21:54 dbruhn or?
21:54 puddles If it works out I was going to include more, but right now I have essentially 3 servers.
21:54 puddles The 1tb and 1tb are on the same machine.
21:55 puddles I also have 2 more with 6 more disks.
21:55 dbruhn agh ok, well you could go with your lowest common denominator, create 200GB file systems in an even quantity and then build your volume off a mess of bricks that are goofy
21:55 dbruhn you'll create a management and expansion mess later though
21:55 puddles That's what I was thinking.
21:56 puddles Not only that, but I may lose quite a bit of space.
21:56 dbruhn typically in a replica you want to have at least a multiple of the devices that you have for your replica
21:56 puddles ya, because I'd end up with an odd number of 200gb bricks.
21:56 puddles I have plenty of devices and disks to do replica 2.
21:56 dbruhn well you need to have an even number of bricks to the replica number
21:57 dbruhn yeah, but you have 3 servers and want to do a replica 2
21:57 puddles Yes
21:57 dbruhn you should have 4 servers really
21:57 puddles Hrm..
21:57 puddles I don't follow the logic on that.
21:58 dbruhn are you worried about losing a server?
21:58 puddles More worried about losing disks.
21:58 puddles Even if I lose a server I can pull the disks and downtime won't be too much of an issue.
21:59 dbruhn Well, if you are ok with having your volume go offline with a single server failure...
21:59 puddles If I do replica 2 one server won't bring it down, will it?
21:59 dbruhn what is your acceptable down time?
21:59 puddles 1 day
21:59 dbruhn typically in a x2 system you want to have a multiple of servers x2 and multiple of bricks x2
22:00 puddles You mean replica X you want 2*X servers and 2*X bricks?
22:00 dbruhn yep
22:01 puddles Well, if I bring my other disks into it I would have more than that.
22:01 puddles But I'm not ready to transition anything yet.
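The brick-count rule dbruhn keeps returning to (a replica-N volume needs a brick count that is a nonzero multiple of N) can be sketched as a quick check. The function name and shape are illustrative only, not gluster's actual validation:

```python
def valid_brick_layout(bricks, replica):
    """A replica-N volume needs its brick count to be a nonzero multiple of N."""
    return len(bricks) > 0 and len(bricks) % replica == 0
```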
22:02 dbruhn Gluster really is much better behaved in a homogeneous environment
22:02 puddles On what level?
22:02 puddles Disk size?
22:02 puddles Architecture?
22:02 puddles I'm seeing that about disk size.
22:03 puddles Though two of my devices are ARM and the others are x64.
22:03 puddles Seems to be fine.
22:03 dbruhn It will work
22:03 dbruhn We are maybe talking about two different approaches to this
22:04 puddles Maybe
22:04 dbruhn Are you using this in a production system for a business?
22:04 puddles It's pretty small, but yes.
22:04 puddles Even if the system goes down it's not a big deal.
22:04 dbruhn well with those arm boxes, this is a test you'll want to run
22:04 dbruhn the self heal will clobber most boxes
22:05 puddles cpu?
22:05 puddles memory?
22:05 dbruhn yep
22:05 puddles They are pretty limited on both.
22:05 dbruhn CPU mostly, but memory too
22:05 puddles All self-heals or specifically when there's something wrong?
22:05 m0zes joined #gluster
22:05 uebera|| joined #gluster
22:05 purpleidea joined #gluster
22:05 divbell_ joined #gluster
22:05 tjikkun_work joined #gluster
22:05 jiffe98 joined #gluster
22:05 dbruhn Well.... yeah...
22:06 dbruhn purpleidea, your talk was good yesterday, it was cool to see your stuff in action
22:07 dbruhn puddles, I would suggest you set up a pair of boxes with symmetrical disks, or 4 boxes with symmetrical disks if you want to do dist+replica x2
22:08 dbruhn you or the next guy who has to work on this system will probably run into a ton of issues when you go to grow the system, or if you ever have to work on it. You are setting yourself up for a long long long day/night if you do.
22:08 dbruhn You can make what you want to do work, but it's going to be messy at best
22:08 puddles I was finding that.  I was going to pull out the oddly sized disks.
22:08 puddles and stick to 1tb and 2tb disks
22:09 puddles When I was looking into gluster I missed the part where distributed replicas just do distribute on top of replica.
22:09 dbruhn with a distributed volume you want symmetrical bricks, not just between replication pairs, but across the distributed portion as well.
22:10 dbruhn the base is actually distributed, and the replica is an extension
22:10 dbruhn it's not on top
22:10 puddles It's not layered?
22:10 dbruhn nope
22:10 dbruhn You are missing a bit of how gluster actually works
22:10 puddles Quite a few bits.
22:10 dbruhn it's not raid
22:10 dbruhn Quick breakdown
22:11 dbruhn say you have a distributed and replicated volume with 4 bricks on 4 servers
22:11 cjanbanan joined #gluster
22:11 dbruhn when the gluster fuse client (or NFS running in front of the fuse client) connects to a cluster
22:11 dbruhn it is actually communicating with all of the servers in the cluster
22:12 dbruhn distributed is actually a function called DHT (Distributed Hash Table), this is how gluster decides to divide the files up across the distributed portion of the volume
22:12 puddles Yes, I'm going through NFS to one of the servers.
22:13 jporterfield joined #gluster
22:13 dbruhn still works the same way, NFS just runs on top of the fuse client
22:14 dbruhn when you write to a replica gluster actually writes to all of the peers or pairs of replication at the same time
22:14 puddles So with 4 servers the NFS head actually sends the data to each peer?
22:15 dbruhn DHT creates a hash based on the name of the file
22:15 dbruhn yes it does, the server that you are connected to is just also a client of the volume
22:15 dbruhn that hash is how gluster decides where to put the data
22:16 puddles Meaning a file with a specific name will always go to the same brick?
22:16 dbruhn say you have a two brick volume that is distributed
22:16 dbruhn the first couple characters of the hashed name are the reference, all things starting with 0-4 will go to brick 1 and 5-9 will go to brick 2
22:17 dbruhn that's how it does distributed
22:17 dbruhn and in theory yes to your question
22:17 dbruhn it's a bit more detailed than that, I can't remember if the path is part of that hash
22:17 puddles See, I thought it was being more intelligent about where to put the blocks.
22:18 puddles path needs to be
22:18 puddles Otherwise you would get hash collisions.
22:18 dbruhn there isn't a metadata database
22:18 dbruhn but anyway
22:18 puddles Yep, fundamentally different than I thought.
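The hash-range placement dbruhn describes can be sketched roughly as below. Note this is an illustration of the idea only: real GlusterFS DHT uses its own 32-bit Davies-Meyer hash of the file's basename with per-directory layout ranges stored in extended attributes, not md5, and `pick_brick` is an invented name:

```python
import hashlib

def pick_brick(filename, bricks):
    # Hash the file name (md5 stands in for gluster's own hash function).
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    # Split the 128-bit hash space into equal ranges, one per brick;
    # a given name always lands in the same range, hence the same brick.
    idx = h * len(bricks) // (1 << 128)
    return bricks[idx]
```

The key property is determinism: any client can compute where a file should live from its name alone, which is why no metadata database is needed.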
22:19 dbruhn Do you get how the distribution works? now?
22:19 puddles Yes
22:19 puddles I mean at a grade school level.
22:19 dbruhn ok, so replication actually writes to all members of a replication group symmetrically
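"Writes to all members of a replication group symmetrically" can be sketched as a client-side fan-out: the write goes to every brick in the replica set and succeeds only if all of them ack it. `write_fn` stands in for the per-brick transport and is an assumption, not a real gluster API:

```python
def replica_write(replica_set, path, data, write_fn):
    # Fan the write out to every brick in the replica set.
    results = [write_fn(brick, path, data) for brick in replica_set]
    # The write succeeds only if every replica acknowledged it.
    return all(results)
```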
22:19 puddles Whoever owns that hash writes it?
22:20 dbruhn the hash is just there to know what server to look at first in the cluster when you are reading or writing a file
22:20 dbruhn if a brick is full a pointer to another brick is created
22:20 dbruhn this causes a double lookup, so that can be a bit expensive
22:20 puddles Another good piece of info.
22:21 puddles In a huge system that could be very costly.
22:21 dbruhn that's why you don't want goofy sized bricks
22:21 puddles There's all this talk of 100 brick volumes.
22:21 dbruhn because the system is actually fairly good at distributing the data fairly evenly
22:21 dbruhn even if there isn't a direct correlation to how it gets there
22:21 dbruhn 100 brick volumes are ez mode
22:22 puddles But that means a mostly full 100 brick volume would have a huge overhead for every write.
22:22 dbruhn only if you have full bricks you are trying to write to
22:22 dbruhn also, gluster in distributed and replicated mode is not a block level deal at all
22:23 dbruhn striping will split the files over the bricks, but it's slow
22:23 dbruhn distributed deals with files on a whole file level
22:24 puddles Ooh, sorry I disregarded the hash thing.  So you're saying that a hash should go to brick1, but brick1 is full, so it finds another brick.
22:24 dbruhn yep
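The full-brick case just confirmed can be sketched as follows: the file still hashes to its "home" brick, but the data lands on a brick with space, and a pointer (in gluster, a zero-byte link file) on the home brick records the real location, which is what causes the double lookup on later reads. All names here are illustrative, and a trivial byte-sum stands in for the DHT hash:

```python
def place_file(name, size, bricks, free):
    home = bricks[sum(name.encode()) % len(bricks)]  # stand-in for the DHT hash
    if free[home] >= size:
        return home, None          # normal case: data on the hashed brick
    for b in bricks:               # home brick full: pick any brick with room
        if free[b] >= size:
            return b, home         # (data brick, brick holding the link file)
    raise OSError("volume full")
```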
22:25 puddles But the hash split sends files to random bricks.
22:25 Peanut__ joined #gluster
22:25 dbruhn not random
22:25 puddles Essentially random.
22:25 puddles Based on a bit compare probably.
22:25 dbruhn not at all
22:26 puddles from the hash of the filename
22:26 puddles to the bit mask of the brick
22:26 puddles I mean based on random filenames.
22:26 JonathanD joined #gluster
22:26 dbruhn gluster uses extended attributes as part of the bricks file system
22:26 dbruhn the hash information is part of that extended attributes
22:27 dbruhn s/that/those/
22:27 glusterbot What dbruhn meant to say was: the hash information is part of those extended attributes
22:27 dbruhn gluster always knows that in a two-brick distributed volume a file whose hash starts with 0, 1, 2, 3, or 4 will always be written to brick 1
22:28 puddles Yes, that's what I was getting at.
22:28 dbruhn have you actually tested gluster at all yet?
22:28 puddles Yes, I'm running it.
22:29 puddles I just don't have anything on it.
22:29 puddles I mean, I put some test files up to see which brick they went to.
22:29 dbruhn ok there is a directory called .glusterfs at the root of your bricks
22:29 puddles Yes
22:30 dbruhn getfattr -m . -d -e hex /path/to/file/on/brick/foo1
22:30 radez_g0n3 joined #gluster
22:30 dbruhn that will show you the extended attributes
22:30 dbruhn this blog has a ton of good subject matter
22:30 dbruhn http://joejulian.name/blog/category/glusterfs/
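The getfattr command above has a Python counterpart via `os.listxattr`/`os.getxattr` (Linux-only). On a brick, as root, you would see gluster's trusted.* attributes (e.g. trusted.gfid, and trusted.glusterfs.dht on directories); an unprivileged process only sees user.* attributes. A small sketch:

```python
import os

def get_xattrs(path):
    """Return the extended attributes visible to this process as {name: bytes}."""
    attrs = {}
    try:
        for name in os.listxattr(path):
            attrs[name] = os.getxattr(path, name)
    except (OSError, AttributeError):
        pass  # filesystem (or platform) without xattr support
    return attrs
```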
22:30 cjanbanan joined #gluster
22:31 portante joined #gluster
22:32 abyss^ joined #gluster
22:32 puddles I was actually just looking at one of the posts on there.
22:32 dbruhn gotta run, might be back in a bit
22:34 haomaiwa_ joined #gluster
22:34 ThatGraemeGuy joined #gluster
22:35 Cenbe joined #gluster
22:36 a2 joined #gluster
22:37 wrale joined #gluster
22:40 kkeithley joined #gluster
22:44 isthereabigddoso joined #gluster
22:44 isthereabigddoso seems a lot of the network is split and i'm alone in #gluster elsewhere...
22:44 isthereabigddoso @undocumented options
22:44 glusterbot isthereabigddoso: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
22:46 JonathanS joined #gluster
22:51 asku joined #gluster
22:52 jclift joined #gluster
22:56 hflai joined #gluster
22:57 puddles This thing is splitting like crazy.
22:58 XpineX joined #gluster
23:03 edong23 joined #gluster
23:03 dbruhn joined #gluster
23:04 dbruhn puddles, back if you have more questions
23:07 ProT-0-TypE joined #gluster
23:07 XpineX_ joined #gluster
23:07 cjanbanan joined #gluster
23:17 psyl0n joined #gluster
23:20 tru_tru joined #gluster
23:26 psyl0n joined #gluster
23:37 eryc joined #gluster
23:37 hagarth joined #gluster
23:37 16WAATW8V joined #gluster
23:37 morse joined #gluster
23:37 msvbhat joined #gluster
23:37 cyberbootje joined #gluster
23:40 jag3773 joined #gluster
23:46 m0zes joined #gluster
23:46 uebera|| joined #gluster
23:46 purpleidea joined #gluster
23:46 divbell_ joined #gluster
23:46 tjikkun_work joined #gluster
23:46 jiffe98 joined #gluster
23:51 psyl0n joined #gluster
23:51 psyl0n joined #gluster
23:53 Cenbe joined #gluster
