
IRC log for #gluster, 2013-10-11


All times shown according to UTC.

Time Nick Message
00:07 purpleidea phox: that's kinda racist
00:31 bronaugh purpleidea: no, really?
00:32 purpleidea ya
00:38 asias joined #gluster
00:39 edong23 joined #gluster
01:11 itisravi_ joined #gluster
01:45 bala joined #gluster
01:54 bharata-rao joined #gluster
02:20 F^nor joined #gluster
02:35 vpshastry joined #gluster
02:37 vpshastry left #gluster
02:44 _NiC joined #gluster
02:44 hflai joined #gluster
02:44 nasso joined #gluster
02:44 gluslog joined #gluster
02:44 Bluefoxicy joined #gluster
02:44 micu2 joined #gluster
02:44 Technicool joined #gluster
02:44 cyberbootje joined #gluster
02:44 atrius` joined #gluster
02:46 ninkotech joined #gluster
02:55 kanagaraj joined #gluster
03:03 kanagaraj joined #gluster
03:09 jag3773 joined #gluster
03:14 shylesh joined #gluster
03:22 harish joined #gluster
03:30 davinder joined #gluster
03:32 manik joined #gluster
03:33 itisravi_ joined #gluster
03:33 shubhendu joined #gluster
03:34 a2_ joined #gluster
03:34 DV joined #gluster
03:34 t35t0r joined #gluster
03:34 nullck joined #gluster
03:34 Technicool joined #gluster
03:34 kanagaraj joined #gluster
03:34 inodb joined #gluster
03:35 mistich1 joined #gluster
03:35 t35t0r joined #gluster
03:35 edong23 joined #gluster
03:38 badone joined #gluster
03:39 Shri joined #gluster
03:42 itisravi joined #gluster
03:44 sgowda joined #gluster
03:46 manik joined #gluster
03:47 satheesh1 joined #gluster
03:53 jag3773 joined #gluster
03:59 jesse joined #gluster
04:02 mohankumar joined #gluster
04:04 RameshN joined #gluster
04:05 shyam joined #gluster
04:13 bala joined #gluster
04:17 manik joined #gluster
04:20 manik joined #gluster
04:24 kPb_in_ joined #gluster
04:26 dusmant joined #gluster
04:26 jag3773 joined #gluster
04:27 vpshastry joined #gluster
04:33 aravindavk joined #gluster
04:39 lalatenduM joined #gluster
04:42 DV joined #gluster
04:44 rjoseph joined #gluster
04:58 saurabh joined #gluster
05:01 ppai joined #gluster
05:27 nshaikh joined #gluster
05:30 psharma joined #gluster
05:31 shyam joined #gluster
05:36 rastar joined #gluster
05:39 meghanam joined #gluster
05:39 meghanam_ joined #gluster
05:53 harish joined #gluster
05:55 lalatenduM joined #gluster
05:58 ndarshan joined #gluster
05:58 anands joined #gluster
06:00 satheesh1 joined #gluster
06:04 Shri joined #gluster
06:06 bala joined #gluster
06:08 rgustafs joined #gluster
06:14 kshlm joined #gluster
06:26 jtux joined #gluster
06:27 kPb_in_ joined #gluster
06:28 mohankumar joined #gluster
06:33 eseyman joined #gluster
06:42 davinder joined #gluster
07:01 ctria joined #gluster
07:06 vimal joined #gluster
07:11 mooperd_ joined #gluster
07:16 keytab joined #gluster
07:17 ndarshan joined #gluster
07:20 ThatGraemeGuy joined #gluster
07:20 ThatGraemeGuy_ joined #gluster
07:27 lalatenduM joined #gluster
07:28 KORG joined #gluster
07:28 KORG Guys, can anyone please comment on this: https://bugzilla.redhat.com/show_bug.cgi?id=1017215
07:28 glusterbot <http://goo.gl/3v0PmL> (at bugzilla.redhat.com)
07:29 glusterbot Bug 1017215: high, unspecified, ---, amarts, NEW , Replicated objects duplicates
07:45 vincent_vdk joined #gluster
07:49 ndarshan joined #gluster
07:58 rgustafs joined #gluster
08:15 mgebbe_ joined #gluster
08:23 mark___ joined #gluster
08:24 mark___ HI Gluster people
08:25 mark___ I have two questions that I need some help with
08:27 mark___ First, I'm trying to get gluster to start on boot on a Debian squeeze system.  I found a good article here http://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/ and I've followed the instructions.  It mostly works, however the volume does not mount, i.e. the line "flw1-int:/gv0 /var/cache/flooting glusterfs defaults,_netdev 0 0" does not work.  If I type mount -a it works fine.  So I can simply add a mount
08:27 mark___ to rc.local and that should fix it, however I wondered if there was something that I'm missing?
08:27 glusterbot <http://goo.gl/t6PY4> (at joejulian.name)
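(For reference, a minimal sketch of the two workarounds being discussed here, reusing mark___'s own volume and mount point; whether _netdev is honoured at boot depends on the squeeze init ordering that the article above covers.)
    # /etc/fstab -- _netdev is meant to defer the mount until networking is up
    flw1-int:/gv0  /var/cache/flooting  glusterfs  defaults,_netdev  0  0
    # /etc/rc.local fallback -- retry the fstab entry late in the boot sequence
    mount /var/cache/flooting || mount -a -t glusterfs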
08:31 mark___ Second, I have 4 systems configured in a pool with replicas set to 2.  So I hope that means that if any one server goes down I should still see a consistent file system across the remaining servers?  Anyway, that's not what I see.  If one server fails the file system is no longer accessible.  I assume there is a step that I'm missing in setting up the volumes?
08:32 shane_ can you ,,(paste) the output of "gluster volume info"?
08:32 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
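(Concretely, assuming pastebinit or fpaste is installed as glusterbot suggests:)
    gluster volume info | pastebinit      # Debian/Ubuntu
    gluster volume info | fpaste          # RPM-based distros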
08:33 KORG|2 joined #gluster
08:33 al joined #gluster
08:33 mark___ Volume Name: gv0
08:33 mark___ Type: Distributed-Replicate
08:33 mark___ Volume ID: 55ec7458-e8f6-45ff-9b96-06cf29856428
08:34 mark___ Status: Started
08:34 mark___ Number of Bricks: 2 x 2 = 4
08:34 mark___ Transport-type: tcp
08:34 mark___ Bricks:
08:34 mark___ Brick1: flw1-int:/export/brick1
08:34 mark___ Brick2: flw3-int:/export/brick1
08:34 mark___ Brick3: flg1-int:/export/brick1
08:34 mark___ Brick4: fls1-int:/export/brick1
08:34 mark___ sorry - just read the bit about paste :(
08:34 shane_ heh, that's ok
08:34 shane_ yeah, that looks right
08:35 shane_ and when any one server goes down you can no longer access the mounted gluster volume from clients at all?
08:36 mark___ that's right.  well the file system is there, but the files are missing
08:36 mark___ do i need to sync something first?
08:36 shane_ no, you shouldn't.
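(For context: with "Number of Bricks: 2 x 2 = 4" the bricks form replica pairs in the order they were given at create time, so in the volume above Brick1/Brick2 and Brick3/Brick4 are presumably the two pairs; losing any one server should leave each pair with a live copy. A quick way to eyeball the pairing:)
    gluster volume info gv0 | grep Brick
    # Brick1: flw1-int:/export/brick1  \_ replica pair 1
    # Brick2: flw3-int:/export/brick1  /
    # Brick3: flg1-int:/export/brick1  \_ replica pair 2
    # Brick4: fls1-int:/export/brick1  /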
08:37 shane_ i'm a mere mortal, there will be others who can provide better guidance
08:37 mark___ thanks shane, me too.  i'll play around and see what I can find.
08:37 vshankar joined #gluster
08:37 shane_ but i know they'll want to see your client log and gluster volume info in fpaste
08:37 mark___ ok
08:42 ndarshan joined #gluster
08:43 harish joined #gluster
08:44 ngoswami joined #gluster
08:46 Gugge joined #gluster
08:46 Technicool joined #gluster
08:55 KORG joined #gluster
09:10 tryggvil joined #gluster
09:14 mark___ I've had a look in the logs and this seems interesting.  Are there any particular logs that I should paste beside this one? http://pastebin.com/T939779A
09:14 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:15 Lethalman joined #gluster
09:16 Lethalman hi, what is the client doing in gluster 3.4 other than transmitting its own files?
09:16 Lethalman I mean, I read somewhere in gluster 3.2 the client took care of replication
09:16 Lethalman is it like that in gluster 3.4 as well? where can I read about this?
09:17 mark___ @paste
09:17 glusterbot mark___: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
09:18 mark___ Pasted to paste.ubuntu.com http://paste.ubuntu.com/6221583/
09:18 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
09:22 harish_ joined #gluster
09:26 raghu joined #gluster
09:29 StarBeast joined #gluster
09:37 satheesh2 joined #gluster
09:44 tru_tru joined #gluster
09:49 shubhendu joined #gluster
09:50 vpshastry left #gluster
09:51 chirino joined #gluster
09:53 mooperd_ joined #gluster
09:54 shyam left #gluster
09:56 glusterbot New news from newglusterbugs: [Bug 1016000] Implementation of object handle based gfapi extensions <http://goo.gl/y8Xo7P>
09:59 dusmant joined #gluster
10:12 vpshastry joined #gluster
10:13 vpshastry left #gluster
10:15 ccha2 does self heal repair inode locks on a replicate volume ?
10:19 satheesh joined #gluster
10:22 vpshastry joined #gluster
10:24 kwevers joined #gluster
10:25 pkoro joined #gluster
10:29 lalatenduM joined #gluster
10:31 shyam joined #gluster
10:32 tryggvil joined #gluster
10:36 aravindavk joined #gluster
10:37 ndarshan joined #gluster
10:38 RameshN joined #gluster
10:43 ricky-ticky joined #gluster
10:52 Shri joined #gluster
10:55 satheesh1 joined #gluster
10:56 hagarth joined #gluster
11:03 aravindavk joined #gluster
11:03 ndarshan joined #gluster
11:06 dusmant joined #gluster
11:10 anil joined #gluster
11:13 jtux joined #gluster
11:15 rgustafs joined #gluster
11:18 edward2 joined #gluster
11:19 chirino joined #gluster
11:26 glusterbot New news from newglusterbugs: [Bug 1018176] Memory leak in gluster samba vfs / libgfapi <http://goo.gl/NqGtF7>
11:27 vpshastry joined #gluster
11:31 anil joined #gluster
11:31 chirino joined #gluster
11:36 tryggvil joined #gluster
11:39 F^nor joined #gluster
11:40 alan_ joined #gluster
11:46 nullck joined #gluster
11:46 Guest66051 Hi, I upgraded gluster from 3.3 to 3.4 and now I have files (not all of them) giving "Input/output error". All gluster bricks and servers are up.
11:46 Guest66051 problem like this https://access.redhat.com/site/solutions/350793
11:46 glusterbot Title: ls -l from glusterfs client gives Input/output error when one node of 2 node RHS is down - Red Hat Customer PortalRed Hat Customer Portal (at access.redhat.com)
11:55 edward2 joined #gluster
11:56 glusterbot New news from newglusterbugs: [Bug 987555] Glusterfs ports conflict with qemu live migration <http://goo.gl/SbL8x> || [Bug 1018178] Glusterfs ports conflict with qemu live migration <http://goo.gl/oDNTL3>
11:58 andreask joined #gluster
12:07 itisravi_ joined #gluster
12:16 andreask joined #gluster
12:21 Shri Hi, does anyone know the exact changes needed in the cinder config files to enable GlusterFS as the backend for cinder volumes in an OpenStack setup?
12:21 Shri so that the cinder volume create command uses GlusterFS for creating volumes
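(For what it's worth, the GlusterFS Cinder driver of that era was usually wired up roughly as below; this is a sketch, not checked against Shri's OpenStack release, and the host and share names are assumptions.)
    # /etc/cinder/cinder.conf
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares
    # /etc/cinder/glusterfs_shares -- one GlusterFS volume per line
    gluster-host:/cinder-volumes
    # then restart the cinder-volume service so it mounts the share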
12:26 glusterbot New news from newglusterbugs: [Bug 1001502] Split brain not detected in a replica 3 volume <http://goo.gl/YdVQqE>
12:29 ndarshan joined #gluster
12:29 anil joined #gluster
12:35 lpabon joined #gluster
13:03 dbruhn joined #gluster
13:04 bala joined #gluster
13:04 bennyturns joined #gluster
13:13 johnmark ndevos: ping
13:19 DV__ joined #gluster
13:23 rwheeler joined #gluster
13:25 partner i guess sleeping on it over the night didn't shed any light on the 3.3.1 client being unable to mount a 3.4.0 server volume?
13:25 H__ left #gluster
13:52 kaptk2 joined #gluster
13:59 rwheeler joined #gluster
14:05 tryggvil joined #gluster
14:07 jruggiero joined #gluster
14:10 ndk joined #gluster
14:11 jag3773 joined #gluster
14:13 vpshastry joined #gluster
14:13 wushudoin joined #gluster
14:15 tryggvil joined #gluster
14:18 RameshN joined #gluster
14:25 lpabon joined #gluster
14:31 sprachgenerator joined #gluster
14:33 jruggiero left #gluster
14:38 [o__o] left #gluster
14:40 [o__o] joined #gluster
14:47 vpshastry joined #gluster
14:51 LoudNoises joined #gluster
14:54 jclift_ joined #gluster
15:00 blook joined #gluster
15:01 blook hi gluster experts,
15:01 blook perhaps someone in here is able to give me a hint with some behavior I'm struggling with
15:02 blook gluster volume 'name' heal info displays on my setup that some gfids should be healed
15:02 blook Number of entries: 1
15:02 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:02 blook Number of entries: 1
15:02 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:02 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook Number of entries: 1
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook <gfid:eee4bca0-ac43-4d09-b726-bc3f93a971b3>
15:03 blook woops hang
15:03 blook sorry
15:03 blook its just one line :)
15:04 blook when i dug into the .glusterfs dir on the affected bricks i checked the inode of the hardlink
15:05 blook afterwards i searched the complete brick directory for the specific inodes and i got on both bricks (replica 2) the same files and the same hard links in .glusterfs
15:06 blook so my question is, why is gluster showing me, that this gfid should be healed…. :(
15:06 blook ?
15:06 blook I'm using glusterfs 3.4 on debian squeeze
15:06 blook thanks in advance
15:07 blook -w+v
15:09 vpshastry joined #gluster
15:11 Lethalman blook, go on the export directory of one of your servers and delete the copy that you don't want
15:12 Lethalman blook, however, if it's in heal <vol> info, it will heal them at some point
15:12 Lethalman blook, if it's in heal-failed or split brain try deleting those files but it may result in data loss of one file
15:12 Lethalman blook, also try reading the shd log
15:13 blook thx Lethalman, but its just one file…..the md5sum on both replica bricks is the same
15:13 Lethalman blook, afaik deleting one of those gfid should force healing, of course backup the stuff :P
15:14 Lethalman blook, or deleting the file
15:15 semiosis blook: use a ,,(paste) site!
15:15 glusterbot blook: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
15:15 blook Lethalman, ok i see :) but why oh why is this so broken :) i mean why is gluster complaining about "should be healed", if its totally fine (from my point of view :) )
15:17 semiosis blook: take a look at the afr ,,(extended attributes) on the file
15:17 glusterbot blook: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
15:17 semiosis blook: that's how glusterfs tracks if files need to be healed (or are split-brained)
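(Concretely, the check semiosis is pointing at looks something like this on a brick; the volume/client names and the sample values below are illustrative, not blook's actual output.)
    getfattr -m . -d -e hex /export/brick/path/to/file
    # trusted.afr.myvol-client-0=0x000000000000000000000000   <- all zeros: clean
    # trusted.afr.myvol-client-1=0x000000020000000000000000   <- non-zero: pending heal
    # trusted.gfid=0xeee4bca0ac434d09b726bc3f93a971b3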
15:24 blook http://paste.debian.net/56143/
15:24 glusterbot Title: debian Pastezone (at paste.debian.net)
15:25 blook trusted.gfid is also the same on both files on both bricks which build a replica pair
15:27 al joined #gluster
15:31 sammmm joined #gluster
15:31 abradley left #gluster
15:34 blook semiosis, Lethalman any last hints for me? did you see the pastebin stuff…..i have to go offline in 10 minutes…..if not thanks for your kind help anyway :)
15:35 semiosis blook: yeah i gave you hints which you seem to have ignored
15:35 semiosis look into the afr ,,(extended attributes)
15:35 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
15:36 blook semiosis, i did not ignore it, i had a look at it and the attributes are the same; trusted.gfid is set to the same value on both bricks
15:36 semiosis ,,(pasteinfo)
15:36 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:36 semiosis and note that i didnt say gfid attributes, i said afr attributes
15:37 semiosis read that article for an explanation
15:38 mark___ joined #gluster
15:42 blook semiosis, http://paste.debian.net/56149/
15:42 glusterbot Title: debian Pastezone (at paste.debian.net)
15:43 RobertLaptop joined #gluster
15:47 rwheeler joined #gluster
15:50 blook semiosis, ok i think your hint was helpful
15:54 mark___ left #gluster
15:59 zerick joined #gluster
16:02 blook semiosis, thank you for your help man, i have to investigate now why the afr counter isn't set to zero for this file - bye
16:04 mooperd_ joined #gluster
16:11 aliguori joined #gluster
16:15 badone joined #gluster
16:19 ccha2 does glusterfs client write something in the /tmp ?
16:19 ccha2 I have these lines
16:19 ccha2 glusterfs  1634      root   10u      REG              202,5        0        160 /tmp/tmpfUJKMUK (deleted)
16:19 ccha2 from lsof
16:23 semiosis @later tell blook you can just zero the afr attribs on one of the copies and glusterfs will sync from that one to the other (non-zero afr) copy
16:23 glusterbot semiosis: The operation succeeded.
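(The operation described above would look roughly like this; brick path, volume name and client index are assumptions, and the trusted copy should be backed up first.)
    # on the brick whose copy you trust, clear the pending-heal counters it holds for the peer
    setfattr -n trusted.afr.myvol-client-1 -v 0x000000000000000000000000 /export/brick/path/to/file
    # then kick off a heal
    gluster volume heal myvol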
16:24 Mo___ joined #gluster
16:27 glusterbot New news from newglusterbugs: [Bug 1018308] GlusterFS installation on CentOS 6.4 fails with "No package rsyslog-mmcount available." <http://goo.gl/46kz8H>
16:34 ferringb joined #gluster
16:38 quique_ joined #gluster
16:52 bala joined #gluster
16:53 noob21 joined #gluster
16:53 noob21 gluster: how does one turn off a volume option again?  i can't find the docs for it
16:54 ThatGraemeGuy joined #gluster
16:57 bennyturns joined #gluster
16:59 zaitcev joined #gluster
17:06 ferringb noob21: gluster volume set
17:06 noob21 ferringb: looks like it's gluster vol reset <vol> <option>
17:06 noob21 for some silly reason i can't find that in the admin guide
17:08 ferringb the guide is a bit tricky to follow
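(In short, as noob21 worked out:)
    gluster volume set <volname> <option> <value>    # set or change an option
    gluster volume reset <volname> <option>          # put one option back to its default
    gluster volume reset <volname>                   # reset all options on the volume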
17:08 ferringb sidenote: if I wanted to manually override the volfile in use- change the xlatr stack- what's the best way?
17:09 ferringb manually mangling the files- last I knew- resulted in gluster just overwriting them which isn't quite what I want
17:09 * ferringb is effectively bridging two gluster xlatr stacks- specifically layering a replicate on top
17:09 ferringb migration step basically
17:10 ferringb preferably w/out having to do underwater copies
17:10 ferringb best way I can figure is to just mangle the volfile for that directly, but I'm looking for comments on it
17:15 semiosis ferringb: there's a hook, let me try to find more info
17:16 ferringb thanks
17:16 * ferringb found that once upon a time, but has had zip-all luck finding it since
17:16 ferringb honestly, if I could just rewrite the trusted and nontrusted volfile, and have it proceed from there- that would be lovely.  I suspect not however.
17:21 H__ joined #gluster
17:21 semiosis cant find it either
17:21 semiosis maybe JoeJulian remembers
17:21 ferringb hmm
17:21 semiosis but idk if he's around
17:21 ferringb probably is in my irc logs, since that name looks damn familiar
17:21 ferringb ...if I had logs.  damn it.
17:22 semiosis ah right, irc logs
17:22 semiosis they're in the /topic
17:22 mooperd_ joined #gluster
17:23 ferringb heh
17:24 ferringb no dice
17:24 semiosis i tried the search feature on both of the log sites and neither one worked very well
17:24 ferringb yep
17:24 ferringb just going to shut down the test stack, modify volfiles, and see if it leaves the new one alone
17:24 semiosis i dont have time to search each page, but if you do, then look for comments by jdarcy
17:24 ferringb ...and then break the fingers of anyone who does an online op while it's doing migration
17:24 semiosis he's the one that first explained how to use the hook
17:27 SpeeR joined #gluster
17:30 SpeeR Is there a recommended Linux flavor for Gluster? It appears the OS requirements page has been removed
17:39 nullck joined #gluster
17:46 blutdienst joined #gluster
17:51 blutdienst hi guys. i have a question; in the official redhat training docs (rh436) it states that "[...] and the mount point that is used must be unique across the entire trusted storage pool". i cannot find any reference anywhere else that this is a requirement and a lot of examples do not use unique mount points
17:53 badone joined #gluster
17:56 Remco blutdienst: That sounds like the server name also counts
17:56 Remco In which case it makes sense
17:58 blutdienst Remco: hmmm but that is "kind of" (read: very) obvious, if they mean that the peer name also counts to that "unique mount point" :|
17:59 blutdienst in their examples they always use node1:/exp1 node2:/exp2
17:59 Remco Could be that was a requirement long ago
18:00 Remco Perhaps still a good thing to do so it's easier to keep them apart
18:00 kkeithley no, that wasn't ever a requirement that I know of, certainly not in the time since I've been involved with gluster
18:01 blutdienst yeah, i wouldnt care about it at all if there wasn't a certain test which tells you to "mount the bricks on all nodes under /brick" ;p
18:03 blutdienst and if you symlink the mounted /brick to /brick_n1 resp. /brick_n2, then "volume create replica 2 node1:/brick_n1 node2:/brick_n2" you end up seeing "node1:/brick and node2:/brick_n2" in volume info
18:04 blutdienst and i am not sure if that is ok or not. or just use node1:/brick and node2:/brick during volume create
18:05 nullck joined #gluster
18:06 blutdienst like in this example the mount point is not "unique" if you don't count the nodename as part of the identifier http://www.gluster.org/community/documentation/index.php/QuickStart
18:06 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
18:06 Remco Most people in here don't use unique mountpoints that way from what I can tell
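(In practice a brick is identified as server:/path, so the same local path on different peers is fine; a sketch mirroring the QuickStart, with made-up host names:)
    gluster volume create gv0 replica 2 node1:/export/brick node2:/export/brick
    # node1:/export/brick and node2:/export/brick are distinct bricks even
    # though the local mount point is the same on both servers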
18:13 davinder joined #gluster
18:22 nullck joined #gluster
18:34 vpshastry joined #gluster
18:51 * ferringb stretches
18:51 ferringb semiosis: manually mangling volfiles has worked, 'cept I can't get it to bring up the new brick
18:53 ferringb specifically, how does one go about debugging gluster refusing to start a brick?
18:53 semiosis glusterd & brick logs on the server that has the brick
18:53 ferringb not seeing anything
18:53 ferringb that brick is fresh/new; no xattr's in addition
18:53 ferringb frankly looks like that node doesn't know it has that brick
18:54 ferringb it's in the configs however (including the vol bricks files)
18:54 semiosis ferringb: try restarting glusterd or doing a volume start force
18:54 ferringb already have
18:54 semiosis well hmm
18:55 ferringb yeah
18:55 ferringb looking at the mgmt source atm
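(For the record, the usual first checks when a brick refuses to come up, assuming default log locations:)
    gluster volume status <volname>                          # is the brick process listed and online?
    gluster volume start <volname> force                     # retry starting any missing brick processes
    less /var/log/glusterfs/bricks/<brick-path>.log          # brick-side errors
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd-side errors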
19:24 sac`away joined #gluster
19:26 rcheleguini joined #gluster
19:30 blook joined #gluster
19:34 glusterbot New news from resolvedglusterbugs: [Bug 950083] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/tajoiQ>
19:49 tty joined #gluster
19:52 tty Hi all, I tested gluster and it is very well made. But I have a question. This is the scenario: 2 servers, 2 bricks per server; when one brick reaches 100% usage some of my cp operations fail with "no space left on device" (gluster 3.4.1)
19:52 tty is there any workaround for that?
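(No one answered at the time; one commonly suggested mitigation is cluster.min-free-disk, though in that era it only steered where new files were created, so it is a partial fix at best.)
    gluster volume set <volname> cluster.min-free-disk 10%
    # new files then prefer bricks with more free space; files that keep
    # growing on an already-full brick can still hit ENOSPC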
19:58 blook2nd joined #gluster
20:04 glusterbot New news from resolvedglusterbugs: [Bug 819130] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/GfSUw>
20:21 LoudNoises joined #gluster
20:45 jesse joined #gluster
20:57 sac`away joined #gluster
21:07 zerick joined #gluster
21:19 bronaugh left #gluster
21:28 glusterbot New news from newglusterbugs: [Bug 1003184] EL5 package missing %_sharedstatedir macro <http://goo.gl/Yp1bL1>
21:55 rwheeler joined #gluster
22:01 badone joined #gluster
22:11 zerick joined #gluster
22:17 nasso joined #gluster
22:43 sac`away joined #gluster
23:07 jag3773 joined #gluster
23:39 MrNaviPacho joined #gluster
