
IRC log for #gluster, 2016-04-04


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:33 johnmilton joined #gluster
00:36 DV joined #gluster
00:37 RameshN joined #gluster
01:01 haomaiwa_ joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:59 nangthang joined #gluster
02:01 haomaiwa_ joined #gluster
02:12 armyriad joined #gluster
02:17 shyam joined #gluster
02:28 klfwip joined #gluster
03:01 haomaiwa_ joined #gluster
03:02 vmallika joined #gluster
03:17 lalatenduM joined #gluster
03:37 ramteid joined #gluster
03:40 atinm joined #gluster
03:47 shubhendu joined #gluster
03:48 sakshi joined #gluster
03:51 itisravi joined #gluster
03:54 David_H_Smith joined #gluster
04:00 nbalacha joined #gluster
04:01 haomaiwa_ joined #gluster
04:03 kdhananjay joined #gluster
04:14 PaulCuzner joined #gluster
04:26 gem joined #gluster
04:27 RameshN joined #gluster
04:33 atinm joined #gluster
04:38 Manikandan joined #gluster
04:39 jiffin joined #gluster
04:42 mowntan joined #gluster
04:42 aspandey joined #gluster
04:43 ashiq joined #gluster
04:49 aravindavk joined #gluster
04:49 jiffin joined #gluster
04:52 ndarshan joined #gluster
04:57 Manikandan joined #gluster
05:00 David_H__ joined #gluster
05:01 haomaiwa_ joined #gluster
05:07 karthik___ joined #gluster
05:20 gem joined #gluster
05:21 sakshi joined #gluster
05:23 skoduri joined #gluster
05:24 kotreshhr joined #gluster
05:33 ppai joined #gluster
05:36 spalai joined #gluster
05:39 hgowtham joined #gluster
05:40 jiffin1 joined #gluster
05:41 gowtham joined #gluster
05:42 atinm joined #gluster
05:43 spalai left #gluster
05:49 jiffin1 joined #gluster
05:50 R0ok_ joined #gluster
05:51 poornimag joined #gluster
05:55 ahino joined #gluster
05:56 jiffin1 joined #gluster
05:59 jiffin joined #gluster
06:01 haomaiwang joined #gluster
06:01 gowtham joined #gluster
06:02 rafi joined #gluster
06:03 jiffin joined #gluster
06:08 Bhaskarakiran joined #gluster
06:09 mhulsman joined #gluster
06:10 overclk joined #gluster
06:11 anil_ joined #gluster
06:12 btspce joined #gluster
06:12 jiffin joined #gluster
06:13 btspce Hello. I need help tuning a gluster setup for kvm guests
06:17 jiffin joined #gluster
06:19 mhulsman joined #gluster
06:19 [Enrico] joined #gluster
06:19 vmallika joined #gluster
06:21 jtux joined #gluster
06:21 ahino joined #gluster
06:30 btspce joined #gluster
06:35 joshin left #gluster
06:39 jiffin joined #gluster
06:46 javi404 joined #gluster
06:50 [diablo] joined #gluster
06:52 Saravanakmr joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 Saravanakmr joined #gluster
07:02 jiffin joined #gluster
07:10 ariowner joined #gluster
07:13 Saravanakmr joined #gluster
07:17 mbukatov joined #gluster
07:20 Debloper joined #gluster
07:25 jiffin joined #gluster
07:28 David_H_Smith joined #gluster
07:30 k-ma joined #gluster
07:33 jiffin1 joined #gluster
07:33 ctria joined #gluster
07:35 Javezim joined #gluster
07:36 jiffin joined #gluster
07:37 ryllise joined #gluster
07:37 fsimonce joined #gluster
07:38 armyriad joined #gluster
07:38 unforgiven512 joined #gluster
07:38 ivan_rossi joined #gluster
07:38 unforgiven512 joined #gluster
07:39 kdhananjay joined #gluster
07:40 cristian joined #gluster
07:40 jiffin1 joined #gluster
07:42 hackman joined #gluster
07:44 shruti joined #gluster
07:47 jiffin1 joined #gluster
07:53 atalur joined #gluster
07:54 jri joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 harish_ joined #gluster
08:06 Slashman joined #gluster
08:13 DV joined #gluster
08:16 vmallika joined #gluster
08:18 deniszh joined #gluster
08:19 ahino joined #gluster
08:20 armyriad joined #gluster
08:28 farblue joined #gluster
08:32 spalai joined #gluster
08:34 atalur_ joined #gluster
08:34 itisravi joined #gluster
08:34 kdhananjay joined #gluster
08:35 Wizek_ joined #gluster
08:35 aspandey_ joined #gluster
08:35 Wizek joined #gluster
08:45 Gaurav_ joined #gluster
08:49 kshlm joined #gluster
08:51 farblue hi all. I had another total wipeout of my gluster cluster setup over the weekend - that’s 2 in 3 weeks :(
08:51 farblue everything froze
08:52 farblue had to restart 2 servers to get things back
08:52 farblue looking at the gluster volume status it seems multiple servers lost their TCP ports
08:53 farblue the only thing I can see in the logs is complaints about not being able to write to log files
08:53 farblue could it be an issue with log-rotate or something?
08:56 farblue I could really do with some basic guidance on how to track down what is going wrong as I’m pretty new to Gluster and I’m sure I’m not getting the experience of the average gluster user
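
A minimal first-pass health check for this kind of hang, as a sketch (VOLNAME is a placeholder):

    # every brick should show Online=Y and a TCP port; peers should be connected
    gluster peer status
    gluster volume status VOLNAME

    # brick (glusterfsd) and self-heal (glustershd) processes on each server
    ps aux | grep -E 'glusterfsd|glustershd'

    # recent errors in the management, brick and client logs
    tail -n 100 /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log
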
09:01 haomaiwa_ joined #gluster
09:01 EinstCrazy joined #gluster
09:03 Rasathus joined #gluster
09:04 Wizek joined #gluster
09:04 jiffin joined #gluster
09:19 robb_nl joined #gluster
09:26 karnan joined #gluster
09:31 auzty joined #gluster
09:35 mdavidson joined #gluster
09:37 farblue could my issues with GlusterFS come from using ext4 for the bricks?
09:38 farblue if I wanted to convert my bricks to XFS what would be the process?
09:43 jiffin joined #gluster
09:45 mhulsman joined #gluster
09:48 RameshN joined #gluster
09:48 gem joined #gluster
09:49 atalur joined #gluster
09:49 post-factum farblue: xfs is recommended fs for bricks
09:49 post-factum farblue: you shouldn't try to convert ext4 to xfs. just create new fs instead
09:50 farblue yeah, I found that out after having setup my cluster based on a recent redhat article that suggested ext4
09:50 farblue I figured I could drop one brick out the cluster at a time and reformat it then get it to heal when i add it back
09:50 ahino joined #gluster
09:51 farblue but then I tried running `heal full` and got the following error when trying to see the stats:
09:51 farblue Gathering count of entries to be healed on volume OrcaShared has been unsuccessful on bricks that are down. Please check if all brick processes are running.
09:51 farblue but I can’t see which processes are not running
09:51 farblue so I’m not confident of dropping bricks without knowing they will rebuild
09:51 kotreshhr joined #gluster
09:53 farblue I assume the self-heal daemon is ‘glustershd’
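
The heal commands being discussed look roughly like this (VOLNAME is a placeholder); glustershd is the process listed as "Self-heal Daemon" in volume status:

    gluster volume heal VOLNAME full               # queue a full self-heal
    gluster volume heal VOLNAME info               # entries still pending heal
    gluster volume heal VOLNAME statistics heal-count
    gluster volume status VOLNAME                  # bricks and Self-heal Daemon state
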
09:54 hchiramm joined #gluster
09:54 hchiramm_ joined #gluster
10:01 haomaiwa_ joined #gluster
10:01 aspandey_ joined #gluster
10:01 TvL2386 joined #gluster
10:09 hackman joined #gluster
10:14 farblue I’ve just noticed my brick log files are filling up with this:
10:14 farblue [2016-04-04 10:13:02.816482] I [dict.c:473:dict_get] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_getxattr_cbk+0xab) [0x7f08068a582b] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.9/xlator/features/marker.so(marker_getxattr_cbk+0xcb) [0x7f07fac8e7cb] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0x93) [0x7f08068965a3] ) 0-dict: !this || key=() [Invalid argument]
10:14 farblue ^
10:14 glusterbot farblue: ('s karma is now -129
10:14 farblue what on earth does all that mean?
10:18 post-factum farblue: shouldn't you update to 3.7.10 first?
10:18 ramky joined #gluster
10:19 farblue I’ve just tried that on one server and I’m still getting that error
10:19 farblue floods of it
10:20 arcolife joined #gluster
10:22 farblue I’m seeing quite a lot of these: “Mismatching xdata in answers of 'LOOKUP’”
10:32 kkeithley1 joined #gluster
10:32 kkeithley1 left #gluster
10:41 foster joined #gluster
10:50 robb_nl joined #gluster
10:53 kdhananjay joined #gluster
10:54 ira joined #gluster
10:57 farblue post-factum: so if I reformat my bricks as XFS, is there anything I should copy off beforehand - such as the .glusterfs folder?
10:58 post-factum farblue: you should remove brick correctly before reformatting
11:01 jiffin joined #gluster
11:01 farblue so volume remove-brick <volname> <brickname>
11:01 haomaiwa_ joined #gluster
11:03 RameshN joined #gluster
11:04 robb_nl joined #gluster
11:05 overclk joined #gluster
11:06 kotreshhr joined #gluster
11:07 farblue it won’t let me
11:07 farblue “Remove brick incorrect brick count of 1 for disperse 5”
11:11 itisravi joined #gluster
11:15 vmallika joined #gluster
11:15 gem joined #gluster
11:21 spalai joined #gluster
11:22 post-factum farblue: correct, you cannot remove brick from disperse volume
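
Since a single brick cannot be removed from a disperse volume, one common route to move a brick from ext4 to XFS is replace-brick onto a freshly formatted path and letting self-heal rebuild it; a sketch with example device, host and paths:

    mkfs.xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /data/newbrick
    gluster volume replace-brick VOLNAME server1:/data/oldbrick server1:/data/newbrick commit force
    gluster volume heal VOLNAME full
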
11:23 johnmilton joined #gluster
11:25 bluenemo joined #gluster
11:31 aspandey joined #gluster
11:31 ryllise joined #gluster
11:32 spalai left #gluster
11:34 Bhaskarakiran joined #gluster
11:39 jiffin joined #gluster
11:40 bluenemo joined #gluster
11:42 DV joined #gluster
11:42 cyberbootje joined #gluster
11:45 nishanth joined #gluster
11:48 haomai___ joined #gluster
11:48 ppai joined #gluster
11:54 jiffin joined #gluster
11:56 farblue hmm, ok, so I’ve somewhat broken things
11:57 farblue gluster-server won’t start at all and won’t log any errors either
12:01 haomaiwa_ joined #gluster
12:03 spalai joined #gluster
12:08 kkeithley1 joined #gluster
12:13 ppai joined #gluster
12:13 farblue when setting up a new volume, what are the benefits of using LVM rather than just creating a filesystem directly on a device?
12:17 ariowner joined #gluster
12:23 scobanx joined #gluster
12:24 scobanx Hi, just upgraded my cluster from 3.7.9 to 3.7.10. I see that gluster v get v0 cluster.op-version is 30707 - should I change it to 30710?
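
If that is the intended move, the operating version is raised cluster-wide with a volume set on "all", and only once every node runs 3.7.10; a sketch using the values from the question above:

    gluster volume get v0 cluster.op-version
    gluster volume set all cluster.op-version 30710
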
12:25 scobanx farblue: some features like snapshots work only with LVM
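
Volume snapshots in particular expect bricks on thinly provisioned LVM; a sketch of that layout (device names and sizes are examples):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 900G -T vg_bricks/thinpool             # thin pool
    lvcreate -V 800G -T vg_bricks/thinpool -n brick1   # thin LV for the brick
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mount /dev/vg_bricks/brick1 /data/brick1
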
12:32 kotreshhr left #gluster
12:42 nishanth joined #gluster
12:50 theron joined #gluster
12:59 nbalacha joined #gluster
12:59 theron joined #gluster
13:01 haomaiwa_ joined #gluster
13:06 bwerthmann joined #gluster
13:11 farblue scobanx - thanks :)
13:11 bwerthma1n joined #gluster
13:11 spalai joined #gluster
13:12 farblue is the general opinion to have fewer volumes with more directories, or more volumes each with fewer directories?
13:13 theron joined #gluster
13:17 kkeithley1 joined #gluster
13:19 nottc joined #gluster
13:29 lord4163 joined #gluster
13:30 graper joined #gluster
13:35 beeradb_ joined #gluster
13:38 plarsen joined #gluster
13:39 BitByteNybble110 joined #gluster
13:39 plarsen joined #gluster
13:41 farblue If I need redundancy 2 on my filesystem (2 nodes can be lost) is it generally recommended to go with a 3-replica volume, or if I’ve got extra nodes available are there advantages to a 4- or 5-replica setup? e.g. distributed processing of incoming reads and writes?
13:49 post-factum replica more than 3 is very expensive in terms of traffic
13:49 farblue I did figure that
13:51 farblue I have 5 servers and I’ve been trying to understand how the replica3 arbiter works. given the arbiter only stores the directory tree and metadata, is it fair to say you could use 1 arbiter node for 2 separate replica-2+arbiter volumes without overly stressing the arbiter node?
13:52 farblue I currently have a 3+2 dispersed setup but it seems quite flakey and as there’s a lot less info on the web about managing such a setup I’m thinking of moving to something else
13:53 farblue although I guess, technically, replica-2+arbiter, while handling split-brain, won’t handle loss of 2 nodes in the cluster
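
For reference, an arbiter volume is created like this (hosts and paths are examples); the arbiter brick stores only file names and metadata, which is why one modest node can carry the arbiter bricks of more than one volume, but the volume still only tolerates a single node being down:

    gluster volume create VOLNAME replica 3 arbiter 1 \
        server1:/data/brick server2:/data/brick arbiter1:/data/arbiter
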
13:53 nbalacha joined #gluster
13:56 scobanx I just reinstalled a server from a 78x(16+4) distributed replicated volume, then rejoined it to the cluster; heal takes nearly 90 minutes for 2TB of data. Can someone explain to me how heal works? Does a master server read all the chunks, recalculate the lost chunk and write it to the newly installed server?
13:57 scobanx distributed disperse volume sorry for typo
13:59 bennyturns joined #gluster
14:01 haomaiwa_ joined #gluster
14:05 scobanx Also how can I decrease heal time? Any settings related to bandwidth usage?
14:05 graper left #gluster
14:07 farblue I really don’t know for sure but my understanding is that once a damaged file is identified then yes, data needs to be fetched from a subset of the other nodes in order to rebuild the file. I believe this is orchestrated from the node performing the heal
14:08 amye joined #gluster
14:09 Saravanakmr joined #gluster
14:12 kpease joined #gluster
14:13 ahino joined #gluster
14:19 kpease joined #gluster
14:19 kpease joined #gluster
14:23 atinm joined #gluster
14:28 EinstCrazy joined #gluster
14:30 robb_nl joined #gluster
14:30 skoduri joined #gluster
14:35 coredump joined #gluster
14:37 bennyturns joined #gluster
14:42 bennyturns joined #gluster
14:59 d0nn1e joined #gluster
15:01 haomaiwa_ joined #gluster
15:04 d-fence joined #gluster
15:05 spalai joined #gluster
15:08 shyam joined #gluster
15:09 gbox joined #gluster
15:18 Gaurav_ joined #gluster
15:19 mhulsman joined #gluster
15:19 Manikandan joined #gluster
15:20 ahino joined #gluster
15:27 farblue I found some errors in my logs regarding not being able to write to log files. Do I need to do something to exclude glusters logs from log-rotate or handle things differently?
15:28 timotheus1_ joined #gluster
15:35 shyam joined #gluster
15:37 gbox farblue: Did you check available space, permissions, etc?  Are there compressed, dated logfile archives in /var/log/glusterfs?
15:38 farblue space and permissions are not the problem and there are compressed and numbered log files in the /var/log/glusterfs folder
15:38 farblue cli.log, cli.log.1, cli.log.2.gz etc
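
One common cause of "can't write to log" noise is rotation that moves the open files out from under the daemons; rotating with copytruncate (or with gluster's own volume log rotate command) avoids that. A logrotate sketch - the file actually shipped by the package may differ:

    # /etc/logrotate.d/glusterfs (sketch)
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }
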
15:44 farblue I’m just trying to work out why over the weekend my gluster filesystem just fell over
15:44 gbox farblue:  Yeah, should be working.  Are the current .log files growing in size?  Which file had the errors?
15:44 gbox farblue:  Ah so you've got bigger problems too.
15:45 gbox farblue: fell over?
15:46 farblue the servers seemed to freeze up and then the fuse clients hung and all the services making use of the glusterfs filesystem died
15:47 farblue I ended up having to restart the gluster server daemons on each server, stop all services and unmount and remount the fuse client mountpoints
15:47 farblue I’m now just trying to understand why everything blew up
15:49 farblue I *think* the brick services fell over, causing the glusterd services to lock up and drop connections to the fuse clients
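
The recovery described above amounts to roughly the following on an Ubuntu install (service name, mount point and volume are examples):

    service glusterfs-server restart              # restart the management daemon / bricks
    umount -l /mnt/gluster                        # lazily detach the hung FUSE mount
    mount -t glusterfs server1:/VOLNAME /mnt/gluster
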
15:51 farblue my brick logs are chock full of errors like this: “glusterfs/3.7.9/xlator/features/marker.so(marker_getxattr” … “0-dict: !this || key=() [Invalid argument]”
15:55 bennyturns joined #gluster
15:55 skoduri Manikandan, any idea about above marker_getxattr error?
15:56 farblue it appears to be related to failing to get xattr data
15:56 Manikandan skoduri, yeah
15:56 farblue there’s loads in the logs
15:56 farblue but there’s a massive spike of them followed by some [Too many open files] errors at the point things start to go south
15:57 Manikandan EINVAL errors can be safely ignored; they happen because we have a dirty xattr not marked properly while building the ancestry
15:57 Manikandan We have patch that fixes this issue
15:58 Manikandan farblue, yeah it floods up the logs
15:58 farblue if it’s not related to why my filesystem keeps falling over it’s pretty hard to see the important stuff through the noise :9
15:59 Manikandan farblue, this issue was fixed a week ago
15:59 Manikandan I am trying to look at the patch but my internet is too slow to load :-/
15:59 farblue heh
15:59 farblue I look forward to updating :)
16:00 farblue ok, then I’ll ignore them for now while trying to diagnose my issues
16:00 Manikandan farblue, can you give (PM) your mail id, so that I can drop you the patches that fix this issue tomorrow
16:01 farblue thank you for the offer but I’m just using the PPA build for Ubuntu, not building from src
16:01 haomaiwang joined #gluster
16:01 Manikandan farblue, then you should wait for the next upcoming minor release
16:01 Manikandan :P
16:02 farblue yeah :) I just updated to 3.7.10. I guess 3.7.11 will have the patch :)
16:02 Manikandan farblue, yeah :)
16:02 gbox Manikandan:  What version includes the patch?  What was the bug?
16:02 farblue and for now I’ll ignore the xattr errors
16:02 Manikandan farblue, cool
16:02 Manikandan gbox, I am checking
16:03 farblue does anyone know what can cause ‘too many open files’ errors?
16:04 gbox farblue: the OS has a file handle limit: ulimit
16:04 farblue according to sysctl I can have 1635164 files open which seems plenty
16:05 muneerse joined #gluster
16:06 gbox farblue: cat /proc/sys/fs/file-max
16:06 farblue same number
16:06 farblue 1635164
16:07 gbox All my nodes have different value
16:07 gbox Based on what?  memory?  Did you check all of your nodes?
16:08 farblue that’s the default set by ubuntu I think
16:08 gbox farblue:  Maybe.  I remember it was set way too low by default a few releases ago (on RHEL anyway).  Supposed to be a security fix or something.
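
fs.file-max is the system-wide ceiling; "Too many open files" (EMFILE) normally means the per-process nofile limit was hit, which is worth checking separately. A sketch (values and paths are examples):

    ulimit -n                                                # limit of the current shell
    grep 'open files' /proc/$(pgrep -o glusterfsd)/limits    # limit of a running brick process
    # one way to raise it persistently (the exact mechanism varies by distro/init system)
    echo '* - nofile 65536' >> /etc/security/limits.conf
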
16:09 farblue could it be related to my bricks being EXT4 rather than XFS?
16:10 gbox farblue:  It could.  ext4 is less tested with gluster than xfs although a lot of people use ext4 with gluster
16:11 farblue when I set up the system I mainly followed an article by rackspace and they used ext4 - and only later found out it wasn’t really recommended
16:11 gbox farblue:  but Manikandan said this is a known issue.  Search bugzilla
16:11 farblue Manikandan said the xattr errors in the logs were a known issue, not the running out of file handles and falling over
16:11 farblue unless I misunderstood something
16:14 farblue so many things seem flaky or broken with my glusterfs install that I’m not really confident in it at the moment
16:14 Manikandan farblue, gbox I am sorry, I am in a hurry, could you drop a mail to gluster-devel@gluster.org or gluster-users@gluster.org
16:14 Manikandan and we can follow up the thread there
16:16 farblue Manikandan: sure, although I’ve not worked out the problems yet really :)
16:18 farblue maybe it’s just because I’m not used to the tools but debugging gluster problems seems very opaque
16:18 farblue the heal functionality, for instance - most of the feedback seems broken so I’m having to guess that when the numbers change when I call ‘heal info’ it means it is working through the volume and fixing things
16:19 farblue If I try to do ‘heal statistics’ I just get an error
16:19 farblue complaining one or more bricks is down
16:19 farblue (which doesn’t appear to be the case)
16:19 Hesulan joined #gluster
16:23 Hesulan joined #gluster
16:29 scobanx_ joined #gluster
16:31 scobanx_ How can I increase the disperse heal speed?
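
A couple of volume options are sometimes raised to let the self-heal daemon work on more disperse entries in parallel; whether they exist and help depends on the exact 3.7.x release, so treat this as a sketch:

    gluster volume set VOLNAME disperse.background-heals 16
    gluster volume set VOLNAME disperse.heal-wait-qlength 256
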
16:37 gem joined #gluster
16:40 dgandhi joined #gluster
16:41 dgandhi joined #gluster
16:43 dgandhi joined #gluster
16:45 dgandhi joined #gluster
16:45 atalur joined #gluster
16:46 dgandhi joined #gluster
16:47 spalai joined #gluster
16:47 ivan_rossi left #gluster
16:48 nishanth joined #gluster
16:48 ahino joined #gluster
16:48 dgandhi joined #gluster
16:51 dgandhi joined #gluster
16:53 hackman joined #gluster
17:01 haomaiwa_ joined #gluster
17:01 muneerse joined #gluster
17:01 Rasathus_ joined #gluster
17:16 dlambrig_ joined #gluster
17:21 jdarcy joined #gluster
17:23 cpetersen joined #gluster
17:25 shubhendu joined #gluster
17:26 cpetersen Hey guys, my NFS-GANESHA enabled Gluster cluster had a brick fail.  When the brick came back up, it healed just fine, but the share export was missing.  What would be the best way to re-export the share without tearing down the cluster?
17:26 cpetersen Cross-posting in Ganesha.
17:31 ahino joined #gluster
17:33 spalai left #gluster
17:44 Hesulan joined #gluster
17:44 gem joined #gluster
17:48 Rasathus joined #gluster
17:49 skoduri|afk cpetersen, try service nfs-ganesha restart?
17:50 gbox joined #gluster
17:56 scobanx_ joined #gluster
18:00 atalur joined #gluster
18:01 haomaiwa_ joined #gluster
18:02 jri joined #gluster
18:19 bluenemo joined #gluster
18:31 jbrooks joined #gluster
18:33 cpetersen that did it
18:34 cpetersen skoduri|afk: Odd that the export would disappear after a pretty standard failure like that.
18:38 deniszh joined #gluster
18:41 dlambrig_ joined #gluster
18:41 Rasathus_ joined #gluster
18:42 Hesulan joined #gluster
18:51 spalai joined #gluster
18:51 spalai left #gluster
18:54 skoduri joined #gluster
19:01 haomaiwa_ joined #gluster
19:06 dlambrig_ joined #gluster
19:10 Hesulan joined #gluster
19:18 dscastro joined #gluster
19:18 dscastro kkeithley: hello, i've got a question for you: can i use glusterfs behind load balancers?
19:20 btspce joined #gluster
19:26 haomaiw__ joined #gluster
19:29 post-factum dscastro: you cannot and you do not need to
19:30 dscastro post-factum: i'm looking for a solution for NFS shares since Azure doesn't support VIPs
19:31 shyam joined #gluster
19:34 mhulsman joined #gluster
19:37 post-factum dscastro: you need ganesha ha then
19:37 post-factum dscastro: i've played with nfsv3 by ganesha and keepalived, and it seems to work
19:38 dscastro post-factum: that's my issue, ganesha relies on pacemaker and it needs a set of vip's
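
For the keepalived approach post-factum mentions, the floating-IP side is just a VRRP instance; a minimal sketch (interface, password and address are examples, and it still requires a network where a VIP can actually move):

    vrrp_instance ganesha_vip {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.0.2.10/24
        }
    }
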
19:45 robb_nl joined #gluster
19:47 ghenry joined #gluster
19:54 haomaiwa_ joined #gluster
20:02 haomaiwa_ joined #gluster
20:04 robb_nl joined #gluster
20:06 post-factum dscastro: ganesha does not care much about ha solution, in fact. if you are able to support ip failover and (if needed) to interact with ganesha via dbus, it is up to you what ha to use
20:07 mzink joined #gluster
20:09 dscastro post-factum: you are saying that vip and pacemaker is not required?
20:13 amye joined #gluster
20:18 gnulnx joined #gluster
20:19 gnulnx I have a 4 node cluster.  Is it safe to remove one of the nodes temporarily (either down it completely, or by some other means)?
20:22 post-factum dscastro: virtual ip is required, ha solution could be any
20:22 post-factum gnulnx: it depends on volume layout
20:23 dscastro post-factum: nah.. virtual ip is not an option on Azure Cloud
20:24 gnulnx post-factum: `gluster volume create gv0 replica 4 ...`
20:25 shaunm joined #gluster
20:26 btspce hello. I seek advice on how to tune a gluster setup for qcow2 files. 4 kvm hosts in a distributed replicated 8 x 2 = 16 setup
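
A commonly suggested starting point for VM-image (qcow2) workloads is the packaged "virt" option group; its contents vary slightly by release, so this is a sketch (VOLNAME is a placeholder):

    gluster volume set VOLNAME group virt      # applies /var/lib/glusterd/groups/virt

    # roughly what the group sets individually
    gluster volume set VOLNAME performance.quick-read off
    gluster volume set VOLNAME performance.read-ahead off
    gluster volume set VOLNAME performance.io-cache off
    gluster volume set VOLNAME performance.stat-prefetch off
    gluster volume set VOLNAME cluster.eager-lock enable
    gluster volume set VOLNAME network.remote-dio enable
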
20:31 post-factum gnulnx: then yes
20:31 _nixpani1 joined #gluster
20:32 _nixpani1 joined #gluster
20:33 jlp1 joined #gluster
20:41 dlambrig_ joined #gluster
20:49 bennyturns joined #gluster
20:52 mhulsman joined #gluster
20:55 dgandhi joined #gluster
21:01 haomaiwang joined #gluster
21:03 edong23 joined #gluster
21:28 misc joined #gluster
21:40 coredump joined #gluster
21:55 mzink_gone joined #gluster
22:01 haomaiwang joined #gluster
22:01 btspce anyone here ?
22:03 shyam joined #gluster
22:10 gbox joined #gluster
22:25 Rasathus joined #gluster
22:34 MugginsM joined #gluster
22:35 MugginsM so, glusterfs 3.7.9 client against 3.6.9 servers, yay or nay?
22:35 Hesulan joined #gluster
22:35 MugginsM we're having real trouble with replication not working well and wondering if that's the cause
22:36 misc joined #gluster
22:39 MugginsM (I cleverly rolled out 3.7 everywhere before discovering that the servers didn't have 3.7 packages because they're Ubuntu Precise)
22:44 ariowner joined #gluster
23:01 misc joined #gluster
23:04 haomaiwa_ joined #gluster
23:16 misc joined #gluster
23:18 natarej joined #gluster
23:28 misc joined #gluster
23:30 PaulCuzner left #gluster
23:32 haomaiwa_ joined #gluster
23:42 misc joined #gluster
23:59 plarsen joined #gluster
