IRC log for #gluster, 2012-12-11

All times shown according to UTC.

Time Nick Message
00:06 schmidmt1 joined #gluster
00:18 jim`` I've lost one node in a 2 way replica setup of gluster, one node is still online with good files
00:18 jim`` but the volumes won't come up
00:18 jim`` is it possible to force the volume online with only one node alive?
00:19 Technicool jim, the volume should be online with one node still, can you clarify what you mean by volume won't come up?  it should have been up already correct?
00:23 jim`` quite right, it was up
00:23 jim`` then I restarted glusterfsd and glusterd on the remaining host and now it doesn't seem to become available to network clients
00:25 jim`` When I restart the service I get: 0-webcontent-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
00:25 jim`` and:  0-nfs-nfsv3: Volume is disabled: webcontent
00:46 yinyin joined #gluster
00:49 elyograg jim``: are things working well enough that 'gluster volume info' works?
00:50 jim`` yes, displayed all the volumes and bricks on both the dead and alive node
00:50 jim`` *displays
00:50 elyograg jim``: can you get that output, put it on a paste site, and give us the link?
00:50 elyograg fpaste.org or dpaste.org are good choices.
00:51 Technicool jim, can you telnet to 24007 and 24009 on the good host?
00:53 jim`` 24007 yes, 24009 no
00:53 Technicool ps ax | grep glu shows any gluster processes running?
00:53 Technicool glusterfs processes specifically
00:54 jim`` 9832 ?        Ssl    0:01 /usr/sbin/glusterd -p /var/run/glusterd.pid
00:54 jim`` 9896 ?        Ssl    0:00 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
00:54 Technicool is iptables running?
00:54 jim`` yes
00:54 Technicool lsof -i :24007-24100
00:55 kevein joined #gluster
00:56 jim`` I'll put that in the paste as well
00:57 jim`` http://dpaste.org/Om67d/
00:57 glusterbot Title: dpaste.de: Snippet #214846 (at dpaste.org)
00:58 Technicool jim, if you try to stop (or start) the volume on the "good" node, what happens?
00:59 Technicool assuming you get nothing back from these either?
00:59 Technicool showmount -e <good node>
00:59 Technicool rpcinfo -p <good node>
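Pulling those checks together, the connectivity sweep on the surviving node amounts to something like this (a sketch; the ports assume a stock install with one brick, and <good node> is a placeholder):
    telnet <good node> 24007      # glusterd management port
    telnet <good node> 24009      # first brick port (glusterfsd)
    lsof -i :24007-24100          # which gluster processes are actually listening
    showmount -e <good node>      # lists the volume once gluster's NFS server is up
    rpcinfo -p <good node>        # portmapper registrations (mountd, nfs)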
01:00 jim`` starting the volume hangs for a long time and appears to do nothing
01:00 jim`` logs look like it's trying to talk to the dead node
01:00 Technicool are you starting it from the server directly or remotely?
01:00 jim`` no exports from showmount, looks like it's not getting as far as starting the nfs daemon
01:01 jim`` directly
01:01 Technicool have you tried to stop with --force?
01:01 Technicool the volume doesn't look like its running
01:01 jim`` haven't tried that
01:01 yinyin joined #gluster
01:02 jim`` will give it a go
01:02 Technicool otherwise you would have at least one brick on a port > 24007
01:03 jim`` [jim@webcontent02 ~]$ sudo gluster volume stop webcontent force
01:03 jim`` Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
01:03 jim`` operation failed
01:03 Technicool try the other way with "start"
01:03 jim`` same result
01:04 Technicool have you bounced glusterd?
01:04 jim`` just did that to try it again :)
01:04 jim`` did seem to be failing too quickly
01:04 yinyin_ joined #gluster
01:04 jim`` it's hanging on the forced stop command now
01:05 Technicool did the pid refresh when you restarted?
01:07 Technicool you can try a killall gluster{,fs,fsd}, add a -9 if you have to
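For anyone unfamiliar with the shell shorthand, the brace expansion covers all three binaries (a sketch; reach for -9 only if they refuse to exit):
    killall gluster{,fs,fsd}      # same as: killall gluster glusterfs glusterfsd
    ps ax | grep glu              # confirm nothing is left running
    killall -9 gluster{,fs,fsd}   # last resort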
01:07 jim`` stop force just finished, but gluster volume info still reports the volume as started
01:08 Technicool is the other node back up yet?
01:09 jim`` pid does change when I kill it
01:10 jim`` and the other node?
01:10 jim`` webcontent01 is dead, inaccessible
01:10 jim`` webcontent02 is alive (and otherwise) well
01:11 jim`` wondering if it might be easier to rebuild the cluster
01:11 jim`` as I still have all the files
01:11 jim`` though I'd quite like to figure out what is going on here, incase this happens again
01:11 jim`` (cloud instance dies)
01:12 Technicool well, there are a few ways to go about it
01:12 Technicool one would be a replace brick with a new instance
01:13 jim`` have tried that
01:13 Technicool but having the volume not come back online isn't expected
01:13 Technicool or go offline in the first place even
01:13 jim`` hrm
01:13 jim`` indeed, there does seem to be some underlying problem
01:13 elyograg my thought at this moment would be to move everything in /var/log/glusterfs to a safe location, kill all the gluster processes, start it up, and look at what you get in the log.
01:13 Technicool you have a number of endpoint not connected errors, anything from dmesg look interesting?
01:14 jim`` nothing of note in dmesg
01:14 jim`` presumably the endpoint not connecting is the working node trying to contact the dead node?
01:15 Technicool depends on where the logs you pasted are from, but i think its actually talking about self
01:15 jim`` elyograg : certainly can do it, although what I pasted in was a full restart of the process
01:15 Technicool you are starting as root?
01:15 jim`` Technicool : it did have itself added as a peer for a little while
01:16 Technicool jim, i don't follow, can you paste the gluster peer status?
01:16 Technicool is this in AWS?
01:16 jim`` http://dpaste.org/kY3oi/
01:16 glusterbot Title: dpaste.de: Snippet #214849 (at dpaste.org)
01:17 jim`` it's on rackspace
01:17 Technicool ok
01:17 Technicool the peer output is definitely incorrect
01:17 jim`` the ip listed in that paste is the dead peer first
01:17 Technicool you should have n-1 entries total
01:17 jim`` and the replacement peer I've tried to bring up second
01:17 Technicool bring up a second meaning a new instance, correct?
01:18 jim`` that's right
01:18 jim`` different hostname, different IP
01:18 Technicool ok, first things first, lets try to peer detach the new IP
01:18 jim`` http://dpaste.org/5RtWR/
01:19 glusterbot Title: dpaste.de: Snippet #214850 (at dpaste.org)
01:19 Technicool before doing anything else, are things working now?
01:19 Technicool .73 is the dead node, correct?
01:19 jim`` correct
01:20 Technicool ok, that's fine then
01:20 Technicool and you never cloned the nodes?
01:20 jim`` not while they've been configured with gluster
01:21 Technicool perfect
01:21 jim`` they come from a common image, but it's just a base cloud type image
01:21 Technicool mostly was concerned with whether gluster was installed previously, it wasn't so were good there
01:22 jim`` and to answer your other question, neither nfs nor gluster clients can currently connect
01:22 Technicool ok, try to restart normally now
01:23 jim`` okay, it's restarted
01:23 * Technicool crosses fingers, throws salt over shoulder, avoids black cats etc
01:23 jim`` no change I'm afraid
01:23 Technicool but, i crossed my fingers!
01:23 jim`` me too :P
01:23 Technicool ok, try again with --force
01:24 Technicool oh, thats the problem...replicated finger crossing...
01:24 * Technicool knows he isn't funny
01:24 jim`` I thought it was only a problem if one of us goes offline?
01:24 Technicool lol
01:24 Technicool nice
01:24 jim`` which command am I trying with --force?
01:24 Technicool start first
01:25 jim`` [jim@webcontent02 ~]$ sudo gluster volume start webcontent force
01:25 jim`` Starting volume webcontent has been successful
01:25 Technicool since we don't see anything from showmount we can assume the volume is actually dead, despite the output of volume info
01:25 Technicool showmount working now?
01:25 jim`` nothing listed
01:25 jim`` and gluster and nfs clients still don't read it
01:26 Technicool hrm
01:26 Technicool any clients still think they are connected?
01:27 jim`` they're getting Transport endpoint is not connected errors
01:27 jim`` the NFS clients just hang, suspect that's because they're mounted with the 'soft' option
01:27 Technicool try killing the clients for now
01:27 jim`` so it just waits on failure
01:29 Technicool with clients gone, --force stop the volume, then shutdown (not restart) glusterd
01:29 jim`` okay, clients are unmounted
01:29 Technicool ps ax, make sure no gluster processes are running, kill them if they are
01:30 jim`` all done
01:30 Technicool did the volume stop work?
01:30 Technicool or think it did anyway
01:31 jim`` indeed, it came back with success
01:31 Technicool ok, lets see if it was a dirty liar or not
01:31 Technicool start gluster
01:31 Technicool gluster volume info should show volume as stopped
01:32 jim`` it does now show them as stopped
01:32 Technicool well thats a step in the right direction hopefully
01:33 jim`` does seem positive
01:33 jim`` so try restarting now?
01:33 jim`` well, starting the vols?
01:33 Technicool yes
01:33 Technicool one only
01:34 jim`` reports started
01:34 jim`` gluster volume info shows started too
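For the record, the sequence that got the volume back to a clean state was roughly the following (a sketch; the service name assumes the RHEL/CentOS packaging and the client mount path is a placeholder):
    umount /mnt/webcontent                 # on each client first
    gluster volume stop webcontent force   # on the surviving node
    service glusterd stop
    ps ax | grep glu                       # kill any leftover glusterfs/glusterfsd processes
    service glusterd start
    gluster volume info                    # should now report the volume as Stopped
    gluster volume start webcontent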
01:35 Technicool can you mount from the server?
01:36 Technicool mount -t nfs -o vers=3 localhost:/<volume you started> /<mount path>
01:36 jim`` just tried it over gluster, no dice, will try nfs
01:37 jim`` though showmount is still reporting no nfs
01:37 Technicool if gluster didn't work nfs likely won't either
01:37 JoeJulian Have you tried the telnet to the port thing yet?
01:37 Technicool JoeJulian, he could get to 24007 before but not 24009
01:37 JoeJulian is glusterfsd running?
01:38 Technicool not showing previously via ps ax
01:38 jim`` both are reachable now
01:38 Technicool makes sense
01:38 Technicool jim``, both are reachable now meaning...?
01:38 jim`` sorry, both ports
01:39 jim`` JoeJulian : yes, both gluster services are running
01:39 Technicool but nothing from showmount?  is portmapper/rpcbind running?
01:39 JoeJulian There's a lot of scrollback. Is this 3.3.1?
01:39 jim`` 3.2.x
01:39 jim`` 3.2.5-7
01:40 jim`` rpcbind is running, and nothing from showmount
01:40 JoeJulian ~pasteinfo | jim``
01:40 glusterbot jim``: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
02:40 Technicool 111, 24007 - 24100, 38465-38467 all open?
01:41 jim`` hrm, mounting via gluster works, but then errors out when the directory is listed (or otherwise accessed)
01:41 JoeJulian Technicool: showmount wouldn't care locally about open ports.
01:41 JoeJulian jim``: Check the client log. Sounds like splitbrain.
01:42 Technicool JoeJulian, wasn't thinking for showmount just in general
01:42 Technicool JoeJulian, one node is gone entirely
01:43 Technicool would split brain still be an issue in that case?
01:43 Technicool (since there isn't another node to compare to)
01:46 JoeJulian No,
01:48 Technicool JoeJulian, you still want to do live install session at Cascadia?
01:48 JoeJulian Yes
01:49 Technicool do you need expenses paid?
01:49 JoeJulian I hope not.
01:52 jim`` Technicool : ports: http://dpaste.org/8KeYi/
01:52 glusterbot Title: dpaste.de: Snippet #214852 (at dpaste.org)
01:52 jim`` I'm wondering if this is a good opportunity to start again and upgrade to 3.3, it was on the list to do
01:53 jim`` believe 3.3 handles stuff like this a bit better anyway, being able to force replace-brick operations and suchlike ?
01:57 plarsen joined #gluster
01:58 Technicool jim``, from the paste, it doesnt look like the gluster nfs ports are open?  It was working before the node failed ,correct?
01:59 jim`` correct
01:59 jim`` would guess whatever is stopping this from working otherwise is preventing it from getting as far as starting nfs
02:02 Technicool jim, what happens if you create a dummy volume?
02:02 JoeJulian truncate /var/log/glusterfs/nfs.log and restart glusterd. Then paste the nfs.log file.
02:05 jim`` http://dpaste.org/36FFF/
02:05 glusterbot Title: dpaste.de: Snippet #214853 (at dpaste.org)
02:05 jim`` nfs.log ^^
02:05 jim`` Technicool : will try
02:06 Technicool jim. grep client-0 /etc/glusterd/vols/<volume name>/*fuse.vol
02:07 Technicool is that the dead node or the one you are working on?
02:07 Technicool volume in this case is webcontent
02:07 jim`` it's the one I'm working on, dead node is inaccessible
02:07 Technicool just making sure
02:08 jim`` [jim@webcontent02 ~]$ sudo grep -R client-0 /etc/glusterfs/gluster*
02:08 jim`` [jim@webcontent02 ~]$ sudo grep -R client-0 /etc/glusterfs/
02:08 jim`` [jim@webcontent02 ~]$
02:08 jim`` nothing there
02:09 Technicool as a colleague of mine used to quip...
02:09 Technicool unpossible
02:09 Technicool try /var/lib/glusterd
02:09 Technicool thought 3.2.5 still used /etc
02:09 jim`` http://dpaste.org/Pv27M/
02:09 glusterbot Title: dpaste.de: Snippet #214855 (at dpaste.org)
02:10 Technicool sorry should have added -A5
02:10 jim`` test volume seems to mount ok via gluster client
02:11 jim`` [jim@puppet ~]$ sudo mount -t glusterfs webcontent02:/test /tmp/gluster/
02:11 Technicool can you do a df -h or does it hang like before?
02:11 jim`` [jim@puppet ~]$ sudo touch /tmp/gluster/moo
02:11 jim`` [jim@webcontent02 ~]$ ls -la /exports/test/
02:11 jim`` total 8
02:11 jim`` drwxr-xr-x 2 root root 4096 Dec 11 02:11 .
02:11 jim`` drwxr-xr-x 7 root root 4096 Dec 11 02:05 ..
02:11 jim`` -rw-r--r-- 1 root root    0 Dec 11 02:11 moo
02:11 Technicool that works too
02:12 jim`` that's interesting
02:12 jim`` the webcontent volume has started working over gluster too
02:13 Technicool im like a geenus
02:13 jim`` apparently so
02:14 jim`` you think creating a new volume un-stuck it?
02:14 Technicool any change in NFS connectivity for either volume?
02:14 jim`` nfs is still a no
02:14 jim`` interestingly mount.nfs: mounting webcontent:/webcontent failed, reason given by server:
02:14 jim`` No such file or directory
02:14 Technicool jim, no, if that worked i have no idea why
02:14 Technicool i really just wanted to test if ANY volume would work
02:15 Technicool when you created the volume, you used the fqdn?
02:16 m0zes when mounting nfs, what are your options?
02:16 jim`` no, shortname
02:17 Technicool hmm, full name listen in the volumes, that might just be how its done now though
02:17 Technicool listen^listed
02:17 jim`` I refer to the test volume, except when I created the production volumes I used the fqdn
02:18 Technicool what was the IP listed for client-0 from the grep command ^
02:20 Technicool ip or hostname
02:21 jim`` [jim@webcontent02 ~]$ sudo grep -R client-0 /var/lib/glusterd/
02:21 jim`` that grep command?
02:21 jim`` can't see any IPs or hostnames listed
02:21 Technicool add  -A5
02:22 Technicool ie, grep -A5
02:23 JoeJulian @ext4
02:23 glusterbot JoeJulian: Read about the ext4 problem at http://goo.gl/PEBQU
02:24 jim`` Technicool : think this is the pertinent bit http://dpaste.org/W0JEQ/
02:24 glusterbot Title: dpaste.de: Snippet #214856 (at dpaste.org)
02:24 JoeJulian That "connection to  failed" always throws me off since there's no hostname.
02:25 Technicool is webcontent02 client zero on all volumes or just webcontent?
02:25 JoeJulian ext4
02:25 Technicool jim``, JoeJulian most likely has your answer ^^
02:25 JoeJulian The hang, the missing nfs....
02:25 jim`` it's ext3
02:25 JoeJulian same differece.
02:26 JoeJulian s/ce/nce/
02:26 glusterbot What JoeJulian meant to say was: same difference.
02:26 JoeJulian It's the same structure change (would effect ext2 too if anybody still used it) that was backported into stable kernels and never should have been.
02:27 jim`` hrm
02:27 jim`` I think I'm on an earlier kernel
02:27 jim`` 2.6.32-71.el6.x86_64
02:27 JoeJulian rats.
02:32 JoeJulian 01's the one that's down, right/
02:32 JoeJulian ?
02:32 jim`` correct
02:32 * JoeJulian can't seem to type anymore...
02:32 jim`` brb, might grab something caffinated
02:33 jim`` s/caffinated/caffeinated
02:33 jim`` bah, can't even type the substitute properly
02:33 jim`` brb :P
02:33 JoeJulian next question will be if "ps ax | grep nfs.log" shows that it's actually running.
02:35 JoeJulian Well there's an interesting inconsistency. http://dpaste.org/W0JEQ/ shows a short hostname but the nfs log shows the full fqdn.
02:35 glusterbot Title: dpaste.de: Snippet #214856 (at dpaste.org)
02:36 jim`` [jim@webcontent02 ~]$ ps ax | grep nfs.log
02:36 jim`` 18429 pts/2    S+     0:00 tail -f /var/log/glusterfs/bricks /var/log/glusterfs/cli.log /var/log/glusterfs/etc-glusterfs-glusterd.vol.log /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20121118 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20121125 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20121202 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20121209 /var/log/glusterfs/geo-replication /var/log/glusterfs/geo-replication-slaves
02:36 jim`` /var/log/glusterfs/nfs.log /var/log/glusterfs/tmp-webcontent-.log
02:36 jim`` 19262 ?        Ssl    0:01 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
02:37 jim`` 19502 pts/0    S+     0:00 grep nfs.log
02:37 jim`` excuse the spam, that's a yes
02:40 JoeJulian @ports
02:40 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
02:41 JoeJulian You can probably ignore that. I'm just refreshing my memory.
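When a firewall is in play, that factoid translates into rules along these lines (an iptables sketch; chain layout and the exact brick-port range depend on the install):
    iptables -I INPUT -p tcp --dport 111 -j ACCEPT           # portmapper/rpcbind
    iptables -I INPUT -p udp --dport 111 -j ACCEPT
    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd (and rdma)
    iptables -I INPUT -p tcp --dport 24009:24100 -j ACCEPT   # bricks
    iptables -I INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS / NLM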
02:42 jim`` is http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/ considered stable?
02:42 glusterbot <http://goo.gl/lXWWr> (at download.gluster.org)
02:42 m0zes wasn't there an issue at one time if kernel nfs started before gluster-nfs that this sort of thing would happen?
02:43 jim`` kernel nfs is disabled, but I did have that problem previously
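On EL6 the quick way to rule that out looks something like this (a sketch):
    service nfs stop              # the kernel NFS server, not gluster's
    chkconfig nfs off             # keep it from racing gluster's NFS server at boot
    rpcinfo -p | grep -w nfs      # any remaining nfs registration should belong to gluster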
02:44 JoeJulian jim``: yes, but this ,,(yum repo) gets critical updates that don't go into that one.
02:44 glusterbot jim``: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
02:46 jim`` thanks, will add that instead
02:48 JoeJulian How about an fpaste of rpcinfo
02:49 jim`` http://dpaste.org/F2FQI/
02:49 glusterbot Title: dpaste.de: Snippet #214857 (at dpaste.org)
02:49 avati joined #gluster
02:49 m0zes has anyone used rdma in 3.3.1? iirc there was a bug in 3.3.0 where rdma was missing. is it there in 3.3.1?
02:51 bharata joined #gluster
02:52 jim`` I'm going to build up some 3.3.1 boxes while troubleshooting this
02:53 jim`` would be nice to find the problem, but it needs to be back online in about 4 hours
02:54 jim`` and the sooner it's back the sooner I can sleep :P
03:01 JoeJulian jim``: been there, done that.
03:02 JoeJulian m0zes: It's not missing. It turns out that the documentation states that it's not supported, but that's in reference to it being considered "in tech preview" stage by Red Hat.
03:03 JoeJulian Which means it works just as well as it did pre-3.3 but Red Hat's not going to support it yet because they haven't allocated any resources to providing qa or support for it.
03:04 m0zes JoeJulian: okay. thanks, I was thinking of re-enabling rdma mounts for my homedirs in 3.2.7, with the hope of upgrading to 3.3.1 in about a week and a half. I wanted to make sure my work wouldn't be in vain :)
03:05 JoeJulian at least that's the understanding I came to when I spoke with Raghavendra about it.
03:05 hchiramm_ joined #gluster
03:06 * m0zes wishes he had proper 'dev' fileservers to test upgrades like this.
03:06 JoeJulian He did say that there are "some things" (no bug ids though) that will be fixed in 3.3.2.
03:07 JoeJulian I hear that.
03:07 m0zes maybe, just maybe, I'll get a chance to play with 3.3.1 on my old fileservers before they are repurposed.
03:19 __Bryan__ joined #gluster
03:26 wushudoin joined #gluster
03:34 hchiramm_ joined #gluster
03:46 sripathi joined #gluster
04:02 jim`` woo, restoring files to the new cluster
04:10 __Bryan__ joined #gluster
04:12 sripathi joined #gluster
04:13 vpshastry joined #gluster
04:19 hchiramm_ joined #gluster
04:21 jim`` that's all working nicely on 3.3.1 now
04:23 jim`` thanks Technicool / JoeJulian / everyone else who my chat buffer doesn't scroll up to
04:23 jim`` now to sleep, and get up in 3 hours to make sure it works when the first users logon :P
04:23 berend joined #gluster
04:29 __Bryan__ joined #gluster
04:34 __Bryan__ left #gluster
04:49 rastar joined #gluster
04:56 hagarth joined #gluster
05:17 glusterbot New news from resolvedglusterbugs: [Bug 764654] Getting zero-byte files in a replicated folder <http://goo.gl/fgNku>
05:20 hchiramm_ joined #gluster
05:21 bala joined #gluster
05:38 bulde joined #gluster
05:41 overclk joined #gluster
05:47 theron joined #gluster
05:47 glusterbot New news from resolvedglusterbugs: [Bug 809982] truncation of offset in self-heal <http://goo.gl/gA0ll> || [Bug 799856] Native client hangs when accessing certain files or directories <http://goo.gl/RjtnP>
05:56 raghu joined #gluster
06:01 vimal joined #gluster
06:01 troy___ joined #gluster
06:02 troy___ Hi everyone!
06:02 nhm Good morning. :)
06:02 troy___ I have a question if someone can help me please
06:03 troy___ I need a Distributed Replicated GlusterFS volume with two nodes only
06:03 troy___ server1 ... Brick1 & Brick2
06:03 troy___ Server2 Brick2 & Brick3
06:03 troy___ Im failed to do that my self
06:03 troy___ will appreciate any help
06:03 nhm troy___: I had a test setup like that that worked pretty well.  I'm afraid I don't really remember how I set it up though.
06:04 troy___ all bricks are xfs mounted volumes
06:04 troy___ nhm: any help of guidelines ??
06:04 JoeJulian So what part are you having trouble with?
06:05 troy___ I will appreciate if you can direct me to the right direction .. for search etc
06:05 JoeJulian @rtfm
06:05 ramkrsna joined #gluster
06:05 ramkrsna joined #gluster
06:05 glusterbot JoeJulian: Read the fairly-adequate manual at http://goo.gl/E3Jis
06:05 troy___ JoeJulian: formeted .. mounted all bricks
06:05 troy___ JoeJulian: don't know how to create the volume
06:05 nhm JoeJulian: I'm not sure if you sleep, you always seem to be up. :)
06:06 JoeJulian I don't sleep enough.
06:06 troy___ JoeJulian: normal commands like .. gluster create volume replica2 strip 2 trans ... don't work
06:06 JoeJulian @stripe
06:06 glusterbot JoeJulian: Please see http://goo.gl/5ohqd about stripe volumes.
06:06 tryggvil joined #gluster
06:06 JoeJulian You'll need to define "don't work" better than that.
06:06 nhm JoeJulian: Me too.  Startup life is tough.
06:07 troy___ I need something replicated and striped through two nodes only
06:07 JoeJulian You probably don't need or want stripe.
06:07 troy___ can you please suggest ..
06:08 troy___ I want to be able to increase the disk size when required and I want them to backedup at the same time for any black day
06:09 troy___ 500GB + 500 GB on server 1 & 500GB + 500 GB on second server
06:09 JoeJulian gluster volume create myvolume replica 2 server1:/data/brick1/brick server2:/data/brick1/brick server1:/data/brick2/brick server2:/data/brick2/brick
06:09 troy___ users should have 1TB and that one 1TB replicated as well
06:09 JoeJulian your bricks are mounted (in this example) in /data/brick1 and /data/brick2
06:09 troy___ yes that is right
06:10 troy___ whay aditional /brick folder ?
06:11 JoeJulian That way, when your machine boots up and server1:/data/brick1 fails to mount, your servers don't just start happily replicating server2:/data/brick1 to your root filesystem.
06:11 troy___ let me try this .. I am setting up new servers for test
06:12 troy___ hmmm
06:12 troy___ right
06:12 JoeJulian Or, in a case that actually happened to me recently, when xfs hits a bug and unmounts itself...
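Spelled out, brick preparation along those lines might look like this (a sketch; the device and paths are placeholders, and -i size=512 is the inode size commonly recommended for gluster bricks):
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /data/brick1
    echo '/dev/sdb1 /data/brick1 xfs defaults 0 0' >> /etc/fstab
    mount /data/brick1
    mkdir /data/brick1/brick      # the directory the volume actually uses
    # if the mount ever fails, /data/brick1/brick simply isn't there, so gluster
    # can't quietly replicate a brick's worth of data onto the root filesystem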
06:13 troy___ thank you so much for your advice ...
06:13 troy___ let me try this out
06:13 JoeJulian You're welcome.
06:13 nhm JoeJulian: unmounts itself? :)
06:13 troy___ I am in a middle of creating new servers
06:13 JoeJulian nhm: yep, if something goes wrong with xfs, that's it's failsafe.
06:14 JoeJulian In my case, it was a failing hard drive.
06:14 nhm JoeJulian: for some reason I thought it remounted read-only.  Maybe that was ext4.
06:18 glusterbot New news from resolvedglusterbugs: [Bug 879079] Impossible to overwrite split-brain file from mountpoint <http://goo.gl/dxuFN>
06:20 avati joined #gluster
06:24 glusterbot New news from newglusterbugs: [Bug 879078] Impossible to overwrite split-brain file from mountpoint <http://goo.gl/eR0Ki>
06:33 rastar left #gluster
06:34 rastar joined #gluster
06:45 theron_ joined #gluster
06:47 theron_ joined #gluster
06:51 rgustafs joined #gluster
06:54 glusterbot New news from newglusterbugs: [Bug 881685] VM's were not responding when self-heal is in progress <http://goo.gl/ntb3Q>
06:59 troy___ JoeJulian: Failed to perform brick order check. Do you want to continue creating the volume?  (y/n)
07:00 troy___ JoeJulian: gluster volume create gv0 replica 2 gfs1:/export/brick1/brick gfs2:/export/brick1/brick gfs1:/export/brick2/brick gfs2:/export/brick2/brick
07:00 JoeJulian ~pastestatus | troy
07:00 glusterbot troy: Please paste the output of "gluster peer status" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
07:01 JoeJulian ^ Do that on both servers.
07:01 troy___ [root@VRZ-GFS1 /]# gluster peer status Number of Peers: 1  Hostname: gfs2 Uuid: 45156866-51d2-4f00-922f-fe428d61078b State: Peer in Cluster (Connected)
07:01 troy___ [root@VRZ-GFS2 /]# gluster peer status Number of Peers: 1  Hostname: 10.0.2.239 Uuid: 3be1c09d-f23c-4228-ad67-580f3775a647 State: Peer in Cluster (Connected)
07:02 JoeJulian from gfs2 probe gfs1
07:02 troy___ Probe successful
07:03 JoeJulian now try again
07:03 troy___ same error msg
07:03 troy___ Failed to perform brick order check. Do you want to continue creating the volume?  (y/n) n
07:07 JoeJulian fpaste the last 30 lines of /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
07:10 troy___ sorry being a noob ... but how to do that paste
07:10 troy___ Paste #259234
07:10 JoeJulian Paste the link to the page that was created.
07:12 troy___ http://fpaste.org/bQA7/
07:12 glusterbot Title: Viewing Paste #259235 (at fpaste.org)
07:12 troy___ thanks :)
07:14 JoeJulian hmm, doesn't even show the attempted volume creation.
07:15 JoeJulian make sure gfs1 and 2 hostnames resolve correctly on both servers.
07:15 troy___ both are virtual machines .. running on same servers
07:16 troy___ I updated hosts file for host name
07:16 troy___ [root@VRZ-GFS1 /]# ping gfs2 PING gfs2 (10.0.2.246) 56(84) bytes of data. 64 bytes from gfs2 (10.0.2.246): icmp_seq=1 ttl=64 time=0.194 ms
07:17 JoeJulian regardless, the hostnames have to resolve correctly on both servers. That error is telling you that the cli is interpreting the pair of servers as existing on the same host.
07:17 troy___ [root@VRZ-GFS2 /]# ping gfs1 PING gfs1 (10.0.2.239) 56(84) bytes of data. 64 bytes from gfs1 (10.0.2.239): icmp_seq=1 ttl=64 time=0.173 ms
07:18 troy___ have I done something wrong installing gluster-server ?
07:18 ngoswami joined #gluster
07:19 troy___ I installed gluster, glluster-fuse, gluster-server
07:19 troy___ on both machines
07:19 JoeJulian without pasting the results here, ping BOTH servers from BOTH servers and make sure they're resolving correctly.
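Concretely, with the addresses pinged above, both machines want the same entries in /etc/hosts, along these lines:
    # /etc/hosts on BOTH gfs1 and gfs2
    127.0.0.1    localhost
    10.0.2.239   gfs1
    10.0.2.246   gfs2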
07:19 mooperd joined #gluster
07:24 mohankumar joined #gluster
07:28 troy___ JoeJulian: thank you so much man ...
07:28 Nevan joined #gluster
07:28 troy___ I updated both servers ... modified hosts file .... inserted localhost entry
07:28 troy___ removed the peers
07:28 troy___ re added them
07:28 troy___ and wolume created like a charm
07:28 troy___ .....................
07:29 JoeJulian awesome
07:29 troy___ JoeJulian: Thank you!!
07:29 troy___ now what I got is ... striped & replicated one volume accross two servers
07:29 hagarth joined #gluster
07:29 JoeJulian I hope not.
07:30 troy___ hmmm
07:30 JoeJulian @stripe
07:30 glusterbot JoeJulian: Please see http://goo.gl/5ohqd about stripe volumes.
07:30 JoeJulian Unless you specified stripe, you got a distributed replicated volume, which is what you really want.
07:30 troy___ yeh I checked your blog
07:30 troy___ right right
07:30 troy___ distributed & replicated
07:30 JoeJulian +1
07:31 troy___ not striped .. sorry forth that
07:31 JoeJulian That's okay. A lot of people have a problem leaving that mindset behind.
07:31 troy___ and I am one of them ..
07:31 troy___ :)
07:31 JoeJulian You'll get there.
07:32 troy___ I hope so ... I was introduced with GlusterFS two days ago .. and today I have to implement it on our prod .. servers ... :)
07:33 JoeJulian lol
07:33 JoeJulian Nothing like being thrown into the fire, eh?
07:33 troy___ yeh right ... but Im glad that I discoverd this channel ... and you helped me with that
07:34 JoeJulian So where do you work?
07:34 troy___ an IT company ... bay area ...
07:34 troy___ near SFO
07:35 JoeJulian Ah, that one... ;)
07:36 troy___ lolz
07:36 troy___ yeh that one
07:36 JoeJulian That's like saying you're a Barista at a Coffee Stand near Seattle.
07:36 bulde :-p
07:37 vincent_vdk Hi, is anyone here using GLuster to store VM images (disks)
07:37 troy___ hahah a .... San Francisco --> California __ US
07:37 JoeJulian Hey bulde, why no updates in gerrit or bugzilla on the ext4 bug?
07:38 JoeJulian Yep, lots of people do that vincent_vdk
07:38 troy___ JoeJulian: Thanks man ... catch you next time ...
07:38 bulde JoeJulian: complex to answer that one :-) simply said, the patch posted fixes issues in Fuse mountpoint, but NFS mount needed some reviews
07:38 troy___ bye
07:39 JoeJulian bulde: But that was months ago.
07:40 JoeJulian Is it not going to make this release?
07:40 vincent_vdk JoeJulian: interesting. I was wondering if performance is OK
07:40 bulde JoeJulian: exactly, thats why its complex to answer... after that we shuffled some guys here and there, it took a hit
07:40 vincent_vdk i thought that Gluster was meant to store smaller files
07:40 bulde JoeJulian: personally, want to see it make it in 3.3.2 and 3.4.0
07:40 bulde JoeJulian: will catch up in next 20mins... heading to have lunch now
07:41 JoeJulian vincent_vdk: That's what it is, yes. Ok. What I do is have small boot images and mount gluster volumes within my vms for all my data. 3.4.0 that's in qa testing has some awesome features that will improve that immensely. qemu-kvm has added direct volume support.
07:41 JoeJulian bulde: Have a good one. I'm going to bed.
08:02 ekuric joined #gluster
08:04 16WABG6RF joined #gluster
08:05 SpeeR joined #gluster
08:08 MinhP joined #gluster
08:09 jim`` joined #gluster
08:10 helloadam joined #gluster
08:23 puebele joined #gluster
08:24 bauruine joined #gluster
08:26 bulde JoeJulian: sure, will update you on that this week
08:31 sgowda joined #gluster
08:34 nissim joined #gluster
08:35 nissim Hello everyone
08:37 vincent_vdk JoeJulian: nice to hear that. Thx
08:37 nissim I am looking for some help configuring gluster in a way that I wont lose too much storage but still gain high availabily and performance
08:37 vincent_vdk it would be nice in combination with oVirt
08:38 nissim Can any one help :) ??
08:40 nissim I will give you some background on my environment, I am running OpenStack Essex on 5 nodes (each with 12 cores + 64GB + 2TB-SSD).
08:42 nissim I know how to integrate gluster with openstack, I just want to better understand how can I create a distribute volume that will enjoy high performance & high availability
08:42 nissim anyone??
08:46 kshlm joined #gluster
08:46 kshlm joined #gluster
08:48 glusterbot New news from resolvedglusterbugs: [Bug 820518] Issues with rebalance and self heal going simultanously <http://goo.gl/LUq7S>
08:54 dobber joined #gluster
09:02 vpshastry1 joined #gluster
09:03 gbrand_ joined #gluster
09:06 bulde1 joined #gluster
09:14 hagarth joined #gluster
09:14 e_vila joined #gluster
09:18 manik joined #gluster
09:18 glusterbot New news from resolvedglusterbugs: [Bug 859581] self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs <http://goo.gl/60bn6>
09:19 rgustafs joined #gluster
09:28 mooperd joined #gluster
09:29 saz joined #gluster
09:31 DaveS joined #gluster
09:35 guest2012 joined #gluster
09:37 guest2012 Hello, I am looking for a CLI command to see if there is a volume management of any kind (brick-replace, rebalance, etc...) running. Is anything like that available in 3.3.0?
09:37 guest2012 *volume management operation
09:48 glusterbot New news from resolvedglusterbugs: [Bug 875860] Auto-healing in 3.3.1 doesn't auto start <http://goo.gl/U4mpv>
09:53 Norky joined #gluster
09:59 mooperd joined #gluster
10:14 e_vila hi guys! i would like to use gluster with some nfs clients, i have in mind a gluster cluster with two servers with replication to avoid the single point of failure. now, does nfs behave in the same way as with gluster native clients? i mean, if i restart one of the servers in the gluster pool, the clients won't lose the nfs mount?
10:16 guest2012 I don't think so, I have seen people using some failover software in order to obtain that (i.e. vrrp, heartbeat, ...)
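One common shape for that is a floating virtual IP, e.g. with keepalived (a sketch; the VIP, interface and priority are placeholders):
    # /etc/keepalived/keepalived.conf on server1; mirror on server2 with a lower priority
    vrrp_instance gluster_nfs {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.0.2.10
        }
    }
    # NFS clients then mount the VIP rather than either server directly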
10:17 e_vila thanks!
10:37 bulde joined #gluster
10:43 guest2012 I tried 'gluster peer probe <host>' from a random server in the cluster and it didn't work. I detached that server, then issued the probe from the first server in the cluster, and it worked. Is this expected?
11:06 toruonu joined #gluster
11:07 toruonu are there any whitepapers or blogs about VM performance on glusterfs, and have there been any plans for deduplication and whether that could even be possible. I'm guessing that if you run the bricks on volumes that have deduplication (*cough* zfs *cough*), then in theory you'd get some of the dedup benefits
11:08 toruonu I'm contemplating moving VM's to gluster and that'd be around 300 VM's or more and am contemplating what kind of volume to create for it for it to actually perform admirably
11:11 guest2012 toruonu: I've seen opposed opinions about VMs on gluster
11:12 toruonu got recommendations on what to use instead? one option of course would be to create a node with loads of disk, solaris and zfs and export a deduplicated volumes from there with iSCSI or NFS or what not...
11:14 ninkotech_ joined #gluster
11:14 guest2012 toruonu, you may want to consider http://www.gluster.org/community/documentation/index.php/Planning34
11:14 glusterbot <http://goo.gl/4yWrh> (at www.gluster.org)
11:15 toruonu ah so dedup is nice to have in 3.4
11:16 toruonu but VM image store is planned in core features… well we use OpenVZ images, but they are somewhat managed by the same libraries I think
11:16 quillo joined #gluster
11:20 vpshastry joined #gluster
11:23 morse joined #gluster
11:23 tryggvil joined #gluster
11:26 SteveCooling Sorry if this is well documented, but what Yum repo should I use for Gluster 3.3 in production? Running RHEL 6 64bit
11:27 ndevos ~yum repo | SteveCooling
11:27 glusterbot SteveCooling: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
11:28 ndevos SteveCooling: or see http://www.redhat.com/storage/ for a Red Hat supported version
11:28 glusterbot Title: Red Hat | Red Hat Storage Server | Scalable, flexible software-only storage (at www.redhat.com)
11:30 nueces joined #gluster
11:31 SteveCooling thanks
11:34 tryggvil joined #gluster
11:38 tryggvil joined #gluster
11:46 hagarth joined #gluster
11:48 guest2012 Gluster keeps telling me "Replace brick is already started for volume", but I aborted it. Why?
11:49 guest2012 log line: E [glusterd-replace-brick.c:304:glusterd_op_stage_replace_brick] 0-: Replace brick is already started for volume
11:55 kwevers_ joined #gluster
11:58 guest2012 nice. I restarted gluster on one of the nodes who were saying that replace was already started, and the response to the replace request became "... rebalance is in progress. Please retry after completion"
11:58 guest2012 I did a rebalance, a couple of months ago. Pretty sure it was complete, though...
12:02 nightwalk joined #gluster
12:03 H__ scary
12:04 layer3switch joined #gluster
12:05 guest2012 I'm going to stop the volume and restart it. If you think that's crazy please speak now :)
12:08 H__ i think tha tis crazy
12:09 * H__ <- mere gluster user here, so don't listen to any advice :-D
12:13 GLHMarmot joined #gluster
12:16 edward1 joined #gluster
12:20 guest2012 Replace brick is in progress on volume storage. Please retry after replace-brick operation is committed or aborted
12:21 guest2012 nice.
12:21 primusinterpares joined #gluster
12:25 glusterbot New news from newglusterbugs: [Bug 886041] mount fails silently when talking to wrong server version (XDR decoding error) <http://goo.gl/8CIQD>
12:26 manik joined #gluster
12:33 Rammses joined #gluster
12:35 kkeithley1 joined #gluster
12:49 shireesh joined #gluster
12:51 hagarth joined #gluster
12:55 glusterbot New news from newglusterbugs: [Bug 884327] Need to achieve 100% code coverage for the utils.py module <http://goo.gl/T4vbT>
13:12 tryggvil joined #gluster
13:19 rastar left #gluster
13:27 theron joined #gluster
13:40 guest2012 for what matters, I had to manually stop gluster on all nodes. After restarting it I was able to launch the brick replacement again.
13:43 shireesh joined #gluster
13:45 toruonu I currently see this for volume heal info grepping Number: http://fpaste.org/BTXO/
13:45 glusterbot Title: Viewing Number of entries: 103 Number of ent ... umber of entries: 0 Number of entrie ... r of entries: 3 Number of entries: 3 (at fpaste.org)
13:46 toruonu forget it :P it seems the node I just rebooted didn't have glusterd in startup … hence the increasing number...
13:50 rgustafs_ joined #gluster
13:56 guest2012 got the following: E [inode.c:396:__inode_unref] (-->/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_gfid_entry_cbk+0x74) [0x7f984cdea7f4] (-->/usr/lib/libglusterfs.so.0(loc_wipe+0x3c) [0x7f9850f79a9c] (-->/usr/lib/libglusterfs.so.0(inode_unref+0x29) [0x7f9850f8d2d9]))) 0-: Assertion failed: inode->ref
13:56 guest2012 while replacing brick
13:56 balunasj joined #gluster
13:57 chirino joined #gluster
13:57 aliguori joined #gluster
13:58 guest2012 now the whole volume is not available, clients hang while trying to access any content
14:08 quillo joined #gluster
14:13 guest2012 'gluster volume status' returns with no output
14:20 hagarth joined #gluster
14:25 tryggvil joined #gluster
14:31 obryan joined #gluster
14:35 DRMacIver joined #gluster
14:35 DRMacIver Hi. I'm currently trying to figure out how gluster combines hard linking and deletion. Any pointers on where I should start reading?
14:36 DRMacIver (Basically we've got a bunch of files which are hard linked all over the place. I'm trying to figure out what the mechanism which decides when underlying disk storage is going to be reclaimed is)
14:39 guest2012 DRMacIver, have you already read http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ ?
14:39 glusterbot <http://goo.gl/j981n> (at joejulian.name)
14:40 DRMacIver guest2012: I haven't, thanks. I don't think we've upgraded this system to 3.3 yet though - we're in the process of rolling out upgrades, but this one is... a bit large so we're waiting for when we can schedule some downtime.
14:40 DRMacIver (I'll have to check what version we're actually running there)
14:42 DRMacIver Hm. This article is making me nervous about how heavily we're using hard linking on an older version of gluster. :)
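For the curious, the 3.3 behaviour that article describes can be inspected directly on a brick (a sketch; the path is a placeholder):
    stat -c '%h %n' /data/brick1/brick/somefile           # link count on the brick is one higher than
                                                          # on the client, because .glusterfs adds a link
    getfattr -m . -d -e hex /data/brick1/brick/somefile   # trusted.gfid is the uuid behind that link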
14:49 theron left #gluster
14:52 shireesh joined #gluster
14:55 stopbit joined #gluster
15:05 spn joined #gluster
15:07 spn joined #gluster
15:11 * jdarcy o_O
15:12 jdarcy Actually more like %_@ at this point.
15:12 DRMacIver $_@?
15:12 hagarth jdarcy: :)
15:15 theron joined #gluster
15:16 * guest2012 -.-'
15:17 bennyturns joined #gluster
15:19 gbr joined #gluster
15:23 spn joined #gluster
15:26 Ramereth joined #gluster
15:26 stigchri1tian joined #gluster
15:26 rastar joined #gluster
15:26 jbrooks joined #gluster
15:32 Ramereth joined #gluster
15:32 stigchri1tian joined #gluster
15:44 wushudoin joined #gluster
15:56 puebele joined #gluster
16:07 guest2012 as per http://lists.gnu.org/archive/html/gluster-devel/2012-10/msg00050.html, new methods for brick replacement should be available. Can anyone confirm these new methods are the supported procedure in 3.3.1?
16:07 glusterbot <http://goo.gl/r04zC> (at lists.gnu.org)
16:10 aliguori joined #gluster
16:27 gbr joined #gluster
16:30 daMaestro joined #gluster
16:39 plarsen joined #gluster
16:43 dobber joined #gluster
16:45 gbr joined #gluster
17:05 16WABG6RF left #gluster
17:07 blubberdi joined #gluster
17:08 blubberdi Hi, can someone please tell me why `gluster peer probe c42.4-2k.blubberdi.org` returns "Usage: peer probe <HOSTNAME>" but c42.4-2k.blubberdi.org resolves to an ip.
17:09 schmidmt1 The fuse client connects to all gluster servers to maintain consistency across the fs, How does the nfs client maintain consistency?
17:09 ndevos blubberdi: the "4-2k" causes it to fail, it should work when you use an ip-address
17:10 blubberdi ndevos: is "-" the problem or is the sub.sub.domain to deep?
17:11 ndevos blubberdi: the starting "4"
17:11 ndevos blubberdi: see http://review.gluster.org/4017
17:11 glusterbot Title: Gerrit Code Review (at review.gluster.org)
17:12 blubberdi ndevos: Thank you very much for your fast answer and the link!
17:12 ndevos blubberdi: you're welcome!
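In short, until that fix is released, probing by address (or via a name whose first label starts with a letter) sidesteps the parser; the address below is a placeholder:
    gluster peer probe 203.0.113.42    # accepted where 'gluster peer probe c42.4-2k.blubberdi.org' is not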
17:13 ndevos schmidmt1: that is not a responsibility of the nfs-client, but of the nfs-server (the gluster-nfs-server is actually a glusterfs-client too)
17:14 schmidmt1 Is split-braining more likely with nfs?
17:15 schmidmt1 We've had trouble with split braining, I'm wondering if nfs could be the culprit?
17:15 ndevos I doubt there is a difference
17:16 schmidmt1 Thanks.
17:16 ndevos but note that the nfs-client does not automatically fail-over to an other nfs-server when one goes down
17:16 schmidmt1 Yea
17:21 mooperd joined #gluster
17:24 bauruine joined #gluster
17:28 theron joined #gluster
17:32 Mo__ joined #gluster
17:38 raghu joined #gluster
17:50 schmidmt1 left #gluster
18:00 mooperd joined #gluster
18:10 gbr joined #gluster
18:26 bdperkin joined #gluster
18:27 bdperkin joined #gluster
18:31 rwheeler joined #gluster
18:33 mooperd joined #gluster
18:47 cyberbootje joined #gluster
18:50 lh joined #gluster
18:51 mooperd joined #gluster
18:52 nightwalk joined #gluster
18:52 mooperd joined #gluster
19:00 wN joined #gluster
19:25 nissim joined #gluster
19:26 andreask joined #gluster
19:33 theron joined #gluster
19:43 hattenator joined #gluster
19:57 gbrand_ joined #gluster
20:02 m0zes anybody have any further info on readdirplus? I have the 3.7 kernel, patched with the fuse readdirplus patch. now how do I actually make use of it?
20:07 tc00per joined #gluster
20:34 sjoeboo joined #gluster
20:37 wN joined #gluster
20:39 theron joined #gluster
20:43 bdperkin joined #gluster
21:39 joshcarter joined #gluster
21:50 joshcarter anyone here know why libglusterfsclient isn't in the default build? (i.e., is it stale to the point of not working?)
21:51 theron joined #gluster
21:52 semiosis joshcarter: it was abandoned a while ago (although i think there's been some recent effort by the devs to update it)
21:53 badone joined #gluster
21:53 joshcarter I'm aware of booster, which seems kind of hack-ish, and aside from that, is there a way to build a glisters client into a process?
21:53 joshcarter (glisters? auto-spell-correct wtf)
21:53 semiosis booster was deprecated long ago
21:54 semiosis afaik the recommended "normal" way to use glusterfs is to make a fuse client mount point & point your app there
21:54 joshcarter sure, totally understandable default.
21:54 semiosis now that being said, there have been tight integrations with openstack swift, hadoop, and qemu/kvm
21:54 joshcarter I'm looking to improve the efficiency/latency of a specific application.
21:55 semiosis but those are imho "major" developments
21:55 semiosis not quick importing of a lib
21:55 semiosis joshcarter: and you have good reason to believe the latency/efficiency is dominated by context switching?
21:55 semiosis it's usually not, rather usually network & disk latencies dominate
21:56 lh joined #gluster
21:56 joshcarter in this case, I'm using QDR infiniband and solid state drives, and I'm bounded by the FUSE client's CPU use.
21:56 semiosis wow!  ok :)
21:57 joshcarter Even with IO threads in the client, the client just pegs at 130% CPU and I can't go faster. Most of my gluster servers are only running at 30%.
21:58 joshcarter so, I'd like to split up the gluster client into one-per-app process.
21:58 semiosis many fuse mounts?
21:58 joshcarter also a possibility, sure.
21:58 joshcarter at some point, however, it smells wrong.
21:58 joshcarter ;)
21:59 semiosis yeah but you could do it today
21:59 joshcarter I'm less concerned about that -- trying to build a platform that might last a number of years.
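The stopgap semiosis describes is simply mounting the same volume several times, since every fuse mount gets its own glusterfs client process (a sketch; paths are placeholders):
    mount -t glusterfs server1:/bigvol /mnt/bigvol-0
    mount -t glusterfs server1:/bigvol /mnt/bigvol-1
    mount -t glusterfs server1:/bigvol /mnt/bigvol-2
    # point different groups of application workers at different mount points so
    # no single client process becomes the CPU bottleneck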
22:00 kkeithley1 yes, libglusterfsclient has been resurrected, and renamed libgfapi.
22:00 semiosis oh nice!
22:00 kkeithley1 Avati's tree on github has the latest bits last I knew
22:01 joshcarter kkeithley: oh, cool!
22:01 Alpinist joined #gluster
22:01 joshcarter is it 3.3-only?
22:01 kkeithley1 yeah, for the most part
22:02 kkeithley1 Probably more like 3.4
22:02 joshcarter ok, that's fine. thanks for the tip. (heading to github...)
22:05 semiosis fwiw, i see newer commits to api/ in glusterfs master than avati's fork on github
22:05 semiosis https://github.com/gluster/glusterfs/tree/master/api
22:05 glusterbot <http://goo.gl/U0KQZ> (at github.com)
22:05 semiosis 13 days ago, vs 3 months ago for...
22:05 semiosis https://github.com/avati/glusterfs/tree/master/api
22:05 glusterbot <http://goo.gl/gyi8G> (at github.com)
22:06 semiosis joshcarter: full disclosure, this is still unreleased
22:06 semiosis but i'm sure any feedback you have would be appreciated
22:06 joshcarter semiosis: yep, totally understand.
22:07 joshcarter agreed on commit history, I was just looking at that too.;
22:08 joshcarter I'm ok with bleeding edge today if it causes less bleeding about 9-12 months out. :)
22:22 theron joined #gluster
22:42 hchiramm_ joined #gluster
22:52 TSM2 joined #gluster
23:16 hchiramm_ joined #gluster
23:16 chacken3 joined #gluster
23:27 zwu joined #gluster
23:39 tryggvil joined #gluster
23:42 bauruine joined #gluster
23:47 tryggvil joined #gluster
