
IRC log for #gluster, 2016-11-14


All times shown according to UTC.

Time Nick Message
00:18 cherrysuckle joined #gluster
00:25 Klas joined #gluster
01:01 saltsa joined #gluster
01:48 haomaiwang joined #gluster
02:10 Javezim Can someone sanity check me? We are currently running Gluster 3.8 with the following volume - http://paste.ubuntu.com/23473773/ Now I want to add an arbiter. I've got a new machine with 11 folders named /arbiter1/gv0/brick{1,2,3,4,5,6,7,8,9,10,11}. Is this the command I would use to add it? - http://paste.ubuntu.com/23473776/
02:10 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
02:10 Javezim Then all Metadata will replicate over to the new arbiter machine and hopefully bye bye split brains
02:11 Gambit15 joined #gluster
02:22 kramdoss_ joined #gluster
02:33 haomaiwang joined #gluster
02:46 JoeJulian Javezim: looks right to me.
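(The pastes above have since expired, so the exact command isn't visible. As a rough sketch only, the usual form for converting an existing replica-2 volume to replica 3 with arbiter looks like the following; the volume name gv0 and brick paths come from the log, while the arbiter host name "arb1" is a placeholder:)

    # one arbiter brick per replica pair; brick{1..11} expands to 11 separate arguments
    gluster volume add-brick gv0 replica 3 arbiter 1 arb1:/arbiter1/gv0/brick{1..11}
    # then watch the metadata sync
    gluster volume heal gv0 info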
03:01 nbalacha joined #gluster
03:18 prth joined #gluster
03:22 magrawal joined #gluster
03:24 Lee1092 joined #gluster
03:28 plarsen joined #gluster
03:29 farhorizon joined #gluster
03:44 plarsen joined #gluster
03:52 shubhendu joined #gluster
03:55 atinm joined #gluster
03:57 itisravi joined #gluster
04:07 k4n0 joined #gluster
04:17 buvanesh_kumar joined #gluster
04:28 ppai joined #gluster
04:34 sanoj joined #gluster
04:47 poornima joined #gluster
04:48 rafi joined #gluster
04:49 karthik_us joined #gluster
04:50 shubhendu joined #gluster
04:50 ankitraj joined #gluster
04:55 k4n0 joined #gluster
05:02 ndarshan joined #gluster
05:04 ashiq joined #gluster
05:06 jiffin joined #gluster
05:07 nthomas_ joined #gluster
05:08 RameshN joined #gluster
05:14 prasanth joined #gluster
05:16 sanoj joined #gluster
05:19 om2 joined #gluster
05:21 kdhananjay joined #gluster
05:23 bkunal joined #gluster
05:28 ndarshan joined #gluster
05:29 apandey joined #gluster
05:46 kotreshhr joined #gluster
05:57 satya4ever joined #gluster
06:00 jkroon joined #gluster
06:04 skoduri joined #gluster
06:15 susant joined #gluster
06:18 prth joined #gluster
06:19 hgowtham joined #gluster
06:20 buvanesh_kumar_ joined #gluster
06:28 hchiramm joined #gluster
06:33 Philambdo joined #gluster
06:34 itisravi joined #gluster
06:35 Muthu joined #gluster
06:45 nthomas_ joined #gluster
06:47 Bhaskarakiran joined #gluster
06:51 [diablo] joined #gluster
06:51 RameshN joined #gluster
06:54 Alghost joined #gluster
06:55 rastar joined #gluster
07:01 mhulsman joined #gluster
07:09 derjohn_mob joined #gluster
07:17 Muthu joined #gluster
07:18 derjohn_mob joined #gluster
07:19 itisravi_ joined #gluster
07:28 buvanesh_kumar_ joined #gluster
07:31 jtux joined #gluster
07:41 itisravi joined #gluster
07:42 mbukatov joined #gluster
07:43 ndarshan joined #gluster
07:47 hackman joined #gluster
07:54 buvanesh_kumar joined #gluster
07:54 TvL2386 joined #gluster
07:56 masber joined #gluster
07:59 cherrysuckle joined #gluster
08:03 tdasilva joined #gluster
08:06 coreping joined #gluster
08:14 ivan_rossi joined #gluster
08:16 witsches joined #gluster
08:17 aravindavk joined #gluster
08:19 RameshN joined #gluster
08:23 Javezim Added the arbiter but I can see in the logs a lot of these - http://paste.ubuntu.com/23474628/
08:23 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
08:23 Javezim anyone know what they are?
08:24 percevalbot joined #gluster
08:31 jri joined #gluster
08:36 fsimonce joined #gluster
08:37 karthik_us joined #gluster
08:47 riyas joined #gluster
08:48 toredl joined #gluster
08:52 devyani7 joined #gluster
08:52 flying joined #gluster
09:02 mhulsman joined #gluster
09:06 jiffin1 joined #gluster
09:07 mhulsman joined #gluster
09:10 witsches joined #gluster
09:11 mhulsman1 joined #gluster
09:19 arc0 joined #gluster
09:21 mhulsman joined #gluster
09:25 Slashman joined #gluster
09:26 ivan_rossi i had a "terminator accident": management commands to a volume have been inadvertently sent to all the peers simultaneously (terminator group broadcast)
09:27 rafi Javezim: itisravi can help you
09:28 rafi ivan_rossi: what management command did you run
09:28 ivan_rossi after a few of them some stalled, and now even a plain volume status returns:
09:28 ivan_rossi Another transaction is in progress for (thevol) Please try again after sometime.
09:28 witsches joined #gluster
09:29 ivan_rossi it has been in this state since friday, so it is clearly a permanent condition.
09:30 ivan_rossi rafi: gluster vol profile ...
09:30 DV joined #gluster
09:31 ivan_rossi and similar, not particularly invasive stuff.
09:31 ivan_rossi what is the suggested way out of this situation.
09:31 ivan_rossi volume is a production one, with a few clients using it, but with long-running jobs
09:32 rafi ivan_rossi: all of the commands have now returned, right? At least via the CLI timeout
09:32 ivan_rossi volume seems to work OK
09:32 ivan_rossi rafi: what do you mean exactly?
09:32 rafi ivan_rossi: you just want to clear the stale locks , right ?
09:33 rafi ivan_rossi: if that is the case try gluster volume clear-locks command
09:34 rafi ivan_rossi: you would get more info from the help
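(For reference, clear-locks operates on file/entry/posix locks held on the volume itself, not on glusterd's cluster-wide management lock that produces the "Another transaction is in progress" error, which is why the discussion below ends up restarting glusterd instead. A hedged sketch of the syntax, with "thevol" and "file1" as placeholder names:)

    gluster volume clear-locks thevol / kind granted entry file1        # clear a granted entry lock on "file1"
    gluster volume clear-locks thevol /file1 kind granted posix 0,0-0   # clear a granted posix lock range on "file1"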
09:34 ivan_rossi I just want to be able to give management cmds to that vol again
09:34 rafi ivan_rossi: wait a minute
09:35 ivan_rossi is it the scenario for the vol-clear cmd?
09:38 mhulsman1 joined #gluster
09:40 ppai joined #gluster
09:45 mhulsman joined #gluster
09:48 Wizek joined #gluster
09:50 Wizek joined #gluster
09:50 zat1 joined #gluster
09:51 mhulsman1 joined #gluster
09:52 [diablo] joined #gluster
09:52 jiffin1 joined #gluster
09:52 zat1 joined #gluster
09:53 witsches joined #gluster
09:55 ndevos ivan_rossi: I think you need to restart a certain glusterd process (or more than one?); that should not affect existing mounts
09:56 ndevos ivan_rossi: I do not know how to find which glusterd holds the volume lock, maybe atinm or kshlm can point you to that
09:56 kshlm GlusterD logs should have the information.
09:56 kshlm Something along the lines of 'could not obtain lock on UUID, lock held by UUID'
09:56 ankitraj joined #gluster
09:57 ndevos ah, right! and the "lock held by UUID" would point to the glusterd process that failed to release the lock
10:00 ivan_rossi i did find which gluster is holding the lock:
10:00 ivan_rossi 2016-11-14 09:46:11.937065] W [glusterd-locks.c:572:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x1f4f3) [0x7fba7a03b4f3] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x1edb4) [0x7fba7a03adb4] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0xc72ed) [0x7fba7a0e32ed] ) 0-management: Lock for hisap-prod-1 held by dc07cddd-b320-4cff-94a3-eb12e4
10:00 glusterbot ivan_rossi: ('s karma is now -166
10:00 satya4ever joined #gluster
10:00 ivan_rossi and from the pool list I can see it is the localhost
10:01 ivan_rossi dc07cddd-b320-4cff-94a3-eb12e4e4e51a    localhost    Connected
10:01 ivan_rossi is it just a matter of restarting the glusterd service?
10:01 ndevos ok, I think that restarting the glusterd on that system is sufficient, but kshlm should probably confirm that
10:02 kshlm That should be it.
10:02 kshlm ivan_rossi, Verify that it's the same uuid everywhere.
10:02 kshlm It could be that the broadcast you did earlier led to more stale locks elsewhere.
10:03 kshlm And you restart glusterd for all UUIDs found.
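(A sketch of the procedure being described, assuming default log locations; the glusterd log file name may differ between distributions:)

    # on each peer: find which UUID is reported as holding the stale lock
    grep "held by" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail
    # map that UUID to a host
    gluster pool list
    # restart glusterd on every host whose UUID appears as a lock holder;
    # this does not restart brick processes or affect existing mounts
    service glusterd restart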
10:03 ivan_rossi i will have a look now
10:06 ivan_rossi DAMN! One peer reports a lock from a different peer, so it looks like a split brain or something. What now?
10:06 kshlm You restart the 2 glusterds.
10:07 p7mo joined #gluster
10:07 ivan_rossi kshlm: sorry did not read last. beginning restarts..
10:12 ivan_rossi That did it.
10:12 ivan_rossi kshlm++
10:12 glusterbot ivan_rossi: kshlm's karma is now 4
10:12 ivan_rossi rafi++
10:12 glusterbot ivan_rossi: rafi's karma is now 2
10:12 squizzi_ joined #gluster
10:12 kshlm I thought I had more karma.
10:13 kshlm ivan_rossi, Happy to help.
10:13 ivan_rossi kshlm++
10:13 glusterbot ivan_rossi: kshlm's karma is now 5
10:13 ivan_rossi feeling better now? :-D
10:13 mhulsman joined #gluster
10:13 kshlm ivan_rossi, Yeah.
10:13 kshlm Thanks. :)
10:14 ivan_rossi to you all. as usual
10:14 ndevos kshlm++ karma is kept per channel, you might have more in #gluster-dev :)
10:14 glusterbot ndevos: kshlm's karma is now 6
10:14 kshlm I have more on #gluster-dev, not here.
10:14 kshlm Note to self: Be more active on #gluster
10:15 ivan_rossi now mine is close to 0 kelvin since i posted a log line.
10:15 kshlm ivan_rossi, It's not you.
10:15 kshlm It's poor '('
10:17 buvanesh_kumar_ joined #gluster
10:21 luizcpg joined #gluster
10:22 jkroon joined #gluster
10:28 derjohn_mob joined #gluster
10:28 shubhendu joined #gluster
10:32 anrao joined #gluster
10:33 itisravi joined #gluster
10:34 devyani7 joined #gluster
10:44 itisravi_ joined #gluster
10:45 titansmc left #gluster
10:47 mhulsman1 joined #gluster
10:49 mhulsman2 joined #gluster
10:52 gothos joined #gluster
10:53 gothos Hello! We are having some problems with gluster mount points; there was some problem with a script checking if gluster still works. We now have no gluster processes running, but multiple gluster mount entries in the mtab.
10:54 gothos Any idea how to get rid of them?
10:54 gothos Trying to remount under the same directory gives us: Transport endpoint is not connected
10:55 gothos Using another dir works fine though. We are on 3.8.4
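(The usual way out of a dead FUSE mount — client process gone, mtab entry left behind — is a lazy unmount followed by a fresh mount; a sketch with placeholder names server1 and /mnt/gv0:)

    umount -l /mnt/gv0                        # or: fusermount -u -z /mnt/gv0
    mount -t glusterfs server1:/gv0 /mnt/gv0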
10:56 mhulsman joined #gluster
10:56 ppai joined #gluster
10:58 morse joined #gluster
11:16 mhulsman joined #gluster
11:17 R0ok_ joined #gluster
11:18 mhulsman joined #gluster
11:25 witsches joined #gluster
11:30 anrao joined #gluster
11:33 hchiramm_ joined #gluster
11:36 ppai joined #gluster
11:39 jiffin1 joined #gluster
11:45 witsches joined #gluster
11:49 witsches joined #gluster
11:52 skoduri joined #gluster
11:58 bluenemo joined #gluster
12:01 panina joined #gluster
12:09 skoduri joined #gluster
12:10 kotreshhr left #gluster
12:12 susant left #gluster
12:13 jiffin1 joined #gluster
12:23 mhulsman1 joined #gluster
12:24 mhulsman joined #gluster
12:27 kdhananjay joined #gluster
12:27 Marbug Hi, I have a problem with my glusterfs on one node. GlusterFS starts and even "syncs" with the rest of the cluster as a replica brick, but NFS isn't running. rpcbind is started, and if you do rpcinfo -p you only see the portmapper
12:28 shubhendu joined #gluster
12:28 Marbug last time when I added that last node to the cluster, the gluster on the other host started to do strange things (files couldn't be changed (I/O errors until I touched the file on another node))
12:28 itisravi joined #gluster
12:28 nishanth joined #gluster
12:28 Marbug could it be a problem with a difference in minor version?
12:29 Marbug all my other servers have 3.7.11 and that new node has 3.11.14
12:33 jiffin Marbug: can u do the following
12:33 jiffin gluster v start <volume> force
12:35 Marbug jiffin, I started it when I had those problems last time
12:36 Marbug but what is the "v" for ?
12:37 jiffin volume
12:37 jiffin Marbug:
12:37 jiffin can u check the /var/log/glusterfs/nfs.log
12:37 Marbug mmm
12:37 Marbug yes there are some errors:
12:37 jiffin on that machine
12:37 jiffin okay
12:38 jkroon joined #gluster
12:38 ndarshan joined #gluster
12:39 Marbug but those errors just show that gluster can't connect to the NFS (I suppose the problem must be somewhere in RPC)
12:39 Marbug jiffin, the error: http://apaste.info/B8I4q
12:40 Marbug port 24007 is open in the firewall btw, as are all the others (the same ports are open as on the other nodes)
12:40 Marbug (same rules actually)
12:43 shaunm joined #gluster
12:43 Marbug http://apaste.info/2f6MW we just added an IP, and the port range 49152:49300 has been made wide enough
12:43 Marbug there is only 1 volume
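(A hedged checklist for a missing gluster NFS server, using the volume name "share" that appears later in this log; "volume get" is available on recent 3.7 releases:)

    gluster volume get share nfs.disable       # is the built-in NFS server enabled at all?
    gluster volume set share nfs.disable off   # enable it if needed
    gluster volume start share force           # respawn the per-node nfs/shd services
    rpcinfo -p                                 # nfs, mountd and nlockmgr should now be registered
    tail -f /var/log/glusterfs/nfs.log         # where gnfs logs its startup problems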
12:47 coreping left #gluster
12:48 rastar joined #gluster
12:52 geertn joined #gluster
12:53 geertn my cluster has entries in gluster volume heal gvp1 info but it doesn't start healing them.
12:53 nbalacha joined #gluster
12:54 jiffin Marbug: what about the normal fuse client?
12:55 johnmilton joined #gluster
12:55 jiffin can u mount the volume in that node?
12:58 Marbug jiffin, you mean mounting it from another node?
12:59 jiffin Marbug: i mean mount that volume in the same node
12:59 jiffin using fuse / glusterfs native
12:59 Marbug well that I can't because the NFS isn't running
12:59 Marbug and I mean the glusterFS NFS
13:00 jiffin Marbug: try the following command
13:00 Marbug and gluster-fuse is also installed
13:00 Marbug also glusterfsd can't be started; it doesn't show any status information from the init.d script, and no error either
13:05 jiffin u mean in o/p of gluster volume status
13:05 Marbug well 1 sec, I was just making a print
13:05 Marbug http://apaste.info/vztm3
13:06 k4n0 joined #gluster
13:06 tomaz__ joined #gluster
13:06 Marbug oh yeah, I removed the brick btw a while ago because it made some problems, but it was all the same, there wasn't any pid or so for the NFS on localhost
13:07 B21956 joined #gluster
13:09 rafi1 joined #gluster
13:12 witsches joined #gluster
13:19 jiffin Marbug: u can see that the NFS server and Self-heal Daemon are not running on that node
13:19 jiffin Marbug: try to restart glusterd
13:20 jiffin on that node and check the o/p of the volume status command
13:20 Marbug what do you mean by o/p jiffin ?
13:20 plarsen joined #gluster
13:21 Marbug until now I have restarted glusterd multiple times and it didn't make a change; I also upgraded from 3.7.14 to .17 but nothing changed either
13:21 Marbug every 3 seconds the glusterd.vol log spams me with the following:
13:21 Marbug [2016-11-14 13:20:50.164622] I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd.
13:21 Marbug [2016-11-14 13:20:50.164675] I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: glustershd has disconnected from glusterd.
13:22 Marbug mmmmmm it changed for once and the errors stopped spamming, but then I got the following error:
13:22 Marbug [2016-11-14 13:21:32.168406] W [socket.c:596:__socket_rwv] 0-nfs: readv on /var/run/gluster/44a25b1793d463c33e3a150a50eac9f6.socket failed (Invalid argument)
13:22 Marbug [2016-11-14 13:21:32.168468] W [socket.c:596:__socket_rwv] 0-glustershd: readv on /var/run/gluster/d3112422fc49edf05a660d3bf6a3a292.socket failed (Invalid argument)
13:25 jiffin Marbug: that is the issue
13:26 jiffin but i don't know why it failed
13:26 jiffin is the /var/run/gluster folder present?
13:27 Marbug yes it is! and there are 3 .socket files and 1 .sock file
13:27 Marbug the 2 showed .socket files are present
13:29 jiffin Marbug: hmm
13:29 jiffin selinux?
13:31 Javezim itisravi Thanks. Basically we added an arbiter to our cluster this morning, and we've noticed the arbiter's load skyrocket whilst it replicates all the metadata over. I've noticed a tonne of errors in the logs - http://paste.ubuntu.com/23474628/
13:31 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:33 Marbug no selinux is enabled jiffin :)
13:34 rafi joined #gluster
13:34 itisravi Javezim: yeah increasing the replica count does cause a spike. But is the syncing still going on?
13:34 Javezim How can I tell? I mean the load is huge!
13:35 itisravi Javezim: what is the data size on the data brick?
13:35 Javezim Yeah it's been sitting at the same size for ages
13:35 Javezim Isn't increasing
13:35 rafi joined #gluster
13:35 itisravi Javezim: is it a lot of small files?
13:35 Javezim Its mixed really, but overall yes there are a lot of small files
13:36 itisravi If you see the glustershd logs on the nodes and they print 'Completed xxx self-heal on xxx' kind of messages, it means the healing is in progress.
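(A quick way to confirm heals are still progressing, assuming default log paths; the exact message text varies between releases:)

    tail -f /var/log/glusterfs/glustershd.log | grep -i "completed.*heal"
    gluster volume heal gv0 statistics heal-count   # pending count should shrink over time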
13:38 * itisravi is not sure about the acl related errors though.
13:38 Javezim Well Glustershd logs seem to have a few errors - http://paste.ubuntu.com/23475642/
13:38 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:38 itisravi are you using posix acls?
13:38 Javezim Nope
13:39 jiffin Marbug: can u clean up all the .socket files from /var/run/gluster
13:39 itisravi hmm that looks like shd is having trouble talking to the bricks.
13:39 jiffin and restart glusterd
13:40 jiffin only .socket files
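(Roughly what is being suggested, written out; only the .socket files under /var/run/gluster are touched, and glusterd recreates them on start:)

    service glusterd stop
    rm -f /var/run/gluster/*.socket
    service glusterd start
    gluster volume status share    # NFS Server and Self-heal Daemon should now show a PID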
13:40 itisravi Javezim: does gluster volume status show all bricks as up and running?
13:41 Javezim Yes
13:41 Javezim Everything showing as Y for up
13:42 Javezim The ACL Errors appear to have stopped some while ago
13:42 Javezim But the glustershd errors are still coming
13:43 Marbug okido jiffin. I added a brick to the node, and the brick is syncing, is it better to wait till it's finished or doesn't it matter ?
13:43 Javezim and the arbiter isn't really growing at all
13:43 Javezim makes me think its not replicating metadata anymore
13:43 jiffin Marbug: do it after the sync
13:44 itisravi Javezim: what does gluster vol heal volname info say?
13:45 Javezim itisravi Spews out a tonne of gfid values
13:46 Marbug jiffin, crap I need to wait a day or so then :)
13:46 d0nn1e joined #gluster
13:47 witsches joined #gluster
13:47 itisravi Javezim: these values are printed on the  data bricks?
13:47 Marbug I have one of the recent problems again too: on a node I'm making a tar, and the following is occurring again: "tar ...: file changed as we read it" on all files. I have set the following option, as described by someone who had the same problem, but it didn't change anything either
13:47 itisravi (i.e. not the arbiter brick)
13:47 Javezim itisravi Query: if I do a gluster volume status <vol> detail, should the Inode Count on the data brick match the arbiter brick?
13:47 Marbug gluster volume set share cluster.consistent-metadata on
13:47 plarsen joined #gluster
13:49 itisravi Javezim: yes it should. All files present in data brick must also be present in arbiter.
13:50 itisravi at least after the heal completes.
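(A hedged way to compare the counts being discussed: the arbiter brick stores no file data, so sizes will differ, but the inode/entry counts should converge once healing finishes:)

    gluster volume status gv0 detail | grep -E 'Brick|Inode Count|Free Inodes'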
13:50 Javezim itisravi Hmmm it's only about half
13:50 Javezim and it's not getting larger
13:50 itisravi Javezim: can you try restarting glustershd on the nodes?
13:51 Javezim Uhm how do i do that? And are we talking the data bricks or the arbiter ?
13:52 itisravi Javezim: you can do a gluster volume start volname force.
13:52 itisravi that restarts shd on all nodes.
13:53 itisravi Javezim: you can also trigger heals from the client. Just do a `find . | xargs stat > /dev/null` from the mount.
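(The heal triggers just mentioned, sketched out with gv0 and a placeholder mount point /mnt/gv0:)

    gluster volume heal gv0 full                       # queue a full heal explicitly
    cd /mnt/gv0 && find . | xargs stat > /dev/null     # or walk the mount so client lookups trigger heals
    # use: find . -print0 | xargs -0 stat  if file names contain spaces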
13:55 shyam joined #gluster
13:55 gothos left #gluster
13:58 B21956 joined #gluster
14:02 Javezim I ran the force command, now I am just getting '
14:02 Javezim Another transaction is in progress for gv0. Please try again after sometime.' every time I try a gluster volume status on gv0
14:03 ira joined #gluster
14:04 squizzi joined #gluster
14:06 itisravi ugh, try restarting glusterd: 'service glusterd restart' on all nodes.
14:10 bartden joined #gluster
14:10 bartden hi, I want to use glusterfs … how much storage should I provision for a volume of 2TB … just 2TB?
14:12 nix0ut1aw joined #gluster
14:12 unclemarc joined #gluster
14:20 rafi joined #gluster
14:23 vbellur joined #gluster
14:23 hagarth joined #gluster
14:36 _fortis joined #gluster
14:37 skylar joined #gluster
14:42 witsches joined #gluster
14:44 Javezim itisravi I restarted the Arbiter machine and voila - http://paste.ubuntu.com/23475906/
14:44 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:45 itisravi Javezim: great.
14:45 msn joined #gluster
14:46 itisravi Javezim: did the messages get clipped? It has to have sinks=2 in the end.
14:47 itisravi 2==the arbiter brick.
14:47 Javezim Nope
14:47 Javezim Just sinks=
14:47 itisravi strange.
14:48 flyingX joined #gluster
14:49 Javezim Arbiter is growing now though again
14:49 itisravi Is the Free Inodes count reducing on the arbiter brick?
14:49 itisravi ok
14:49 Javezim Load's come down from 180+ to about 6
15:08 Philambdo joined #gluster
15:11 [diablo] joined #gluster
15:15 johnc1 joined #gluster
15:17 johnc1 Hi All, I'm using GlusterFS version 3.7.11. After the initial geo-replication session, sometimes after a day or 2, one of the bricks fails to replicate and is stuck in the history crawl state.
15:18 geertn I'm also stuck with 10k+ to-be-healed gfids on my arbiter
15:18 geertn already restarted the arbiter
15:18 johnc1 The errors I can see in the logs are: 'incomplete sync, retrying changelogs' and 'log_raise_exception <top>: FAIL:'
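(Hedged pointers for a geo-replication worker stuck in history crawl, with placeholder session names mastervol and slavehost::slavevol:)

    gluster volume geo-replication mastervol slavehost::slavevol status detail
    # the per-session worker logs usually carry the full traceback behind "log_raise_exception <top>: FAIL"
    less /var/log/glusterfs/geo-replication/*/*.log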
15:27 jbrooks joined #gluster
15:35 jtux joined #gluster
15:38 farhorizon joined #gluster
15:39 skoduri joined #gluster
15:42 satya4ever joined #gluster
15:48 shaunm joined #gluster
15:49 bartden Hi, is it possible to store snapshots of an SSD on a SATA disk? Should I add an SSD PV and a SATA PV into a VG? Do I need to tell glusterfs to store snapshots on a certain PV of a VG?
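(Nobody picked this up in-channel. For context: gluster volume snapshots require the bricks to sit on thin-provisioned LVM, and each snapshot is a thin snapshot inside the same thin pool as the brick LV, so a snapshot cannot be redirected to a different PV after the fact; what you can control is which PVs a thin pool is created on. A rough sketch with hypothetical names vg_bricks, /dev/sdb1 (SSD) and /dev/sdc1 (SATA):)

    vgcreate vg_bricks /dev/sdb1 /dev/sdc1
    # thin pool placed on the SSD PV only; brick LVs and their snapshots share this pool
    lvcreate -L 500G --thinpool brickpool vg_bricks /dev/sdb1
    lvcreate -V 2T --thin -n brick1 vg_bricks/brickpool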
15:56 wushudoin joined #gluster
15:56 haomaiwang joined #gluster
15:59 wushudoin| joined #gluster
16:01 logan- joined #gluster
16:03 vbellur joined #gluster
16:16 circ-user-Yoajv joined #gluster
16:17 Caveat4U joined #gluster
16:21 oytun joined #gluster
16:22 oytun Hello everyone. We are migrating to AWS EFS from our GlusterFS setup.
16:22 oytun Do you have any tips for transferring files from GF to EFS?
16:22 wushudoin joined #gluster
16:22 oytun It will take hours to copy them to EFS via a client server.
16:23 oytun and the GlusterFS servers store the files encrypted, so we can't simply copy them; we need to use the client to decrypt etc. (or is there a way to copy decrypted files from inside a GlusterFS server?)
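(Nobody answers below. One common approach, hedged: copy through a GlusterFS FUSE mount — the client side is where decryption happens — into an NFS mount of EFS, splitting the tree and running several copies in parallel, since a single stream to EFS is slow. A sketch with placeholder names throughout:)

    mount -t glusterfs gserver1:/prodvol /mnt/gv
    mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
    rsync -a --info=progress2 /mnt/gv/ /mnt/efs/   # one stream; run one rsync per top-level dir for speed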
16:38 RameshN joined #gluster
16:47 snehring joined #gluster
17:13 hchiramm joined #gluster
17:20 derjohn_mob joined #gluster
17:32 jiffin joined #gluster
17:33 hchiramm joined #gluster
17:35 ppai joined #gluster
17:39 Caveat4U joined #gluster
17:40 ankitraj joined #gluster
17:46 Gnomethrower joined #gluster
17:46 hackman joined #gluster
17:46 oytun anyone with GlusterFS > AWS EFS migration experience?
18:01 calisto joined #gluster
18:10 Caveat4U joined #gluster
18:15 mhulsman joined #gluster
18:27 nishanth joined #gluster
18:40 BitByteNybble110 joined #gluster
18:41 Caveat4U joined #gluster
18:41 calisto joined #gluster
18:44 nathwill joined #gluster
18:46 Caveat4U joined #gluster
19:04 Philambdo joined #gluster
19:13 Caveat4U joined #gluster
19:26 nivek joined #gluster
19:33 Muthu joined #gluster
19:37 leafbag joined #gluster
19:39 Slashman joined #gluster
19:51 rwheeler joined #gluster
19:53 Philambdo joined #gluster
19:58 rwheeler joined #gluster
20:11 Caveat4U joined #gluster
20:11 annettec joined #gluster
20:12 MidlandTroy joined #gluster
20:21 kpease joined #gluster
20:25 kpease_ joined #gluster
20:29 calisto joined #gluster
20:33 arpu joined #gluster
21:00 leafbag joined #gluster
21:11 mhulsman joined #gluster
21:11 zat joined #gluster
21:41 Caveat4U joined #gluster
21:43 Caveat4U joined #gluster
22:01 leafbag joined #gluster
22:07 dnorman joined #gluster
22:24 bhakti joined #gluster
22:27 vbellur joined #gluster
22:42 Caveat4U joined #gluster
22:54 elastix joined #gluster
22:54 Wizek joined #gluster
23:00 Klas joined #gluster
23:02 leafbag joined #gluster
23:11 Amdintel joined #gluster
23:16 Caveat4U joined #gluster
23:27 Caveat4U joined #gluster
23:38 nathwill joined #gluster
23:43 Caveat4U joined #gluster
23:56 Caveat4U_ joined #gluster
