IRC log for #gluster, 2014-10-23

All times shown according to UTC.

Time Nick Message
00:12 deeville Jefferson
00:13 deeville joined #gluster
00:20 Pupeno joined #gluster
00:49 atrius_ so.. i seem to have gotten myself wedged somehow right at the start.. i tried to create my first volume and it failed saying the other node wasn't in the proper status. now whenever i try and create it again it just insists the brick is already part of a volume.. but no volumes are available.
00:50 atrius_ is there anything to do short of destroying the VM and starting over? nuking the config directory didn't seem to help any
00:52 atrius` nm.. sorted it.. used setfattr -x trusted.glusterfs.volume-id to remove the attribute :)
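
The fix atrius` describes is the usual way to reuse a brick path that gluster rejects as "already part of a volume": clear the gluster extended attributes and the .glusterfs directory on the brick root. A minimal sketch, with /data/brick1 as a placeholder brick path:

    # clear the leftover gluster metadata on the brick root (path is a placeholder)
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
    # verify no trusted.* gluster xattrs remain before re-creating the volume
    getfattr -d -m . -e hex /data/brick1
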
01:13 DV_ joined #gluster
01:18 siel joined #gluster
01:26 topshare joined #gluster
01:45 harish joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
02:04 diegows joined #gluster
02:30 haomaiwa_ joined #gluster
02:36 sputnik13 joined #gluster
02:37 meghanam joined #gluster
02:37 meghanam_ joined #gluster
02:38 bala joined #gluster
02:54 bharata-rao joined #gluster
02:54 topshare joined #gluster
03:01 sputnik13 joined #gluster
03:01 meghanam joined #gluster
03:01 meghanam_ joined #gluster
03:09 hagarth joined #gluster
03:12 ira joined #gluster
03:18 haomaiwa_ joined #gluster
03:33 32NAAOCZT joined #gluster
03:50 rejy joined #gluster
03:55 rjoseph joined #gluster
04:01 XpineX_ joined #gluster
04:03 sputnik13 joined #gluster
04:03 freemanbrandon joined #gluster
04:27 apscomp joined #gluster
04:59 doekia joined #gluster
05:18 Pupeno_ joined #gluster
05:20 sputnik13 joined #gluster
05:32 bala joined #gluster
05:45 Pupeno joined #gluster
05:51 haomaiw__ joined #gluster
06:09 mat1010_ joined #gluster
06:09 ur joined #gluster
06:09 abyss^^_ joined #gluster
06:10 Philambdo1 joined #gluster
06:10 hybrid5121 joined #gluster
06:10 sickness_ joined #gluster
06:11 juhaj joined #gluster
06:11 johnmwilliams__ joined #gluster
06:11 twx joined #gluster
06:12 side_control joined #gluster
06:12 suliba_ joined #gluster
06:13 eightyeight joined #gluster
06:14 johnmark joined #gluster
06:14 Kins_ joined #gluster
06:15 RobertLaptop joined #gluster
06:17 dockbram joined #gluster
06:17 rturk|afk joined #gluster
06:18 Rydekull joined #gluster
06:21 michaellotz joined #gluster
06:23 cmtime joined #gluster
06:26 RobertLaptop joined #gluster
06:31 Fen2 joined #gluster
06:34 Slydder joined #gluster
06:44 Kins joined #gluster
06:45 Slydder morning all
06:46 Slydder ndevos: I had an idea last night that just blew my mind. gonna try something today. hope it works.
06:49 ricky-ti1 joined #gluster
06:57 rturk|afk joined #gluster
06:59 meghanam__ joined #gluster
06:59 morse_ joined #gluster
07:00 UnwashedMeme1 joined #gluster
07:01 skippy_ joined #gluster
07:01 sijis_ joined #gluster
07:02 tty00_ joined #gluster
07:03 ctria joined #gluster
07:06 Debolaz joined #gluster
07:06 Debolaz joined #gluster
07:06 Diddi joined #gluster
07:09 _zerick_ joined #gluster
07:11 bjornar joined #gluster
07:16 ccha joined #gluster
07:26 cjanbanan joined #gluster
07:33 Thilam joined #gluster
07:33 freemanbrandon joined #gluster
07:35 fsimonce joined #gluster
07:42 ndevos Slydder: ideas are good, I hope the results are promising too
07:43 hchiramm_ joined #gluster
07:45 glusterbot New news from newglusterbugs: [Bug 1127457] Setting security.* xattrs fails <https://bugzilla.redhat.com/show_bug.cgi?id=1127457>
07:52 Slydder ndevos: yeah. lsyncd watches local copy of gluster data (not the mount point) for changes and rsyncs the changes inside 3 seconds or so to a local copy for actual use. it also watches the local copy for changes and then writes those changes to the gluster mount point for replication. this removes the fuse lookup and write overhead completely from the equation.
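
What Slydder describes amounts to two rsync jobs that lsyncd would trigger on inotify events within a few seconds of each change. A minimal sketch of the two directions, with all paths as placeholders:

    # direction 1: copy changes from the local gluster data directory into the working copy
    rsync -a /srv/gluster-data/ /srv/working-copy/
    # direction 2: write changes made in the working copy back through the gluster mount point
    rsync -a /srv/working-copy/ /mnt/gluster/
    # lsyncd would run these automatically on change events, which is what keeps the
    # FUSE lookup/write overhead out of the application's read path
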
07:56 ramteid joined #gluster
07:58 RobertLaptop joined #gluster
07:58 liquidat joined #gluster
08:06 haomaiwa_ joined #gluster
08:07 Slashman joined #gluster
08:18 fsimonce joined #gluster
08:18 sickness re
08:19 ackjewt joined #gluster
08:19 Andreas-IPO joined #gluster
08:24 rgustafs joined #gluster
08:25 Humble joined #gluster
08:35 ackjewt joined #gluster
08:35 Andreas-IPO joined #gluster
08:44 soumya__ joined #gluster
08:48 gehaxelt joined #gluster
08:49 elico joined #gluster
08:59 Slashman joined #gluster
09:00 vimal joined #gluster
09:00 tryggvil joined #gluster
09:01 fsimonce joined #gluster
09:08 tryggvil joined #gluster
09:08 bharata-rao joined #gluster
09:10 fsimonce joined #gluster
09:10 Pupeno joined #gluster
09:16 glusterbot New news from newglusterbugs: [Bug 1130307] MacOSX/Darwin port <https://bugzilla.redhat.com/show_bug.cgi?id=1130307>
09:17 DV_ joined #gluster
09:24 fsimonce joined #gluster
09:34 harish joined #gluster
09:41 fsimonce joined #gluster
09:53 Andreas-IPO joined #gluster
09:53 bharata_ joined #gluster
10:14 haomai___ joined #gluster
10:20 fsimonce joined #gluster
10:21 Slashman joined #gluster
10:27 elico left #gluster
10:31 fsimonce joined #gluster
10:56 michaellotz joined #gluster
10:59 Thilam hi everyone, do you know when 3.5.3 packages will be released for debian systems ?
11:07 kkeithley1 joined #gluster
11:07 tomased joined #gluster
11:12 fsimonce joined #gluster
11:16 Yossarianuk joined #gluster
11:19 redgoo joined #gluster
11:19 redgoo How can i upgrade my glusterFS from 3.3 to 3.4 or 3.5
11:19 redgoo on debian
11:20 Yossarianuk redgoo: http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/README
11:20 Yossarianuk as long as you are running wheezy +
11:21 redgoo so if i stop glusterFS on all my machines
11:21 redgoo and then run apt-get install glusterfs-server (3.4), should be okay right
11:25 bala joined #gluster
11:27 virusuy joined #gluster
11:27 Yossarianuk redgoo: Sorry that is something I am unaware of (i.e the consequences of upgrading versions.)
11:28 Yossarianuk in fact i'm about to ask a noob question...
11:32 Yossarianuk hi - I am going to setup a glusterfs cluster on at least 2 servers - over 2 different datacentres - I want a master/master cluster  - do I need geo-replication or not ?
11:32 Yossarianuk hi - I am going to set up a glusterfs cluster on at least 2 servers - over 2 different datacentres - I want a master/master cluster - do I need geo-replication or not?
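
For the two-writable-nodes case Yossarianuk describes, the non-geo alternative is a plain replicated volume, which keeps both nodes writable but performs every write synchronously across the inter-DC link. A minimal sketch, with host names, volume name and brick paths as placeholders:

    # run once from either node; server1/server2, myvol and the brick paths are placeholders
    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol
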
11:34 redgoo thanks YossarianUK - yes stopping and upgrading works
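
A rough sketch of the per-node upgrade steps redgoo confirms, assuming the gluster.org Debian repository from the README linked above is already configured in apt; consult the README for the exact repository lines:

    # on each node, one at a time or with the volume stopped
    service glusterfs-server stop
    apt-get update
    apt-get install glusterfs-server glusterfs-client glusterfs-common
    service glusterfs-server start
    gluster peer status     # confirm the peers reconnect
    gluster volume status   # confirm the bricks come back online
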
11:34 redgoo re your question - are you planning to have 1 server in LocationA and a second server in LocationB
11:35 redgoo and over the internet you want to setup glusterfs replication
11:35 redgoo ??
11:35 Yossarianuk redgoo: that is right
11:35 redgoo that might be a problem in terms of speed
11:35 redgoo personally i would say
11:36 redgoo have 2 servers in LocationA set up with glusterFS, then 2 servers in LocationB set up with glusterFS, and then do the geo-replication
11:36 redgoo read this might help
11:36 redgoo http://blog.gluster.org/category/geo-replication/
11:36 Yossarianuk redgoo: thanks - will have a read - thanks
11:36 fsimonce joined #gluster
11:39 Yossarianuk essentially all I want is to have a volume replicated in the case where a DC goes down - I assume geo-replication is what I would need to do. - speed isn't 100% an issue, it's not going to be a huge volume...
11:40 Yossarianuk is it possible to use glusterfs/geo-replication and do (for example) an hourly sync across?
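
For reference, geo-replication in these releases is continuous, asynchronous, master-to-slave replication rather than a scheduled (e.g. hourly) job, and the slave side is meant to be treated as read-only. A minimal sketch of the commands involved, with volume and host names as placeholders:

    # run on the master side; mastervol, slavehost and slavevol are placeholders
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
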
11:40 edward1 joined #gluster
11:44 LebedevRI joined #gluster
11:45 XpineX_ joined #gluster
11:46 glusterbot New news from newglusterbugs: [Bug 1156003] write speed degradation after replica brick is down <https://bugzilla.redhat.com/show_bug.cgi?id=1156003>
11:48 bala joined #gluster
11:54 fsimonce joined #gluster
12:01 Slashman_ joined #gluster
12:05 Fen1 joined #gluster
12:08 plarsen joined #gluster
12:11 bala joined #gluster
12:13 plarsen joined #gluster
12:14 plarsen joined #gluster
12:15 meghanam joined #gluster
12:15 meghanam_ joined #gluster
12:16 glusterbot New news from newglusterbugs: [Bug 1156022] creating hardlinks with perl fails randomly <https://bugzilla.redhat.com/show_bug.cgi?id=1156022>
12:18 fsimonce joined #gluster
12:35 bala joined #gluster
12:35 fsimonce joined #gluster
12:42 meghanam_ joined #gluster
12:42 meghanam joined #gluster
12:43 redgoo YossarianUK I don't think you can do geo-replication with only one brick on each side... you need at least two bricks on each side
12:43 Yossarianuk redgoo: thanks !
12:43 Yossarianuk will setup a test shortly...
12:46 calisto joined #gluster
12:47 fsimonce joined #gluster
12:48 julim joined #gluster
12:48 B21956 joined #gluster
12:50 B21956 joined #gluster
12:51 theron joined #gluster
12:53 theron_ joined #gluster
12:53 ira joined #gluster
12:55 fsimonce joined #gluster
13:00 Yossarianuk joined #gluster
13:05 haomaiwa_ joined #gluster
13:08 theron joined #gluster
13:08 bennyturns joined #gluster
13:10 cfeller joined #gluster
13:11 DV_ joined #gluster
13:13 rwheeler joined #gluster
13:14 virusuy joined #gluster
13:14 virusuy joined #gluster
13:18 lyang0 joined #gluster
13:23 cjanbanan My filters don't seem to be executed whenever vol files are written. Any ideas what could be the reason? If I run them manually I get the expected result, so they work and are executable.
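
Assuming the filters in question are glusterd volfile filter scripts: they are only invoked when glusterd regenerates the volfiles, and they normally have to sit in the glusterfs "filter" directory of the installed version (the exact path varies by distro and version; the paths below are guesses to check):

    # look for the filter directory of the installed glusterfs version (paths are assumptions)
    ls -l /usr/lib*/glusterfs/*/filter/ /usr/lib64/glusterfs/*/filter/ 2>/dev/null
    # filters only fire when glusterd rewrites the volfiles, e.g. after a volume set:
    gluster volume set myvol diagnostics.client-log-level INFO   # myvol is a placeholder
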
13:27 haomaiwang joined #gluster
13:34 haomai___ joined #gluster
13:37 fsimonce joined #gluster
13:44 XpineX__ joined #gluster
13:48 fsimonce joined #gluster
13:51 freemanbrandon joined #gluster
14:08 bene2 joined #gluster
14:09 topshare joined #gluster
14:10 topshare joined #gluster
14:11 XpineX_ joined #gluster
14:15 topshare joined #gluster
14:17 russoisraeli good morning. Any reason why one side would show that its replica is connected, and the other one not? Restarted both glusterd's. Firewall is not an issue, as telnet probe to 24007 works fine from both sides.
14:23 _dist joined #gluster
14:24 soumya__ joined #gluster
14:24 jobewan joined #gluster
14:26 Debolaz russoisraeli: My experience is that glusterd is extremely unreliable for managing processes/connections. And restarting it usually doesn't help things.
14:28 wushudoin joined #gluster
14:28 russoisraeli Debolaz - what can I do then? :(
14:28 msmith joined #gluster
14:29 Debolaz Figure out what the problem is from the logs.
14:30 Debolaz I wish the logs would do a better job separating "here's some debug info" from the "OH SHIT THERE'S A PROBLEM HERE" messages. :)
14:30 theron joined #gluster
14:31 theron joined #gluster
14:33 theron_ joined #gluster
14:39 failshell joined #gluster
14:41 fsimonce joined #gluster
14:41 tdasilva joined #gluster
14:45 diegows joined #gluster
14:53 meghanam joined #gluster
14:53 meghanam_ joined #gluster
14:54 theron joined #gluster
14:56 lpabon joined #gluster
15:00 ctria joined #gluster
15:10 fsimonce joined #gluster
15:26 ctria joined #gluster
15:27 chirino joined #gluster
15:36 xavih joined #gluster
15:36 fsimonce joined #gluster
15:38 chirino joined #gluster
15:43 virusuy joined #gluster
15:47 calisto joined #gluster
15:48 fsimonce joined #gluster
16:05 glusterbot New news from resolvedglusterbugs: [Bug 1120815] df reports incorrect space available and used. <https://bugzilla.redhat.com/show_bug.cgi?id=1120815>
16:05 _Bryan_ joined #gluster
16:26 MacWinner joined #gluster
16:31 msmith joined #gluster
16:33 haomaiwa_ joined #gluster
16:36 haomai___ joined #gluster
16:37 Humble joined #gluster
16:39 fsimonce joined #gluster
16:43 sputnik13 joined #gluster
16:44 JoeJulian ~ports | russoisraeli
16:44 glusterbot russoisraeli: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
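
A minimal iptables sketch matching glusterbot's port list for 3.4 and later, allowing up to 100 brick ports per node and the default gluster NFS setup; adjust the ranges to the actual number of bricks:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only if rdma is used)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT   # brick (glusterfsd) ports, 3.4.0 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS
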
16:52 mbukatov joined #gluster
16:53 mbukatov joined #gluster
16:59 sputnik13 joined #gluster
16:59 lpabon joined #gluster
17:01 sputnik13 joined #gluster
17:02 theron joined #gluster
17:02 JoeJulian Damn. I have files on bricks that I don't want them on and I can't think of any way to make them move.
17:03 ctria joined #gluster
17:12 davidhadas_ joined #gluster
17:25 fsimonce joined #gluster
17:30 kumar joined #gluster
17:32 zerick joined #gluster
17:39 davemc We'd like to find out how you're using GlusterFS. A short survey is up at https://t.co/Cosm63qWY6. If you could take a couple of minutes to run through it, that would be great.
17:39 glusterbot Title: GlusterFS use survey (at t.co)
17:41 ricky-ticky joined #gluster
17:42 johnmark davemc: w00t
17:43 JoeJulian I *used* to use glusterfs for going to conferences... ;)
17:48 failshel_ joined #gluster
17:57 rshott joined #gluster
17:57 failshell joined #gluster
18:02 DV joined #gluster
18:11 hagarth joined #gluster
18:13 ira joined #gluster
18:18 lpabon joined #gluster
18:20 fsimonce joined #gluster
18:28 Pupeno joined #gluster
18:41 fsimonce joined #gluster
18:50 semiosis davemc: submitted.  thanks for putting this survey together.  looking forward to seeing the results
18:52 kedmison joined #gluster
18:52 diegows joined #gluster
18:53 coredump joined #gluster
18:53 kedmison Hi, I've got a Gluster 3.3.2 installation of 4 servers, and I would like to upgrade to 3.5.2.  Can I jump straight from 3.3.x to 3.5.x or should I upgrade through 3.4.x first?
18:53 Pupeno_ joined #gluster
18:54 coredump joined #gluster
18:54 chirino joined #gluster
18:56 _dist kedmison: I can't remember, do your brick directories on 3.3.2 have the .glusterfs directory?
18:56 kedmison yes they do.
18:59 chirino joined #gluster
18:59 _dist kedmison: I'm not 100% sure on 3.3 -> 3.5 being safe. I'd recommend you offline your volume, upgrade from 3.3 --> 3.4, bring it online, and test to make sure everything is good/healthy. Then do the same thing for 3.4 --> 3.5 (offline, upgrade, online)
19:01 kedmison ok, thank you.  I was thinking the same thing but before I started I wanted to make sure I wasn't making extra work for myself.
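
A minimal sketch of one offline hop of the procedure _dist recommends (repeat it for 3.4 --> 3.5); the volume name is a placeholder and the service invocation varies by distro:

    gluster volume stop myvol            # from any one server
    service glusterfs-server stop        # on every server
    # ...upgrade the glusterfs packages to the next major version on every server...
    service glusterfs-server start       # on every server
    gluster volume start myvol
    gluster volume status myvol          # confirm all bricks are online before the next hop
    gluster volume heal myvol info       # only meaningful on replicated volumes
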
19:06 theron_ joined #gluster
19:07 failshell joined #gluster
19:07 DV joined #gluster
19:08 jbrooks joined #gluster
19:08 Pupeno joined #gluster
19:09 JoeJulian It /should/ be fine to just upgrade.
19:09 JoeJulian @upgrade notes
19:09 freemanbrandon joined #gluster
19:09 glusterbot JoeJulian: I do not know about 'upgrade notes', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
19:10 JoeJulian Shouldn't be anything different from ,,(3.4 upgrade notes)
19:10 glusterbot http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
19:11 JoeJulian kedmison: ^
19:11 kedmison I've read both the 3.4 upgrade notes and the 3.5 upgrade notes. The 3.5 upgrade notes only make reference to upgrading from 3.4.  I was hoping that I could upgrade directly, but couldn't find any references that said 3.3 -> 3.5 was either tested or expected to succeed.
19:12 fsimonce joined #gluster
19:13 JoeJulian I've heard of it being done successfully.
19:14 _dist JoeJulian: thanks for chiming in, we've stayed pretty current so I've never done such a large update
19:14 freemanbrandon joined #gluster
19:15 JoeJulian Both semiosis and I have leapfrogged through the ages.  :)
19:15 kedmison we run gluster here in a pure distribute-only mode so we don't have the ability to upgrade without a service outage.
19:15 JoeJulian true
19:16 JoeJulian Which, in some ways, makes it easier. :D
19:17 ndk joined #gluster
19:17 kedmison joejulian: by leapfrogged, do you mean upgraded past versions (i.e. like a 3.3 -> 3.5 upgrade)?
19:18 JoeJulian we did. It wasn't as easy prior to 3.3 though.
19:18 JoeJulian I think we're both on 3.4 now though.
19:19 JoeJulian There's a lot of bug fixes that didn't get backported to 3.4 that I'm seeing. I wish I could upgrade to 3.5 now.
19:19 portante joined #gluster
19:20 kedmison ok.  I'm starting to have some faith in a 3.3 -> 3.5 upgrade now.  I want to get there for some more reliable remove-brick/add-brick replacement scenarios, plus some rebalancing of the existing load in our cluster.  We're at ~24TB of mid-to-small files and I need to re-build some of the original servers in our gluster cluster.
19:20 _dist JoeJulian: How come you aren't on 3.5 yet?
19:20 JoeJulian And that's why I wish I could upgrade as well.
19:20 JoeJulian production
19:21 JoeJulian This new place is moving off of gluster for hosting vm images.
19:21 _dist JoeJulian: I'm curious if that add/remove-brick-with-open-files issue is all fixed up? I haven't seen a gluster update for 3.5.2 in a while (on debian)
19:22 JoeJulian I would wait for 3.5.3 (any day now)
19:22 kedmison wish I could.  we're at 96% utilization right now.
19:23 freemanbrandon joined #gluster
19:23 _dist JoeJulian: Sounds good. Also, I noticed since being on 3.5.2 that the heal info, while finally correct for VMs (awesome!), takes like 40 seconds to run (I'm probably exaggerating that a bit)
19:24 _dist and the new heal stat seems to have the same issue that 3.4.1 did (thinking everything is always healing). I know I should have filed bugs, or there may already be some. Finally, the wording of "possibly healing" is kinda silly
19:24 JoeJulian _dist: that might make sense if it's 42 seconds.
19:24 semiosis i dont upgrade.  i start fresh & move data over
19:24 semiosis whether it's the OS on my laptop or our production storage cluster
19:25 JoeJulian Ah, right. I forgot about that.
19:25 _dist JoeJulian: All in all, we're running a 3-way replica that maxes out 10gbe when it needs to and everything works great. The heal itself takes forever though (18-20 hours to heal a 1TB volume, even if it's only down for 30 seconds). But that's not an every day event.
19:26 _dist ^^ I think that's because of VMs though, file volumes heal quickly
19:27 brettnem joined #gluster
19:29 ira joined #gluster
19:35 DV joined #gluster
19:37 theron_ joined #gluster
19:43 fsimonce joined #gluster
19:45 zerick joined #gluster
19:46 DV joined #gluster
19:52 fsimonce joined #gluster
19:53 firemanxbr joined #gluster
19:53 freemanbrandon joined #gluster
19:55 DV joined #gluster
20:03 fsimonce joined #gluster
20:09 and` joined #gluster
20:11 neofob joined #gluster
20:11 fsimonce joined #gluster
20:12 DV joined #gluster
20:15 freemanbrandon joined #gluster
20:17 freemanbrandon joined #gluster
20:18 russoisraeli JoeJulian - I've changed the ports in the config file to the old versions, and it was working just fine. Then the connection between the bricks went offline for some time, and when it came back up, I started encountering this issue
20:18 russoisraeli so it's not like it was never working. Was working great and then it happened. No changes.
20:21 JoeJulian Do you know how the RPC handles the port notification so you can be sure that the port changes you made are communicated correctly? (not being facetious or condescending. I haven't looked at that part of the code.)
20:21 side_control joined #gluster
20:24 russoisraeli JoeJulian - no issues... hmmm.. that's a good question... let me check portmapper/RPC...
20:24 russoisraeli I thought that it would be purely a TCP thing on 24007
20:24 JoeJulian not portmapper
20:25 JoeJulian The remote procedure call that the client uses over port 24007 to request the brick port from glusterd.
20:25 theron joined #gluster
20:25 russoisraeli ah... then no idea... I haven't looked at the code or anything... just the admin manual
20:26 JoeJulian but the admin manual doesn't say anything about changing the ports in the config files, does it?
20:27 JoeJulian (another thing I haven't really looked at in years)
20:27 russoisraeli heh, figured that part out on my own, when I upgraded gluster
20:27 russoisraeli :)
20:27 russoisraeli ok, it started working by itself
20:27 russoisraeli better late than never
20:27 russoisraeli oh...you know what I did see
20:27 JoeJulian ?
20:27 russoisraeli A glusterfs process was forever holding a socket open in SYN_SENT state
20:28 russoisraeli i guess it did it when the connection first failed
20:28 russoisraeli but now i don't see this hanging connection attempt anymore
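
A stuck connection attempt like the one russoisraeli mentions is easy to spot from the affected node; a quick check, assuming standard tooling:

    # list gluster-related TCP connections stuck in SYN_SENT
    ss -ntp state syn-sent | grep -i gluster
    # or with net-tools
    netstat -ntp | grep SYN_SENT | grep -i gluster
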
20:28 JoeJulian cool
20:29 russoisraeli I suppose that if I stopped the volume, and restarted it, it would start working faster since it would likely terminate that process
20:29 JoeJulian I would still work on changing your firewall to match the current port ranges and move toward that standard.
20:29 russoisraeli JoeJulian - yeah, guess I could do that... don't mind
20:29 JoeJulian I think it'll save you trouble in the long run.
20:29 russoisraeli soon I will be implementing a bigger gluster system... so then it would be the time
20:31 russoisraeli are you one of the devs or just a hobbyist?
20:31 DV joined #gluster
20:34 kedmison I'm actually in the middle of my 3.3 -> 3.5 upgrade now, and am seeing something similar to what russoisraeli is seeing.
20:34 JoeJulian I'm not a dev, but I'm no hobbyist either. I'm a principal cloud architect at IO.
20:34 kedmison I *think* it's because i'm shutting down and restarting glusterfs daemons so fast that the desired ports (49152 and up) are still being held by the OS in a reserved state...
20:35 MrAbaddon joined #gluster
20:35 kedmison I had observed my gluster processes starting up with ports at 491*6*2, rather than 49152, and so my clients couldn't connect because 49162 wasn't permitted thru my firewall.
20:36 kedmison so I restarted the gluster processes again and my lowest gluster brick came up with port 4915*3*, not 49152.  then I found out that I had a stray glusterfs process from earlier.
20:37 kedmison so I shut my gluster daemons down again, waited a while (3-4 minutes, while I've been typing here), and started them up again.
20:38 Pupeno_ joined #gluster
20:38 JoeJulian Yep, it's dynamically allocated.
20:38 kedmison and now the glusterfs processes for the ten bricks on this server are on the expected 49152-49161 range of ports.
20:39 kedmison dynamically allocated is fine; but nothing actually owned those ports when I started the daemons.
20:41 kedmison I usually statically allocate the ports for the processes I write but I also set the SO_REUSEADDR flag so that I can quickly kill and re-start the processes without having to wait for timers to expire and permit a new process to bind to that port.
20:41 kedmison I wonder if that might be appropriate for these dynamically allocated ports.
20:43 kedmison otherwise, a quick shutdown and restart has brick port numbers migrating around and thus TCP connections potentially colliding with firewall rules that had been fine up until the quick restart.
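
A quick way to check for the situation kedmison hit before restarting the daemons, assuming the standard 3.4+ port range and a placeholder volume name:

    # who is currently listening on the expected brick ports?
    ss -tlnp '( sport >= :49152 and sport <= :49161 )'
    # any stray brick processes left over from the previous run?
    pgrep -a glusterfsd
    # which ports did glusterd actually hand out to the bricks?
    gluster volume status myvol
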
21:02 jobewan joined #gluster
21:10 bennyturns joined #gluster
21:22 bennyturns joined #gluster
21:25 fsimonce joined #gluster
21:29 XpineX_ joined #gluster
21:30 kr0w left #gluster
21:42 fsimonce joined #gluster
21:47 julim joined #gluster
22:04 VeggieMeat joined #gluster
22:06 tryggvil joined #gluster
22:15 msmith joined #gluster
22:17 bene2 joined #gluster
23:11 theron joined #gluster
23:15 _zerick_ joined #gluster
23:17 lpabon joined #gluster
23:24 n-st joined #gluster
23:24 msmith joined #gluster
23:35 davemc joined #gluster
23:47 cjanbanan joined #gluster
