IRC log for #gluster, 2013-09-16


All times are shown in UTC.

Time Nick Message
00:05 nueces joined #gluster
00:37 jporterfield joined #gluster
00:42 nueces joined #gluster
00:43 jporterfield joined #gluster
00:48 jporterfield joined #gluster
01:27 lpabon joined #gluster
01:33 kevein joined #gluster
01:58 jporterfield joined #gluster
02:05 bstr_ joined #gluster
02:28 harish joined #gluster
02:52 StarBeast joined #gluster
03:05 saurabh joined #gluster
03:12 bharata-rao joined #gluster
03:15 rcoup joined #gluster
03:19 asias joined #gluster
03:29 shubhendu joined #gluster
03:56 vshankar joined #gluster
03:58 zapotah joined #gluster
03:58 zapotah joined #gluster
03:59 itisravi joined #gluster
04:04 ndarshan joined #gluster
04:12 ppai joined #gluster
04:15 DV joined #gluster
04:22 dbruhn joined #gluster
04:27 ppai joined #gluster
04:39 bharata-rao joined #gluster
04:41 davinder joined #gluster
04:42 shruti joined #gluster
04:50 rjoseph joined #gluster
04:54 bala joined #gluster
04:58 vpshastry joined #gluster
04:59 ababu joined #gluster
05:07 nshaikh joined #gluster
05:07 msciciel_ joined #gluster
05:09 sgowda joined #gluster
05:10 ndarshan joined #gluster
05:12 bala joined #gluster
05:13 raghu joined #gluster
05:16 dusmant joined #gluster
05:23 harish joined #gluster
05:24 kshlm joined #gluster
05:24 hchiramm_ joined #gluster
05:24 hagarth joined #gluster
05:28 ajha joined #gluster
05:33 kshlm joined #gluster
05:42 ndarshan joined #gluster
05:43 rcoup left #gluster
05:48 shylesh joined #gluster
05:52 aravindavk joined #gluster
05:58 rgustafs joined #gluster
06:07 psharma joined #gluster
06:12 shubhendu joined #gluster
06:17 lalatenduM joined #gluster
06:23 lalatenduM joined #gluster
06:25 glusterbot New news from newglusterbugs: [Bug 1008301] large NFS writes to Gluster slow down then stop <http://goo.gl/OCGQ6S>
06:27 CheRi joined #gluster
06:35 vshankar joined #gluster
06:37 jtux joined #gluster
06:50 bulde joined #gluster
06:51 anands joined #gluster
06:52 ngoswami joined #gluster
06:59 ctria joined #gluster
07:00 rastar joined #gluster
07:00 shubhendu joined #gluster
07:04 meghanam joined #gluster
07:04 meghanam_ joined #gluster
07:25 shubhendu joined #gluster
07:26 mgebbe___ joined #gluster
07:31 eseyman joined #gluster
07:33 ProT-0-TypE joined #gluster
07:39 ppai joined #gluster
07:46 shubhendu joined #gluster
07:50 andreask joined #gluster
08:14 Norky joined #gluster
08:15 mooperd_ joined #gluster
08:19 manik joined #gluster
08:28 shylesh joined #gluster
08:30 morse joined #gluster
08:36 brahman joined #gluster
08:37 vimal joined #gluster
08:43 hybrid5122 joined #gluster
08:45 al_ joined #gluster
08:49 shubhendu joined #gluster
08:51 sgowda joined #gluster
09:13 rjoseph joined #gluster
09:18 eseyman joined #gluster
09:26 harish_ joined #gluster
09:27 brahman Morning, am I correct in understanding that with the replicate + distribute translators in my graph I could have volume1 on serverA and volume1 on ServerB stay in sync and distributed amongst the 2 nodes?
09:31 dkorzhevin joined #gluster
09:37 davinder joined #gluster
09:44 harish_ joined #gluster
09:45 andreask brahman: with two servers you typically only have a replicated setup ... if you add more servers it will become distributed
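
A sketch of what that distinction looks like at the CLI; the hostnames and brick paths here are placeholders:

    # two servers, one brick each: a plain 1x2 replicated volume
    gluster volume create gv0 replica 2 serverA:/export/brick1 serverB:/export/brick1

    # adding another pair of bricks turns the same volume into distribute+replicate (2x2)
    gluster volume add-brick gv0 serverC:/export/brick1 serverD:/export/brick1
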
09:52 jre1234 joined #gluster
09:57 jporterfield joined #gluster
09:57 zapotah joined #gluster
10:00 shubhendu joined #gluster
10:01 l4v joined #gluster
10:03 bulde1 joined #gluster
10:04 brahman andreask: thanks for that. I will be having more servers and should have specified that, sorry. My aim is to get 1 of the servers in the cluster onto high-availability storage and the remaining servers onto faster but less reliable storage. All clients will be connecting to the pool of servers running off the fast storage.
10:05 jcsp joined #gluster
10:05 brahman This is still at a POC stage but hopefully we will be able to move foreward using gluster. (as always, performance/reliability will be the main deciding factors)
10:05 brahman s/foreward/forward/
10:05 glusterbot What brahman meant to say was: This is still at a POC stage but hopefully we will be able to move forward using gluster. (as always, performance/reliability will be the main deciding factors)
10:12 jporterfield joined #gluster
10:21 sgowda joined #gluster
10:22 andreask brahman: how do you plan to connect to only the fast storage servers?
10:23 [o__o] joined #gluster
10:24 rgustafs joined #gluster
10:26 kanagaraj joined #gluster
10:29 ababu joined #gluster
10:45 edward2 joined #gluster
10:47 davinder joined #gluster
10:48 aravindavk joined #gluster
10:51 al joined #gluster
10:55 mbukatov joined #gluster
10:55 jtux joined #gluster
10:57 brahman andreask: I was planning on having a server on each node that needs access and mounting -t glusterfs localhost:/gv0
10:57 brahman If the node goes down it loses access to its volume anyway...
10:57 brahman Hopefully using localhost will improve speed also?
11:01 lpabon joined #gluster
11:03 davinder joined #gluster
11:12 ababu joined #gluster
11:18 bulde joined #gluster
11:18 kkeithley joined #gluster
11:19 ppai joined #gluster
11:21 jporterfield joined #gluster
11:23 CheRi joined #gluster
11:30 bfoster joined #gluster
11:31 Remco brahman: glusterfs clients make a connection to each node
11:31 shylesh joined #gluster
11:33 sprachgenerator joined #gluster
11:41 andreask joined #gluster
11:41 Elendrys joined #gluster
11:44 ababu joined #gluster
11:50 shylesh joined #gluster
12:00 bennyturns joined #gluster
12:02 ctria joined #gluster
12:03 brahman Remco: Oh. Does this mean that if I "mount -t glusterfs -o ro localhost:/gv0 /mnt" the FUSE fs will be connecting to each node in the cluster?
12:03 Remco Pretty much
12:04 Remco Kinda hard to get files if they are on different servers and you only have a connection to one
12:05 Norky mount will cause the glusterfs client to retrieve a volume file from the initially-specified server (localhost in this case). That volfile contains references to all servers involved in the volume. The client will make and maintain concurrent connections to each one
12:05 brahman Remco: I am using replicate, so the files are on the localhost which is what I assumed would mean the reads/writes would be to localhost and the glusterd would take care of synching the different bricks
12:06 brahman Norky: thanks for the extra details. All starting to make sense. :)
12:06 B21956 joined #gluster
12:06 Norky I might be explaining that imperfectly, my own understanding is not complete :)
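
In practice the mount looks something like the sketch below; gv0 and the mount point are placeholders, and the volfile path reflects the glusterd state layout as of 3.3/3.4:

    # 'localhost' is only used to fetch the volfile; the client then connects to every brick
    mount -t glusterfs localhost:/gv0 /mnt/gv0

    # the generated client graph can be inspected on any server in the pool
    less /var/lib/glusterd/vols/gv0/gv0-fuse.vol
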
12:08 manik joined #gluster
12:08 sticky_afk joined #gluster
12:08 stickyboy joined #gluster
12:17 bulde joined #gluster
12:22 dusmant joined #gluster
12:22 hagarth joined #gluster
12:27 brahman Remco, Norky: With this new information in mind, is there a way to stop glusterfs over fuse from accessing one of my gluster nodes? In my setup, I am planning to use Glusterfs to expose filesystem1. filesystem1 will be on node1 (slow, redundant storage) and also on node2 and node3 (fast, non-redundant storage). Ideally I would like to force all reads/writes to hit the fast storage first.
12:29 CheRi joined #gluster
12:31 shylesh joined #gluster
12:36 nullck joined #gluster
12:38 Norky err, I don't think that will work. What might work is doing geo-replication from the fast machines to the slow
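
Roughly what that would look like with the 3.4 geo-replication CLI; the master volume, slave host and slave volume names below are placeholders:

    # one-way sync from the fast (master) volume to a volume on the slow, redundant box
    gluster volume geo-replication fastvol slowhost::slowvol start
    gluster volume geo-replication fastvol slowhost::slowvol status
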
12:49 alan_ joined #gluster
12:52 mattf joined #gluster
12:53 Guest63419 Hi, can someone help me stop (force stop) volume rebalancing and start removing a brick? After rebooting the server my system is unstable and I cannot find out why.
12:58 rcheleguini joined #gluster
13:05 jdarcy joined #gluster
13:06 vshankar joined #gluster
13:07 chirino joined #gluster
13:11 brahman Norky: looking into geo-replication.
13:19 ctria joined #gluster
13:27 asias joined #gluster
13:48 kaptk2 joined #gluster
13:57 emp_ joined #gluster
13:59 emp_ Hi all. I am having an issue on a gluster 3.3.3 system with two nodes
13:59 emp_ How can I tell if a sync is currently happening?
14:00 jclift joined #gluster
14:00 davinder joined #gluster
14:01 stickyboy joined #gluster
14:04 tziOm joined #gluster
14:08 emp_ Everything I currently look at tells me the filesystem is fine, but when I try to manipulate stuff on the filesystem I get huge IO lag
14:16 DV joined #gluster
14:18 emp_ Any assistance would be appreciated
14:32 brahman Is it possible to reset the network port in use for a given brick? I deleted a volume and detached all peers, but when I recreated my bricks/volume the network port was 24010 and not 24009.
14:36 emp_ brahman - maybe under /var/lib/glusterd/vols/gv0/bricks/
14:37 brahman emp_: yep, just found listen-port= :) thanks
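
For reference, the brick ports can also be read from the CLI rather than the state files (volume name assumed to be gv0):

    # shows the port each brick process is listening on
    gluster volume status gv0

    # or straight from glusterd's on-disk state, as found above
    grep -r listen-port /var/lib/glusterd/vols/gv0/bricks/
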
14:37 vshankar joined #gluster
14:40 bugs_ joined #gluster
14:48 jclift left #gluster
14:53 neofob joined #gluster
14:53 jbrooks joined #gluster
14:54 jclift joined #gluster
14:55 jclift left #gluster
14:56 zapotah joined #gluster
14:56 zapotah joined #gluster
14:56 jclift joined #gluster
15:01 stickyboy joined #gluster
15:01 stickyboy joined #gluster
15:04 sprachgenerator joined #gluster
15:09 hagarth joined #gluster
15:13 mooperd__ joined #gluster
15:13 premera_j joined #gluster
15:15 glusterbot` joined #gluster
15:18 kaptk2 Why am I getting Permission denied errors when trying to boot a virtual machine from gluster?
15:19 kaptk2 I look at the permissions and they are 600 which isn't correct but they just get changed back to that if I try to manually change them
15:19 kaptk2 Here is a log excerpt: http://fpaste.org/39885/79344655/
15:19 glusterbot Title: #39885 Fedora Project Pastebin (at fpaste.org)
15:21 kaptk2 I'm using gluster 3.4 on Fedora 19
15:21 mboden77 joined #gluster
15:22 neofob i've been running 3.4 on debian with 3.11 kernel since labour day, so far so good
15:27 mboden77 Hello everyone. I just started running glusterfs 3.2.7 (debian wheezy) and I have been googling a lot about how to handle a split brain. I only found a few scripts, which seemed to require me to shut down glusterd. Is there an official way to handle a split brain that doesn't require stopping glusterd?
15:27 failshell joined #gluster
15:28 \_pol joined #gluster
15:31 mboden77 ~
15:31 zerick joined #gluster
15:33 Technicool joined #gluster
15:34 \_pol_ joined #gluster
15:46 kaptk2 joined #gluster
15:47 TuxedoMan joined #gluster
15:48 dbruhn mboden77, you should upgrade to 3.4, or at least 3.3.2
15:48 dbruhn and here is the link to fix it in 3.3
15:48 dbruhn http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
15:48 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
15:49 B21956 left #gluster
15:52 kaptk2 joined #gluster
15:53 mboden77 Ok. I will try that. Thanks.
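
The gist of the fix described in that post, very roughly; the brick path, volume name and gfid below are placeholders, and the procedure deliberately discards one replica, so be certain which copy is the bad one:

    # list entries flagged as split-brain
    gluster volume heal myvol info split-brain

    # on the brick holding the copy to throw away: note the trusted.gfid, then remove
    # both the file and its .glusterfs hard link, and let self-heal restore the good copy
    getfattr -m . -d -e hex /export/brick1/path/to/file
    rm /export/brick1/path/to/file
    GFID=aabb1122-3344-5566-7788-99aabbccddee        # the gfid from trusted.gfid, in UUID form (example value)
    rm /export/brick1/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    gluster volume heal myvol
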
15:54 kkeithley @ppa|mboden77
15:54 kkeithley @ppa
15:54 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
16:00 TuxedoMan left #gluster
16:01 shylesh joined #gluster
16:02 cnoffsin joined #gluster
16:08 cnoffsin hey guys is there a recommended set of nfs mount options when local mounting your gluster share via nfs in gluster 3.4 ?
16:11 verdurin_ joined #gluster
16:20 eagle1 joined #gluster
16:20 eagle1 hi
16:20 glusterbot eagle1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:23 cnoffsin the reason I ask about nfs local mount options in 3.4 is we are getting repeated stale file handles in nfs.log: [2013-08-06 19:12:18.036456] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-splunkshare-client-0: remote operation failed: Stale file handle. Path: /var/run/splunk/dispatch/rt_1375815106.21.splunksearch2.sciquest.com/info.csv (a00cddb3-0446-4574-a82c-5e1945e4fae8)
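
For what it's worth, the options most often suggested for mounting Gluster's built-in NFS server (which speaks NFSv3 over TCP only); treat this as a starting point rather than an official recommendation, and the hostname is a placeholder:

    # 'nolock' is sometimes added as well if NLM locking causes trouble
    mount -t nfs -o vers=3,proto=tcp server1:/splunkshare /mnt/splunkshare
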
16:25 mboden77 joined #gluster
16:25 quique joined #gluster
16:26 Mo__ joined #gluster
16:30 zaitcev joined #gluster
16:36 archetech joined #gluster
16:37 marbu joined #gluster
16:42 mboden77 joined #gluster
16:42 JordanHackworth joined #gluster
16:42 arusso_znc joined #gluster
16:43 vpshastry joined #gluster
16:45 mboden77 glusterfs-3.4 self healed the problem by itself. Thanks for the help.
16:45 Remco You used the ubuntu ppa packages?
16:46 mboden77 No I used the packages from here: deb http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/Debian/apt wheezy main
16:46 glusterbot <http://goo.gl/LA9eBy> (at download.gluster.org)
16:46 archetech has anybody installed RHSS 2.1 equivalent with gluster and centos?
16:46 Remco Ah, sort of manual install then
16:47 mboden77 No I added that line to my sources and just aptitude installed them.
16:48 kkeithley I guess those should be okay, although I'd have thought most people would prefer "real" Ubuntu packages from an Ubuntu ppa.
16:49 Remco I don't install any ubuntu stuff on my debian
16:49 Remco Since I just know it will break things eventually
16:49 Remco The plain packages would work for me
16:50 kkeithley right. I thought mboden77 was running Ubuntu, but I reread and see that he's running wheezy
16:50 Remco :)
16:50 nexus joined #gluster
16:51 kkeithley @yum
16:51 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
16:52 kkeithley archetech: ^^^
16:52 sticky_afk joined #gluster
16:52 stickyboy joined #gluster
16:53 LoudNoises joined #gluster
16:54 archetech cool    anybody running it on centos 6.4 here yet?
16:55 archetech I could use a guide/how to
16:55 \_pol joined #gluster
16:58 davinder joined #gluster
16:59 \_pol_ joined #gluster
17:00 kkeithley 5000+ downloads of glusterfs-3.4.0-x.el6 RPMs from download.gluster.org since Aug 25. I'd hazard more than a few of those are for CentOS
17:00 kkeithley You should be able to find a setup howto on gluster.org
17:01 eagle1 I'm using it right now on centos 6.4 :)
17:02 Remco http://supercolony.gluster.org/pipermail/gluster-users/2013-July/036626.html <= Will that work right with 3.4?
17:02 glusterbot <http://goo.gl/Ixp9NM> (at supercolony.gluster.org)
17:04 vpshastry joined #gluster
17:05 archetech eagle1,   so what docs did you use?
17:10 kkeithley Remco: from the flock(2) man page: flock()  does not lock files over NFS.  Use fcntl(2) instead.
17:11 kkeithley IIRC gluster native (FUSE) supports flock locking.
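
A crude way to check locking behaviour from the shell is to run the same flock(1) call from two different clients against one file on the mount; if both report holding the lock at the same time, the lock is not being honoured across the mount (the path is a placeholder):

    touch /mnt/shared/locktest
    flock -n /mnt/shared/locktest -c 'echo "got the lock on $(hostname)"; sleep 30'
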
17:11 edward1 joined #gluster
17:11 Remco kkeithley: The problem being that I can't make PHP use something else
17:11 Remco And the native one is too slow
17:12 Remco A simple switch to nfs got me a 400% performance increase
17:13 vpshastry left #gluster
17:13 elyograg http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
17:13 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
17:14 elyograg Remco: ^^ mostly about using php on gluster.
17:14 Remco Tried most of that, only got a very modest increase
17:16 Remco Basically gluster is very slow for many small files
17:16 Remco Big files saturate lines no problem, small ones have *huge* CPU overhead
17:16 cfeller joined #gluster
17:17 cfeller left #gluster
17:17 Remco If >50% of CPU is spent on gluster plumbing, the application daemon doesn't get to do anything
17:17 kkeithley 3.4 should be better on small files, but you'll have a tough time with php because it does a stat() on every include file. Each stat() triggers a self heal check in gluster.
17:18 elyograg that will be the case with any network file system.  NFS gets around the problem by aggressively caching, which can make things worse (not better) if you're accessing the filesystem simultaneously from multiple machines.
17:18 Remco I do use apc and caching, but it just doesn't cut it compared to NFS
17:18 Remco Mostly read access, so the caching is exactly what is needed
17:19 semiosis Remco: did you turn off stat in apc?
17:19 kkeithley But NFS doesn't do flock() locking, and no amount of wishful thinking is going to change that. This doesn't have anything to do with Gluster, it's an NFS thing.
17:19 Remco Yes, didn't do that much though
17:19 semiosis Remco: what kind of network connects your gluster clients & servers?
17:20 Remco Seems like I'm stuck in between a rock and hard place :(
17:20 Remco semiosis: virtio which does like 16 gbit or something
17:20 eagle1 archetech, i've just installed the repo and the 3.4 rpms with yum
17:20 eagle1 after that, gluster is ready, just create your volume :)
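
i.e. roughly the following on CentOS 6, assuming the .repo file from download.gluster.org has been dropped into /etc/yum.repos.d/ (hostname is a placeholder):

    yum install glusterfs-server glusterfs-fuse
    service glusterd start && chkconfig glusterd on
    gluster peer probe server2        # repeat from one node for each additional node
    # then create and start a volume as shown earlier in the log
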
17:20 semiosis Remco: your gluster clients & servers are vms on the same hypervisor?
17:21 Remco Yes
17:21 Remco Small dataset, only 8GB virtual disks
17:22 semiosis hmmm
17:22 jclift left #gluster
17:23 Remco Using gluster the problem is CPU usage. Close to 80% on one core on the client, while both servers do ~50%
17:23 Remco With NFS the usage on the client drops to near zero, while the servers are much lower too
17:24 semiosis imho running the whole cluster (cleint & server machines) on the same hypervisor doesn't yield any valid performance data
17:24 semiosis although it could be helpful for testing failure/recovery scenarios
17:25 archetech eagle1, are you running the other stuff with gluster like xfs and ovirt?
17:25 Remco I have no reason to think CPU would be different when they are on different machines
17:25 semiosis Remco: no reason to think they'd be the same either
17:25 eagle1 archetech, yes, glusterfs 3.4 with ovirt 3.3 on centos 6.4. bricks with xfs
17:26 eagle1 well.. i'm trying :D
17:26 andreask joined #gluster
17:26 Remco semiosis: Nowhere near the 400% performance increase with NFS though
17:26 archetech did you use a guide?
17:26 eagle1 nope
17:27 eagle1 i'm using my brain, a lot of reading, googling and trial-and-error
17:27 eagle1 right now i'm trying with gluster-rdma
17:27 archetech ugh  ok    I want to use it with cloudstack ideally
17:27 semiosis Remco: i think that nfs appears so dramatically faster because all your vms are on the same hypervisor
17:28 Remco I think it's just because of the client side caching which means gluster doesn't need to work so hard
17:28 semiosis Remco: try adding nfs mount options noac,sync
17:28 semiosis see if that evens things out
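
i.e. something along these lines, with host and volume names assumed:

    mount -t nfs -o vers=3,proto=tcp,noac,sync server1:/gv0 /mnt/gv0
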
17:28 Remco I see the CPU going crazy with gluster, and there are enough cores for every VM
17:29 eagle1 i don't know cloudstack sorry
17:29 semiosis Remco: also, what php app or framework are you using?
17:30 Remco I did my tests on the chive login page
17:30 Remco http://www.chive-project.com/
17:30 glusterbot Title: Chive - Web-based MySQL Admin Interface (at www.chive-project.com)
17:32 archetech there's a few gluster guys over there, they can help me with that part. I'll google it together
17:33 jcsp joined #gluster
17:34 eagle1 elyograg, many thanks for the link above about php. really useful
17:34 semiosis Remco: have you optimized your php include path?
17:37 Remco There is almost nothing in there
17:38 Remco And the framework uses an autoloader
17:39 B21956 joined #gluster
17:41 elyograg if switching to NFS made it that much better, then there's something that's actually hitting the filesystem extremely frequently.  The whole point of Joe Julian's blog post is to avoid hitting the filesystem at all.  Basically, get the data loaded once off the disk into RAM somewhere and only go back to the disk on a very infrequent basis.  What I would call infrequent is no more often than once a minute.
17:42 Remco apc with stat off is not making it much better
17:46 arusso joined #gluster
17:59 mbukatov joined #gluster
18:04 semiosis Remco: apc with stat disabled should pretty much eliminate fs ops for loading php files
18:04 semiosis Remco: maybe strace apache (http://edoceo.com/exemplar/strace-multiple-processes) and see what iops are happening
18:04 glusterbot <http://goo.gl/GnNWo> (at edoceo.com)
18:05 Remco It's nginx
18:05 l4v joined #gluster
18:05 Remco And php-fpm runs on a different box
18:05 semiosis ?!
18:05 Remco nginx is only throwing stuff over at php-fpm
18:05 Remco Not touching any file
18:06 Remco It can easily saturate a gigabit line while handling the php requests
18:06 Remco 'box' being different VM
18:07 Remco So there is just php hitting files
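
A sketch of the strace suggestion from earlier, pointed at php-fpm rather than apache; the pgrep pattern is an assumption:

    # attach to every php-fpm process and log only file-related syscalls
    strace -f -e trace=file $(pgrep php-fpm | sed 's/^/-p /') 2>&1 | grep -v ENOENT
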
18:10 Remco Best case I get 10% more requests per second with stat off, cache is getting all hits
18:12 dusmant joined #gluster
18:19 B21956 joined #gluster
18:19 elyograg Remco: do you know what files are getting hit on the filesystem? You'd need to find out what those are and find a way to make it use RAM instead.  Which is where semiosis was going with running strace on apache.
18:21 semiosis yeah seems worth a shot tho really idk where this is going
18:21 zapotah joined #gluster
18:22 semiosis performance issues with a mysql db admin interface?  running whole storage cluster on the same vm host?  php-fpm-nginx?
18:26 Remco I'm just testing how many requests per second I can get to see the limits of the system
18:26 Remco No way I would put it in production like this
18:27 kkeithley samppah: were you able to do some testing with those qa1 RPMs?
18:33 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
18:41 edward2 joined #gluster
18:42 chirino joined #gluster
18:43 Remco Hmm, it might be because relative paths are being used and apc has to stat those
18:47 ctria joined #gluster
18:51 B21956 joined #gluster
18:59 \_pol joined #gluster
19:04 StarBeast joined #gluster
19:09 JoeJulian "No way I would put it in production like this" - such negativity... ;)
19:09 Remco It's a test setup, it's not nearly stable enough
19:11 JoeJulian <span style="voice: valleygirl">stable is so last year...</span>
19:16 semiosis http://24.media.tumblr.com/tumblr_m1vt421PFF1rpxzngo1_500.jpg
19:16 glusterbot <http://goo.gl/pT5dVt> (at 24.media.tumblr.com)
19:16 * semiosis goes to lunch
19:19 eagle1 i want to use gluster for vm, but i want to avoid striping. i'll put the "growing" thing into a gluster nfs replicated+distributed share, and i will make another volume for the vm (that will not grow over the brick size). is that correct? (gluster 3.4 with ovirt 3.3 on centos 6.4 with gluster-rdma)
19:21 JoeJulian That sounds like one solution. Another possibility would be to add new images to the vm as additional drives and maybe do something like lvm inside the vm.
19:22 eagle1 JoeJulian, thanks for the answer. it's something to know that i'm not totally in error :D
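
The second option JoeJulian mentions would look roughly like this inside the guest (device, VG and LV names are placeholders):

    pvcreate /dev/vdb                            # the newly attached image
    vgextend vg_data /dev/vdb
    lvextend -l +100%FREE /dev/vg_data/lv_data
    xfs_growfs /data                             # or resize2fs /dev/vg_data/lv_data for ext4
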
19:22 cnoffsin left #gluster
19:22 eagle1 striping is bad, right?
19:22 JoeJulian @stripe
19:22 glusterbot JoeJulian: Please see http://goo.gl/5ohqd about stripe volumes.
19:22 eagle1 i mean.. even on RAID i like to avoid striping :D
19:22 JoeJulian I'm right there with you.
19:22 eagle1 already read
19:22 eagle1 :)
19:22 eagle1 thanks
19:23 JoeJulian I'm also still concerned with the stability of the stripe translator. Just my opinion though.
19:26 eagle1 I've chosen gluster for a lot of reasons. one of them is that with replicated+distributed I'll still have access to the files. even in the worst case, with a lot of rsync and sweat, you can recover your data
19:26 Remco I always suggest not picking by features but by testing
19:27 JoeJulian There should be a good balance of both.
19:28 JoeJulian Identify your requirements and your wants. Test the software that fulfills your requirements and break ties with your wants.
19:32 eagle1 joined #gluster
19:32 zapotah joined #gluster
19:32 zapotah joined #gluster
19:32 eagle1 Remco, true, i'm testing :)
19:32 samppah kkeithley: yes, everything has been good so far.. probably going to test volume expanding tomorrow
19:34 jcsp joined #gluster
19:45 ProT-0-TypE joined #gluster
20:09 emp_ joined #gluster
20:13 ProT-0-TypE joined #gluster
20:21 B21956 left #gluster
20:27 cfeller joined #gluster
20:34 jporterfield joined #gluster
20:43 mooperd__ left #gluster
20:47 jporterfield joined #gluster
20:48 andreask joined #gluster
20:55 badone joined #gluster
21:01 tziOm joined #gluster
21:23 [o__o] left #gluster
21:24 ProT-0-TypE joined #gluster
21:25 [o__o] joined #gluster
21:26 bennyturns joined #gluster
21:27 [o__o] left #gluster
21:28 [o__o] joined #gluster
21:30 [o__o] left #gluster
21:32 [o__o] joined #gluster
22:04 ProT-0-TypE joined #gluster
22:25 ProT-0-TypE joined #gluster
22:34 ueberall joined #gluster
22:41 ueberall joined #gluster
23:09 JonnyNomad joined #gluster
23:12 jporterfield joined #gluster
23:15 ueberall joined #gluster
23:15 \_pol joined #gluster
23:27 ueberall joined #gluster
23:42 xymox joined #gluster
23:51 xymox joined #gluster
23:59 xymox joined #gluster
