
IRC log for #gluster, 2015-06-23


All times shown according to UTC.

Time Nick Message
00:02 gildub joined #gluster
00:26 akay1 does anyone have any problems with the trashcan on 3.7.2? the .trashcan folder isn't created... i tried creating one and turning the feature on but files arent going there
00:32 DV joined #gluster
00:49 nangthang joined #gluster
00:49 thangnn_ joined #gluster
01:01 mribeirodantas joined #gluster
01:25 davidself joined #gluster
01:28 arcolife joined #gluster
01:46 theron joined #gluster
02:10 DV__ joined #gluster
02:23 TheSeven hm, just tried this out in a real world scenario
02:23 TheSeven created replica1 volume, put some VMs on it
02:23 TheSeven increased the replica count, bang, seconds later the VMs are down
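For context, going from replica 1 to replica 3 on a live volume is done with an add-brick call of roughly the shape below (a minimal sketch with hypothetical volume and brick names, not TheSeven's actual setup). The new bricks start out empty and self-heal has to copy every file onto them, so running VMs can hit heavy I/O contention right after the change:

    # hypothetical names; "vmstore" was originally created with a single brick (replica 1)
    gluster volume add-brick vmstore replica 3 \
        server2:/bricks/vmstore server3:/bricks/vmstore
    # the added bricks stay empty until self-heal copies the data across
    gluster volume heal vmstore full
    gluster volume heal vmstore info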
02:28 victori joined #gluster
02:36 DV__ joined #gluster
02:36 Peppaq joined #gluster
02:38 DV joined #gluster
02:39 nangthang joined #gluster
02:50 necrogami joined #gluster
02:50 arcolife joined #gluster
03:01 overclk joined #gluster
03:07 glusterbot News from newglusterbugs: [Bug 1234679] Disperse volume : 'ls -ltrh' doesn't list correct size of the files every time <https://bugzilla.redhat.com/show_bug.cgi?id=1234679>
03:36 atinm joined #gluster
03:40 bharata-rao joined #gluster
03:52 TheSeven joined #gluster
03:58 natarej_ I've been looking into implementing gluster, however the small file write performance is a concern. (this in particular http://www.gluster.org/wp-content/uploads/2014/05/smf-scaling.png )
03:58 natarej_ Is this still a major issue after the 3.7 small file performance optimizations?
03:58 itisravi joined #gluster
04:00 gem joined #gluster
04:03 shubhendu joined #gluster
04:06 SOLDIERz joined #gluster
04:06 kdhananjay joined #gluster
04:11 victori joined #gluster
04:24 sakshi joined #gluster
04:27 RameshN joined #gluster
04:27 poornimag joined #gluster
04:34 dgbaley joined #gluster
04:36 dgbaley Hello. Is there a way to create a volume without having all my peers available? I'm missing 2/8 nodes for a day or so, but would like to move on
04:40 anrao joined #gluster
04:41 nbalacha joined #gluster
04:48 ndarshan joined #gluster
04:50 arao joined #gluster
04:50 zeittunnel joined #gluster
05:03 jiffin joined #gluster
05:09 anil joined #gluster
05:09 pppp joined #gluster
05:11 schandra joined #gluster
05:11 spandit joined #gluster
05:17 hgowtham joined #gluster
05:18 glusterbot News from resolvedglusterbugs: [Bug 1234692] strict-O_DIRECT option implementation is wrong <https://bugzilla.redhat.com/show_bug.cgi?id=1234692>
05:20 vimal joined #gluster
05:21 ashiq joined #gluster
05:22 deepakcs joined #gluster
05:31 overclk joined #gluster
05:32 Bhaskarakiran joined #gluster
05:32 d-fence joined #gluster
05:35 atalur joined #gluster
05:36 soumya joined #gluster
05:40 arao joined #gluster
05:42 raghu joined #gluster
05:49 kaushal_ joined #gluster
05:50 gem_ joined #gluster
05:54 karnan joined #gluster
06:01 karnan joined #gluster
06:07 badone joined #gluster
06:11 gem joined #gluster
06:12 jtux joined #gluster
06:16 arao joined #gluster
06:17 chirino joined #gluster
06:24 Bhaskarakiran joined #gluster
06:30 gem joined #gluster
06:32 gildub joined #gluster
06:39 saurabh_ joined #gluster
06:43 maveric_amitc_ joined #gluster
06:46 nangthang joined #gluster
06:49 anrao joined #gluster
06:52 arao joined #gluster
06:54 abrt joined #gluster
07:04 ppai joined #gluster
07:13 elico joined #gluster
07:23 kotreshhr joined #gluster
07:23 akay1 @natarej i'm noticing great small file performance with 3.7.2 and samba-vfs
07:24 nsoffer joined #gluster
07:24 akay1 by great i mean much better than before (ie. usable) compared to what it was (abysmal)
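For anyone wanting to try the same setup, Samba's GlusterFS VFS module lets smbd talk to the volume over libgfapi instead of a FUSE mount; a rough sketch, with made-up share and volume names (on EL systems the module usually ships as samba-vfs-glusterfs):

    # names below are examples only
    cat >> /etc/samba/smb.conf <<'EOF'
    [projects]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        kernel share modes = no
    EOF
    systemctl restart smb    # or "service smb restart" on non-systemd hosts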
07:25 natarej what sort of workload are you seeing this on?
07:26 natarej akay1, i actually felt a physical sense of relief reading that
07:26 akay1 basically a file server... half of my files are less than 1kb and i might have thousands in any one folder. i have about 15 vm's that are constantly accessing these files
07:27 natarej oh dear
07:27 akay1 haha yeah ive been sick for over a year trying to get decent performance
07:28 natarej i started to look at deploying ceph next week in the lab.  i really don't want to do that.
07:28 natarej i can't really simulate the workloads i want to yet though
07:28 akay1 i was at that point too... but im glad gluster is working well now
07:28 natarej but i can do as close as i can
07:29 akay1 whats your workload?
07:30 zeittunnel joined #gluster
07:33 natarej VPS host
07:34 natarej everything and anything
07:34 poornimag joined #gluster
07:35 anrao joined #gluster
07:40 soumya joined #gluster
07:46 al joined #gluster
07:51 natarej for the lab i've got three machines on 10gbe with a total of 26 disks, so i plan on making a host for each disk and starting from there
08:00 akay1 cool, how big are the disks?
08:10 natarej 240gb to 4tb
08:13 natarej went on a scavenger hunt for disks.  they're from anything we could find.
08:13 natarej even an SSD from a zenbook
08:16 rjoseph joined #gluster
08:20 akay1 oh nice :) thatll get you started in the lab at least
08:23 atalur joined #gluster
08:26 arao joined #gluster
08:30 gem joined #gluster
08:34 ctria joined #gluster
08:38 glusterbot News from newglusterbugs: [Bug 1234768] Disperse volume : NFS mount hung with plain IO <https://bugzilla.redhat.com/show_bug.cgi?id=1234768>
08:42 RameshN_ joined #gluster
08:43 sysconfig joined #gluster
08:54 jcastill1 joined #gluster
08:56 Slashman joined #gluster
08:57 nsoffer joined #gluster
08:59 gem joined #gluster
09:05 jcastillo joined #gluster
09:07 sysconfig joined #gluster
09:10 arao joined #gluster
09:13 sysconfig_ joined #gluster
09:37 The_Ball joined #gluster
09:38 The_Ball If you have a volume which includes a replica 2, is it correct that any expansion will always require a multiple of two more bricks?
09:41 badone joined #gluster
09:42 arao joined #gluster
09:43 raghu joined #gluster
09:44 DV joined #gluster
09:44 poornimag joined #gluster
09:45 DV__ joined #gluster
09:46 gem joined #gluster
09:49 Pupeno joined #gluster
09:51 arao joined #gluster
09:51 ndevos The_Ball: yes, that is correct
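Put differently, a replica 2 volume grows one replica set (one pair of bricks) at a time; a sketch with hypothetical names:

    # each added pair forms a new replica set
    gluster volume add-brick myvol \
        server3:/bricks/myvol server4:/bricks/myvol
    # then spread existing data onto the new bricks
    gluster volume rebalance myvol start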
09:59 jmarley joined #gluster
10:00 k-ma joined #gluster
10:07 fabiand joined #gluster
10:07 fabiand hey
10:08 fabiand Can somebody tell me if the same problem is experienced:
10:08 fabiand 04:48:56,502 INFO program: Installing : glusterfs-server-3.7.2-1.el7.x86_64                        27/28Error unpacking rpm package glusterfs-server-3.7.2-1.el7.x86_64
10:08 fabiand 04:48:56,504 INFO program:
10:08 fabiand 04:48:56,505 INFO program: error: unpacking of archive failed on file /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py;5588e529: cpio: open
10:08 fabiand 04:48:56,505 INFO program: Installing : glusterfs-geo-replication-3.7.2-1.el7.x86_64               28/28
10:08 fabiand 04:48:56,505 INFO program: error: glusterfs-server-3.7.2-1.el7.x86_64: install failed
10:08 fabiand That happens to me for a few days (2?) now ..
10:08 NTQ joined #gluster
10:10 poornimag joined #gluster
10:13 hchiramm fabiand, true..
10:13 kkeithley_ joined #gluster
10:13 hchiramm fabiand, we are checking on that error
10:13 fabiand hchiramm, huh - good news :)
10:13 ndevos fabiand: you can workaround that with "mkdir -p /var/lib/glusterd/hooks/1/delete/post" and re-install
10:13 fabiand ... that I am not the only one seeing it ...
10:14 fabiand ndevos, right, thanks - But my automation is rather waiting for the pkg fix :)
10:14 fabiand will it be fixed in an update?
10:14 fabiand Sooner, rather than later?
10:14 ndevos yeah, kkeithley_ built corrected packages yesterday, I think
10:14 kokopelli joined #gluster
10:14 ndevos they should become available really soon
10:15 hchiramm ndevos, the latest packages are 3.7.2-2
10:15 kokopelli hi
10:15 glusterbot kokopelli: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:15 hchiramm it also has the same issue i believe
10:15 fabiand oh yes, I see - -2 is there /me rebuilds
10:15 fabiand oh - has it? darn ..
10:15 fabiand Let's see
10:16 hchiramm fabiand, please cross check
10:16 hchiramm may be I missed something
10:16 fabiand hchiramm, doing so right now
10:16 eljrax There's a conversation in #gluster-dev about this right now
10:17 * fabiand moves ..
10:17 fabiand left #gluster
10:17 kokopelli have you ever seen that; client-0: remote operation failed: No such file or directory. Path: <gfid:b98058fb-c85f-402b-9a34-3335d780a3d9>
10:17 kkeithley_ The work-around is to manually `mkdir /var/lib/glusterd/hooks/1/delete/post` before install/upgrade
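Spelled out, the workaround is just to create the missing hook directory and then re-run the failed transaction (the yum line is illustrative; use whatever package operation failed for you):

    mkdir -p /var/lib/glusterd/hooks/1/delete/post
    yum install glusterfs-server    # or re-run the upgrade that failed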
10:20 atinm joined #gluster
10:25 ndevos kokopelli: yeah, that can happen, even under normal usage, it does not need to be an error
10:27 ndevos kokopelli: that "no such file or directory" is similar to "stale filehandle", some details in https://bugzilla.redhat.com/show_bug.cgi?id=1228731#c2
10:27 glusterbot Bug 1228731: medium, high, ---, ndevos, MODIFIED , nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful
10:27 kdhananjay joined #gluster
10:28 Dw_Sn joined #gluster
10:28 Dw_Sn Hello, is 3.7.x included the server to server replications ? or still replication is done from the client ?
10:29 ccha joined #gluster
10:32 soumya_ joined #gluster
10:39 meghanam joined #gluster
10:39 glusterbot News from newglusterbugs: [Bug 1234819] glusterd: glusterd crashes while importing a USS enabled volume which is already started <https://bugzilla.redhat.com/show_bug.cgi?id=1234819>
10:41 Suckervi1le joined #gluster
10:41 kkeithley_ Dw_Sn: New Style Replication (NSR) is not in 3.7.
10:42 kotreshhr1 joined #gluster
10:42 Dw_Sn kkeithley_: any idea when ? i think it was supposed to be in 3.6
10:43 nbalacha joined #gluster
10:44 kkeithley_ Dw_Sn: you can ask jdarcy when he signs on here or in #gluster-dev. That's his baby.
10:44 Suckervi1le hey all. I set up a replicated 2-node cluster on VMs for testing purposes according to the quick start guide, created the 100 "copy-test" files - they were synced. When i deleted them off the first node they weren't deleted off the second. Why is that? how can i diagnose this problem? i didn't find anything in the docs.
10:46 Suckervi1le also, touching a file on the first node later, it wasn't synced to the second node. According to volume info and peer status the nodes are connected correctly
10:46 Dw_Sn kkeithley_: okay thank you :)
10:46 nbalacha joined #gluster
10:51 kkeithley1 joined #gluster
10:54 DV__ joined #gluster
11:01 arao joined #gluster
11:04 sysconfig joined #gluster
11:08 DV joined #gluster
11:09 ira joined #gluster
11:16 nsoffer joined #gluster
11:17 kokopelli joined #gluster
11:19 kokopellifd joined #gluster
11:21 kokopellifd ndevos : i have too many log lines like that. I've got 3 nodes. When I looked at the related links I saw that just one node has this file, the other nodes don't have it.
11:22 ndevos kokopellifd: what is the full line in the log, or at least the beginning of it?
11:23 kokopellifd ndevos ; [2015-06-23 10:11:35.210545] W [client-rpc-fops.c:2774:client3_3_lookup_cbk] 0-x1-client-0: remote operation failed: No such file or directory. Path: <gfid:b98058fb-c85f-402b-9a34-3335d780a3d9> (b98058fb-c85f-402b-9a34-3335d780a3d9)
11:24 kokopellifd ndevos; all servers are in the loop for self-healing
11:25 kokopellifd ndevos; when i looked with getfattr i got the following results;
11:25 kokopellifd trusted.afr.x1-client-0=0x000000050000000200000000
11:25 kokopellifd trusted.afr.x1-client-1=0x000000000000000000000000
11:25 kokopellifd trusted.gfid=0xb98058fbc85f402b9a343335d780a3d9
11:26 kokopellifd but i don't see the gfid on the other nodes
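For reference, checks like this are done directly against the brick paths on each server: the gfid maps to a hard link under the brick's .glusterfs directory, and the AFR changelog xattrs come from getfattr. A sketch with illustrative paths:

    # run on each node, using that node's brick path (example values)
    BRICK=/bricks/x1
    GFID=b98058fb-c85f-402b-9a34-3335d780a3d9
    # the gfid hard link lives at .glusterfs/<first two hex chars>/<next two>/<gfid>
    ls -l $BRICK/.glusterfs/b9/80/$GFID
    # dump trusted.* xattrs (gfid + afr changelogs) for a suspect file; needs root
    getfattr -d -m . -e hex $BRICK/path/to/file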
11:27 DV__ joined #gluster
11:28 elico joined #gluster
11:28 ndevos kokopellifd: what version of glusterfs is that? I think this patch from 2013 should prevent those logs http://review.gluster.org/6318
11:30 anrao joined #gluster
11:30 kokopelli ndevos ; 3.5.2
11:30 R0ok_ joined #gluster
11:33 Suckervi1le joined #gluster
11:34 ndevos kokopelli: oh, maybe the if-check in client-rpc-fops.c is not really correct... could you file a bug so that we can look into it?
11:34 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
11:34 atinm Gluster Community Bug Triage meeting today at 12:00 UTC
11:35 LebedevRI joined #gluster
11:36 rwheeler joined #gluster
11:36 spalai joined #gluster
11:36 elico joined #gluster
11:37 arao joined #gluster
11:40 atinm bug triaging to happen @ #gluster-meeting
11:41 kokopelli ndevos; i will try to upgrade to 3.7
11:41 kokopelli maybe it will be resolved
11:41 kokopelli thanks
11:42 XpineX joined #gluster
11:50 Ulrar joined #gluster
11:50 kotreshhr joined #gluster
11:50 kanagaraj joined #gluster
11:51 Ulrar Hi, I'm trying to increase the performance.cache-size on a volume but I always get Set volume unsuccessful, and I can't figure out why
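For reference, the option is set like this (volume name and size are examples; when the CLI answers "Set volume unsuccessful", the glusterd log under /var/log/glusterfs/ on the rejecting node usually says why):

    gluster volume set myvol performance.cache-size 256MB
    gluster volume info myvol    # the option should show up under "Options Reconfigured"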
11:56 elico joined #gluster
11:56 soumya_ joined #gluster
12:01 zeittunnel joined #gluster
12:02 itisravi joined #gluster
12:05 s19n joined #gluster
12:09 glusterbot News from newglusterbugs: [Bug 1231171] [RFE]- How to find total number of glusterfs client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231171>
12:10 jtux joined #gluster
12:10 gildub joined #gluster
12:13 TvL2386 joined #gluster
12:14 SpComb^ joined #gluster
12:16 SpComb^ https://www.mail-archive.com/users@ovirt.org/msg25215.html oops, ran out of disk space on /var/lib/glusterd some time ago, and glusterd wouldn't boot since it had corrupted glusterd.info and peers/* as empty files... the mail doesn't make it clear, but is this a bug that's been fixed?
12:17 SpComb^ I was able to recover by recreating the affected files by hand using the other peers, but it sounds like whatever glusterd is doing to update those files isn't atomic in the face of write errors like ENOSPC :P
12:19 glusterbot News from resolvedglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
12:19 glusterbot News from resolvedglusterbugs: [Bug 1234296] Quota: Porting logging messages to new logging framework <https://bugzilla.redhat.com/show_bug.cgi?id=1234296>
12:24 SpComb^ https://github.com/gluster/glusterfs/blob/02ccab9257ab36af281a4a610684a913dfa32d0f/libglusterfs/src/store.c#L353 <-- does fflush() report errors via feof()?
12:24 glusterbot SpComb^: <'s karma is now -14
12:24 SpComb^ that looks suspicious to me, the `ret = fflush(...)` is completely ignored
12:25 DV__ joined #gluster
12:29 arao joined #gluster
12:29 Ulrar The performance is really horrible, it's taking 40 seconds to generate a page
12:30 Ulrar Can't figure out a way to reduce that
12:37 SpComb^ yup, that code is broken, let me file a bug report..
12:37 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:39 glusterbot News from newglusterbugs: [Bug 1231175] [RFE]- How to find total number of glusterfs samba client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231175>
12:39 glusterbot News from newglusterbugs: [Bug 1231207] [RFE]- How to find total number of glusterfs fuse client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231207>
12:39 glusterbot News from newglusterbugs: [Bug 1234873] glusterfs-resource-agents - volume - voldir is not properly set <https://bugzilla.redhat.com/show_bug.cgi?id=1234873>
12:39 glusterbot News from newglusterbugs: [Bug 1234877] Samba crashes with 3.7.2 and VFS module <https://bugzilla.redhat.com/show_bug.cgi?id=1234877>
12:49 glusterbot News from resolvedglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1194640>
12:53 wkf joined #gluster
12:55 ndevos Ulrar: have you seen the ,,(php) advise?
12:55 glusterbot Ulrar: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
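As a concrete form of option (#2), a FUSE mount with those timeouts raised could look like the line below; the 600-second values, host, volume and mount point are arbitrary examples, assuming a mount.glusterfs recent enough to pass these options through to the client:

    mount -t glusterfs \
        -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
        server1:/webvol /var/www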
12:55 dgbaley joined #gluster
12:56 Ulrar ndevos: I did, most of that is assuming you actually can touch the code, unfortunately
12:56 Ulrar which I can't
12:58 smohan joined #gluster
12:59 ndevos Ulrar: depending on the access pattern, you could gain performance improvements by mounting over nfs instead of fuse
13:00 Ulrar I tried, looks like I gained 20 seconds but it's still pretty horrible. Noticed I was using a very very old version though, I'm trying to upgrade to 3.6.3
13:00 bennyturns joined #gluster
13:01 Twistedgrim joined #gluster
13:02 ndevos a newer version might improve things, but I doubt it will change much, the stat() calls that are done are very expensive, and that is not going to change
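For completeness, the NFS route means mounting the volume from gluster's built-in NFS server, which only speaks NFSv3 over TCP; host, volume and mount point below are placeholders:

    mount -t nfs -o vers=3,mountproto=tcp server1:/webvol /var/www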
13:03 theron joined #gluster
13:05 * SpComb^ https://bugzilla.redhat.com/show_bug.cgi?id=1234891
13:05 glusterbot Bug 1234891: unspecified, unspecified, ---, bugs, NEW , gf_store_save_value() fflush() error-checking bug, leading to corruption of glusterd.info when filesystem is full
13:07 arao joined #gluster
13:10 glusterbot News from newglusterbugs: [Bug 1234891] gf_store_save_value() fflush() error-checking bug, leading to corruption of glusterd.info when filesystem is full <https://bugzilla.redhat.com/show_bug.cgi?id=1234891>
13:10 Saravana joined #gluster
13:10 DV__ joined #gluster
13:11 dusmant joined #gluster
13:22 pppp joined #gluster
13:23 kanagaraj joined #gluster
13:25 abrt joined #gluster
13:25 georgeh-LT2 joined #gluster
13:26 kenansulayman joined #gluster
13:28 kenansul- joined #gluster
13:29 arao joined #gluster
13:29 aaronott joined #gluster
13:30 squizzi joined #gluster
13:30 dgandhi joined #gluster
13:31 arcolife joined #gluster
13:33 lexi2 joined #gluster
13:33 firemanxbr joined #gluster
13:38 social joined #gluster
13:48 _Bryan_ joined #gluster
13:57 hamiller joined #gluster
14:02 jiffin joined #gluster
14:07 squizzi joined #gluster
14:09 Ulrar So for mostly reads of very small files, should I use the direct io mode or not ?
14:09 Ulrar Doesn't look like it's changing much
14:10 squizzi joined #gluster
14:23 victori joined #gluster
14:26 arao joined #gluster
14:32 apahim joined #gluster
14:34 victori joined #gluster
14:37 RameshN_ joined #gluster
14:46 nsoffer joined #gluster
14:50 zeittunnel joined #gluster
14:53 georgeh-LT2 joined #gluster
14:56 spalai left #gluster
15:01 zeittunnel joined #gluster
15:03 smohan joined #gluster
15:05 bennyturns joined #gluster
15:11 gem joined #gluster
15:12 arao joined #gluster
15:15 hamiller joined #gluster
15:17 Ulrar ndevos: Okay, version 3.6.3 mounted on NFS works like a charm
15:17 Ulrar The version in the debian wheezy repo wasn't working well, even mounted on NFS
15:18 Ulrar I get < 3 seconds generation on each side, which is roughly what we had without glusterfs, so perfect
15:18 Ulrar One of the servers is even faster, for some reason
15:19 ndevos Ulrar: ah, nice to hear
15:24 arao joined #gluster
15:25 elico joined #gluster
15:34 Suckervi1le ulrar: you may want to upgrade to jessie
15:36 georgeh-LT2 joined #gluster
15:42 zerick joined #gluster
15:43 kotreshhr left #gluster
15:44 chirino joined #gluster
15:45 nangthang joined #gluster
15:46 s19n left #gluster
15:55 cholcombe joined #gluster
15:55 arcolife joined #gluster
16:01 daMaestro joined #gluster
16:24 soumya_ joined #gluster
16:28 Ulrar Suckervi1le: Not really, the servers aren't ours and I don't have the time for that now. And I read jessie is using systemd, I'd rather not :)
16:29 arao joined #gluster
16:31 georgeh-LT2 joined #gluster
16:32 bennyturns joined #gluster
16:48 arao joined #gluster
16:49 maveric_amitc_ joined #gluster
16:51 SOLDIERz joined #gluster
17:05 Rapture joined #gluster
17:11 chirino joined #gluster
17:12 victori joined #gluster
17:13 victori joined #gluster
17:36 apahim_ joined #gluster
17:36 squizzi joined #gluster
17:37 squizzi joined #gluster
17:55 jiffin joined #gluster
17:59 rotbeard joined #gluster
18:00 firemanxbr_ joined #gluster
18:04 JamesToo joined #gluster
18:09 woakes070048 joined #gluster
18:10 calavera joined #gluster
18:10 calavera Hi, I just found this page about a rest-api and I was wondering about its status http://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%203.7/rest-api/
18:11 glusterbot News from newglusterbugs: [Bug 1235007] Allow only lookup and delete operation on file that is in split-brain <https://bugzilla.redhat.com/show_bug.cgi?id=1235007>
18:11 calavera I was also wondering if anyone knows where the code is, I might be able to help with that
18:12 shyam joined #gluster
18:13 calavera oh I found it https://github.com/aravindavk/glusterfs-rest :)
18:26 dusmant joined #gluster
18:32 calavera joined #gluster
18:43 ChrisHolcombe joined #gluster
18:43 ira joined #gluster
18:55 nsoffer joined #gluster
18:57 Rapture joined #gluster
19:00 sage joined #gluster
19:14 hagarth joined #gluster
19:32 NTQ joined #gluster
19:39 TheSeven dammit
19:39 TheSeven adding bricks to increase the replica count is a surefire way to kill a volume
19:39 TheSeven a few seconds after I go from replica1 to replica3, all VMs on that volume die with storage errors
19:43 marcoceppi joined #gluster
19:46 TheSeven and are not resumable
19:46 TheSeven in at least one case the disk image also got corrupted sufficiently to force me to completely reinstall that vm
19:58 calavera joined #gluster
20:00 marcoceppi joined #gluster
20:05 lanning ya, that was probably an I/O issue.  bottlenecked at the client (VM) side.
20:06 lanning what are you using to access the disk image? qemu-gluster or fuse or nfs?
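For reference, "qemu-gluster" means qemu's native gluster block driver (libgfapi), which bypasses FUSE entirely; a rough example with made-up host, volume and image names, assuming a qemu built with gluster support (1.3 or later):

    # create an image directly on the volume and boot a VM from it
    qemu-img create -f qcow2 gluster://server1/vmstore/test.qcow2 20G
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://server1/vmstore/test.qcow2,if=virtio,format=qcow2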
20:09 bennyturns joined #gluster
20:16 squizzi joined #gluster
20:19 marcoceppi joined #gluster
20:37 elico left #gluster
20:39 TheCthulhu joined #gluster
20:41 calavera joined #gluster
20:42 cholcombe joined #gluster
21:01 badone joined #gluster
21:38 PatNarciso joined #gluster
21:39 PatNarciso right.  so.  what irc clients do you'all use?
21:39 * PatNarciso misses mIRC.  might fire up a VPS just for it.  hmmm.
21:47 Ulrar weechat
21:49 jmarley joined #gluster
21:57 wkf joined #gluster
21:59 Pupeno_ joined #gluster
22:16 PatNarciso Ulrar, thanks!  just installed it.  gonna put it in use tomorrow.  reminds me of bitchx so-far.
22:18 cholcombe joined #gluster
22:19 aaronott joined #gluster
22:20 calavera joined #gluster
22:24 cyberbootje joined #gluster
22:31 Ulrar :)
22:35 Rapture joined #gluster
22:44 prg3 joined #gluster
22:52 wkf joined #gluster
23:14 calavera joined #gluster
23:15 gildub joined #gluster
23:45 badone_ joined #gluster
23:45 woakes070048 joined #gluster
23:46 calavera joined #gluster
23:59 amitc joined #gluster
