
IRC log for #gluster, 2013-11-30


All times shown according to UTC.

Time Nick Message
00:08 bgpepi joined #gluster
00:22 premera joined #gluster
00:28 brimstone joined #gluster
00:34 glusterbot joined #gluster
00:38 glusterbot joined #gluster
01:18 glusterbot joined #gluster
01:29 _polto_ joined #gluster
01:51 StarBeast joined #gluster
02:01 harish joined #gluster
02:08 harish_ joined #gluster
02:20 calum_ joined #gluster
02:51 jag3773 joined #gluster
02:52 StarBeast joined #gluster
02:53 brimstone joined #gluster
02:54 plarsen joined #gluster
02:55 davidbierce joined #gluster
03:33 psyl0n joined #gluster
03:41 StarBeast joined #gluster
03:50 glusterbot joined #gluster
03:57 bala joined #gluster
04:01 Amanda joined #gluster
04:17 bala joined #gluster
04:42 StarBeast joined #gluster
04:51 shyam joined #gluster
04:54 bala1 joined #gluster
05:28 MiteshShah joined #gluster
06:06 davinder4 joined #gluster
06:12 dylan_ joined #gluster
06:13 StarBeast joined #gluster
06:20 _pol joined #gluster
07:13 krypto joined #gluster
07:44 Rio_S2 joined #gluster
07:48 ctria joined #gluster
07:51 ekuric joined #gluster
07:56 davinder4 joined #gluster
08:06 _polto_ joined #gluster
08:06 _polto_ joined #gluster
08:47 davinder4 joined #gluster
09:00 RedShift joined #gluster
09:05 cyberbootje does anyone have a gluster setup with more than 2TB of virtual machine data?
09:06 ngoswami joined #gluster
09:15 StarBeast joined #gluster
09:47 dylan_ joined #gluster
10:05 psyl0n joined #gluster
10:12 psyl0n joined #gluster
10:55 dylan_ joined #gluster
11:35 davidbierce cyberbootje:  In a single image, no; in total VM images, yes, about 40TB between several clusters.
11:38 dylan_ joined #gluster
11:51 samppah davidbierce: nice.. how is it working? using fuse or libgfapi?
11:53 davidbierce Fairly solid.  Mix of Fuse and NFS.
11:53 Rio_S2_ joined #gluster
11:53 samppah ah, okay
11:54 Rio_S2 joined #gluster
11:54 samppah what about performance?
11:54 ekuric joined #gluster
11:58 samppah i'm working on replacing a drbd+iscsi solution with glusterfs over fuse..
11:58 davidbierce Performance is reasonable.  For our workloads it's not slow, it's acceptable. :)  Fuse adds a bit of overhead and a touch of cpu latency on the clients, but is stable.  In our setup, running a few 1000 iops and peak throughput of about 400MB/sec.  Scales pretty well, but haven't gone beyond 6 servers in any configuration yet.
11:59 samppah latency seems a bit better on drbd+iscsi, and there are some strange issues with fuse which i'm not sure are "intended" or if i'm doing something wrong
11:59 samppah but i hope that libgfapi is fully available soon
11:59 samppah that sounds good :)
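[The "glusterfs over fuse" setup samppah describes boils down to a native-client mount on each hypervisor; a minimal sketch as an /etc/fstab line, with hypothetical server and volume names:]

```
gluster1.example.com:/vmvol  /var/lib/libvirt/images  glusterfs  defaults,_netdev  0 0
```

[`_netdev` defers the mount until networking is up, which matters since the fuse client has to reach a gluster server at boot.]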
12:00 davidbierce libgfapi using the latest libvirt/qemu has worked well for testing
12:01 davidbierce Need to work on better integration with libgfapi in cloudstack before I'd open the floodgates on it for VM usage.
12:01 samppah yeah, too bad it's missing support for snapshots through libvirt and hence also missing from rhev/ovirt for now :(
12:04 davidbierce Yeah, snapshots, another reason to use fuse.  Really depends on the workload whether drbd would be better.  Lots of libgfapi support is kind of exciting though
12:05 davidbierce Will not miss a core's worth of load dedicated to context-switch overhead from fuse :)
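[The libgfapi path davidbierce mentions, where qemu talks to gluster directly instead of going through a fuse mount, is configured in recent libvirt as a network-type disk; a hedged sketch, with hostnames and volume/image names hypothetical:]

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='vmvol/guest1.qcow2'>
    <host name='gluster1.example.com' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```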
12:23 verdurin_ joined #gluster
12:24 RameshN joined #gluster
12:33 davidbierce joined #gluster
13:30 diegows joined #gluster
13:30 diegows_ joined #gluster
13:32 davidbierce joined #gluster
14:12 psyl0n joined #gluster
14:31 ababu joined #gluster
14:57 dkorzhevin joined #gluster
15:20 davidbierce joined #gluster
15:43 dbruhn joined #gluster
16:22 _pol joined #gluster
16:28 diegows joined #gluster
16:39 ricky-ti1 joined #gluster
16:47 harish joined #gluster
16:50 gmcwhistler joined #gluster
17:08 _pol joined #gluster
17:58 failshell joined #gluster
18:00 davinder4 joined #gluster
18:02 tqrst joined #gluster
18:05 tqrst does anyone *not* experience memory leaks when rebalancing? I've had this problem since 3.2.x, and 3.4.1 is no exception.
18:06 _polto_ joined #gluster
18:06 tqrst I started rebalancing two days ago and I'm already at ~6G memory usage on some of my servers
18:22 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
18:25 dneary joined #gluster
18:25 dneary joined #gluster
18:52 bgpepi joined #gluster
18:52 glusterbot New news from newglusterbugs: [Bug 985957] Rebalance memory leak <http://goo.gl/9c7EQ>
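[The leak tqrst reports is easiest to quantify by watching the resident set size of the rebalance process on each server; a minimal sketch reading /proc (Linux only), demonstrated here on our own pid. Locating the actual rebalance pid, e.g. with `pgrep -f rebalance`, is assumed, not shown:]

```python
import os
import re

def rss_kib(pid: int) -> int:
    """Return the resident set size of `pid` in KiB.

    Parses the VmRSS line of /proc/<pid>/status (Linux only).
    """
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(re.search(r"(\d+)", line).group(1))
    raise ValueError(f"no VmRSS entry for pid {pid}")

# Demo on our own pid; in practice, poll the rebalance process's pid
# periodically and alert when RSS keeps growing across samples.
print(rss_kib(os.getpid()))
```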
18:55 _polto_ joined #gluster
18:55 _polto_ joined #gluster
19:05 failshell joined #gluster
19:07 dylan_ joined #gluster
19:14 psyl0n joined #gluster
19:14 achuz joined #gluster
19:16 failshell joined #gluster
19:16 achuz joined #gluster
19:18 davidbierce joined #gluster
20:02 calum_ joined #gluster
20:26 psyl0n joined #gluster
20:29 failshell joined #gluster
20:34 jag3773 joined #gluster
20:36 _polto_ joined #gluster
20:46 pdrakeweb joined #gluster
20:59 cogsu joined #gluster
21:00 psyl0n joined #gluster
21:10 failshell joined #gluster
21:28 failshell joined #gluster
21:43 failshell joined #gluster
22:34 dneary joined #gluster
23:04 failshell joined #gluster
23:07 sticky_afk joined #gluster
23:07 stickyboy joined #gluster
23:10 davidbierce joined #gluster
23:42 _polto_ joined #gluster
23:53 _pol joined #gluster
23:59 gluslog joined #gluster
