IRC log for #gluster, 2013-03-16

All times shown according to UTC.

Time Nick Message
00:00 jdarcy joined #gluster
00:07 Humble joined #gluster
00:43 Humble joined #gluster
01:06 redsolar joined #gluster
01:09 jdarcy joined #gluster
01:10 dustint joined #gluster
01:10 kevein joined #gluster
01:22 _pol joined #gluster
01:53 yinyin joined #gluster
02:19 glusterbot New news from newglusterbugs: [Bug 906238] glusterfs client hang when parallel operate the same dir <http://goo.gl/sPdjr> || [Bug 911361] Bricks grow when other bricks heal <http://goo.gl/oSfTQ> || [Bug 922292] writes fail with invalid argument <http://goo.gl/kpZwI>
02:26 Humble joined #gluster
02:27 yinyin joined #gluster
02:41 mricon left #gluster
02:58 bala joined #gluster
03:13 tryggvil__ joined #gluster
03:25 __Bryan__ joined #gluster
03:28 Ryan_Lane joined #gluster
03:29 yinyin joined #gluster
03:48 Ryan_Lane joined #gluster
03:53 yinyin joined #gluster
03:56 _pol joined #gluster
04:21 timothy joined #gluster
05:08 sahina joined #gluster
05:18 jruggiero joined #gluster
05:18 Rydekull joined #gluster
05:50 shylesh joined #gluster
05:50 msmith_ joined #gluster
05:55 _pol joined #gluster
06:37 harshpb joined #gluster
06:40 pranithk joined #gluster
06:43 yinyin joined #gluster
07:05 ekuric joined #gluster
07:07 bulde joined #gluster
07:18 harshpb joined #gluster
07:31 harshpb joined #gluster
07:31 yinyin joined #gluster
07:36 Ryan_Lane joined #gluster
07:48 hagarth joined #gluster
07:57 harshpb joined #gluster
08:06 hagarth joined #gluster
08:10 soukihei joined #gluster
08:51 yinyin joined #gluster
09:52 yinyin joined #gluster
10:52 yinyin joined #gluster
11:15 eikin joined #gluster
11:49 hateya joined #gluster
11:52 yinyin joined #gluster
12:05 sahina joined #gluster
12:36 tryggvil__ joined #gluster
12:40 bala1 joined #gluster
12:52 jdarcy joined #gluster
12:53 yinyin joined #gluster
12:59 disarone joined #gluster
13:24 shylesh joined #gluster
13:26 manik joined #gluster
13:32 jdarcy joined #gluster
13:53 yinyin joined #gluster
14:08 bala1 joined #gluster
14:28 mweichert joined #gluster
14:30 mweichert hey guys! I'm trying to find a way of achieving an active-active document repository (small files, 500MB total) between my home network and a remote network. Would a gluster replicated volume be suitable for this?
14:32 mweichert the replication can be asynchronous - as long as replication is up to date within a 15 minute window
14:32 mweichert but I need write-access on both sides
14:32 mweichert ideally, I'd like to minimize the bandwidth used
14:36 mweichert I think a replicated volume (versus geo-replicated) would be ideal if I could somehow pause and resume replication (perhaps using cron, so that I could sync every 15 minutes)
14:50 satheesh joined #gluster
14:53 ProT-0-TypE joined #gluster
14:54 yinyin_ joined #gluster
14:55 timothy joined #gluster
14:56 NuxRo mweichert: gluster looks like overkill for that, i would look at tools such as Unison
14:59 mweichert NuxRo: thanks for the reply. Unison doesn't seem to be maintained anymore
15:01 mweichert NuxRo: do you know if it's possible to pause replicated volumes? I suppose one way would be to write a script which takes the NIC down, brings it up every 15 minutes, monitors the volume to ensure proper healing, and then takes the NIC down again
15:07 disarone_ joined #gluster
15:08 satheesh1 joined #gluster
15:08 cyberbootje joined #gluster
15:09 NuxRo mweichert: looks like asking for trouble to me :-)
15:09 NuxRo i am successfully using unison for some syncs, no probs
15:09 NuxRo there is one in EPEL
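[A cron-driven Unison setup along the lines NuxRo suggests might look like the following sketch; the profile name, paths, and remote host are hypothetical, and this assumes Unison is installed on both ends with matching versions.]

```shell
# ~/.unison/docs.prf -- hypothetical Unison profile for a two-way document sync
#   root = /home/user/docs
#   root = ssh://remote.example.com//srv/docs
#   batch = true          # never prompt; skip conflicting paths instead
#   prefer = newer        # on conflict, keep the more recently modified copy

# Crontab entry: run the bidirectional sync every 15 minutes
# (unison only transfers deltas, which keeps bandwidth low)
*/15 * * * * /usr/bin/unison -batch docs >> /var/log/unison-docs.log 2>&1
```

Note that `prefer = newer` trades safety for automation: genuinely conflicting edits on both sides within one window are resolved by timestamp rather than flagged.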
15:12 cyberbootje joined #gluster
15:15 mweichert ok
15:23 ProT-0-TypE sometimes I get "No space left on device" even though the volume has free space (but one node is full)
15:24 ProT-0-TypE is there a way to prevent this situation? (like forcing the clients to another brick)
15:32 NuxRo ProT-0-TypE: is that a distributed setup?
15:35 NuxRo http://comments.gmane.org/gmane.comp.file-systems.gluster.user/11025
15:35 glusterbot <http://goo.gl/04Qer> (at comments.gmane.org)
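[For a distributed(-replicated) volume where one brick fills up before the others, a first diagnostic and mitigation sketch might be the following; the volume name "gv0" is hypothetical, and this assumes a GlusterFS 3.3/3.4-era CLI.]

```shell
# Show per-brick disk usage so the full brick can be identified
gluster volume status gv0 detail

# Tell the distribute translator to avoid placing new files on bricks
# with less than 10% free space (existing files on the full brick are
# unaffected; a rebalance is needed to move data off it)
gluster volume set gv0 cluster.min-free-disk 10%

# Redistribute existing data across bricks
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
```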
15:53 nixpanic joined #gluster
15:53 nixpanic joined #gluster
15:54 yinyin joined #gluster
16:00 shylesh joined #gluster
16:05 ProT-0-TypE NuxRo distributed-replicated
16:06 NuxRo ProT-0-TypE: i recommend you take this to the mailing list, and provide more details about your setup and what you're doing that triggers this
16:06 mooperd_ joined #gluster
16:07 ProT-0-TypE It seems I have the same problem as the guy in the mailing list
16:11 H__ joined #gluster
16:11 H__ joined #gluster
16:25 cyberbootje joined #gluster
16:29 jdarcy joined #gluster
16:49 stoile joined #gluster
16:54 yinyin joined #gluster
16:55 stefano joined #gluster
17:10 bulde joined #gluster
17:54 satheesh joined #gluster
17:55 yinyin joined #gluster
18:00 _pol joined #gluster
18:04 joehoyle joined #gluster
18:17 koodough joined #gluster
18:23 samppah on gluster.org there is banner "GlusterFS 3.4 is coming!" and it says "QEMU thin-provisioning"... what does thin provisioning mean in this context?
18:26 joffm joined #gluster
18:51 jdarcy joined #gluster
18:55 yinyin joined #gluster
19:03 eiki joined #gluster
19:18 _br_ joined #gluster
19:21 _br_ joined #gluster
19:26 _br_ joined #gluster
19:56 aravindavk joined #gluster
20:00 yinyin_ joined #gluster
20:00 awheeler_ joined #gluster
20:10 awheeler_ joined #gluster
20:29 awheeler_ joined #gluster
20:45 ProT-0-TypE joined #gluster
20:53 glusterbot New news from newglusterbugs: [Bug 922432] Upstream generated spec file references non-existing patches <http://goo.gl/ThpfV>
20:58 joehoyle joined #gluster
21:00 yinyin joined #gluster
21:04 Ryan_Lane joined #gluster
21:14 social joined #gluster
21:30 premera joined #gluster
21:52 lh joined #gluster
21:52 lh joined #gluster
22:01 yinyin joined #gluster
22:20 NeatBasis joined #gluster
22:34 joehoyle joined #gluster
22:36 ProT-0-TypE joined #gluster
22:44 ProT-0-TypE joined #gluster
22:54 dbruhn joined #gluster
22:55 dbruhn Hey, weird deal: I have a node that is showing all peers as connected, but the rest of the nodes are showing that one node as disconnected. Any idea what I should even be looking at?
23:01 yinyin joined #gluster
23:03 lh joined #gluster
23:16 dbruhn And that's weird: hosts-file entries being on different but addressable subnets caused the issue, all while it's only supposed to be using RDMA
23:16 dbruhn just an FYI for the logs
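[A quick way to spot the kind of asymmetric peer state and hosts-file drift dbruhn describes is to compare what every node sees; the hostnames below are hypothetical, and this assumes passwordless SSH between the nodes.]

```shell
# Compare peer state and name resolution as seen from each node.
# A peer that shows "Connected" locally but "Disconnected" elsewhere
# often resolves to a different address on the other nodes.
for h in node1 node2 node3; do
    echo "== $h =="
    ssh "$h" gluster peer status
    ssh "$h" getent hosts node1 node2 node3   # should print identical addresses on every node
done
```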
23:16 duerF joined #gluster
23:48 jdarcy joined #gluster
23:55 jdarcy_ joined #gluster
