
IRC log for #gluster, 2013-12-21


All times shown according to UTC.

Time Nick Message
00:00 glusterbot New news from newglusterbugs: [Bug 1028582] GlusterFS files missing randomly - the miss triggers a self heal, then missing files appear. <https://bugzilla.redhat.com/show_bug.cgi?id=1028582>
00:00 srsc joined #gluster
00:05 srsc joined #gluster
00:16 psyl0n joined #gluster
00:24 zapotah joined #gluster
00:24 zapotah joined #gluster
00:33 mattapperson joined #gluster
00:48 JoeJulian srsc The way I replicate subsets is to just create a different volume for that subset.
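[A minimal sketch of the approach JoeJulian describes, with hypothetical server names and brick paths: put the subset in its own volume, then geo-replicate only that volume (gluster 3.3-era syntax).]

    # create and start a dedicated volume holding just the subset
    gluster volume create subsetvol replica 2 serverA:/bricks/subset serverB:/bricks/subset
    gluster volume start subsetvol
    # geo-replicate only this volume to the remote side
    gluster volume geo-replication subsetvol slavehost::subsetvol start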
00:50 JoeJulian srsc, "returned with 1, saying:" ... well that's not a very helpful error, is it. :(
00:57 johnbot11 joined #gluster
01:14 srsc JoeJulian: if i'm reading the log right, the "saying:" includes all the Popen lines underneath that line. but the Popen lines are just W and I, so I dunno what the problem is.
01:14 srsc is there a way to force geo-replication to use tcp transport? this is over ipsec vpn, so i dunno if socket would work.
01:15 srsc in the 3.3 pdf manual transport-type doesn't seem to be a geo-replication option, and hasn't worked for me
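[For context: geo-replication in 3.3 is tunneled over ssh by gsyncd rather than using the volume's transport-type, which is why that option has no effect there. Per-session tunables can be listed and set with the config subcommand; the volume and host names below are hypothetical.]

    # show the current geo-replication settings for this session
    gluster volume geo-replication mastervol slavehost::slavevol config
    # example: force a specific ssh invocation (the port is a placeholder)
    gluster volume geo-replication mastervol slavehost::slavevol config ssh-command "ssh -p 2222"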
01:34 ninkotech joined #gluster
01:55 the-me joined #gluster
02:26 pk joined #gluster
02:33 pk left #gluster
02:46 mattappe_ joined #gluster
02:47 neofob joined #gluster
03:08 sac`away joined #gluster
03:11 harish_ joined #gluster
03:31 harish joined #gluster
03:33 vpshastry joined #gluster
03:56 vpshastry left #gluster
04:53 mattappe_ joined #gluster
04:54 mattapp__ joined #gluster
04:54 mattappe_ joined #gluster
04:55 sac`away joined #gluster
05:02 Slasheri joined #gluster
05:34 TDJACR joined #gluster
07:19 primechuck joined #gluster
08:00 skered- joined #gluster
08:09 verdurin_ joined #gluster
09:01 glusterbot New news from newglusterbugs: [Bug 1045690] There are some typos and whitespace issues in CLI output <https://bugzilla.redhat.com/show_bug.cgi?id=1045690>
09:03 vpshastry joined #gluster
09:11 vpshastry left #gluster
09:13 hagarth joined #gluster
09:18 RobertLaptop joined #gluster
09:19 primechuck joined #gluster
09:35 vpshastry joined #gluster
09:36 vpshastry left #gluster
09:58 rotbeard joined #gluster
09:59 prasanth joined #gluster
10:03 vpshastry joined #gluster
10:47 psyl0n joined #gluster
11:01 kshlm joined #gluster
11:20 primechuck joined #gluster
11:27 zwu joined #gluster
11:30 vpshastry joined #gluster
11:32 calum_ joined #gluster
11:47 jag3773 joined #gluster
11:52 diegows joined #gluster
12:16 zapotah joined #gluster
12:16 zapotah joined #gluster
12:20 VeggieMeat_ joined #gluster
12:28 vpshastry left #gluster
12:36 zapotah joined #gluster
13:21 primechuck joined #gluster
15:20 CLDSupportSystem joined #gluster
15:22 primechuck joined #gluster
15:44 zapotah joined #gluster
15:49 vpshastry joined #gluster
16:01 chirino joined #gluster
16:20 chirino joined #gluster
16:23 neofob joined #gluster
16:31 vpshastry joined #gluster
16:54 calum_ joined #gluster
17:18 vpshastry joined #gluster
17:23 primechuck joined #gluster
17:26 mattappe_ joined #gluster
17:26 mattappe_ joined #gluster
17:27 rotbeard joined #gluster
18:08 spechal joined #gluster
18:11 spechal I rebuilt a node and am trying to add it back to the cluster.  When I probe one of the nodes, it returns "peer probe: failed: Probe returned with unknown errno 107".  When I go to that node and run gluster peer status, it returns the other nodes as connected.  Anyone know what errno 107 is?
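[On Linux, errno 107 is ENOTCONN, "Transport endpoint is not connected". A quick way to confirm from a shell (Python 2 era one-liner):]

    python -c 'import errno, os; print errno.errorcode[107], os.strerror(107)'
    # ENOTCONN Transport endpoint is not connected

[Since the node was rebuilt, a likely cause is that it came back with a new glusterd UUID that the existing peers reject; a common remedy is to restore the old UUID in /var/lib/glusterd/glusterd.info before starting glusterd, or to probe the rebuilt node afresh from an existing cluster member.]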
18:13 ozux I have a volume for testing (2GB) which is Distributed-Replicated (RF:2). Each brick is 1GB (2x2 = 4 bricks, i.e. 1GB+1GB = 2GB usable). The problem is: if I have a 700MB file and want to copy another 700MB into the mount, I can't ("no space left"). How does gluster handle this? I mean, how can I configure it so that when there isn't enough space on a brick, gluster starts writing to the next brick?
18:13 ozux as I have 1GB of free space on the second brick
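[Background on this question: Gluster's distribute (DHT) layer always places a whole file on a single brick, so a file larger than the free space on its target brick fails even when the volume as a whole has room; only striping splits a file across bricks. What can be tuned is where new files land, e.g. with the cluster.min-free-disk option (the volume name here is hypothetical):]

    # avoid placing new files on bricks with less than 20% free space
    gluster volume set testvol cluster.min-free-disk 20%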
18:16 ozux And the 2nd question is: how stable is Stripe or Striped-Replicated compared to Distributed-Replicated?
18:33 glusterbot New news from newglusterbugs: [Bug 1017176] Until RDMA handling is improved, we should output a warning when using RDMA volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1017176>
18:38 vpshastry left #gluster
18:57 rotbeard joined #gluster
19:03 purpleidea ozux: don't use stripe anything
19:03 purpleidea @stripe
19:03 glusterbot purpleidea: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
19:04 purpleidea ~stripe | ozux
19:04 glusterbot ozux: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
19:04 purpleidea glusterbot: thanks!
19:04 glusterbot purpleidea: I do not know about 'thanks!', but I do know about these similar topics: 'thanks'
19:04 purpleidea glusterbot: thanks
19:04 glusterbot purpleidea: you're welcome
19:05 gmcwhistler joined #gluster
19:06 ozux purpleidea, thanks. We actually have the over-brick-sized-file situation. Talking to Red Hat, they strictly warn against using Stripe in any form, as it's unstable. But I was looking for anyone who actually uses Striped+Replicated in production.
19:13 purpleidea ozux: well your bricks are really really small. i think that's the bigger problem.
19:14 purpleidea and some people apparently use it, but i don't know them personally.
19:14 ozux purpleidea, actually, bricks are 20-25TB (different servers), but files are large as  0.5-1.2TB
19:14 ozux purpleidea, so practically we lose 8TB when we have 16 servers
19:20 johnbot11 joined #gluster
19:21 purpleidea ozux: congratulations, you're one of the few who should probably use striping :P
19:22 purpleidea but yeah, you should definitely get RHS if you're doing that.
19:22 ozux purpleidea, lol
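[For reference, a striped-replicated volume of the kind discussed above would be created roughly like this with 3.3/3.4-era syntax (hosts and brick paths hypothetical); the brick count must equal stripe count times replica count:]

    gluster volume create bigvol stripe 2 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start bigvol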
19:23 johnbot11 joined #gluster
19:24 primechuck joined #gluster
19:43 edoceo joined #gluster
19:43 edoceo I've got 10 4TB disks to expose via Gluster and I'm wondering about opinions on which FS to use
19:43 edoceo I've got one backed by ZFS, but there is some issue with extended attributes which causes this FS to get wacky sometimes
19:44 l0uis xfs is the recommended fs I believe
19:44 edoceo What if I make Btrfs?  Does it play nice with Gluster?
19:45 ozux edoceo, looking at the mailing lists, Btrfs works, but all documents in the Gluster community and Red Hat Storage strictly recommend XFS, so I assume there is not much of an option.
19:46 edoceo XFS it is, engage!
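[The standard brick recipe behind that recommendation is XFS with a 512-byte inode size, so Gluster's extended attributes fit inside the inode. A sketch, with a hypothetical device and mount point:]

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount -t xfs /dev/sdb1 /bricks/brick1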
19:47 chirino joined #gluster
20:04 chirino joined #gluster
20:18 verdurin joined #gluster
20:25 zapotah joined #gluster
20:25 zapotah joined #gluster
20:29 klaxa joined #gluster
20:30 armiller joined #gluster
20:30 neofob left #gluster
20:39 psyl0n joined #gluster
21:23 mattappe_ joined #gluster
21:25 primechuck joined #gluster
21:49 dbruhn joined #gluster
22:12 TvL2386 joined #gluster
22:41 JonathanD joined #gluster
23:07 PatNarciso joined #gluster
23:26 primechuck joined #gluster
23:31 T0aD joined #gluster
23:59 delhage joined #gluster
