
IRC log for #gluster, 2013-01-26

All times shown according to UTC.

Time Nick Message
00:16 polfilm joined #gluster
00:16 drockna1 left #gluster
00:24 stopbit joined #gluster
00:51 polfilm joined #gluster
02:01 Dell joined #gluster
02:16 __Bryan__ joined #gluster
03:21 abkenney joined #gluster
03:22 eightyeight johnmark: i'm aaron toponce. my blog is what you linked to on the gluster blog. fyi.
04:17 abkenney joined #gluster
04:21 Dell__ joined #gluster
04:37 atrius_away joined #gluster
04:43 shylesh joined #gluster
05:05 mohankumar joined #gluster
05:24 pai joined #gluster
05:47 pai joined #gluster
06:35 mohankumar joined #gluster
07:31 partner eightyeight: what blog? i don't see a link in the backlog, i'm hungry for reading :)
07:32 eightyeight partner: http://gluster.org/blog is the main blog. my post that is referenced is http://pthree.org/2013/01/25/glusterfs-linked-list-topology/
07:32 glusterbot <http://goo.gl/0HHCK> (at pthree.org)
07:36 sashko joined #gluster
07:39 partner ah
07:39 partner thanks
07:40 partner looks good and interesting indeed
07:58 eightyeight thx
09:18 Qten joined #gluster
09:24 lala joined #gluster
09:46 red_solar joined #gluster
10:05 redsolar joined #gluster
10:09 redsolar joined #gluster
10:10 redsolar joined #gluster
10:26 red_solar joined #gluster
10:58 errstr joined #gluster
11:00 red_solar joined #gluster
11:05 redsolar_office joined #gluster
11:06 GooGo joined #gluster
11:07 gbrand_ joined #gluster
11:07 GooGo hello guys
11:07 GooGo i'm seeing unexpected behaviour with gluster replication
11:07 GooGo i was configuring mirroring of a website's www-data
11:07 GooGo where users can upload files
11:07 GooGo according to this tutorial
11:07 GooGo http://www.iredmail.org/wiki/index.php?title=Master-master_high-availability_failover_iRedMail_system_using_GlusterFS
11:07 glusterbot <http://goo.gl/yJfus> (at www.iredmail.org)
11:08 GooGo replication is working
11:08 GooGo but
11:09 GooGo when i create a file on server a, it's available on server b, and when i remove the file from server b, it is removed from server a as well
11:09 GooGo but the other way round, when i create a file on server b, it's available on server a, but when i remove that file from server a, it is not removed from server b
11:09 GooGo after a simple 'ls' on server a, the file comes back on server b
11:09 red_solar joined #gluster
11:10 GooGo can you explain what i did wrong there?
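
The symptom GooGo describes (a removed file coming back after an 'ls') is typically what replica self-heal does when files are created or deleted directly inside a brick directory instead of through a glusterfs client mount. A minimal sketch of a two-server replica volume where all access goes through the mount; the hostnames, paths and volume name below are placeholders, not taken from the tutorial:

    # run once from server a, after both servers can reach each other
    gluster peer probe server-b
    gluster volume create webdata replica 2 server-a:/export/webdata server-b:/export/webdata
    gluster volume start webdata
    # on each server: read and write only via the mount, never in /export/webdata
    mkdir -p /var/www/shared
    mount -t glusterfs localhost:/webdata /var/www/shared
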
11:24 red_solar joined #gluster
11:33 redsolar_office joined #gluster
11:37 red_solar joined #gluster
12:34 red_solar joined #gluster
12:43 mohankumar joined #gluster
13:13 redsolar_office joined #gluster
13:14 redsolar_office joined #gluster
13:24 red_solar joined #gluster
13:26 red_solar joined #gluster
13:33 red_solar joined #gluster
13:36 redsolar_office joined #gluster
13:37 pai joined #gluster
13:45 edward1 joined #gluster
13:56 redsolar_office joined #gluster
14:03 glusterbot New news from newglusterbugs: [Bug 904370] Reduce unwanted Python exception stack traces in log entries <http://goo.gl/sjEQp>
14:22 red_solar joined #gluster
14:47 DWSR joined #gluster
14:48 DWSR Hey all, is there a way to glusterize existing storage with data?
14:48 DWSR I have 2 servers and no way to back up the entirety of the existing data.
14:49 DWSR Was looking at the mailing list a little bit and there was some mention of it, but not much, and it seemed to be more of an "it's unsupported" response.
14:57 chirino joined #gluster
15:49 NuxRo DWSR: what exactly are you trying to accomplish?
15:50 DWSR NuxRo: I have 2 servers, a home-rolled NAS that's got about 5TB of storage that's nearly full, and another server I was gifted with about 1.2TB of storage.
15:50 DWSR NuxRo: I want to smash them together.
15:50 partner an in-place "conversion" to gluster, as i read it
15:50 DWSR NuxRo: I can't add/remove drives to either.
15:50 DWSR partner: Yeah, that about sums it up.
15:50 NuxRo it's not supported
15:51 NuxRo but
15:51 partner what i've read about it is "it should work" but as said not exactly supported
15:51 NuxRo you could create a distributed volume out of the free space on both, then slowly start to move existing data into the volume
15:52 DWSR NuxRo: mkay. Uneven brick sizes are ok for striping over, I assume?
15:52 DWSR Also, I'm using ZFS, will this cause problems?
15:52 NuxRo don't stripe, just distribute
15:52 partner uneven brick size on distributed IS a problem
15:52 NuxRo ZFS might work, the recommended filesystem is XFS
15:53 NuxRo partner: why?
15:53 partner NuxRo: because it distributes files; think of having 5 TB and 10 TB disks, roughly half of your writes will start to fail after 10 TB of data (5 on each)
15:54 abkenney joined #gluster
15:55 NuxRo partner: i would imagine gluster is smart enough to check free space on a brick before writing to it
15:55 partner it's so easy to try out: create two 100 MB bricks in distribute mode, fill the mount with dd, and wait for the failure
15:55 partner NuxRo: that is exactly why i test - never assume anything like that
15:56 NuxRo if that's the case, then it's really bad
15:57 NuxRo partner: what you are saying is that no file can be larger than a brick, which is normal
15:57 NuxRo unless you do striping
15:57 partner no, i'm not saying that
15:58 partner i'm saying once the smaller brick gets full your writes will start to fail based on the hashing
15:58 DWSR I know with ZFS, if I stripe with uneven device sizes, it will dynamically distribute everything based on the size of the device.
15:58 partner the ones targeted to the larger brick will succeed
15:58 DWSR Will gluster do this as well?
15:59 partner please prove me i'm wrong, i specifically tested this scenario this week and it failed for me
16:00 NuxRo then it's really hash-based, so the location of a file is sort of pre-computed, no free disk space checks done?
16:00 NuxRo we should raise this on the mailing list
16:01 NuxRo DWSR: as you can see, we are not sure of that :)
16:01 DWSR lol
16:01 partner on replica it is aware and does its magic
16:01 NuxRo then again we're just users, like you, you should raise your problem on the mailing list
16:02 partner gluster does lots of things i don't like, it's so easy to screw up
16:02 partner i initially tested this distributed setup on an accidental configuration where i mounted one brick on a 10 GB disk and the other went to the root disk, which had a few gigs left
16:03 partner then i started to fill up the mount and started seeing write failures - due to the small root disk filling up
16:03 NuxRo partner: were you using dd?
16:03 partner i recall yes
16:03 NuxRo because, imho, glusterfs is not aware of the final file size, since the file keeps getting appended to
16:04 glusterbot New news from newglusterbugs: [Bug 904629] Concurrent requests of large objects (GET/PUT) can be starved by small object requests <http://goo.gl/vtsQ0>
16:04 partner NuxRo: so? i created like nnn amount of 10 MB files
16:04 NuxRo oh right, i was thinking you created 1 large file
16:04 NuxRo in that case, it's worrying
16:04 NuxRo you should definitely take this to the ml so we can get an "official" response from a dev
16:05 partner please try it out, it's so easy to test if you have one virtual machine to test on
16:05 jjnash joined #gluster
16:05 partner just make sure you have clearly different sized bricks and start filling up the mount
16:06 NuxRo ok, I'll test asap
16:06 partner i just might repeat it shortly as i have my testing gluster nodes running
16:07 partner please do so that you know exactly what it does
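
A rough, untested sketch of the experiment partner is describing, on a single test box; the device sizes, paths and volume name are made up, and some gluster versions may complain about both bricks sitting on one host (as NuxRo runs into below):

    # two deliberately different sized "disks" backed by sparse files
    truncate -s 300M /tmp/small.img; truncate -s 1G /tmp/big.img
    mkfs.xfs -f /tmp/small.img; mkfs.xfs -f /tmp/big.img
    mkdir -p /bricks/small /bricks/big /mnt/unevenvol
    mount -o loop /tmp/small.img /bricks/small
    mount -o loop /tmp/big.img /bricks/big
    gluster volume create unevenvol $(hostname):/bricks/small $(hostname):/bricks/big
    gluster volume start unevenvol
    mount -t glusterfs $(hostname):/unevenvol /mnt/unevenvol
    # write many small files; once the small brick is full, the writes whose
    # filenames hash onto it should start to fail
    for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/unevenvol/file$i bs=1M count=10; done
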
16:07 DWSR anyway, back to my original question: how do I do an in-place conversion?
16:08 NuxRo partner: going to, also kindly check my private msg
16:08 partner DWSR: if you put your brick on the disk that contains your data, wouldn't you be able to move the files without using much extra disk?
16:09 partner as the mountpoint would be just a dir on that disk.. and as you move within the disk the usage does not go up
16:09 DWSR partner: You mean create a brick with some of the free disk space, copy some data, expand, repeat ad nauseam?
16:09 partner no, that's not what i meant at all
16:10 partner /dev/sdb1              99G  6.5G   88G   7% /srv
16:10 partner lets imagine that is your data disk
16:10 DWSR sure
16:10 partner if you made a brick at, say, /srv/gluster and started it up, it would see the size of that mount
16:12 partner now, as you moved files from /srv/data to /srv/gluster (obviously using the proper mountpoint/client for this) you would not actually use any more disk space, as you would be moving files within the (example) /dev/sdb1
16:12 partner right?
16:13 partner BUT
16:13 partner i recall you said it's zfs and whatnot, i don't really know that area
16:14 partner please do not go and try it out on production, it's just an idea that i have not (yet) tested out myself
16:14 partner but in theory it could IMO work?
16:14 DWSR It should.
16:14 partner would be nice to hear comments from more pros too, but it's the weekend and all
16:14 DWSR And ZFS behaves, to programs accessing volumes, as you would expect.
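
A sketch of the in-place idea partner outlines above, untested and with placeholder hostname, paths and volume name; because the brick directory lives on the same filesystem as the existing data, moving a file in through the client mount should only need roughly one extra file's worth of space at a time (the copy onto the brick, then the unlink of the original):

    mkdir /srv/gluster                              # brick dir on the data disk
    gluster volume create datavol myhost:/srv/gluster
    gluster volume start datavol
    mkdir -p /mnt/datavol
    mount -t glusterfs myhost:/datavol /mnt/datavol
    # move data in through the client mount, never into /srv/gluster directly
    mv /srv/data/somefile /mnt/datavol/
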
16:15 partner i'm mostly worried about gluster on top of non-xfs filesystems, i don't know those details at all
16:17 NuxRo i know the minimum requirement is for the fs to support xattrs
16:17 DWSR http://community.gluster.org/a/glusterfs-zfs-on-linux/ partner
16:17 glusterbot <http://goo.gl/uqjE8> (at community.gluster.org)
16:17 NuxRo so DWSR you should check if your zfsonlinux version has them
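
One quick way to check whether a filesystem honours the extended attributes gluster relies on (glusterfs keeps its metadata in the trusted.* namespace, which needs root; the path below is a placeholder):

    touch /tank/xattrtest
    setfattr -n trusted.glustertest -v working /tank/xattrtest
    getfattr -n trusted.glustertest /tank/xattrtest   # should print the value back
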
16:17 DWSR NuxRo: It does.
16:17 NuxRo right, well, that sounds promising
16:18 NuxRo i would start creating a volume and play with it
16:18 NuxRo move unimportant data around, see if it stays consistent etc
16:19 DWSR http://gionn.net/2011/08/27/zfs-glusterfs-on-linux/
16:19 glusterbot <http://goo.gl/ZwfnA> (at gionn.net)
16:19 DWSR Seems to work.
16:19 partner DWSR: yeah i know it "works" but the underlying details are too unknown to me, so i won't comment on that
16:20 partner even the unsupported things do work, it's just the BUT there :)
16:20 DWSR partner: I know, but someone (who seems to know more than either of us) says it works (tm)
16:21 NuxRo darn, it doesn't let me create a volume out of bricks on the same host
16:21 partner i'm not exactly sure which filesystems are supported, xfs is the recommended one but it works(tm) on ext3 as well
16:21 DWSR Does gluster even really care what filesystem it's on, to be honest?
16:21 DWSR As long as you have xattr support, does gluster really care?
16:21 partner DWSR: i'm sure there's plenty of installations out there, i just cannot promise myself it works because i have not used such setup
16:22 NuxRo DWSR: it does, last year we had a really nasty problem with ext3/ext4
16:22 DWSR partner: We're building a house with matchsticks at this point.
16:22 NuxRo yep
16:23 partner ext2 supports xattrs.. there is something more under the hood..
16:25 partner DWSR: i suggest you wait for the pros to show up and comment on the topic, the ml might be helpful too. and do try to set up some small test environment where you can try the process out several times
16:25 DWSR partner: Yeah
16:25 partner with gluster it's very easy, i am using a few virtual machines for various setups
16:27 DWSR awesome.
16:27 DWSR Anyway, gotta jet. Thanks for the help.
16:27 partner unfortunately i just yesterday dedicated it to some more serious testing, so i can't give you the distributed "failure demo" in the next 5 mins, too lazy to go and click one more disk.. i should rather go and grab a beer, heck it's saturday :)
16:27 partner alright, cya
16:29 NuxRo partner: i managed to replicate the distributed problem
16:29 NuxRo I'll write about it on the ml
16:31 partner cool
16:34 partner maybe there's some logic / thought behind gluster which i'm yet to figure out, but it does allow me to make quite severe configuration errors and does not even warn about them. like doing two bricks where one goes to root (ext3) while the other goes to a proper xfs mount, different sized disks, inodes, everything - for gluster it's fine. i kind of assume it's intended so i'm not exactly complaining about that, it's just so easy to fail. it even creates the non-existing
16:35 partner i wonder if i would find the energy to sum up all of this into some sort of blog post or something, basic stuff but it's not well documented anywhere
16:36 partner oh, i don't have a blog.. problem solved :)
16:39 NuxRo you can sum it up on the mailing list, get indexed by the search engines, voila
16:40 partner it's different
16:41 partner oh well, distribution becomes a problem only when i have something to publish..
16:46 NuxRo partner DWSR, I opened http://supercolony.gluster.org/pipermail/gluster-users/2013-January/035344.html let's see what the devs say
16:46 glusterbot <http://goo.gl/NBTLL> (at supercolony.gluster.org)
16:50 red_solar joined #gluster
16:51 partner NuxRo: cool, now we wait :)
16:55 chirino joined #gluster
16:59 shylesh joined #gluster
17:28 mohankumar joined #gluster
17:38 dmojoryder joined #gluster
17:38 redsolar_office joined #gluster
17:56 lala joined #gluster
18:11 Cenbe joined #gluster
18:20 lala joined #gluster
18:25 sashko_ joined #gluster
18:28 lala joined #gluster
18:30 edong23 joined #gluster
18:47 greylurk joined #gluster
18:57 GooGo joined #gluster
19:03 andreask joined #gluster
19:42 red_solar joined #gluster
19:58 erik49 mount -t glusterfs isn't outputting errors, but it's also not mounting?
20:04 glusterbot New news from newglusterbugs: [Bug 890618] misleading return values of some functions. <http://goo.gl/WsVnD>
20:10 erik49 never mind, there was a problem when i created the volume
20:11 erik49 although it's odd that mount doesn't return an error when trying to mount a nonexistent volume :D
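
When a glusterfs mount exits quietly without actually mounting, the reason usually ends up in the FUSE client log rather than on the console; a rough way to check (server, volume and mount point are placeholders, and the log file name is derived from the mount path, so it varies):

    mount -t glusterfs server1:/myvol /mnt/myvol
    mount | grep glusterfs                       # did it actually mount?
    tail -n 50 /var/log/glusterfs/mnt-myvol.log  # client log for /mnt/myvol
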
20:23 GooGo can someone help me? http://community.gluster.org/q/unexpected-behaviour-of-volume-replication/
20:23 glusterbot <http://goo.gl/Mkgke> (at community.gluster.org)
21:09 greylurk joined #gluster
21:29 partner wow, what a piece of instructions
21:32 partner like from 2009, no wonder the user is having issues if that is applied today
21:43 red_solar joined #gluster
22:44 atrius_away joined #gluster
