
IRC log for #gluster, 2014-04-07


All times shown according to UTC.

Time Nick Message
00:35 gdubreui joined #gluster
01:42 jag3773 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:04 bala joined #gluster
02:13 sticky_afk joined #gluster
02:13 stickyboy joined #gluster
02:28 jaw221 joined #gluster
02:29 jaw221 left #gluster
02:34 rastar joined #gluster
02:45 bharata-rao joined #gluster
03:01 nightwalk joined #gluster
03:08 shubhendu joined #gluster
03:15 hagarth joined #gluster
03:25 harish joined #gluster
03:41 davinder joined #gluster
03:41 rjoseph joined #gluster
03:55 itisravi joined #gluster
04:12 hagarth joined #gluster
04:13 ndarshan joined #gluster
04:30 ricky-ti1 joined #gluster
04:34 ravindran1 joined #gluster
04:47 deepakcs joined #gluster
04:52 bala joined #gluster
05:00 bala joined #gluster
05:08 spandit joined #gluster
05:08 ppai joined #gluster
05:12 shylesh joined #gluster
05:17 Philambdo joined #gluster
05:19 hflai joined #gluster
05:19 dusmant joined #gluster
05:21 kdhananjay joined #gluster
05:22 lalatenduM joined #gluster
05:26 glusterbot New news from newglusterbugs: [Bug 1074947] add option to bulld rpm without server <https://bugzilla.redhat.com/show_bug.cgi?id=1074947>
05:27 prasanth_ joined #gluster
05:29 benjamin_ joined #gluster
05:30 raghu joined #gluster
05:31 sahina joined #gluster
05:33 nshaikh joined #gluster
05:45 vpshastry joined #gluster
05:45 vpshastry left #gluster
05:48 kshlm joined #gluster
06:12 nishanth joined #gluster
06:23 bala joined #gluster
06:24 rastar joined #gluster
06:28 jtux joined #gluster
06:37 rgustafs joined #gluster
06:42 vimal joined #gluster
06:50 psharma joined #gluster
06:54 aravindavk joined #gluster
06:59 ctria joined #gluster
07:02 sputnik13 joined #gluster
07:02 ekuric joined #gluster
07:04 ricky-ti1 joined #gluster
07:06 eseyman joined #gluster
07:06 rastar joined #gluster
07:07 sputnik13 joined #gluster
07:09 bala1 joined #gluster
07:10 dusmant joined #gluster
07:14 keytab joined #gluster
07:25 ngoswami joined #gluster
07:43 mfs joined #gluster
07:47 fsimonce joined #gluster
07:48 keytab joined #gluster
07:48 dusmant joined #gluster
07:55 andreask joined #gluster
08:07 X3NQ joined #gluster
08:11 RameshN joined #gluster
08:13 lalatenduM joined #gluster
08:15 cyberbootje joined #gluster
08:16 cyberbootje joined #gluster
08:19 andrewklau joined #gluster
08:27 glusterbot New news from newglusterbugs: [Bug 1082991] Unable to copy local file to gluster volume using libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1082991>
08:29 Andyy2 @ping-timeout
08:29 glusterbot Andyy2: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
08:33 Andyy2 is there a clean way to quantify the time required for re-establishing fd and locks? "very expensive" is somewhat uncertain, and suggests that it takes longer than 42 seconds to re-establish fd+locks, which is not true in my experience. in fact, in a typical use case of gluster serving block devices, it might be much cheaper than having guests die because of disk timeouts. anyone care to comment?
08:34 Pavid7 joined #gluster
08:40 saravanakumar joined #gluster
08:54 vpshastry joined #gluster
08:56 vpshastry1 joined #gluster
09:02 vpshastry joined #gluster
09:03 NuxRo Andyy2: I'm also concerned by this as I plan to use glusterfs + kvm at some point
09:03 NuxRo I'd like it very much if gluster achieved its purpose and not have the VMs nuked
09:04 NuxRo even if you decrease the ping-timeout, your VMs still "die"?
09:05 lalatenduM NuxRo, Andyy2, just curious, are you guys using glusterfs + kvm through fuse mount or libgfapi?
09:19 sac_ joined #gluster
09:20 pk joined #gluster
09:20 hybrid512 joined #gluster
09:21 hybrid512 joined #gluster
09:21 cyberbootje joined #gluster
09:24 Andyy2 lalatenduM: I'm using gluster through libgfapi. it makes much more sense as it's natively supported in qemu.
09:25 lalatenduM Andyy2, cool
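
For context, using a volume through libgfapi from qemu means pointing qemu at a gluster:// URI instead of a fuse mount path. A minimal sketch, assuming a qemu build with gluster support; the host name "gluster1", volume name "vmvol" and image name are placeholders, not taken from the log:

    # create a qcow2 image directly on the volume via libgfapi
    qemu-img create -f qcow2 gluster://gluster1/vmvol/guest1.qcow2 20G

    # boot a guest from that image over libgfapi (cache=none keeps I/O direct)
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://gluster1/vmvol/guest1.qcow2,if=virtio,cache=none
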
09:26 lalatenduM @learn howtodoc as http://www.gluster.org/community/documentation/index.php/Main_Page#Documentation_for_GlusterFS
09:26 glusterbot lalatenduM: The operation succeeded.
09:26 Andyy2 NuxRo: I learned the hard way that 42 seconds (or 10 seconds for that matter) is too much for any windows or linux machine to survive. I'm currently running at 5 seconds and virtual machines seem to be happier. I have not done much testing though.
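
A minimal sketch of how a ping-timeout like the 5 seconds mentioned above is applied; "vmvol" is a placeholder volume name:

    # lower the client/server ping timeout from the 42-second default to 5 seconds
    gluster volume set vmvol network.ping-timeout 5
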
09:28 Andyy2 the problem is that nobody actually seems to care. no examples around the web either, and no warnings.
09:28 ravindran1 joined #gluster
09:31 lalatenduM Andyy2, on a different note, have you come across this http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
09:31 glusterbot Title: Libgfapi with qemu libvirt - GlusterDocumentation (at www.gluster.org)
09:31 lalatenduM Andyy2, one of my colleagues is writing this, thought of getting your feedback
09:33 Andyy2 yes, I've read most stuff from gluster.org, the redhat docs and JoeJulian's blog. gluster.org looks like copy-paste of some information on the net, specifically redhat docs, which do not apply in all cases.
09:34 lalatenduM Andyy2, ok
09:37 meghanam joined #gluster
09:37 meghanam_ joined #gluster
09:37 zealxy joined #gluster
09:38 Slash joined #gluster
09:38 zealxy Hi, I'm running RHEL, I have a SAN and need high availability.
09:38 Andyy2 np. some info is old. for example, in latest 3.4.2, option rpc-auth-allow-insecure on is not needed (on by default) and server.allow-insecure on is also unnecessary.
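
For older releases where the two options above still have to be set by hand, the usual incantation is roughly the following (the volume name is a placeholder and the glusterd.vol path assumes a default install):

    # per-volume option, lets clients on unprivileged ports (such as qemu/libgfapi) connect
    gluster volume set vmvol server.allow-insecure on

    # glusterd-level option, added to /etc/glusterfs/glusterd.vol and followed by a glusterd restart:
    #   option rpc-auth-allow-insecure on
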
09:38 ProT-0-TypE joined #gluster
09:39 zealxy I'm considering using NFS in some ha setup or gluster or gfs2
09:39 zealxy any recommendations on when to use what?
09:42 Andyy2 lalatenduM: I have some additions for that page, perhaps your friend could add them there, for others. Currently I'm running gluster with these modified defaults, specifically for hosting virtual disk images:
09:42 Andyy2 performance.quick-read: off
09:42 Andyy2 performance.read-ahead: off
09:42 Andyy2 performance.io-cache: off
09:42 Andyy2 performance.stat-prefetch: off
09:42 Andyy2 network.remote-dio: enable
09:42 Andyy2 cluster.eager-lock: enable
09:42 Andyy2 network.ping-timeout: 5
09:42 Andyy2 cluster.server-quorum-type: server
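
A sketch of applying the defaults Andyy2 lists above in one pass; "vmvol" is a placeholder volume name and the option values are taken verbatim from the list:

    vol=vmvol   # placeholder volume name
    gluster volume set $vol performance.quick-read off
    gluster volume set $vol performance.read-ahead off
    gluster volume set $vol performance.io-cache off
    gluster volume set $vol performance.stat-prefetch off
    gluster volume set $vol network.remote-dio enable
    gluster volume set $vol cluster.eager-lock enable
    gluster volume set $vol network.ping-timeout 5
    gluster volume set $vol cluster.server-quorum-type server
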
09:42 lalatenduM Andyy2, interesting, I think this is being written/tested using the latest gluster rpms, maybe 3.4.2 or 3.5beta, and he found some issues without these options
09:43 lalatenduM Andyy2, thanks for sharing , I would request you to add them to the page, editing the page is very easy
09:43 lalatenduM glusterbot, @howtodoc
09:43 lalatenduM @howtodoc | Andyy2
09:43 lalatenduM @howtodoc
09:43 glusterbot lalatenduM: http://www.gluster.org/community/documentation/index.php/Main_Page#Documentation_for_GlusterFS
09:45 lalatenduM zealxy, you can use NFS + CTDB+gluster
09:46 Andyy2 I'm not sure what option(s) exactly are needed, but without those you get massive corruption during failover. I think it must be something with caching not being flushed properly to the backup brick in case the primary replica dies. but this is only a feeling.
09:46 lalatenduM zealxy, check this out, https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/Administration_Guide/index.html#idm88072688
09:46 glusterbot Title: Administration Guide (at access.redhat.com)
09:47 lalatenduM zealxy, the link was not correct, here is the correct one https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch09s05.html
09:47 glusterbot Title: 9.5. Configuring Automated IP Failover for NFS and SMB (at access.redhat.com)
09:48 lalatenduM Andyy2, got it, thanks
09:51 Andyy2 one more warning: if you set those options on a running volume with live virtual machines, these will die because glusterfsd will restart the volume automatically.
09:56 zealxy lalatenduM: thanks
09:56 lalatenduM zealxy, welcome
09:56 lalatenduM Andyy2, yup
09:59 NuxRo Andyy2: thanks for the tips, libgfapi is just starting to be used from what I see, you are a pioneer in that sense :)
10:02 Andyy2 ouch :(
10:02 Andyy2 but... thanks :)
10:25 hagarth joined #gluster
10:26 harish joined #gluster
10:40 harish joined #gluster
11:04 shyam1 joined #gluster
11:05 tdasilva joined #gluster
11:07 shyam1 left #gluster
11:07 shyam1 joined #gluster
11:24 kdhananjay joined #gluster
11:28 aravindavk joined #gluster
11:28 rwheeler joined #gluster
11:37 shubhendu joined #gluster
11:45 pk joined #gluster
11:48 sahina joined #gluster
11:54 dusmant joined #gluster
11:54 sputnik13 joined #gluster
11:57 ndarshan joined #gluster
12:11 itisravi_ joined #gluster
12:12 cyberbootje joined #gluster
12:12 dusmant joined #gluster
12:12 Ark joined #gluster
12:15 B21956 joined #gluster
12:18 benjamin_ joined #gluster
12:23 ppai joined #gluster
12:26 X3NQ joined #gluster
12:28 glusterbot New news from newglusterbugs: [Bug 1084964] SMB: CIFS mount fails with the latest glusterfs rpm's <https://bugzilla.redhat.com/show_bug.cgi?id=1084964>
12:30 saurabh joined #gluster
12:31 shyam joined #gluster
12:35 burn420 joined #gluster
12:35 burn420 having an issue with bluster quotas
12:35 burn420 drive space shows 57 gigs used but bluster quota shows 180 gigs used
12:36 burn420 gluster
12:36 burn420 spell check likes to replace gluster with bluster!
12:36 burn420 lol
12:41 ndarshan joined #gluster
12:45 dusmant joined #gluster
12:48 sroy joined #gluster
12:53 chirino joined #gluster
12:54 ctria joined #gluster
12:56 burn420 joined #gluster
13:11 primechuck joined #gluster
13:15 awheeler_ joined #gluster
13:16 dusmant joined #gluster
13:18 franc joined #gluster
13:18 franc joined #gluster
13:30 bennyturns joined #gluster
13:30 hagarth joined #gluster
13:30 X3NQ joined #gluster
13:37 Pavid7 joined #gluster
13:37 japuzzo joined #gluster
13:52 tdasilva joined #gluster
13:57 sahina joined #gluster
14:00 m0zes joined #gluster
14:01 kkeithley @ppa
14:01 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
14:02 Darnoth joined #gluster
14:02 Abrecus joined #gluster
14:08 rpowell joined #gluster
14:12 kaptk2 joined #gluster
14:12 ctria joined #gluster
14:13 kdhananjay joined #gluster
14:28 dbruhn joined #gluster
14:31 coredumb Andyy2: you around ?
14:31 Andyy2 sure
14:32 coredumb o/
14:32 coredumb good day sir
14:32 seapasulli joined #gluster
14:32 coredumb how are your ZFS testing ?
14:32 coredumb how is
14:33 coredumb going
14:33 Andyy2 Hi :) I compiled the generic packages for wheezy, using head, as head alone was too much in conflict with my cluster. I'm testing them on one node for now.
14:33 rwheeler joined #gluster
14:34 coredumb ok
14:34 coredumb i'm starting some testing myself
14:34 Andyy2 zfs says  v0.6.2-242_gb79e1f1
14:34 Andyy2 seems ok. fingers crossed.
14:34 coredumb but i have no load at all on this ...
14:34 coredumb so can't really try this out :D
14:35 lmickh joined #gluster
14:35 chirino_m joined #gluster
14:36 Andyy2 It also depends on the type of load, as I learned. I did hundreds of bonnie tests before deploying in production and could never lock it up. one month in production no problems, then all of a sudden most rsync backup jobs will lock it up. so... it also depends on the load, unfortunately.
14:38 Andyy2 try this: make some 200GB of data tree, random files, 10-20 levels deep, up to 100-1000 files per directory, 30-50 kb per file. then try to copy from one zfs to another zfs on the same pool via rsync. after that try tarring all these files in a single zfs on the same pool. this is what my backup job looks like and that locks it up quite often.
14:40 Andyy2 of course, during the initial rsync you have to read and write randomly on the same source, multiple threads (it's a live webserver running a few hundred domains, mainly php based, so temporary files for sessions and a lot of reading for nonexistent files and .htaccess).
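
A rough sketch of a workload generator along the lines Andyy2 describes; the depths, file counts and sizes only approximate his numbers, the /tank/data paths are placeholders, and the loop counts need scaling up to reach the full 200GB:

    #!/bin/bash
    # build a deep tree of small random files, then rsync-copy and tar it,
    # mimicking the backup-style job that triggers the lock-up
    base=/tank/data/testtree
    for i in $(seq 1 1000); do
        depth=$((RANDOM % 11 + 10))                  # 10-20 directory levels
        dir=$base
        for d in $(seq 1 $depth); do dir=$dir/d$((RANDOM % 5)); done
        mkdir -p "$dir"
        for f in $(seq 1 $((RANDOM % 900 + 100))); do          # 100-1000 files per dir
            head -c $((RANDOM % 20480 + 30720)) /dev/urandom > "$dir/file$f"   # ~30-50 kB each
        done
    done

    rsync -a /tank/data/testtree/ /tank/data/testtree-copy/
    tar -cf /tank/data/testtree.tar -C /tank/data testtree
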
14:40 kdhananjay joined #gluster
14:42 ira_ joined #gluster
14:44 coredumb i wanna know if you manage to lock it up on git master tree :D
14:45 seapasulli joined #gluster
14:46 vimal joined #gluster
14:47 shylesh joined #gluster
14:50 coredumb Andyy2: can't access snapshots using NFS
14:51 Andyy2 is that native nfs?
14:52 coredumb yes
14:52 coredumb native gluster nfs
14:55 Andyy2 I think there was an issue with that. https://github.com/zfsonlinux/zfs/issues/616 and https://github.com/zfsonlinux/zfs/pull/1655
14:55 glusterbot Title: Support .zfs/snapshot access via NFS · Issue #616 · zfsonlinux/zfs · GitHub (at github.com)
14:56 Andyy2 I normally don't use it over nfs.
14:56 coredumb Andyy2: my zfs version is patched for that
14:56 coredumb but this is for kernel ZFS
14:56 coredumb kernel NFS*
14:56 plarsen joined #gluster
14:58 Andyy2 ok.
14:58 jag3773 joined #gluster
14:58 siel joined #gluster
15:00 failshell joined #gluster
15:01 coredumb Andyy2: d??????????  ? ?    ?             ?            ? .zfs < using the native client :(
15:02 zaitcev joined #gluster
15:03 coredumb ok so what other solution do we have for snapshots ?
15:05 Andyy2 native gluster does not work too.
15:05 Andyy2 but this is to be expected.
15:06 coredumb so there's actually no way to use snapshoting underneath gluster :(
15:07 Andyy2 however, snapshots work in 6.2 official (wheezy). no need to patch anything
15:07 Andyy2 I'm accessing them over kernel nfs (no native zfs server).
15:08 coredumb native zfs server = kernel NFS
15:09 coredumb yeah but i'd like for ppl mounting the gluster mount to be able to get the snapshot data themselves
15:09 coredumb else it gets boring
15:10 Andyy2 I had issues some time ago with getting zfs to configure my nfs server, so I just exported zfs fs the normal way.
15:10 benjamin_____ joined #gluster
15:10 Andyy2 sure, it's better ppl have their data by themselves.
15:10 Andyy2 perhaps #zfsonlinux on freenode can be of help?
15:10 Andyy2 I joined a few minutes ago.
15:11 coredumb Andyy2: i'm always lurking there
15:12 coredumb aside from an app scanning the real .zfs/snapshots/ directory and copying data over the gluster native mount, i don't see how one could access the data themselves
15:12 coredumb and on distributed+replicated it's gonna get funny
15:14 Andyy2 I think that it will not be easy to get gluster to export .zfs at all. my best guess would be to export it via nfs on a dedicated share (i.e. /exports/snapshots -> /tank/data/.zfs)
15:14 Andyy2 gluster will not like it being "read only" and no attributes can be set.
15:15 Andyy2 it = .zfs dir
15:15 coredumb and it's created underneath the gluster layer
15:15 coredumb btw performances on this test seems to be acceptable
15:15 Andyy2 what test?
15:20 coredumb Andyy2: i'm testing glusterfs on 4 ZFS bricks
15:24 Andyy2 using bonnie or another benchmark?
15:31 daMaestro joined #gluster
15:38 Isaacabo joined #gluster
15:38 Isaacabo Good morning guys
15:39 Isaacabo last week i added a new brick to my configuration so i could remove another brick
15:40 Isaacabo i ran the gluster VOLUME remove-brick path_to_brick start
15:40 Isaacabo and after all weekend i can still see data on the brick
15:43 tomato joined #gluster
15:44 Andyy2 Isaacabo: what does your vol remove-brick .... status say?
15:44 coredumb Andyy2: just rsyncing some real data from another host
15:45 DV joined #gluster
15:46 tomato I'm installing Openstack with Gluster as the backend. I am not sure if I should have three nodes (/cinder, /glance, /swift) or if I should install the glance backend on either cinder or swift and either of those two on gluster. any ideas?
15:48 Andyy2 coredumb: I could get some pretty decent numbers in bonnie on a 4 node cluster. high latency though.
15:49 madphoenix joined #gluster
15:49 coredumb Andyy2: the issue here is: how to access those snapshots :D
15:49 Isaacabo completed
15:49 Isaacabo sending the info
15:50 Andyy2 coredumb: did you try exporting the snapshot directory with nfs ?
15:51 Isaacabo this is the shares info, added the 3rd brick
15:51 Isaacabo http://ur1.ca/h0qs3
15:51 glusterbot Title: #92294 Fedora Project Pastebin (at ur1.ca)
15:51 coredumb Andyy2: that's the issue in distributed + replicated
15:51 madphoenix hi all, i have a quick question about upgrades.  we currently have our gluster systems set to automatically update gluster packages via gluster-epel.  when a new minor version is installed (e.g. 3.4.3), is it necessary to restart all gluster{d,fsd} daemons and if so should the volume be brought offline first?
15:51 coredumb i don't have just one snapshot dir ...
15:51 Isaacabo and this is the status of the remove-brick http://ur1.ca/h0qsi
15:51 glusterbot Title: #92295 Fedora Project Pastebin (at ur1.ca)
15:52 Andyy2 Isaacabo: once it's completed, you need to commit the remove-brick operation. https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Shrinking.html
15:52 glusterbot Title: 10.3. Shrinking Volumes (at access.redhat.com)
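
For reference, the shrink sequence from the linked guide looks like this; the volume name "shares" and the brick store2:/data/coraid0 are inferred from the pastes in this log and may not match the real setup exactly:

    # migrate data off the brick, watch progress, then commit once status says completed
    gluster volume remove-brick shares store2:/data/coraid0 start
    gluster volume remove-brick shares store2:/data/coraid0 status
    gluster volume remove-brick shares store2:/data/coraid0 commit
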
15:53 Isaacabo but the status shows that it only moved 232.5GB of 1.5 TB
15:53 Andyy2 coredumb: I mean export the .zfs directory. If they are "many" perhaps you can script it to create the exports config for nfs, unless there are hundreds which will be an issue.
15:54 coredumb Andyy2: well there's one per brick
15:54 coredumb as you do the snapshot on your brick
15:55 jmarley joined #gluster
15:56 Andyy2 ok.. now I follow you.
15:56 Andyy2 and your users will not like to have to check multiple locations in hope to find a file.
15:56 Isaacabo ok, but since the remove-brick started on friday and ended yesterday, any new data will be migrated also?
15:57 coredumb Andyy2: indeed
16:00 Andyy2 coredumb: you might be in for an ugly hack. something like exporting the .zfs directories via nfs to a single server, and mounting them via unionfs, and re-exporting them as a single nfs share. now this would be messy.
16:01 Andyy2 if it works at all.
16:01 coredumb yeah thought about unionfs
16:01 coredumb i think i'm better off writing a webapp to get the content of the snapshot dirs for me
16:01 Andyy2 yes, would be much better.
16:02 coredumb but i wonder how that would scale on hundreds of gigs of files
16:02 coredumb ....
16:02 coredumb and i need a way to synchronize the snapshot action
16:02 Isaacabo @Andyy2 this is production data and i'm not a gluster expert, This installation was made by a guy who left. I will set up a lab and start learning gluster
16:02 coredumb it needs to be triggered exactly at the same time on all gluster bricks
16:02 coredumb ....
16:03 Isaacabo so, even if the remove-brick status only shows 232.5GB is safe to do the commit?
16:03 Andyy2 you're already using ntp, so a cron job "should" be able to get it to within a few seconds. anything tighter than that and you'll need to code something specific.
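
The cron-based approach would be something like the following on every brick server, assuming NTP-synced clocks; the dataset name is a placeholder:

    # /etc/crontab -- snapshot the brick's dataset at the top of every hour
    # (percent signs must be escaped in crontab entries)
    0 * * * * root /sbin/zfs snapshot tank/data@auto-$(date +\%Y\%m\%d\%H\%M)
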
16:04 Andyy2 Isaacabo: how much data was there on the brick. how much is left? in any case, the data will not be deleted from the disk, so you should be able to copy it back on the gluster mount in case something goes wrong.
16:04 coredumb Andyy2: true
16:06 Isaacabo http://ur1.ca/h0qw6
16:06 glusterbot Title: #92305 Fedora Project Pastebin (at ur1.ca)
16:07 Isaacabo brick 2 has 1.2 Tb used data and Brick 3 has 209G used data
16:07 Isaacabo and is almost the same data brick 2 had when i executed the remove-brick command
16:09 Andyy2 Isaacabo: ouch... I didn't think anyone used AOE disks any more. what brick are you replacing? brick 1?
16:09 Isaacabo @andyy2: is really old... brick 2
16:10 Isaacabo brick 2 = /data/coraid0
16:10 Andyy2 so.. it seems that you had 200GB of data on brick 2 and gluster moved 200GB of data away and said completed. looks about right. your total data of 1.5TB is shared across bricks. 1.2 on brick 1, 0.2 on brick 2 and the rest on others. looks right to me. what bothers you?
16:12 Isaacabo that df shows 1.2 TB used, not 200GB. All that disk is used on gluster
16:12 Andyy2 I see.
16:13 wgao joined #gluster
16:13 Andyy2 try looking in the bricks logs. you should either see errors or warnings that hint you of problems.
16:14 Isaacabo i see a lot of  0-shares-posix: setting xattrs
16:15 Andyy2 try grepping with ' E ' and ' W '; those are errors and warnings.
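
Concretely, assuming the default brick log location for a brick at /data/coraid0:

    grep ' E ' /var/log/glusterfs/bricks/data-coraid0.log   # errors
    grep ' W ' /var/log/glusterfs/bricks/data-coraid0.log   # warnings
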
16:18 Isaacabo ok, with Error: bunch of 0-shares-posix: mkdir path to file failed: File exists
16:19 Isaacabo and the setting xattrs
16:19 Isaacabo and for WArnings: 0-server: inode for the gfid is not found. anonymous
16:19 Isaacabo fd creation failed
16:21 Andyy2 you might have to check a few files and gfids, on all three bricks. make sure you understand where data is and if it's duplicated or not moved from the brick you tried to remove.
16:21 hagarth joined #gluster
16:22 Andyy2 you could add the new brick anyway to help you with moving data.
16:22 Mo__ joined #gluster
16:23 vpshastry joined #gluster
16:24 Isaacabo sorry, not following you. i added brick 3 to help with the moving data
16:25 Andyy2 ok.
16:25 Andyy2 mi fault.
16:25 Andyy2 my
16:26 Isaacabo it's ok
16:26 Isaacabo no worries
16:26 Isaacabo but there is something odd right?
16:26 Andyy2 yes. seems migration was aborted for errors. should not say "complete".
16:27 Isaacabo should i run the gluster volume remove-brick again?
16:28 Isaacabo but if there are errors, it will not do anything
16:28 Andyy2 you could try looking at the path with error and see if there is a reason before running it again.
16:29 jobewan joined #gluster
16:29 Isaacabo ok
16:31 Isaacabo one of the errors is a problem with mkdir, "mkdir of /data/coraid1/shares1/tools/lpmacosx/Uninstaller.app/Contents/Resources/az_AZ.lproj failed: File exists" but i go to the path and it is there
16:31 Isaacabo [root@store2 az_AZ.lproj]# pwd
16:32 ira_ joined #gluster
16:32 Isaacabo sorry, this is better
16:32 Isaacabo http://ur1.ca/h0r34
16:32 glusterbot Title: #92318 Fedora Project Pastebin (at ur1.ca)
16:34 Andyy2 I'm afraid of telling you to run remove-brick again, or to commit. try finding files on brick-2 which are not on brick 1 or brick 3. then you know if all was migrated.
16:37 Isaacabo oh men *sigh*
16:37 Isaacabo just to check, the files that have a T are like "sym links" on gluster right
16:38 Isaacabo really appreciate your help tho
16:38 Andyy2 'T' ?
16:38 Isaacabo yes
16:39 Isaacabo i saw that when added brick 2 and do a Rebalance
16:39 Isaacabo on the files
16:40 Andyy2 what version of gluster you're running?
16:41 Isaacabo glusterfs 3.4.2 on CentOS 6.5
16:42 Andyy2 I don't see the T set on my bricks. you have a gfid for each file and a hardlink from a .glusterfs directory to the file (see: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/)
16:42 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
16:43 Andyy2 that + the xattr is what gluster and clients use to sync content.
16:43 Joe630 joined #gluster
16:44 Isaacabo thx, reading it now
16:44 Andyy2 look for trusted.glusterfs.dht and trusted.glusterfs.afr
16:46 Isaacabo is going to take a while... we have a lots of small files
16:49 Andyy2 I'll quote some docs: "Data residing on the brick that you are removing will no longer be accessible at the Gluster mount point. Note however that only the configuration information is removed - you can continue to access the data directly from the brick, as necessary."
16:49 Andyy2 this is taken from http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
16:51 Isaacabo thanks, so the data will be still there, at least is good :D
16:53 pk joined #gluster
16:57 Isaacabo @Andyy2 i found this on the log
16:57 Isaacabo Extended attributes not supported (try remounting brick with 'user_xattr' flag)
16:58 Matthaeus joined #gluster
16:58 Andyy2 you need xattr for sure.
16:58 Isaacabo http://ur1.ca/h0rb0
16:58 glusterbot Title: #92328 Fedora Project Pastebin (at ur1.ca)
16:58 Isaacabo check my /proc/mounts
16:59 Andyy2 but I have not had to set user_xattr explicitly in my systems. just format xfs with -i size=512 and mount.
17:00 Andyy2 let me check...
17:00 Isaacabo same here
17:00 raghug joined #gluster
17:00 Andyy2 this is from a running gluster+xfs on my servers: /exports/gluster type xfs (rw,noatime,attr2,delaylog,noquota)
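
The usual brick preparation Andyy2 is referring to, with a placeholder device and mount point:

    # 512-byte inodes leave room for gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1
    mount -o noatime,inode64 /dev/sdb1 /exports/gluster
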
17:01 Isaacabo mine is like this
17:01 Isaacabo mount -onoatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k /dev/etherd/e0.0p1 /data/coraid0/
17:04 Andyy2 if you can 'getfattr -m . -d -e hex  <file>' then you should be fine.
17:07 Isaacabo this is the result --> http://ur1.ca/h0rdw
17:07 glusterbot Title: #92330 Fedora Project Pastebin (at ur1.ca)
17:13 Andyy2 I would expect the dht attribute too (http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/)
17:14 sputnik13 joined #gluster
17:14 Andyy2 perhaps that goes only on directories. but... xattr seem to be working in your case, so no worries there.
17:15 XpineX joined #gluster
17:15 Andyy2 I have to leave in few minutes. hope you make some sense of it.
17:15 Isaacabo well, will have to
17:16 Isaacabo thank you so much for you time and help
17:16 Andyy2 :)
17:18 Isaacabo just one last thing
17:19 Andyy2 sure
17:19 Isaacabo in case i have to sync all manually
17:20 Isaacabo i mean, move the data from brick 2 to brick 3 as was intended with the remove-brick operation
17:20 Isaacabo in theory a simple "cp" will do
17:20 Isaacabo and by simple cp i mean a copy operation via any means
17:21 Isaacabo snapshot, tar, restore from backup, etc
17:21 Andyy2 once brick2 is offline and out of the cluster, you should be able to copy the files as any other files, on the gluster mount and not directly on the bricks.
17:21 Andyy2 but you will have to "commit" the remove-brick operation before .
17:22 Isaacabo ok, so
17:22 Isaacabo 1.- Commit the remove-brick operation
17:22 Isaacabo 2.- Remove brick 2 from cluster
17:22 pk left #gluster
17:23 Isaacabo 3.- copy content from brick2 to gluster client
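
A sketch of step 3, copying whatever is left on the removed brick back in through a client mount while skipping gluster's internal .glusterfs directory; the volume name "shares" and host "store2" are inferred from the pastes and may differ:

    # mount the volume as a client, then copy the leftover data from the old brick
    mkdir -p /mnt/shares
    mount -t glusterfs store2:/shares /mnt/shares
    rsync -a --exclude=.glusterfs /data/coraid0/ /mnt/shares/
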
17:23 Isaacabo thanks
17:23 Andyy2 the commit will remove the brick. your clients will not see it any more.
17:23 Isaacabo ok.
17:24 Andyy2 then you will know if data was migrated or not.
17:25 Isaacabo yes, the hard way lol
17:25 Andyy2 keep backups, if this is an option.
17:26 Isaacabo again, really appreciate your time. We think gluster is really cool but i think it is badly implemented in our environment
17:26 Isaacabo yes, backup for sure
17:26 vpshastry joined #gluster
17:28 Isaacabo take care @Andyy2
17:28 Isaacabo cya guys
17:32 Andyy2 bye
17:36 joshin joined #gluster
17:50 vpshastry joined #gluster
18:03 JoeJulian madphoenix: regarding updating, the rpm script is /supposed/ to restart glusterd and all the bricks on the server being upgraded. There's a bug, however, that prevents that from happening so the upgrade will actually leave glusterd not-running.
18:07 JoeJulian Andyy2: re: ping-timeout. I haven't quantified the effect in a long while, which probably means it's much better than it used to be. It used to be that I would see degraded performance for five minutes when I would return a server to the pool.
18:08 doekia @JoeJulian, it happens that the xlator protocol has changed again... I have stopped the other nodes until they get upgraded, and managed to upgrade almost automatically
18:08 JoeJulian Andyy2: all I can suggest is to test it. Monitor server loads and return a heavily used previously removed server to service and see what happens. Then report your discovery. :D
18:09 chirino joined #gluster
18:12 JoeJulian doekia: It's a bug in postinstall. After running "glusterd --xlator-option *.upgrade=on -N" it needs to run glusterd again if glusterd was running when the upgrade happened. (line 37 of the postinstall scriptlet)
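
Until that bug is fixed, the manual workaround implied above is simply to start glusterd again after the package upgrade (CentOS 6 style init scripts assumed):

    # the postinstall scriptlet leaves glusterd stopped; bring it back by hand
    service glusterd status || service glusterd start
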
18:14 sputnik13 joined #gluster
18:15 doekia @JoeJulian: I do not argue with that, however it seems like a chicken-and-egg problem: if the protocol changes, how can a server contact the other bricks on update/restart?
18:21 sputnik13 joined #gluster
18:26 rossi_ joined #gluster
18:27 rossi_ joined #gluster
18:32 JoeJulian doekia: It has a versioned RPC since 3.3 so if there are new RPC calls, it just uses the old ones until everything is upgraded.
18:33 doekia It did not the last time I upgraded my servers and I tend to remember it was on 3.3
18:39 JoeJulian doekia: Not sure what you did differently then. I've tested it myself and was surprised at how well it worked.
18:49 ricky-ticky1 joined #gluster
18:50 SteveCooling joined #gluster
18:52 doekia I did not consider that it didn't work ... my main concern is that the bricks do not get corrupted ... but it may well be a mistake of mine ...
18:54 andreask joined #gluster
19:06 jbrooks joined #gluster
19:08 mfs joined #gluster
19:13 JoeJulian doekia: Ah, no corruption issues. The use of the extended attributes hasn't really changed in a long time and the files are, of course, whole files so they're pretty safe.
19:17 doekia Which is basically the goal ... having to do some educated gymnastics for an upgrade from time to time doesn't bother me that much ... I was just responding to your message thinking you were stuck during an upgrade ... this seems not to be the case
19:18 zerick joined #gluster
19:20 JoeJulian Thanks, doekia. I was just responding to the scrollback. I always try to make sure everyone who's still in the channel eventually gets answered.
19:22 doekia ;-)
19:51 semiosis OHAI new releases in the 3.4 & 3.5QA ,,(ppa)s
19:51 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
20:03 aurigus_ joined #gluster
20:04 ninkotech_ joined #gluster
20:05 Dgamax joined #gluster
20:05 l0uis_ joined #gluster
20:05 johnmark_ joined #gluster
20:05 the-me_ joined #gluster
20:05 sulky_ joined #gluster
20:05 Amanda__ joined #gluster
20:09 delhage_ joined #gluster
20:09 brosner_ joined #gluster
20:13 lpabon joined #gluster
20:15 Ark joined #gluster
20:16 refrainblue joined #gluster
20:19 diegows joined #gluster
20:34 sulky joined #gluster
20:42 chirino_m joined #gluster
20:43 sputnik13 joined #gluster
20:45 tdasilva left #gluster
20:46 madphoenix @JoeJulian: thanks for the tip on rpm updates being broken.  Has this been fixed?  I'm not even sure where bugs against gluster-epel packages are tracked
20:49 ira_ joined #gluster
21:08 JoeJulian madphoenix: bug 1084432
21:08 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1084432 unspecified, unspecified, ---, rwheeler, NEW , Service fails to restart after 3.4.3 update
21:22 sputnik13 joined #gluster
21:34 gmcwhistler joined #gluster
22:03 fidevo joined #gluster
22:05 sputnik13 joined #gluster
22:09 marcoceppi joined #gluster
22:09 marcoceppi joined #gluster
22:17 askb_ joined #gluster
22:19 andreask joined #gluster
22:19 sputnik13 joined #gluster
22:20 Durzo JoeJulian, are you around?
22:21 JoeJulian always
22:22 JoeJulian ok... maybe not quite always... but close
22:22 Durzo is there any gluster volume type that behaves like raid 5 when it comes to volume size? like 3 bricks of 100GB to yield a 200GB volume?
22:23 Durzo and/or is it possible to dynamically grow a volume's size without recreating it?
22:23 Durzo if a time comes where we can add more bricks to a volume
22:23 JoeJulian no
22:24 Durzo the docs for distributed [replicated|striped] are not very clear on that
22:24 chirino joined #gluster
22:24 Durzo there's a mailing list post that says a volume type that behaves like raid5 is 'coming soon' but this was back in gluster 3.2
22:25 JoeJulian http://www.gluster.org/community/documentation/index.php/Planning35#Features_beyond_3.5.0
22:25 glusterbot Title: Planning35 - GlusterDocumentation (at www.gluster.org)
22:25 JoeJulian See "Features/disperse"
22:29 jordan_ joined #gluster
22:30 Durzo joined #gluster
22:30 Durzo net dropped... :/
22:30 Durzo so disperse...
22:34 plarsen joined #gluster
22:35 Durzo https://forge.gluster.org/disperse/ida/commit/d71d9bd0f2a276dece143647cfa8a06c1a1f4b3c#comment_78 heh.. spam on a code commit comment
22:35 glusterbot Title: Commit in ida in Dispersed Volume - Gluster Community Forge (at forge.gluster.org)
22:43 Durzo hmmm.. i wonder how gluster would perform on top of something like DRBD or Ceph (purely as a block device) in order to achieve the raid5/6 functionality
22:43 Durzo anyone got any experience with that?
22:44 semiosis joined #gluster
22:46 Durzo morning semiosis
22:46 semiosis greetings
22:46 Durzo maybe you know
22:47 Durzo im trying to achieve larger volume sizes within gluster either by growing a volume or achieving a raid5 functionality of N-1.. i know gluster itself can't do this but has anyone played with a replicated block device like DRBD or Ceph underneath a gluster brick?
22:48 JoeJulian Durzo: I use GlusterFS because of how badly drbd trashed my data multiple times.
22:48 Durzo as do we
22:48 semiosis Durzo: sounds complicated.  i like simplicity
22:48 Durzo we lost 4 machines in a data center due to drbd :/
22:48 yosafbridge joined #gluster
22:48 semiosis Durzo: i've heard of people using zfs under gluster
22:48 Durzo having said that, gluster 3.2 wasn't much better...
22:48 Durzo its slowly getting better though!
22:49 Durzo hmmmm zfs
22:49 shyam joined #gluster
22:49 Durzo i guess you would still run into trouble because you cant resize a brick once its been created...
22:49 * JoeJulian never had to pull stripes of data off different drives and journal partitions to put a file back together when using glusterfs.
22:50 semiosis afk
22:50 chirino joined #gluster
22:56 Durzo just to be sure, there is redundancy in a striped volume is there? losing a single node will destroy all data?
22:56 Durzo s/there is/there is no/
22:56 glusterbot What Durzo meant to say was: just to be sure, there is no redundancy in a striped volume is there? losing a single node will destroy all data?
22:56 JoeJulian Durzo: correct
22:57 Durzo so i would want a distributed striped, yes ?
22:59 Durzo hmmm. im thinking its not possible to create a distributed striped volume over 2 nodes is it?
23:03 JoeJulian Durzo: Only replication offers redundancy.
23:04 JoeJulian Durzo: Oh... and ,,(stripe)
23:04 glusterbot Durzo: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
23:04 JoeJulian "You keep using that word. I do not think it means what you think it means."
23:05 sputnik13 joined #gluster
23:05 Durzo whats the minimum brick count i need for stripe + replicate?
23:05 JoeJulian 4
23:05 Durzo and are there any docs for it? i can only find volume info for gluster 3.2 :/
23:05 JoeJulian Why the heck would you want stripe anyway?
23:05 Durzo KVM server
23:05 JoeJulian Why the heck would you want stripe anyway?
23:05 Durzo images of 20GB and larger
23:06 Durzo would you recommend replicate for that?
23:06 JoeJulian Are your bricks smaller than 20Gb?
23:06 Durzo no
23:06 JoeJulian Then you don't want stripe.
23:06 Durzo the bricks can be a couple of TB if required
23:06 Durzo i was hoping for the increased read speed of random IO from stripe
23:06 JoeJulian No such thing.
23:07 JoeJulian Read that blog article.
23:07 Durzo yes, it says striped replicate "This should still only be used with over-brick-sized files, or large files with random i/o"
23:08 Durzo kvm images are large files with random i/o
23:08 JoeJulian images aren't as random as you might think.
23:09 JoeJulian But test it.
23:09 JoeJulian See if one gives you better performance than another. I'd be happy to amend my article.
23:09 sputnik13 joined #gluster
23:09 JoeJulian btw... you can have more bricks than servers.
23:10 JoeJulian Just partition things up.
23:12 Durzo so do you think a standard replicate volume will work fine then?
23:13 JoeJulian That's what I use.
23:14 Durzo how many vms?
23:14 JoeJulian Just 30.
23:14 JoeJulian I fully admit I have a very small and light use system.
23:15 gdubreui joined #gluster
23:15 chirino_m joined #gluster
23:15 Durzo i have quite a lot of IO.. there are 2 virtual file store servers there
23:19 JoeJulian ... I will, however, say for myself that I've been doing this with people for a long time and the only ones that really have needed and used stripe are certain backup software users.
23:20 JoeJulian You're not getting parallel operations. Your network device is still a serial i/o channel.
23:21 sputnik13 joined #gluster
23:22 Durzo i have 2 gbit bonded nics between the servers on a 10gbe switch
23:22 Durzo and SSD's in the brick nodes
23:24 JoeJulian So you don't have any seek time to worry about. 100 bonded nics still give you a single io channel. I really can't picture where any performance increases would come from, and can imagine a few possible decreases. But prove me wrong. I like it. :D
23:27 Durzo well i only have 2 nodes so a replicate + stripe is out of the question
23:27 Durzo i'll go with your advice and use just plain replicate
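
A minimal sketch of the plain two-node replicated volume being settled on here; host names and brick paths are placeholders:

    # two-way replication across two servers
    gluster volume create vmstore replica 2 \
        server1:/bricks/vmstore server2:/bricks/vmstore
    gluster volume start vmstore
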
23:27 Durzo are there any volume options you can recommend tweaking for large files ?
23:28 andrewklau joined #gluster
23:40 Slash joined #gluster
23:43 JoeJulian Durzo: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
23:43 glusterbot Title: Chapter 3. Managing Virtual Machine Images on Red Hat Storage Servers (at access.redhat.com)
23:43 JoeJulian Those.
23:44 Durzo thanks
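
The linked guide boils down to applying a virtualization-oriented option set much like the one Andyy2 pasted earlier in the day. Where the "virt" group file ships with the packages (an assumption for upstream builds), it can be applied in one shot, otherwise option by option; the volume name is a placeholder:

    # one-shot profile, if /var/lib/glusterd/groups/virt exists on the servers
    gluster volume set vmstore group virt

    # otherwise, apply the individual options, e.g.:
    gluster volume set vmstore network.remote-dio enable
    gluster volume set vmstore cluster.eager-lock enable
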
23:49 cyberbootje joined #gluster
23:54 sputnik13 joined #gluster
23:58 sputnik13 joined #gluster
