
IRC log for #gluster, 2012-12-23


All times shown according to UTC.

Time Nick Message
00:05 nightwalk joined #gluster
00:14 eightyeight joined #gluster
01:09 _br_ joined #gluster
01:13 _br_ joined #gluster
01:57 dstywho joined #gluster
02:56 jabrcx I would like to create a new, single-brick gluster volume using an already populated xfs filesystem that used to be part of a different (replicated) gluster volume (now deleted).  Anyone know if the left-over xattrs from the former config will cause a problem?  TIA
02:58 semiosis probably will be fine except for the thing about path or a prefix of it is already part of a volume
02:58 glusterbot semiosis: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
02:58 JoeJulian jabrcx: They /shouldn't/, though if the volume name doesn't match, they could cause the inode to become large enough to overflow the additional attributes into another one.
02:58 JoeJulian ... which would just cause inefficiencies.
02:59 jabrcx hmm, so I swear I did search around a lot before asking, but I just found this: http://community.gluster.org/q/how-do-i-reuse-a-brick-after-deleting-the-volume-it-was-formerly-part-of/
02:59 glusterbot <http://goo.gl/HTfdm> (at community.gluster.org)
03:00 semiosis there really should be a --force option to override that silly thing
03:01 semiosis maybe
03:01 JoeJulian Isn't there a feature request for that yet?
03:01 JoeJulian I like the " --yes-i-know-what-im-doing" option in dmtools.
03:01 JoeJulian mdtools
03:02 JoeJulian dmtools are something completely different...
03:04 jabrcx semiosis: thanks for the links, missed that at first, reading up now...
03:05 nightwalk joined #gluster
03:05 semiosis yw
03:08 jabrcx sounds like I will only have to run the setfattr -x / rm .glusterfs stuff on the top-level directory, and not recurse the whole filesystem?
03:09 semiosis i think so
03:09 semiosis possibly parents of it... from the sound of it, but probably not children within the brick dir
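
A minimal sketch of that top-level cleanup, assuming a hypothetical brick path of /data/brick1 (run against the brick directory on the server, not a client mount):

    # drop the volume-id and gfid xattrs left over from the deleted volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # remove the .glusterfs metadata tree (full of hard links)
    rm -rf /data/brick1/.glusterfs
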
03:10 nhm man, you guys are working too?
03:11 semiosis playing with my new laptop at home, setting up my kde workspace & apps
03:11 semiosis probably not what you wanted to hear if you're working tho, sorry
03:11 nhm semiosis: naw, I'm actually working because I want to be. :)
03:12 semiosis oh good
03:12 nhm semiosis: I got a generator written that pumps out configuration files so I can do parametric sweeps over different variables to see how it affects performance.
03:12 semiosis man windows 8 is a pain.  i thought i could just back it up & restore it after i installed an ssd, but that plan failed epically
03:13 semiosis no dual boot for me.. oh well.  kubuntu <3
03:13 nhm semiosis: I haven't run windows since xp, I'm very behind the times.
03:13 semiosis yeah me neither but i figured since it came with the laptop i would try to keep it around... idk what i was thinking!
03:14 JoeJulian I have a windows 7 dual boot for the sole purpose of running Firefall occasionally.
03:14 semiosis kubuntu is the first os that i actually *enjoy* using, never going back!!!
03:14 nhm good to try new things, but better to acknowledge when something is crap and move on. ;)
03:15 semiosis i'd play with resolume if i had windows, thats probably it though
03:15 nhm I run in gnome fallback mode with a theme I ported from gnome2 just because I was angry at gnome3.
03:16 semiosis yeah gnome made me angry too
03:17 semiosis that day i tried using it ;)
03:17 bala1 joined #gluster
03:22 semiosis nhm: oh re: generator, thats awesome.  looking forward to hearing about your test results
03:24 semiosis gluster.org
03:25 semiosis oops, meant that for the search box.  this new keyboard is going to get me in trouble soon enough
03:27 semiosis _Scotty: re "<_Scotty> i've already posted the gluster + zfs article on gluster.org." could you share the link here?
03:38 jabrcx semiosis and JoeJulian: thanks so much.  This channel is awesome.  I believe this reuse of the brick will work, just the rm -fr .glusterfs will take a *long* time (I poked around and noticed all the hard links, so figured just moving it sideways was not a good idea).  Have fun with your new laptop.
03:41 semiosis glad to hear it & thx :)
05:57 __Bryan__ joined #gluster
06:26 mooperd joined #gluster
06:53 badone joined #gluster
07:56 milos_ joined #gluster
08:18 Kins joined #gluster
08:55 tjikkun joined #gluster
08:55 tjikkun joined #gluster
08:59 duerF joined #gluster
09:10 raven-np joined #gluster
09:11 isomorphic joined #gluster
09:14 sunus joined #gluster
09:45 _br_ joined #gluster
09:45 _br_ joined #gluster
09:47 _br_ joined #gluster
09:52 paulc2 joined #gluster
10:08 yosafbridge joined #gluster
10:40 sunus joined #gluster
10:44 sunus1 joined #gluster
10:54 sunus joined #gluster
10:57 bala joined #gluster
10:57 paulc2 joined #gluster
11:17 FyreFoX semiosis: any chance this patch https://bugzilla.redhat.com/show_bug.cgi?id=887098 could get added into the next 3.3.1 debs?
11:17 glusterbot <http://goo.gl/QjeMP> (at bugzilla.redhat.com)
11:17 glusterbot Bug 887098: urgent, high, ---, vshastry, ASSIGNED , gluster mount crashes
12:43 sunus joined #gluster
12:55 duerF joined #gluster
13:24 mooperd joined #gluster
14:03 paulc2 joined #gluster
14:26 _Scotty semiosis: http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
14:26 glusterbot <http://goo.gl/BG4Bv> (at www.gluster.org)
14:28 mnaser joined #gluster
14:34 _Scotty semiosis: and, fwiw, i just finished upgrading to zfsonlinux rc13 and have yet to experience the same file deletion issue (where ZFS consumes excessive CPU deleting many files containing xattrs)
14:34 _Scotty semiosis: I think rc13 fixed it compared to rc12.  i'm still testing just to make sure.
14:39 robo joined #gluster
14:54 _Scotty semiosis: ah ha, rc13 claims "Improved performance when unlinking files with xattrs" buried in the detailed changelog.  Eureka!
15:28 raven-np joined #gluster
15:50 sunus joined #gluster
16:09 joeto joined #gluster
16:09 cicero joined #gluster
16:13 _Scotty I wonder if there is a way to increase or implement client-side caching in glusterfs.  Uncompressing the linux 2.6.39.4 kernel source and doing an ls takes 4 seconds. Going to the drivers subdirectory and doing an ls takes 20 seconds.  Conversely, on the native zfs filesystem it takes 0.103 and 0.478 seconds, respectively.
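
For context, the client-side caching knobs gluster 3.3 exposes look roughly like this (a sketch; the volume name is a placeholder, and whether they help directory listings like these is exactly the open question here):

    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.quick-read on
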
16:20 samppah joined #gluster
16:41 theguidry joined #gluster
17:02 theguidry Hi folks, I'm having some trouble with a gluster 3.3 volume.  Out of the blue, I'm seeing very high CPU and this message constantly in my logs: [2012-12-23 11:59:51.623094] E [afr-self-heal-metadata.c:472:afr_sh_metadata_fix] 0-theguidrys-replicate-3: Unable to self-heal permissions/ownership of '/' (possible split-brain). Please fix the file on all backend volumes
17:03 theguidry I have had some split-brain issues before and was able to fix them, but this one is different in that it's complaining that the root is split-brain
17:03 theguidry Any troubleshooting advice appreciated.  As far as I can tell, the ownership and perms are all fine on the underlying bricks
17:07 tryggvil joined #gluster
17:38 semiosis theguidry: in this case i think '/' means the brick's top level dir, if your brick is server:/some/path then it means it's unable to heal /some/path
17:39 theguidry semiosis: Thanks.  Any idea what to look for to troubleshoot?  I've got the whole cluster stopped and offline at this point.
17:40 semiosis check the ,,(extended attributes) of *all* bricks' top level dirs
17:40 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
17:40 semiosis ideally the trusted.afr.client* are all 00000, meaning healthy/in-sync
17:41 semiosis if more than one have non-zero attrs that means split brain iirc (been a while since i've done this :)
17:43 theguidry I'm seeing several values:
17:43 theguidry trusted.afr.theguidrys-client-0=0x000000000000000100000000
17:43 theguidry trusted.afr.theguidrys-client-1=0x000000000000000000000000
17:43 theguidry trusted.afr.theguidrys-client-6=0x000000000000000f00000000
17:43 theguidry trusted.afr.theguidrys-client-7=0x000000000000000000000000
17:43 theguidry trusted.afr.theguidrys-client-2=0x000000000000000000000000
17:43 theguidry trusted.afr.theguidrys-client-3=0x000000000000000300000000
17:43 theguidry trusted.afr.theguidrys-client-4=0x000000000000000000000000
17:43 theguidry trusted.afr.theguidrys-client-5=0x000000000000000000000000
17:43 theguidry trusted.afr.theguidrys-client-2=0x000000000000000000000000
17:43 theguidry was kicked by glusterbot: message flood detected
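
Decoding the pasted values above (a sketch; /data/brick1 stands in for the real brick path):

    # dump all xattrs on the brick's top-level dir, hex-encoded
    getfattr -m . -d -e hex /data/brick1
    # each trusted.afr.<volume>-client-N value is three 32-bit counters:
    #   data | metadata | entry  pending operations
    # e.g. 0x000000000000000100000000 = 1 pending metadata op, which
    # matches the permissions/ownership self-heal error in the log
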
17:44 semiosis @later tell theguidry please use pastie.org or similar for multiline pastes
17:44 glusterbot semiosis: The operation succeeded.
17:44 theguidry joined #gluster
17:45 theguidry sorry for the spam
17:45 semiosis welcome back
17:45 semiosis i've never seen glusterbot do that before
17:45 theguidry Thanks, semiosis
17:46 theguidry So how do I reset this state?  This is a low-traffic personal cluster and I'm pretty sure nothing new has gone into it over the last couple days (when this all started)
17:46 theguidry I'm comfortable with resetting the state and preserving the data
17:47 semiosis first you need to be absolutely sure that these dirs really do have exactly the same perms, owners, and contents
17:47 theguidry I'm certain of perms/owners
17:48 semiosis once you are then you can delete the nonzero xattrs using setfattr -x
17:48 theguidry I'm confident of contents (not 100% sure)
17:48 theguidry okay, so i have all of my gluster processes everywhere stopped right now
17:48 semiosis all you need to check is immediate contents, not contents of subdirs
17:48 theguidry okay, that's easy, since the root just has 4 directories
17:49 theguidry semiosis: should I remove all of the trusted.afr.* attributes from all bricks?  Or just the nonzero ones?
17:49 semiosis just the nonzeros i think, like i said, been a while
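
A sketch of that reset using the nonzero attribute names from the paste above (the brick path is a placeholder; run it on whichever brick holds each nonzero attribute, and only after confirming perms, owners, and immediate contents match):

    setfattr -x trusted.afr.theguidrys-client-0 /data/brick1
    setfattr -x trusted.afr.theguidrys-client-3 /data/brick1
    setfattr -x trusted.afr.theguidrys-client-6 /data/brick1
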
17:50 theguidry no worries....it's funny, I think you and I have had a similar conversation 2 years ago :)
17:51 semiosis hmm, perhaps
18:02 theguidry semiosis, that didn't help, still getting the high CPU and the log messages about '/' being split-brain
18:03 theguidry an interesting thing: when I mount from my client and try to ls the root, it hangs and starts spitting out that message; when i kill all the gluster processes everywhere, my client dumps out a directory listing that shows a couple of the top-level directories duplicated thousands of times
18:06 semiosis that could be related, but it also sounds a lot like the ,,(ext4) bug which could be a separate issue
18:06 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
18:07 semiosis using latest centos?  ext4 bricks?
18:07 semiosis recent kernel upgrade?
18:08 theguidry I'm on Ubuntu 12.10
18:08 theguidry using ext4 bricks :(
18:08 semiosis oh ok, right
18:08 semiosis ubu 12.10 has a kernel >= 3.3.0 so you probably got bit by that bug
18:08 theguidry 64-bit, too
18:08 semiosis ubu precise had a safely old kernel, as did centos, but RH backported the "new feature" into older kernels
18:08 semiosis bummer
18:09 theguidry ouch
18:09 semiosis best solution is to replace your ext4 bricks with xfs bricks
18:09 semiosis recommended tweak is to use -i size-512 when formatting xfs
18:10 semiosis s/size-/size=/
18:10 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
18:10 semiosis s/size-/size=/
18:10 glusterbot What semiosis meant to say was: recommended tweak is to use -i size=512 when formatting xfs
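
A sketch of that format step; the device and mount point are placeholders:

    # 512-byte inodes leave room for gluster's xattrs inside the inode itself
    mkfs.xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /data/brick1
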
18:10 duerF joined #gluster
18:10 semiosis glusterbot: you lagging!  ,,(meh)
18:10 theguidry okay, so i don't have enough spare capacity anywhere to store all of this data while I rebuild the volume from scratch; i wonder if i could do it one brick at a time
18:10 theguidry :)
18:11 semiosis you can but glusterfs isnt going to work until you're done
18:11 theguidry another question: would it help to nuke every trusted.afr.*client xattr on every file?
18:12 theguidry :) it's not working now!
18:12 semiosis at this point the much bigger problem is the ext4 bug, anything else should wait until you're on xfs
18:13 semiosis did you upgrade ubuntu from precise to quantal?
18:13 semiosis and gluster started failing afterward?
18:13 semiosis that would explain a lot
18:13 semiosis just curious tho, doesnt change the problem or solution
18:13 theguidry okay, i'll look into that.  one last question, strategy-wise: if i can move the data off, make a new xfs on a brick, and move the files back on, how do i preserve the xattrs through those moves
18:14 theguidry upgraded from precise to quantal, and later upgraded to gluster 3.3 (from 3.1), with some scary moments in the middle there; everything has been humming along nicely for a couple months that way
18:14 theguidry this just happened out of the blue a day or two ago
18:14 semiosis rsync -aX -- the capital X preserves xattrs -- from old brick to new brick.
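
A sketch of that copy (paths are placeholders; run as root so the trusted.* xattrs come across, and note the trailing slashes copy the contents rather than the directory itself):

    # -a preserves owners/perms/times, -X preserves extended attributes
    rsync -aX /data/old-ext4-brick/ /data/new-xfs-brick/
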
18:15 theguidry cool
18:15 theguidry okay, i guess i've got my work cut out for me
18:15 theguidry thanks again, semiosis
18:15 semiosis interesting that you were able to run ok on quantal for a while
18:15 semiosis yw
18:15 semiosis bbl
18:15 * semiosis &
18:16 semiosis glusterbot: reconnect
18:16 glusterbot semiosis: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
18:16 semiosis glusterbot: meh
18:16 glusterbot semiosis: I'm not happy about it either
18:17 _br_ joined #gluster
18:21 _br_ joined #gluster
18:39 JoeJulian You don't?
18:40 JoeJulian @reconnect
18:40 glusterbot joined #gluster
18:41 tryggvil joined #gluster
18:53 _Scotty glusterbot: feeling better?
18:54 JoeJulian @meh
18:54 glusterbot JoeJulian: I'm not happy about it either
18:54 _Scotty Seems less laggy. :)
19:06 semiosis JoeJulian: i could, but glusterbot seemed less laggy just giving me that access denied message
19:07 JoeJulian Ah, ok
19:07 semiosis _Scotty: do you have a link to your zfs article on gluster.org?
19:07 semiosis i didnt see it on the web site
19:08 _Scotty semiosis: I do. It's under Documentation -> Administrators section
19:08 _Scotty "HOWTO Guides"
19:08 _Scotty semiosis: or if you want a direct link, http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
19:08 glusterbot <http://goo.gl/BG4Bv> (at www.gluster.org)
19:08 semiosis nice!  thanks
19:09 JoeJulian Hrm... irclog's search looks like it's broken.
19:09 semiosis johnmark: ^^^
19:09 semiosis johnmark: the zfs article not the irclog
19:10 semiosis _Scotty: i'm going to tweet that link... do you have a twitter handle you'd like me to shout out with it?
19:10 _Scotty semiosis: I don't use twitter.  :/
19:11 _Scotty semiosis: If I did, it'd be jhuscott.
19:11 semiosis ok
19:13 semiosis https://twitter.com/pragmaticism/status/282926796662505472
19:13 glusterbot <http://goo.gl/ozIFK> (at twitter.com)
19:14 _Scotty semiosis: sweet! thanks
19:14 semiosis yw, but you deserve the thanks!
19:50 JoeJulian <rant>No, twitter, I don't want a new header photo. You're not facebook, stop trying to emulate them! What's next? Timelines? </rant>
20:03 juhaj My problem of input/output error from yesterday turned out to be caused by "mount -o acl".
20:03 juhaj Not that it matters much as the ACL support is broken anyway
20:04 juhaj (I.e. it does not behave as documented: default ACLs are not honoured and therefore using ACLs to create shared directories is impossible)
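
For reference, the pattern juhaj describes (a sketch with placeholder names, not a claim that it works on this build):

    # mount with POSIX ACL support, then set a default ACL so new files
    # in the shared directory inherit group rwx
    mount -t glusterfs -o acl server1:/myvol /mnt/gluster
    setfacl -m d:g:team:rwx /mnt/gluster/shared
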
20:05 semiosis JoeJulian: hah funny you should mention it, i just made a spiffy new header image for my twitters yesterday & quite fond of it
20:05 semiosis s/my/mah/
20:05 glusterbot What semiosis meant to say was: JoeJulian: hah funny you should mention it, i just made a spiffy new header image for mah twitters yesterday & quite fond of it
20:10 JoeJulian Maybe it's jut 'cause I hate facebook that much.
20:10 semiosis ditto
20:15 _Scotty it's been regularly moved and seconded we hate fb... all those in favor? lol
20:15 semiosis aye ought to keep quiet
20:20 _Scotty lol
20:21 _Scotty Ya know, I've noticed gluster 3.3.1 fuse client is now faster than nfs for small file accesses.
20:21 _Scotty doing a find across 30k small files isn't great performance-wise with two storage servers, but it's certainly acceptable considering.
20:22 _Scotty i'll be curious to see what the performance looks like when I bring more storage servers online.
20:23 _Scotty i'm not sure if performance will stay the same or will increase.
20:37 H__ _Scotty: do you have any idea how the find across 30k small files speed compares to 3.2.5?
20:50 _Scotty H__: averaged across multiple runs on the linux 2.6 kernel source tarball, 30% faster in my environment.
20:51 _Scotty H__: i'd still ask end users to use tarballs in lieu of making 30k individual small files. it's just bad coding. imho.
20:52 _Scotty H__: I still need to tweak the performance settings in gluster, tho
21:08 _Scotty as in, http://www.slideshare.net/Gluster/gluster-for-geeks-performance-tuning-tips-tricks
21:08 glusterbot <http://goo.gl/BPP0w> (at www.slideshare.net)
22:20 bauruine joined #gluster
23:21 y4m4 joined #gluster
23:30 _Scotty Added e-mail daily status reports to http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
23:30 glusterbot <http://goo.gl/BG4Bv> (at www.gluster.org)
23:30 mooperd left #gluster
23:36 robo joined #gluster
23:47 y4m4 joined #gluster
23:55 _Scotty c'mon gluster community... :P
23:55 _Scotty "The following text is what triggered our spam filter: http://akismet.com blacklist error  "
23:55 glusterbot Title: Comment spam prevention for your blog - Akismet (at akismet.com)
23:55 _Scotty I can't seem to save my wiki update on the gluster community site.  BOOOOO.
