
IRC log for #gluster, 2012-12-21


All times shown according to UTC.

Time Nick Message
00:00 bronaugh ok.
00:00 semiosis bronaugh: yeah you can add ro to the mount opts like any other fs
00:00 semiosis it works
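For reference, a read-only native mount is just the usual mount with ro added; volume and mount point names here are made up:

    mount -t glusterfs -o ro server1:/myvol /mnt/myvol-ro
    # or via /etc/fstab:
    # server1:/myvol  /mnt/myvol-ro  glusterfs  ro,_netdev  0 0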
00:00 Ryan_Lane ah. whoops. I thought you were asking about sharing
00:00 bronaugh nope. I'm just examining what'll happen if we try to use it the way I'm thinking of using it.
00:01 bronaugh ie: mounting read-only filesystem snapshots of glusterfs bricks, then assembling (for lack of a better word) a glusterfs out of the snapshots.
00:02 semiosis bronaugh: that will probably work
00:02 semiosis bronaugh: i wouldn't be surprised if it produced lots of scary log messages in the brick & client logs, but i think it would work
00:03 bronaugh because it'd think it's creating a glusterfs out of bricks that are already in use?
00:03 semiosis oh right, that
00:03 semiosis yeah that might be a problem
00:03 semiosis that's new in 3.3
00:03 semiosis and i'm not thrilled about it
00:04 bronaugh so how does it identify them? files in .gluster?
00:04 semiosis brick is already part of a volume
00:04 semiosis oh come on glusterbot
00:05 semiosis or a prefix of it is already part of a volume
00:05 glusterbot semiosis: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
00:05 semiosis ,,(awesome)
00:05 glusterbot ohhh yeeaah
00:07 _Scotty Gotta love the glusterbot.
00:07 bronaugh haha
00:08 bronaugh what the hell is setfattr part of?
00:08 semiosis attr package on debian/ubuntu
00:08 _Scotty It's installed by default on RHEL or CentOS
00:09 bronaugh semiosis: installed that; still not setfattr.
00:09 bronaugh no*
00:09 bronaugh er. nm.
00:09 bronaugh PEBKAC. again.
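For reference, the fix glusterbot links to for the "already part of a volume" error generally boils down to clearing the volume xattrs from the brick root; a sketch, assuming /data/brick1 as the brick path and that you really do want to reuse it:

    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
    service glusterd restart   # pick up the cleaned brick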
00:11 Ryan_Lane left #gluster
00:14 _Scotty i have to say, performance on glusterfs on top of zfsonlinux is pretty awesome. esp. with zfs compression enabled.
00:14 semiosis cool!
00:14 _Scotty even with small (<=4k) files
00:15 semiosis _Scotty: does self-heal work right?
00:15 semiosis like if you had writes in progress when a server rebooted... would those files heal ok?
00:15 semiosis (once the server returns)
00:15 _Scotty semiosis: sure does. no issues. ill be running failover tests shortly.
00:16 _Scotty semiosis: i run a, ah, non-standard zfs config though. i did a lot of tweaks for my particular environment. specifically, i have a large ups, which these servers receive shutdown commands from. so i can take liberties with lazy writes to disk. in any case though, we would only lose up to 5 seconds of writes.
00:17 semiosis if you keep notes/comments on your experience we'd love to publish them or syndicate your own posting if you have your own blog
00:17 _Scotty semiosis: do you work for the gluster team?
00:17 semiosis there's been a lot of interest in zfs over the last year
00:17 semiosis i'm a ,,(volunteer) but i know people :)
00:17 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
00:17 _Scotty wow.
00:17 _Scotty lol
00:17 _Scotty glusterbot ftw
00:18 semiosis yeah
00:18 _Scotty semiosis: i'm creating complete step-by-step instructions as i go, starting from immediately following a centos 6.3 base server install. i can hand those off when im done
00:18 _Scotty it walks you right through setting up zfs on linux, tunables, then to glusterfs
00:19 * semiosis falls off chair
00:19 _Scotty if my calcs are correct, with the completed 18 node system we are looking at >22k IOPS and 23GB/s from 18 1U storage nodes.
00:19 semiosis that... is... awesome
00:20 semiosis ethernet or infiniband?
00:20 _Scotty 10GbE
00:21 _Scotty somewhere around 350TB of storage.
00:21 _Scotty all RAID10
00:23 _Scotty with drives getting larger there is no way id seriously consider running any kind of RAIDZ2/RAID6 setup
00:24 _Scotty rebuild time on 2TB drives with RAIDZ2 is 3 days. 3 days!
00:24 _Scotty zfs raid10 is 6 hours.
00:25 _Scotty oh.  im writing a slightly crufty script to deal with restoring from snapshots.
00:26 _Scotty so i can take zfs snapshots on each brick, but i can't present it to the end user bc the snapshot directories are read-only, gluster can't set xattrs so therefore it's a nonstarter
00:27 _Scotty so my script will take the full path to the file to be restored, locate it on the appropriate brick, and restore it
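A rough sketch of the sort of restore script being described; the layout here is entirely hypothetical (ZFS-backed bricks under /data/brick*, volume mounted at /mnt/myvol):

    #!/bin/bash
    # usage: restore.sh <snapshot-name> <path-relative-to-volume-root>
    snap="$1"; relpath="$2"
    for brick in /data/brick*; do
        src="$brick/.zfs/snapshot/$snap/$relpath"
        if [ -e "$src" ]; then
            # copy back through the client mount so gluster handles the gfid/xattr bookkeeping
            cp -p "$src" "/mnt/myvol/$relpath"
            echo "restored $relpath from $brick snapshot $snap"
            break
        fi
    done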
00:38 weplsjmas joined #gluster
00:50 nightwalk joined #gluster
01:03 ngoswami joined #gluster
01:05 bronaugh _Scotty: 3 days to rebuild how large a raidz3?
01:05 bronaugh er raidz2
01:06 bronaugh because yeah, 30TB mdraid RAID6 here is about 12-14 hours.
01:06 bronaugh maybe a little less.
01:10 _Scotty bronaugh: 11 sets of 7-drive RAIDZ2. i know it's not an optimal config, but the requirement at the time was to deliver as much storage as possible w/o regard to performance. ~107TB on disk, ~160TB uncompressed.
01:10 yinyin joined #gluster
01:11 bronaugh _Scotty: you using 2.5" disks?
01:11 _Scotty bronaugh: 3.5" samsung spinpoint f3's. did i mention it had to be done on the cheap? lol
01:11 bronaugh shrug.
01:11 bronaugh we do stuff on the cheap too. nothing inherently wrong with that.
01:13 bronaugh so the 11 sets are your bricks then?
01:13 _Scotty i'm seeing ~80% year over year growth with no end in sight. so, that led me to revamping our existing nfs box and moving to a parallel filesystem. i'm evaluating orangefs and glusterfs. i can't burn a fte maintaining lustre, &c.
01:13 bronaugh _Scotty: it sounds like you're going down the same road we are.
01:13 _Scotty nah, with zfs, it's all one big storage pool. roughly 11 times the performance of a single drive.  -ish.
01:14 bronaugh except that our motivation for ditching nfs is that it likes to toss its cookies when shoving data over a fast enough link (IPoIB on a 32GBit link).
01:14 _Scotty bronaugh: basically raid60
01:15 _Scotty i hear that
01:15 bronaugh did you run into similar problems with 10GbE?
01:16 _Scotty not at all. it's just that i have two storage servers and i'm running out of room. besides, the spinpoints are 3gb sata and the bus is saturated. i can't keep adding disks, bc the performance is bad enough as it is.
01:16 bronaugh yeah, that's unusually bad perf.
01:16 _Scotty the new array will be using constellation es.3 sata drives.  the sas drives ended up being slower for most operations.
01:17 _Scotty even though, i *thought* zfs was supposed to be tuned for sas drives.
01:17 bronaugh we pull usually ~80% of (n-2) drive perf with raid6
01:17 bronaugh er. 80% of (n - 2) * single_drive_speed
01:17 _Scotty gotcha
01:17 bronaugh so around 1.2GB/sec, maybe a bit more, on 12-disk raid6
01:18 _Scotty well, 77 drives in that configuration yields me ~1GB/s and ~800 iops
01:18 bronaugh which disk controllers are you using?
01:18 _Scotty it's slow and ugly.  lol
01:18 _Scotty lsi 9211
01:18 bronaugh haha
01:18 JoeJulian can you add more sata controllers?
01:18 _Scotty yeah tell me about it
01:18 bronaugh yeah same controllers we're using.
01:18 bronaugh cheapest controller w/o stupid bottlenecks so far as I can tell.
01:19 _Scotty i can, but i figured there is no point. the new array will be built on 1U boxes with a single controller.  that should be fine for 12 drives.
01:19 _Scotty if glusterfs ever adds tiered storage, those lumbering beasts will be relegated to tier 2
01:19 bronaugh why're you looking at enterprise drives?
01:20 _Scotty higher performance - 6gb. the constellations also have a much higher mtbf.
01:21 bronaugh http://www.hgst.com/internal-drives/desktop/deskstar/deskstar-7k4000
01:21 glusterbot <http://goo.gl/ilifn> (at www.hgst.com)
01:21 _Scotty and lower vibration.  almost nonexistent.
01:21 _Scotty DEATHSTARS???? really?
01:21 bronaugh shrug.
01:21 _Scotty :)
01:21 bronaugh that's what Backblaze uses.
01:21 _Scotty i jest.
01:21 bronaugh well; 5k3000
01:21 bronaugh but same idea.
01:21 bronaugh point is that they can be had for $250 per drive or less.
01:22 bronaugh Seagate has theoretically been shipping 4TB desktop drives in external enclosures for less money than that for the last year, but they're rare as hen's teeth.
01:22 _Scotty $230 for the 2TB constellation.  you say 250 - for what size?
01:22 bronaugh 4TB.
01:22 JoeJulian egads... I haven't seen a good batch of hitachi drives since ibm.
01:22 _Scotty lol
01:22 _Scotty true enough
01:23 JoeJulian My boss bought a batch recently and I'm at a 17% return rate.
01:23 bronaugh JoeJulian: where'd you buy them from, how big a batch, and how were they packaged?
01:23 _Scotty eh
01:23 nueces joined #gluster
01:24 _Scotty i had to return 1 drive out of 200 for the samsung spinpoints.  i guess i lucked out.
01:24 JoeJulian Not sure where he got them from, 200, and they're in the foam crate.
01:24 bronaugh we have had a 0% DOA rate with Seagate drives; we've been either buying them in external enclosures (the 3TB drives) or in the original hard drive shipping boxes.
01:25 bronaugh n=70 or so.
01:25 JoeJulian These things work for a couple weeks then just either stop responding or start throwing track 0 errors.
01:25 bronaugh ugly...
01:26 bronaugh it's possible they got mishandled.
01:26 JoeJulian Luckily they're not for anything critical.
01:26 bronaugh I can't make myself believe that they have a failure rate that high out of the factory.
01:26 JoeJulian Possible, but the RMA replacements direct from Hitachi haven't had any better results.
01:26 bronaugh that would seriously erode their bottom line.
01:26 bronaugh RMA replacements are a joke from Seagate.
01:26 bronaugh we give them away.
01:27 bronaugh because they have a shockingly high failure rate.
01:27 bronaugh the number being bandied about is ~10x the failure rate of non-RMA.
01:27 JoeJulian replacing these drives as they come in is still probably cheaper than issuing a recall or admitting liability.
01:28 bronaugh so these are 7k4000 4TB drives you're talking about?
01:28 JoeJulian No these are...
01:29 JoeJulian 7K500
01:29 JoeJulian (had to look...)
01:29 bronaugh 2.5" 500GB drives?
01:29 _Scotty I had a 30% return rate on a box of 200 WD20EARS.  I believe they let those things go in that fashion.
01:30 JoeJulian yep, like I said, they're not main storage.
01:30 bronaugh yeah; do have a theory about that.
01:30 bronaugh which is, buy the biggest drive available, because that essentially gets you top binning platters.
01:30 bronaugh anything else will likely be platters that didn't pass QC at higher density.
01:31 _Scotty i believe that one.
01:31 bronaugh this isn't based on anything more than some relatively limited experience and some reasoning regarding how the supply channel has to work.
01:31 JoeJulian http://fpaste.org/Hrze/ is a 605-hour-old drive. Check out the realloc...
01:31 glusterbot Title: Viewing smartctl 5.43 2012-06-30 r3573 [x86_ ... SMART Attributes Data Structure revi ... wer-up, resume after 0 minute delay. (at fpaste.org)
01:31 bronaugh but so far we've done quite well that way.
01:32 bronaugh "Reallocated_Event_Count" eh... hah.
01:32 bronaugh funny how Realloc sector count is 0
01:32 bronaugh wonder why they're making their own shit up in SMART data...
01:33 JoeJulian I know, right...
01:33 JoeJulian And they disable it by default.
01:33 bronaugh what?
01:33 bronaugh _NOW_ I am curious.
01:33 JoeJulian hehe
01:33 bronaugh so how do you get access to it?
01:34 JoeJulian smartctl -s on
01:34 _Scotty nice
01:34 bronaugh oh. no, that's the dumbass motherboard disabling it by default.
01:35 bronaugh sorry, not the drive itself.
01:35 bronaugh I believe you're just enabling the SMART command set.
01:35 bronaugh it's tracking the data regardless
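The commands in question, for the record (the device name is a placeholder):

    smartctl -s on /dev/sdb   # enable the SMART feature set on the drive
    smartctl -A /dev/sdb      # dump the attribute table (reallocated sectors, etc.)
    smartctl -H /dev/sdb      # overall health self-assessment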
01:35 semiosis pulled away from my desk but wanted to share that virtual xattr... getfattr -n trusted.glusterfs.pathinfo <file>
01:35 semiosis says what brick a file is on
01:35 bronaugh cool.
01:35 _Scotty semiosis: even easier.  thanks!
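Example usage of that virtual xattr; the exact output depends on the volume layout, but it looks roughly like this:

    getfattr -n trusted.glusterfs.pathinfo /mnt/myvol/some/file
    # trusted.glusterfs.pathinfo="(<DISTRIBUTE:myvol-dht>
    #   <POSIX(/data/brick1):server1:/data/brick1/some/file>)"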
01:36 JoeJulian It can't be the motherboard... it's only these hitachi drives that I have to do that for.
01:36 semiosis yw
01:36 bronaugh JoeJulian: yeah meh. not too exciting, sorry :)
01:36 JoeJulian hehe
01:36 bronaugh but yeah, regarding realloc'd sectors, just keep an eye on the growth rate.
01:37 bronaugh we currently log all SMART data for each and every drive by serial # daily.
01:37 bronaugh so far the predictive ability hasn't actually been that great.
01:38 _Scotty bronaugh: +1 for ZFS. weekly or monthly scrubs identify drives with issues, so you can go troubleshoot.
01:38 bronaugh _Scotty: yeah we do that with mdraid already.
01:38 _Scotty gotcha
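For reference, the scrub itself is a one-liner (pool name assumed to be tank):

    zpool scrub tank
    zpool status -v tank   # after the scrub: per-device error counts and any damaged files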
01:39 bronaugh and yes, we use both atm.
01:39 _Scotty smart move
01:39 bronaugh legacy move.
01:39 bronaugh xfs + mdraid is what we used to use. zfs is what we're doing for new volumes.
01:40 _Scotty cool!
01:41 _Scotty my gluster-on-zfs doc should be ready tomorrow. i'll let y'all know.
01:41 semiosis johnmark: ^^
01:41 JoeJulian +1
01:43 twx_ using zfs on linux or gluster on fbsd/oi/solaris ?
01:43 yinyin joined #gluster
01:43 _Scotty zfs on linux.
01:43 _Scotty though I'm a little concerned the zfs rc13 build failed, so i'll keep using rc12 for now.
01:45 bronaugh hmm
01:45 bronaugh let behlendorf know.
01:45 bronaugh that or ryao.
01:46 _Scotty i posted it to zfs-devel, i don't know who those two folks are
01:46 bronaugh but behlendorf is the main dev person who hangs around in #zfsonlinux
01:46 mooperd joined #gluster
01:46 _Scotty ah.
01:46 _Scotty thanks!
01:47 bronaugh np
01:48 raven-np joined #gluster
01:54 kevein joined #gluster
01:58 _Scotty ugh.  looks like i need to completely uninstall spl and zfs rc12 before i can upgrade to rc13. no graceful upgrades on that one.  heh.
01:58 bronaugh hmmm
01:58 bronaugh I don't think that was our experience.
01:59 bronaugh we had lots of -other- problems, but not that one.
01:59 _Scotty https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups=#!topic/zfs-devel/f6naL1MFwxE
01:59 glusterbot <http://goo.gl/JnPxu> (at groups.google.com)
02:00 andreask joined #gluster
02:02 bronaugh _Scotty: wouldn't you be able to install them together if they're codependencies?
02:02 bronaugh also are you using the rpm packages?
02:04 _Scotty bronaugh: yes I'm using the RPM, and no I can't build zfs rc13 without having spl rc13 installed first.  i can't upgrade to spl rc13 because spl rc12 is a dependency of zfs rc12.  chicken & egg.
02:04 bronaugh rpm -Uvh spl.rpm zfs.rpm?
02:05 _Scotty bronaugh: Has to be built from source.  zfs is under CDDL and linux is GPL.
02:05 bronaugh dunno about rpm-based systems, but this typically isn't a problem with Debian; which makes me suspect the same type of thing should work with an rpm-based system.
02:05 bronaugh you've installed the dkms package I presume
02:05 _Scotty http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue
02:06 glusterbot <http://goo.gl/6SkTj> (at zfsonlinux.org)
02:06 _Scotty absolutely i used dkms. no way i'm rebuilding every time i upgrade the kernel… :D
02:06 bronaugh yup.
02:06 bronaugh same way we did it.
02:06 bronaugh alien'd the rpm
02:06 _Scotty ah
02:16 andreask1 left #gluster
02:19 JoeJulian Roll your own rpms with koji so the dependencies can be satisfied during the build and you'll have rpms to install/upgrade from
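Either way, the chicken-and-egg can usually be broken by upgrading spl and zfs in a single rpm transaction, as bronaugh suggests; the file names below are hypothetical for the rc13 packages:

    rpm -Uvh spl-0.6.0-rc13.x86_64.rpm spl-dkms-0.6.0-rc13.noarch.rpm \
             zfs-0.6.0-rc13.x86_64.rpm zfs-dkms-0.6.0-rc13.noarch.rpm
    # rpm resolves the inter-package dependencies inside the one transaction,
    # so neither package blocks the other's upgrade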
02:40 glusterbot New news from newglusterbugs: [Bug 889382] Glusterd crashes in volume delete <http://goo.gl/oAAVp>
02:57 nueces joined #gluster
03:00 sazified joined #gluster
03:00 sunus joined #gluster
03:04 _br_ joined #gluster
03:06 _br_ joined #gluster
03:09 _br_ joined #gluster
03:11 raven-np joined #gluster
03:13 _Scotty JoeJulian: I'll check out koji.  thanks!
03:24 saz_ joined #gluster
03:34 rastar joined #gluster
03:56 _Scotty joined #gluster
04:26 niv joined #gluster
04:30 _br_ joined #gluster
04:37 _br_ joined #gluster
04:40 _br_ joined #gluster
04:41 _br_ joined #gluster
04:41 vpshastry joined #gluster
04:42 Humble joined #gluster
04:56 nightwalk joined #gluster
04:57 _Scotty joined #gluster
05:01 shylesh joined #gluster
05:05 nueces joined #gluster
05:23 ngoswami joined #gluster
05:23 bdperkin_ joined #gluster
05:23 bfoster_ joined #gluster
05:23 kshlm|AF1 joined #gluster
05:24 dblack_ joined #gluster
05:24 jdarcy_ joined #gluster
05:25 kkeithley1 joined #gluster
05:26 bdperkin- joined #gluster
05:26 dblack joined #gluster
05:27 jdarcy__ joined #gluster
05:27 spn joined #gluster
05:28 bfoster joined #gluster
05:28 kshlm|AFK joined #gluster
05:28 kshlm|AFK joined #gluster
05:30 bulde joined #gluster
05:33 hagarth joined #gluster
05:43 bfoster_ joined #gluster
05:43 dblack_ joined #gluster
05:43 kshlm|AF1 joined #gluster
05:43 bdperkin_ joined #gluster
05:44 kkeithley joined #gluster
05:46 jdarcy joined #gluster
05:49 bulde joined #gluster
05:50 carrar joined #gluster
05:51 vpshastry joined #gluster
06:02 bdperkin_ joined #gluster
06:03 dblack joined #gluster
06:03 bfoster joined #gluster
06:03 kshlm|AFK joined #gluster
06:03 kshlm|AFK joined #gluster
06:03 jdarcy_ joined #gluster
06:07 bulde joined #gluster
06:09 glusterbot New news from resolvedglusterbugs: [Bug 765584] KVM migration works once, fails second time <http://goo.gl/7tsfT> || [Bug 845748] forget_cbk not implemented warnings seen for write-behind <http://goo.gl/McQbz>
06:15 an joined #gluster
06:15 sgowda joined #gluster
06:20 sunus hi, where can i find the src for the replicate volume's implementation?
06:20 JoeJulian @hack
06:20 glusterbot JoeJulian: The Development Work Flow is at http://goo.gl/ynw7f
06:20 JoeJulian Or github
06:20 sunus JoeJulian: i have source
06:20 JoeJulian @git repo
06:20 glusterbot JoeJulian: https://github.com/gluster/glusterfs
06:20 sunus JoeJulian: i don't know which file to look into
06:20 kkeithley joined #gluster
06:21 JoeJulian Ah the replicate translator is xlators/cluster/afr/src
06:22 JoeJulian afr = Automatic File Replication
06:22 sunus JoeJulian: thank you!!
06:22 JoeJulian er, no
06:22 JoeJulian s/Automatic/Advanced/
06:22 glusterbot What JoeJulian meant to say was: afr = Advanced File Replication
06:22 sunus JoeJulian: so a replicate volume is using xlator afr,
06:22 JoeJulian yeah, that's the ticket.
06:22 JoeJulian yes
06:22 sunus JoeJulian: thank you!
06:22 JoeJulian cluster/afr
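To browse it from a fresh checkout (file names are from memory of the 3.3-era tree and may differ slightly):

    git clone https://github.com/gluster/glusterfs
    cd glusterfs/xlators/cluster/afr/src
    ls   # afr.c, afr-transaction.c, afr-self-heal-*.c, ...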
06:23 JoeJulian sunus: What're you working on?
06:23 sunus JoeJulian: just digging glusterfs, that's all:)
06:24 JoeJulian cool
06:24 hagarth joined #gluster
06:24 sunus JoeJulian: my company is using ovirt and might integrate it with glusterfs, so i'm now just trying to learn it as much as possible:)
06:28 _Scotty joined #gluster
06:30 bala joined #gluster
06:31 vpshastry joined #gluster
06:41 an joined #gluster
06:42 shireesh joined #gluster
06:50 sgowda joined #gluster
07:09 glusterbot New news from resolvedglusterbugs: [Bug 843748] Setting lots of quota will make client get blocked. <http://goo.gl/JDthz>
07:09 jtux joined #gluster
07:16 Nevan joined #gluster
07:25 sgowda joined #gluster
07:39 hagarth joined #gluster
07:42 gbrand_ joined #gluster
07:51 an joined #gluster
07:58 ekuric joined #gluster
08:00 vimal joined #gluster
08:10 ctria joined #gluster
08:18 tjikkun_work joined #gluster
08:19 ramkrsna joined #gluster
08:19 ramkrsna joined #gluster
08:29 _Scotty joined #gluster
08:39 sunus joined #gluster
08:46 hagarth joined #gluster
08:46 cbehm_ joined #gluster
08:46 QuentinF joined #gluster
08:46 chacken1 joined #gluster
08:46 helloadam joined #gluster
08:46 eightyeight joined #gluster
08:46 nullsign joined #gluster
08:47 Humble joined #gluster
08:49 DaveS_ joined #gluster
08:51 andreask joined #gluster
08:59 _Scotty joined #gluster
09:04 bulde joined #gluster
09:04 isomorphic joined #gluster
09:11 duerF joined #gluster
09:14 Humble joined #gluster
09:15 Humble joined #gluster
09:17 passie joined #gluster
09:23 guest2012 joined #gluster
09:23 bulde joined #gluster
09:42 bulde joined #gluster
09:44 hurdman joined #gluster
09:44 passie left #gluster
09:45 hurdman hello, i get a lot of [2012-12-21 10:43:15.157189] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.30.1.3:***** into my log, i have googled but no response with this bug match my situation
09:45 glusterbot hurdman: That's just a spurious message which can be safely ignored.
09:46 mooperd joined #gluster
09:47 hurdman ok thx the bot ^^"
09:57 gbrand__ joined #gluster
09:59 Norky joined #gluster
09:59 _Scotty joined #gluster
10:03 bulde joined #gluster
10:11 jtux joined #gluster
10:11 andreask joined #gluster
10:13 passie joined #gluster
10:43 rgustafs joined #gluster
10:43 darth joined #gluster
10:48 duerF joined #gluster
10:50 tryggvil joined #gluster
10:52 x4rlos joined #gluster
11:00 _Scotty joined #gluster
11:22 vpshastry joined #gluster
11:23 nullck joined #gluster
11:23 vpshastry joined #gluster
11:25 vpshastry joined #gluster
11:26 isomorphic joined #gluster
11:27 andreask joined #gluster
11:30 _Scotty joined #gluster
11:33 andreask joined #gluster
11:40 glusterbot New news from resolvedglusterbugs: [Bug 763739] [FEAT] Need glusterd to listen or communicate thru a specific network <http://goo.gl/Jg0QG> || [Bug 764871] Gluster client picking wrong port for rdma connection <http://goo.gl/sr38a>
11:47 bulde joined #gluster
11:55 vimal joined #gluster
11:55 bauruine joined #gluster
12:00 vpshastry joined #gluster
12:08 chirino joined #gluster
12:10 glusterbot New news from resolvedglusterbugs: [Bug 860023] Make error: "clang: error: linker command failed with exit code 1" <http://goo.gl/fNFYQ>
12:10 x4rlos joined #gluster
12:14 vimal joined #gluster
12:15 _Scotty joined #gluster
12:16 andreask joined #gluster
12:18 rgustafs joined #gluster
12:27 vpshastry left #gluster
12:31 khushildep joined #gluster
12:33 puebele joined #gluster
12:37 puebele left #gluster
12:41 jtux joined #gluster
12:41 Nevan1 joined #gluster
12:43 Alpinist joined #gluster
12:48 khushildep joined #gluster
12:59 raven-np joined #gluster
13:12 edward1 joined #gluster
13:12 hateya joined #gluster
13:15 puebele joined #gluster
13:17 gbrand_ joined #gluster
13:27 guigui1 joined #gluster
13:27 shylesh joined #gluster
13:29 Humble joined #gluster
13:34 badone joined #gluster
13:35 guest2012 joined #gluster
13:36 puebele1 joined #gluster
13:38 puebele1 left #gluster
13:41 VSpike Hmm, the gluster clients on my webservers are running pretty hot .... guess that's why y'all said to use NFS :/
13:47 shylesh joined #gluster
13:50 andreask joined #gluster
13:53 shylesh joined #gluster
14:01 bulde joined #gluster
14:05 hateya joined #gluster
14:08 H__ VSpike: in my experience the gluster clients are more stable than the gluster-nfs ones
14:08 plarsen joined #gluster
14:11 rgustafs joined #gluster
14:15 guigui1 joined #gluster
14:20 bulde joined #gluster
14:48 khushildep joined #gluster
14:48 wN joined #gluster
14:52 stopbit joined #gluster
14:53 stigchristian joined #gluster
14:54 stigchristian Hi, I have just rebalanced my cluster after adding two new bricks and I have a lot of failures. Where do I find which files failed and why they failed?
15:09 khushildep joined #gluster
15:15 wushudoin joined #gluster
15:18 shylesh joined #gluster
15:33 raven-np joined #gluster
15:38 shylesh joined #gluster
15:40 khushildep joined #gluster
15:45 ctria joined #gluster
15:46 sjoeboo_ question: does one enable things like quick-read by editing the volfile or is there a cli way i'm missing docs on?
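There is a CLI route for most performance translators; hedged, since option names shift between releases, it is typically along these lines:

    gluster volume set myvol performance.quick-read on
    gluster volume info myvol   # "Options Reconfigured" should now list the setting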
15:47 badone joined #gluster
15:49 khushildep joined #gluster
15:55 nueces joined #gluster
15:57 passie left #gluster
16:01 jtux joined #gluster
16:03 daMaestro joined #gluster
16:09 hagarth joined #gluster
16:11 Staples84 joined #gluster
16:12 khushildep joined #gluster
16:23 puebele joined #gluster
16:24 bala1 joined #gluster
16:35 khushildep joined #gluster
16:38 zaitcev joined #gluster
16:38 crushkill joined #gluster
16:38 crushkill hello
16:38 glusterbot crushkill: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:39 crushkill I just mounted my gluster bricks via NFS. I notice that if the server reboots, the file mount hangs
16:39 crushkill is this the intended behaviour?
16:39 sjoeboo_ crushkill: think so. when you mount via nfs, the client is talking to really just one of your nodes, which is acting in turn as a client of itself/the volume
16:40 jdarcy_ I wouldn't say it's *intended*, but it's the way NFS naturally behaves.
16:40 crushkill i thought mounting through gluster would provide some sort of redundancy
16:40 crushkill can the client just mount the brick via its own ip?
16:40 jdarcy You can use various forms of failover to provide protection for NFS clients, but those are essentially outside of GlusterFS.
16:40 crushkill i see
16:41 jdarcy With native protocol, it's all transparent.
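The two mount styles side by side (server and volume names made up); the NFS client is pinned to whichever server it mounted, the native client is not:

    # gluster's built-in NFS server speaks NFSv3 over TCP
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol
    # native FUSE client: fetches the volfile from server1, then talks to all bricks directly
    mount -t glusterfs server1:/myvol /mnt/myvol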
16:41 sjoeboo jdarcy: any good resources you know of for cifs + gluster tuning?
16:41 crushkill jdarcy: as far as nfs redundancy, beyond the local file caching that can be opted for, can you recommend anything outside of gluster, if failover redundancy is not built into gluster + nfs?
16:42 crushkill i see nfs is recommended with gluster for performance reasons, so i am testing it out
16:42 jdarcy crushkill: A lot of people seem to use UCARP.
16:42 zaitcev crushkill: does the client unhang when the server completes rebooting?
16:42 crushkill yes
16:42 crushkill it unhangs
16:42 zaitcev Phew
16:42 twx_ ucarp is ok
16:43 crushkill it's strange though... mounting nfs via client1: mount -t nfs client1.ip:/volume /data/volume; when server1 reboots, client1 hangs
16:43 twx_ I'd prolly go for VCS or some other cluster software (corosync or w/e?)
16:43 twx_ to handle failover and resource management
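A minimal UCARP sketch for floating a virtual IP between two NFS-serving nodes; every address and the password here are placeholders:

    # run on each storage node; whichever node holds the VIP answers NFS mounts for it
    ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=1 --pass=s3cret \
          --addr=10.0.0.100 \
          --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh &
    # clients then mount the VIP instead of a specific server:
    # mount -t nfs -o vers=3,tcp 10.0.0.100:/myvol /mnt/myvol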
16:49 andreask joined #gluster
17:06 mooperd joined #gluster
17:16 y4m4 joined #gluster
17:17 andreask left #gluster
17:19 _br_ joined #gluster
17:22 _br_- joined #gluster
17:34 Mo___ joined #gluster
17:59 robo_ joined #gluster
18:01 robos_ joined #gluster
18:07 _Scotty joined #gluster
18:08 _Scotty Hello all
18:09 _Scotty JoeJulian and bronaugh: I finished the GFS on ZFS install guide. How do you want me to send it?
18:12 spn joined #gluster
18:16 jdarcy We don't seem to have a HOWTO section of the wiki.  Hm.
18:21 jdarcy _Scotty: Try starting here: http://www.gluster.org/community/documentation/index.php/HowTo
18:21 glusterbot <http://goo.gl/0Y2v2> (at www.gluster.org)
18:21 jdarcy _Scotty: Or you could just email stuff to me @redhat.com and I'll post it for you (with attribution)
18:24 _Scotty I'm updating the page now.  Thanks!
18:32 hagarth :O
18:32 milos__ joined #gluster
18:33 milos_ joined #gluster
18:34 milos_ Hi, anybody using glusterfs with opennebula? Everything is fine except live-migration because of some cache... Can I discuss it?
18:36 milos__ joined #gluster
19:03 _Scotty jdarcy: guide is now posted at http://www.gluster.org/community/documentation/index.php/GlusterOnZFS.  Let me know what you think!
19:03 glusterbot <http://goo.gl/BPNjG> (at www.gluster.org)
19:08 jdarcy _Scotty: Looks awesome.  Thanks!  :)
19:09 johnmark _Scotty: woah. sweet!
19:11 _Scotty Thanks, all!  I'll be updating it after Christmas break with my snapshot and e-mail status scripts.
19:11 jdarcy I love working on open-source projects.
19:11 jdarcy I'm going to go celebrate with some coffee.
19:13 _Scotty LOL me too, oddly enough!
19:13 _Scotty Happy holidays all.
19:17 johnmark :)
19:17 johnmark see ya
19:30 bronaugh _Scotty: still here?
19:30 _Scotty bronaugh: yup!
19:31 bronaugh ok :)
19:32 bronaugh have you thought much about the problem of thrash and snapshot growth?
19:32 bronaugh what're your thoughts about mitigation of disk space usage there?
19:32 bronaugh also, if you force a rebalance with gluster, I believe you may get into interesting times (again, it'll cause files to move between volumes, which'll cause large snapshot sizes)
19:35 bronaugh fwiw this would be solvable with writeable snapshots on zfs (a la btrfs) but I doubt that's going to happen anytime soon.
19:41 bronaugh ok, yet more stupid questions from yours truly.
19:41 bronaugh can you modify the transport-type for a volume without deleting and recreating it on the client?
19:42 bronaugh can't seem to "gluster volume set transport-type
19:42 bronaugh or anything like that.
19:45 bronaugh uhh, wtf guys.
19:45 bronaugh [2012-12-21 11:43:28.594814] E [glusterd-store.c:1320:glusterd_store_handle_retrieve] 0-glusterd: Unable to retrieve store handle for /var/lib/glusterd/vols/skateboard0/info, error: No such file or directory
19:46 bronaugh this after I stopped a volume (which succeeded) and tried to delete it (which failed)
19:46 bronaugh but apparently you can have an intermediate partial failure...
19:46 bronaugh which is very bad.
19:46 bronaugh wtf.
19:46 bronaugh how many -other- operations like this are not atomic...?
19:48 _Scotty bronaugh: Let me address the snapshot questions first.
19:49 _Scotty bronaugh: If I were rebalancing the volume I would certainly disable snapshots until the rebalance was complete.  ZFS uses Copy-on-Write, so if it's a striped volume (which I'm not running) the snapshot size wouldn't be THAT bad, I wouldn't think.
19:49 _Scotty bronaugh: Honestly though, looking at my environment and how much storage we are talking about, I doubt I'd ever actually rebalance the volume.
19:49 bronaugh depends how well the hashing ends up working.
19:50 bronaugh if you get unlucky with hashing, it can become unbalanced.
19:50 _Scotty bronaugh: I'd just tack on more storage nodes.  Recently written files would end up on the newer, faster, larger hardware.
19:50 _Scotty bronaugh: I think Red Hat / Gluster did a case study on Pandora Radio; I'd follow the same type of growth format.
19:51 _Scotty bronaugh: writable snapshots are pretty much a nonstarter on ZFS.  Which makes sense, because most of a snapshot is just pointers to the live data.
19:51 _Scotty bronaugh: Well, the hashing is always relative anyhow.
19:52 _Scotty bronaugh: it's "relatively evenly" balanced.
19:52 bronaugh yes.
19:53 _Scotty bronaugh: realistically, rebalancing a petabyte would take weeks.
19:53 bronaugh depends how unbalanced it is
19:55 _Scotty bronaugh: It also depends on the available network bandwidth, too.  From what I read, the rebalancing operation runs at a low priority.
19:58 _Scotty bronaugh: If the rebalancing could take place on a 10Gb network and your aggregate throughput is 22GB/s from all storage nodes, shuffling 500TB would take 6.3 hours.  It actually taking only that long is a little unrealistic, I think.
20:04 _Scotty bronaugh: WRT your transport question, how are you mounting the volume from the client
20:05 samppah _Scotty: i'm sorry i haven't been following zfs (on linux) that closely lately but is there a reason why you are using compression instead of dedup?
20:06 _Scotty bronaugh: I can flip back and forth with my mount option depending on if I access things over TCP or RDMA with the mount option
20:07 _Scotty samppah: unless you have a LOT of RAM and (possibly multiple) SSDs for L2ARC, do NOT enable dedupe on ZFS.  It's a HUGE performance hit because once you can't store the table in RAM it has to read the table from disk each time.
20:07 _Scotty samppah: I'd also ignore dedupe if you aren't using that volume to store virtual disk images (VDI-type environment)
20:08 milos_ Hello, is there anybody who knows how to turn off write cache because of KVM live migration?
20:08 milos_ thank you
20:08 _Scotty samppah: unless you know for sure your users make duplicate copies of data
20:09 _Scotty bronaugh: http://gluster.org/pipermail/gluster-users/2011-March/007146.html
20:09 glusterbot <http://goo.gl/KPygi> (at gluster.org)
20:09 samppah _Scotty: okay, thanks :)
20:09 samppah have you done any extensive testing on this setup?
20:10 _Scotty samppah: compression, especially the default lightweight lzjb compression in ZFS, gives you a nice bang for the buck.  In a 50/50 mix of compressed and text data on my volumes, I get around 75% compression.  Data is served faster because fewer blocks are stored on disk. It's transparent to the end users.
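The relevant ZFS knobs, for reference (pool and dataset names assumed):

    zfs set compression=lzjb tank/gluster   # lightweight compression, near-free on modern CPUs
    zfs get compressratio tank/gluster      # see what it is actually buying you
    # dedup is a different beast: the dedup table has to fit in ARC/L2ARC to stay fast
    zdb -S tank                             # simulate dedup first to see if the ratio is worth it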
20:13 _Scotty samppah: yes.
20:16 samppah _Scotty: any gotchas there?
20:16 _Scotty samppah: "there" as in where?
20:16 samppah _Scotty: glusterfs over zfs mostly :)
20:19 robo joined #gluster
20:20 _Scotty samppah: none so far.
22:33 Kins joined #gluster
22:36 ultrabizweb joined #gluster
22:43 Kins joined #gluster
22:53 hattenator joined #gluster
22:56 Kins joined #gluster
23:42 raven-np joined #gluster
23:57 jskelton joined #gluster
23:57 _Scotty joined #gluster
23:57 _Scotty Back.
