
IRC log for #gluster, 2012-10-15


All times shown according to UTC.

Time Nick Message
00:29 cjbehm JoeJulian so, in theory, i could copy the brick data to some media (formatted to preserve the extended attr)? what i primarily want to avoid would be bringing the replacement into the volume and then having performance tank for anything trying to use the volume
01:10 lkthomas joined #gluster
01:34 daMaestro joined #gluster
01:50 kevein joined #gluster
02:11 daMaestro joined #gluster
02:17 dabeowulf joined #gluster
02:19 mohankumar joined #gluster
02:19 64MAB3C66 joined #gluster
02:55 harish joined #gluster
03:09 mohankumar joined #gluster
03:12 daMaestro joined #gluster
03:22 mohankumar joined #gluster
03:26 bharata joined #gluster
03:29 kshlm joined #gluster
03:30 bharata joined #gluster
03:51 daMaestro joined #gluster
03:54 shylesh joined #gluster
03:55 bharata joined #gluster
04:23 vpshastry joined #gluster
04:24 sripathi joined #gluster
04:28 sashko joined #gluster
04:29 sashko joined #gluster
04:30 JoeJulian cjbehm: Yes, if your replacement server has the same name and the brick was duplicated (including extended attributes) to the new server, the only self-heal would be for files that changed during the downtime.
04:31 cjbehm JoeJulian excellent, thanks for the assistance :)
04:31 JoeJulian cjbehm: If you can leave your old server on during its replacement, you could just do a replace-brick to move the files to the new server.
04:31 JoeJulian Would cause less impact and would not degrade the redundancy during the change.
04:32 JoeJulian I plan on doing just that this next week with my new server replacement.
04:33 cjbehm so during a replace brick operation it's a "bring the new brick up to speed before starting its operation" type of thing? that might be better indeed
04:33 JoeJulian Totally.
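(For reference, a live migration of the kind JoeJulian describes would look roughly like this on GlusterFS 3.3; the volume name "myvol" and both brick paths are hypothetical:

    gluster volume replace-brick myvol oldserver:/export/brick1 newserver:/export/brick1 start
    gluster volume replace-brick myvol oldserver:/export/brick1 newserver:/export/brick1 status
    gluster volume replace-brick myvol oldserver:/export/brick1 newserver:/export/brick1 commit

The start step copies the data to the new brick in the background while the volume stays online, and commit is normally run only after status reports the migration complete.)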
04:36 cjbehm can it be throttled? i'd rather it take longer but not impact the usability
04:48 vpshastry joined #gluster
04:50 64MAB3C66 left #gluster
05:07 daMaestro joined #gluster
05:08 vpshastry joined #gluster
05:09 jays joined #gluster
05:13 faizan joined #gluster
05:20 sac joined #gluster
05:26 hagarth joined #gluster
05:28 deepakcs joined #gluster
05:28 JoeJulian Yes and no. You can change the cluster.background-self-heal-count lower. It does reduce load but you'd have to experiment to determine what works for your system.
05:29 JoeJulian cjbehm: ^
05:29 cjbehm it's at least a handle to use
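(A minimal sketch of the tuning JoeJulian mentions, assuming a volume named "myvol"; the value 4 is only an example and the default is higher:

    gluster volume set myvol cluster.background-self-heal-count 4

A lower count means fewer files are self-healed in the background at once, which reduces load at the cost of a longer overall heal.)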
05:29 JoeJulian cjbehm: Sorry my response times are a bit slower than they are on weekdays.
05:30 cjbehm no problem, i'm happy to have had responses at all given how quiet the channel's been :)
05:34 JoeJulian It's only because it's the weekend. It'll be downright noisy tomorrow morning.
05:35 cjbehm good to know - i figured it might be busier on a weekday. i'd not be here now either if i hadn't already been working all day and still going. figured i'd fill the moments of waiting by following up on some other questions
05:36 ramkrsna joined #gluster
05:38 sripathi1 joined #gluster
05:41 sashko joined #gluster
05:43 lng joined #gluster
05:44 lng Hi! How to safely add bricks to live Gluster?
05:44 JoeJulian lng: ,,(rtfm). Search for "add-brick"
05:45 glusterbot lng: Read the fairly-adequate manual at http://gluster.org/community/documentation//index.php/Main_Page
05:46 octi joined #gluster
05:49 raghu joined #gluster
05:49 octi what happened to the gluster debs?
05:50 JoeJulian The download server is broken. You can download from bits.gluster.org or check out the ,,(ppa)
05:50 glusterbot semiosis' Launchpad PPAs have 32 & 64-bit binary packages of the latest Glusterfs for Ubuntu Lucid - Precise. http://goo.gl/DzomL (for 3.1.x) and http://goo.gl/TNN6N (for 3.2.x) and http://goo.gl/TISkP (for 3.3.x). See also @upstart
05:50 ankit9 joined #gluster
05:53 lng JoeJulian: ftnx
05:53 JoeJulian lng: After reading that, if you have any questions feel free to ask
05:53 JoeJulian ... although I'll be going to bed shortly so read fast. ;)
05:54 guigui1 joined #gluster
05:54 octi Ok, I have a backup image, will the server be fixed?
05:55 octi Thanks.
05:55 Humble joined #gluster
05:55 JoeJulian I guess so. I'm a little surprised it wasn't up Friday.
05:56 JoeJulian I know there's been talk about bringing the yum repo and the ppa under the gluster.org umbrella and having gluster no longer roll their own.
05:56 octi Its weird that the download links now only point to the rpm repository
05:56 lng JoeJulian: after a brick was added, do I need to re-balance the cluster?
05:57 JoeJulian lng: You should, yes.
05:57 lng now it is comprised of 4 nodes
05:57 lng 1 node - 1 brick
05:57 JoeJulian 2 servers and 2 clients?
05:57 JoeJulian Ah
05:57 lng replicated
05:57 JoeJulian So you'll want to add as many as your replica count.
05:57 lng JoeJulian: many clients
05:58 lng yea, 4 bricks as toatal
05:58 lng total*
05:58 JoeJulian @define node
05:58 lng so each instance should have 2 bricks
05:58 JoeJulian @dict node
05:58 glusterbot JoeJulian: wn, foldoc, gcide, and moby-thes responded: foldoc: node 1. A point or vertex in a {graph}. 2. {network node}. 3. A {hypertext} document; wn: node n 1: a connecting point at which several lines come together 2: any thickened enlargement [syn: {node}, {knob}, {thickening}] 3: (botany) the small swelling that is the part of a plant stem from which one or more leaves emerge [syn:
05:58 glusterbot JoeJulian: {node}, {leafnode}] 4: (physics) the point of minimum displacement in a periodic system [ant: {antinode}] 5: (astronomy) a point where an orbit crosses a plane 6: the source of lymph and lymphocytes [syn: {lymph node}, {lymph gland}, {node}] 7: any bulge or swelling of an anatomical structure or part 8: (computer science) any computer that is hooked up to a computer network [syn: {node}, (8 more messages)
05:58 lng node - server
05:59 lng - peer
05:59 JoeJulian That's why I don't like that word. :)
05:59 JoeJulian Using the ,,(glossary) terms makes communicating much easier.
05:59 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
05:59 Humble joined #gluster
06:00 lng JoeJulian: when volumes are mounted and ready
06:00 lng is it okay to turn them on consequently?
06:01 lng with add-brick
06:01 JoeJulian You can leave your volumes up as you add bricks and rebalance. Your clients will continue to function normally.
06:01 lng I know
06:01 lng I mean I added a brick to server1
06:01 lng what should I do next?
06:01 JoeJulian Did you also add one to server2?
06:02 lng should I rebalance or keep on adding the rest of bricks?
06:02 JoeJulian Er...
06:02 JoeJulian You added one single brick?
06:02 lng not yet - just asked first
06:02 lng :-)
06:02 lng anyway, it is impossible to add them all at once
06:03 JoeJulian Ah, ok. Assuming you're using replica 2, you'll have to add your bricks by pairs. You can rebalance then if you need, or add all your brick pairs (if you're adding more than 2)  and rebalance then.
06:03 JoeJulian You can add them all at once.
06:03 JoeJulian Each 2 listed in sequence will be a replica pair (again, assuming "replica 2")
06:05 tryggvil joined #gluster
06:05 lng gluster volume create storage replica 2 transport tcp ...
06:05 JoeJulian It's like I'm psychic or something...
06:06 lng the command I used for creation
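(Given the replica 2 volume "storage" created above, expanding it by one replica pair would look roughly like this; the server names and brick paths are hypothetical:

    gluster volume add-brick storage server3:/export/brick1 server4:/export/brick1

Bricks are listed in pairs, and each consecutive pair becomes a new replica set, as JoeJulian describes above.)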
06:06 octi Oh thats cool, so if two replicated gluster nodes can't talk to each other, but the client can still talk to both, then replication continues to work and there is no real split-brain problem?
06:06 JoeJulian correct
06:07 JoeJulian Provided the same caveat I used earlier regarding "nodes". :P
06:08 JoeJulian Well, I'm starting to feel like I'm being funny but nobody's smiling. That usually means I'm way too tired and need to go to bed.
06:09 JoeJulian Later all...
06:09 cjbehm thanks for the info earlier - i'm going to go pass out too
06:10 lng JoeJulian: can I use multiple server:brick per single add-brick command?
06:10 JoeJulian yep
06:10 Nr18 joined #gluster
06:10 lng nice
06:11 ngoswami joined #gluster
06:12 sunus joined #gluster
06:14 lng JoeJulian: something like that? http://pastie.org/private/5a5m8x6nr7oijob4bcsqa
06:14 glusterbot Title: Private Paste - Pastie (at pastie.org)
06:15 mdarade joined #gluster
06:16 lng man: "to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 4, 6, 8, etc.)"
06:17 mohankumar joined #gluster
06:17 lng should be ok, right?
06:17 JoeJulian yes
06:17 lng JoeJulian: have you see my paste?
06:18 JoeJulian yes
06:18 lng seen*
06:18 lng ah, ok
06:18 lng thanks
06:18 JoeJulian Ok, really going to bed now. Bangalore should be waking up shortly and they might be around if you get stuck.
06:19 lng last question
06:19 lng size
06:19 JoeJulian size?
06:19 lng should be the same?
06:19 JoeJulian It's best, yes.
06:19 lng oki doki
06:19 lng good night
06:32 deepakcs joined #gluster
06:37 sgowda joined #gluster
06:41 mdarade1 joined #gluster
06:43 Humble joined #gluster
06:50 lng when rebalance, do I need fix-layout??
06:51 lng after adding new bricks
06:52 puebele joined #gluster
06:55 lng was migrate-data option removed from rebalance?
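(For reference, the rebalance commands in 3.3 look roughly like this, again assuming the volume name "storage":

    gluster volume rebalance storage fix-layout start    # only recompute directory layouts
    gluster volume rebalance storage start               # fix layouts and migrate data
    gluster volume rebalance storage status

In 3.3 a plain "start" also migrates data, which appears to be why the separate migrate-data option from older releases is no longer listed.)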
07:06 ekuric joined #gluster
07:09 bala1 joined #gluster
07:11 puebele1 joined #gluster
07:15 manik joined #gluster
07:15 TheHaven joined #gluster
07:19 guigui3 joined #gluster
07:20 Nr18 joined #gluster
07:25 andreask joined #gluster
07:33 sripathi joined #gluster
07:40 andreask joined #gluster
07:44 lng rebalancing takes time...
07:46 lng but Rebalanced-files:0, size:0, scanned:0, failures:0, status:in progress
07:47 lng is it normal?
07:49 ekuric joined #gluster
07:53 tjikkun_work joined #gluster
08:01 ika2810 joined #gluster
08:04 ika2810 joined #gluster
08:07 oneiroi joined #gluster
08:09 sgowda joined #gluster
08:10 rwheeler joined #gluster
08:21 manik left #gluster
08:22 deepakcs joined #gluster
08:26 haakond joined #gluster
08:27 stickyboy joined #gluster
08:41 dobber joined #gluster
08:43 sunus joined #gluster
08:51 sripathi joined #gluster
08:53 sunus hi, is there any document that explain the src of glusterfs?
08:54 ndevos sunus: the blog on hekafs.org contains some good information
08:54 mohankumar joined #gluster
08:54 sunus ndevos: is xlator 101 series?
08:54 sunus ndevos: i've read that
08:55 sunus anything else?
08:55 ndevos sunus: yeah, those are good, there is probably more on there too
08:56 ndevos sunus: what explanation on the source do you need? imho most of it is easy to understand, you just need to find the pieces you're looking for
08:57 guigui5 joined #gluster
08:57 sunus ndevos: hummm, yeah, maybe i don't know where to get in, can you point it out? where to start?
08:58 sunus ndevos: thank you in advance!
08:58 badone_home joined #gluster
08:59 ndevos sunus: there are some docs on http://www.gluster.org/community/documentation/index.php/Developers , maybe start with the Developer Workflow?
09:00 guigui5 left #gluster
09:01 guigui1 joined #gluster
09:07 duerF joined #gluster
09:11 hagarth joined #gluster
09:15 Triade joined #gluster
09:16 rgustafs joined #gluster
09:16 ankit9 joined #gluster
09:23 Machske joined #gluster
09:26 ekuric joined #gluster
09:27 sripathi joined #gluster
09:28 sgowda joined #gluster
09:29 Machske hi, I'm stuck with a stale gluster volume in gluster 3.3, which I cannot remove.
09:29 Machske gluster> volume status
09:29 Machske Volume teststore is not started
09:30 Machske gluster> volume delete teststore
09:30 Machske Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
09:30 Machske Volume teststore does not exist
09:30 Machske anyone an idea on how to fix such a situation ?
09:34 vpshastry joined #gluster
09:34 sripathi joined #gluster
09:41 mohankumar joined #gluster
09:50 badone_home joined #gluster
10:00 vpshastry joined #gluster
10:02 bala joined #gluster
10:10 ika2810 left #gluster
10:11 mohankumar joined #gluster
10:14 badone_home joined #gluster
10:16 sgowda joined #gluster
10:18 mdarade1 left #gluster
10:28 verywiseman joined #gluster
10:30 verywiseman what are the features that are provided with glusterfs not provided by GFS2 cluster ?
10:47 guigui3 joined #gluster
10:55 Gerben joined #gluster
10:55 verywiseman joined #gluster
10:55 Gerben Hello, The documentation link on gluster.org leads to spam about GPS trackers
10:56 Gerben Don't know if any admins are here..
10:57 sripathi1 joined #gluster
10:59 sripathi joined #gluster
11:02 deepakcs joined #gluster
11:04 kshlm Gerben: should be fixed now.
11:05 Gerben yes, I see it
11:14 Triade joined #gluster
11:18 verywiseman joined #gluster
11:18 sripathi joined #gluster
11:23 sgowda joined #gluster
11:34 vpshastry joined #gluster
11:36 verywiseman joined #gluster
11:36 verywiseman joined #gluster
11:38 zoldar what happend with deb packages which were available from gluster.org's download section? I see only rpms right now
11:42 bulde zoldar: download.gluster.org was down, and is getting fixed now... johnmark would update once its all fixed
11:43 ndevos @ppa
11:43 glusterbot ndevos: semiosis' Launchpad PPAs have 32 & 64-bit binary packages of the latest Glusterfs for Ubuntu Lucid - Precise. http://goo.gl/DzomL (for 3.1.x) and http://goo.gl/TNN6N (for 3.2.x) and http://goo.gl/TISkP (for 3.3.x). See also @upstart
11:43 ndevos zoldar: maybe you want to use those ^ ?
11:48 kkeithley hagarth: ping
11:50 rgustafs joined #gluster
11:53 DataBeaver JoeJulian: The CPU is a 3 GHz Core 2 Duo, which I think should be enough.  Top shows almost no CPU usage in the user and system categories, but extremely high amounts of I/O wait.  Similar filesystem operations on the server complete an order of magnitude faster.
11:53 verywiseman joined #gluster
11:54 verywiseman joined #gluster
12:03 ngoswami joined #gluster
12:04 verywiseman joined #gluster
12:04 verywiseman joined #gluster
12:04 shylesh joined #gluster
12:13 verywiseman joined #gluster
12:13 verywiseman joined #gluster
12:13 tjikkun_work joined #gluster
12:24 sashko joined #gluster
12:26 glusterbot New news from newglusterbugs: [Bug 866456] gluster volume heal $ full keeps increasing the No. of entries for gluster volume heal $ info healed even if healing is not done <https://bugzilla.redhat.com/show_bug.cgi?id=866456>
12:31 manik joined #gluster
12:32 sashko joined #gluster
12:44 verywiseman joined #gluster
12:45 rwheeler joined #gluster
12:46 Azrael808 joined #gluster
12:53 verywiseman joined #gluster
12:53 verywiseman joined #gluster
12:56 nightwalk joined #gluster
12:56 aliguori joined #gluster
12:58 plarsen joined #gluster
13:02 deepakcs joined #gluster
13:10 robo joined #gluster
13:13 TheHaven joined #gluster
13:16 Azrael808 joined #gluster
13:17 rkubany joined #gluster
13:18 rkubany Hi, I would like to know what will happen if I replace a crashed brick with the same hardware (with same uuid as old crashed server) but with bigger disk(s)
13:19 rkubany I'm in a distributed-replicated setup with 8x3 bricks
13:19 rkubany (3 replicas)
13:19 verywiseman joined #gluster
13:19 verywiseman joined #gluster
13:20 rkubany I assume the surplus space will be lost
13:26 bulde1 joined #gluster
13:27 vpshastry left #gluster
13:31 tqrst left #gluster
13:32 tryggvil joined #gluster
13:38 kkeithley hagarth, hagarth_, johnmark: ping
13:46 delta99 joined #gluster
13:51 guigui3 joined #gluster
13:56 ankit9 joined #gluster
14:03 stopbit joined #gluster
14:04 Psi-Jack joined #gluster
14:05 Psi-Jack joined #gluster
14:07 michig joined #gluster
14:07 michig Hi, where did you guys put the gluster 3.3.* .debs?!
14:08 ndevos ~ppa | michig
14:08 glusterbot michig: semiosis' Launchpad PPAs have 32 & 64-bit binary packages of the latest Glusterfs for Ubuntu Lucid - Precise. http://goo.gl/DzomL (for 3.1.x) and http://goo.gl/TNN6N (for 3.2.x) and http://goo.gl/TISkP (for 3.3.x). See also @upstart
14:08 ndevos at least until download.gluster.org is back :)
14:09 michig would need the .deb for debian squeeze or wheezy...
14:11 hagarth joined #gluster
14:11 * ndevos wouldnt know which works with that
14:17 michig So there are currently no .debs for debian?
14:18 Heebie left #gluster
14:19 kkeithley michig: we're working on it. We had a problem with download.gluster.com and it's in the process of being rebuilt. johnmark will tell more about it soon.
14:19 michig Ok, thanks for info.
14:21 Azrael808 joined #gluster
14:24 johnmark michig: sorry guys :(
14:24 johnmark we are working to restore download services ASAP.
14:24 michig no problem ;)
14:25 johnmark michig: are you debian or ubuntu?
14:25 michig debian
14:25 johnmark there are ubuntu builds by semiosis that you may find useful
14:25 michig are .debs for debian stable (squeeze) planned for the 3.3.1?
14:25 michig since the 3.3.0 was only for lenny and wheezy?
14:26 johnmark michig: that is the plan. I'm trying to get in touch with our debian maintainer
14:26 michig very nice!
14:26 michig thanks :)
14:26 johnmark yeah, more on that later
14:26 johnmark I'd like to have both stable and unstable
14:26 michig me 2
14:27 michig what do you think how long it will take? Cause I'm currently setting up a new Gluster.
14:27 michig If it only takes a few days I would wait for the 3.3.1
14:28 johnmark michig: I would recommend waiting. I don't see it being more than a few days
14:28 michig Okay, so I will wait for it :)
14:28 michig Where can I find a changelog for 3.3.1?
14:29 wushudoin joined #gluster
14:32 johnmark michig: we
14:32 johnmark we're cleaning it up at the moment, but if you have cloned the git tree
14:32 johnmark 07:30 <hagarth> git checkout -b release-3.3 origin/release-3.3
14:32 johnmark 07:30 <hagarth> git log v3.3.0..HEAD
14:33 stickyboy Hooray for git... /me goes to clone.
14:33 johnmark michig: ^^^ that gives you all the changes from 3.3.0 to 3.3.1
14:33 michig k tnx
14:34 michig I didn't use git for gluster before, I was just downloading the .debs from download.gluster.org
14:35 johnmark michig: I hear ya
14:37 stickyboy johnmark: Ah, you guys must be using gerrit.
14:38 kkeithley stickyboy: yes
14:39 stickyboy You guys are so hip, it's awesome.
14:41 johnmark haha :)
14:41 stickyboy 492 commits in master since 3.3.0 was tagged.
14:41 johnmark stickyboy: I don't think we've been accused of that before :)
14:42 stickyboy Nah it's cool.  I dig gerrit.  It's great when people actually use it.
14:42 stickyboy Though I've only ever used it in community projects, like AOSP or CyanogenMod (Android stuff).
14:42 johnmark ah, CM
14:42 stickyboy When you have professional devs it's probably even better.
14:43 johnmark do you work on the CM project?
14:43 stickyboy Yah.
14:43 johnmark SWEET
14:43 stickyboy Well, I've ported a few devices hehe.
14:43 johnmark awesome
14:43 stickyboy But I'm not a "CM" dev. :P
14:43 johnmark heh, ok
14:43 stickyboy Why, you want me to try gluster on my phone? :D
14:45 johnmark stickyboy: well, actually...
14:45 johnmark putting together an android client for our UFO stuff would rock
14:45 johnmark which basically means an android client for swift
14:45 stickyboy UFO == The object storage?
14:45 johnmark rackspace kinda sorta has one, but it really blows
14:45 johnmark indeed
14:46 johnmark stickyboy: their ios app can work with UFO, with some slight config changes
14:46 johnmark but their android app is a few revs behind
14:48 stickyboy So a mobile app opens up possibilities like access to "all your stuff" from anywhere.
14:48 johnmark yup
14:48 y4m4 joined #gluster
14:49 stickyboy Yeah, I guess without a native API for that you'd have to do it over a web service or something.
14:50 stickyboy On a server where your gluster storage is mounted.
14:50 daMaestro joined #gluster
14:50 stickyboy So what's swift?
14:50 johnmark stickyboy: right
14:50 johnmark stickyboy: swift is the object storage project for openstack
14:51 pdurbin can glusterbot do google searches?
14:51 stickyboy Ah ok, I'm actually clueless about openstack.
14:53 johnmark pdurbin: I think so
14:53 johnmark @lucky pdurbin
14:53 glusterbot johnmark: https://github.com/pdurbin
14:54 pdurbin nice!
14:54 johnmark heh
14:54 johnmark @lucky johnmark
14:54 glusterbot johnmark: http://en.wikipedia.org/wiki/John_Mark
14:54 pdurbin @lucky openstack swift
14:54 glusterbot pdurbin: http://swift.openstack.org/
14:54 pdurbin stickyboy: there you go :)
14:54 stickyboy Hehe.
15:02 eadric joined #gluster
15:03 jbrooks joined #gluster
15:04 tc00per Has anybody seen files left on bricks after completing removal of peer?
15:04 jbrooks joined #gluster
15:05 eadric hi, can someone tell me where to find the glusterfs install documentation, download.gluster.org gives me timeouts
15:06 eadric so I guess I meant alternative location for documentation. Other than: http://download.gluster.org/pub/gluster/glusterfs/3.2/Documentation/IG/
15:07 eadric tia
15:08 johnmark eadric: unfortunately, we do not have that doc at teh moment, but I will try to fish it out
15:10 eadric ok I'll wait, thx
15:10 johnmark eadric: have you tried the new user guides? http://www.gluster.org/community/documentation/index.php/Getting_started_overview
15:10 johnmark and http://www.gluster.org/community/documentation/index.php/QuickStart
15:10 stickyboy eadric: I have Gluster_File_System-3.2.5-Installation_Guide-en-US.pdf as well as Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
15:10 eadric stickyboy, that be great
15:11 eadric johnmark, I'm looking for specific thing I think I read before
15:12 stickyboy eadric: You want just the Install guide?
15:12 eadric both would be great just in case
15:13 stickyboy Ok.
15:14 stickyboy eadric: Install guide: http://www.filepup.net/files/n0Yok1350314106.html
15:14 glusterbot Title: FilePup.net - Dead Simple File Sharing! (at www.filepup.net)
15:15 eadric thanks
15:15 stickyboy eadric: Admin guide: http://www.filepup.net/files/g3Ne1350314184.html
15:15 glusterbot Title: FilePup.net - Dead Simple File Sharing! (at www.filepup.net)
15:17 eadric and thats 2, thanks a lot.
15:18 stickyboy No prob.
15:18 manik joined #gluster
15:18 stickyboy Alright, it's quitting time over here in GMT+3.  Adios!
15:20 neofob joined #gluster
15:31 tryggvil joined #gluster
15:37 _Marvel_ joined #gluster
15:39 tc00per All files/links appear to have been moved successfully to active bricks but glusterd didn't clean up after itself properly on the node 'to be removed'.
15:40 tc00per I'm going to remove another node from my test gluster cluster and am looking for tips on what to 'collect' to see if I can identify a bug. Any suggestions?
15:55 seanh-ansca joined #gluster
15:55 duerF joined #gluster
15:56 glusterbot New news from newglusterbugs: [Bug 866557] Some error messages logged should probably be warnings <https://bugzilla.redhat.com/show_bug.cgi?id=866557>
16:13 themadcanudist joined #gluster
16:13 themadcanudist left #gluster
16:14 hagarth1 joined #gluster
16:19 ika2810 joined #gluster
16:21 hagarth joined #gluster
16:22 ankit9 joined #gluster
16:28 Mo___ joined #gluster
16:33 hagarth joined #gluster
16:38 Gilbs joined #gluster
16:42 aliguori joined #gluster
16:48 noob2 joined #gluster
16:49 noob2 hey guys, i'm noticing something odd on a gluster fuse mount when i try to copy a 500MB text file from one directory to the directory above it.  It takes about 25 minutes when i time it.  On another machine with just nfs and no fuse client, it takes between 3-5 seconds.  I timed it a few times
16:50 noob2 strace shows both clients doing the same thing.  reading the file in 4k chunks and writing it back.
16:50 JoeJulian noob2: Doesn't surprise me all that much. That directory probably hashes to a different server so you're literally copying between servers.
16:51 noob2 ok, that's what i was trying to figure out
16:51 noob2 why would the nfs client have such a large copy time difference?
16:52 manik joined #gluster
16:52 sashko joined #gluster
16:53 JoeJulian My suspicion is that nfs doesn't actually complete the transaction when it tells you it's done, but I don't know that for sure.
16:53 noob2 ah
16:54 noob2 interesting
16:54 JoeJulian though 3 to 5 seconds is an odd number for that.
16:54 noob2 whys that?
16:54 JoeJulian Would be interesting to wireshark it and see what's happening.
16:54 noob2 ok
16:54 noob2 let me dig up some more data
16:54 JoeJulian Because if it's going to lie to you it should do it immediately.
16:55 noob2 right
16:55 noob2 you say copy and it says I'm done!  but really it just informed the server to do that and the server copies it as it can
16:56 Gilbs Howdy all, I need to enable the locks feature to use fcntl locking, but I don't see a detailed way of enabling this.  Do I add this to the glusterd.vol and restart glusterd?
16:56 Gilbs url: http://gluster.org/community/documentation/index.php/Translators/features
16:56 vimal joined #gluster
16:56 JoeJulian posix locking is enabled by default.
16:57 hagarth joined #gluster
16:57 Technicool joined #gluster
16:57 Gilbs Ah, the site says: This option enables mandatory locking semantics on the posix locks. By default this option is 'off'
16:59 Gilbs That's a bummer, I'm having issues with CTDB and the re-lock file unable to be locked with gluster.  My last shot was to hope fcntl locking was disabled.  Any ideas or similar issues seen out there?
16:59 JoeJulian Well, that's true if you're building your own vol files ala GlusterFS 1.0-3.0.
17:00 JoeJulian Let me look through the chat archives for the link I thought I posted...
17:00 Gilbs Thank you.
17:01 JoeJulian What nick were you using last time?
17:02 Gilbs same
17:03 Gilbs This one?  http://download.gluster.com/pub/gluster/systems-engineering/Gluster_CTDB_setup.v1.pdf
17:04 JoeJulian Nope. Damn, can't find it...
17:06 Gilbs samba/ctdb guys are saying posix locking is not enabled, testing with ping_pong that yes it is not locking correctly.  I just want ctdb and gluster to play nice :)
17:07 JoeJulian Tell me about this test that's failing.
17:08 Gilbs https://wiki.samba.org/index.php/Ping_pong
17:08 glusterbot Title: Ping pong - SambaWiki (at wiki.samba.org)
17:08 wN_ joined #gluster
17:09 wN joined #gluster
17:09 Gilbs "This can in particular be tested with the -rw option toping_pong: If you run "ping_pong -rw /path/to/file 3" on one node and then "ping_pong -rw /path/to/file 3" on a second node, you should see the "data increment" notice (going from "1" to "2"), indicating that you now have two processes operating on the same file. If this stays constant (at 1) then your gluster setup does not provide sufficient fcntl byte range lock suppo
17:09 manik joined #gluster
17:11 JoeJulian That certainly looks reproducible. Let's see if I can get the same result.
17:14 JoeJulian Nope...
17:14 JoeJulian What version are you using?
17:14 Gilbs 3.3
17:15 JoeJulian With 3.3.1 from the ,,(yum repo) I'm getting abour 900 locks/sec.
17:15 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
17:16 JoeJulian Hmm, the volume I'm testing on does have performance.write-behind off
17:16 JoeJulian Try that.
17:21 hagarth left #gluster
17:22 hagarth joined #gluster
17:24 Gilbs Phone always rings when you're getting somewhere...  When you did the ping_pong, did the "data increment" go from 1 to 2 when you started the second node?
17:26 JoeJulian Ah, I see...
17:34 noob2 JoeJulian: i have some dstat data of the nfs copy if you want to take a look
17:36 JoeJulian Gilbs: Interestingly, despite the data increment not increasing, the locks/sec certainly change a lot when using more clients.
17:37 JoeJulian Gilbs: Could you please file a bug report. I'll substantiate it.
17:37 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:38 edward1 joined #gluster
17:43 JoeJulian Gilbs: I can't test it on this volume and I need to get out the door and to the office sometime before noon... But could you also test setting cluster.eager-lock on and see if that works any differently?
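(The two options under discussion can be toggled like this, assuming a volume named "myvol"; whether either one changes the ping_pong result is exactly what is being asked to verify:

    gluster volume set myvol performance.write-behind off
    gluster volume set myvol cluster.eager-lock on

As confirmed later in the log, volume set changes take effect without restarting glusterd or the volume.)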
17:43 JoeJulian noob2: Could be interesting. Wanna fpaste that?
17:43 noob2 sure
17:43 noob2 http://fpaste.org/zi5T/
17:44 glusterbot Title: Viewing Paste #243420 (at fpaste.org)
17:45 JoeJulian So it's definitely not doing the read/write dance.
17:45 noob2 doesn't seem to be
17:46 noob2 that's over nfs
17:50 ankit9 joined #gluster
17:51 hagarth joined #gluster
17:52 Gilbs Sure thing, I'll test and file this afternoon.
17:56 MrHeavy joined #gluster
17:57 MrHeavy Are there any companies besides Red Hat offering support for Gluster? The pricing on Red Hat Storage Server is insane -- we can buy scale-out SAN for their node licensing alone
17:57 H__ what are the 0 bytes ---------T mode files for during a rebalance layout ?
17:58 semiosis @link files
17:58 JoeJulian MrHeavy: Really, just out of curiosity, what's the price tag?
17:58 adechiaro MrHeavy: what are they charging for it?
17:58 MrHeavy It's bad professional ethics to disclose confidential vendor pricing
17:59 MrHeavy But it's not what I expected given that we already have a RHEL site license
17:59 semiosis translation... price depends on who you are
17:59 JoeJulian They price it confidentially? Erm, okay...
17:59 * semiosis guesses
18:01 semiosis H__: those are link files, they have xattrs that point to the brick location of that file path.  glusterfs places them where a file "should" be, as a pointer to where the file *is*
18:02 semiosis H__: to save future lookups from having to poll all bricks, when a file is not at the location it should be, as determined by the elastic hash algorithm
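(A way to inspect one of these pointer files directly on a brick; the path is hypothetical and the command is run on the server hosting the brick:

    getfattr -m . -d -e hex /export/brick1/path/to/file

For a 0-byte ---------T entry this should show a trusted.glusterfs.dht.linkto attribute naming the subvolume that actually holds the file.)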
18:03 MrHeavy Anyway, I was just wondering if anyone else was in on the game, thanks
18:03 faizan joined #gluster
18:06 JoeJulian MrHeavy: Not that I'm aware of. I've heard that Canonical is trying to hire and train up some people, but I have no idea whether that's going anywhere.
18:21 H__ semiosis: ah ok. cool. So that's what the next rebalance step works on
18:22 Gilbs JoeJulian: Do i need to restart glusterd or the volume when I set the cluster.eger-lock?
18:22 semiosis H__: yep... do you know for sure they were put there by the fix-layout?  could they have been there before?
18:23 semiosis my understanding is that they're not put there by fix-layout...
18:23 semiosis though should be "cleaned up" by the full rebal
18:27 H__ semiosis: i've started the fix-layout 3 days ago, the disks are thus also in regular production use
18:28 semiosis wow
18:28 semiosis and status reports it's still going? i thought fix-layout was a quick(er) operation
18:28 semiosis but idk for sure
18:29 jdarcy Fix-layout still has to traverse all of the directories.  Still, three days seems excessive.  How many directories are we dealing with here?
18:30 H__ semiosis: yes, still running. it reports "rebalance step 1: layout fix in progress: fixed layout 773985" now, whatever that means ;-)
18:31 semiosis sorry to hear that
18:31 H__ jdarcy: well, with traversing the tree this slow I dont' dare to walk it to find out :-p
18:31 tc00per Is there a 3.3.1 Admin Guide out yet?
18:31 faizan joined #gluster
18:31 jdarcy H__: Heh, yeah, I was hoping you'd have a ballpark figure already in mind.
18:32 H__ something around 1M
18:33 H__ the bigger trees use NN/NN/NN/NN with N a number 0-9
18:33 jdarcy Something definitely seems slower than it should be, then.  Three days is what I might expect for say 50M files, not 1M.
18:33 H__ sorry, i meant 1M directories
18:36 jdarcy Hm.  Sad to say, that might almost make sense.  We have to readdir(p) past all the files to find the directories, so if there are ~50 files per directory then three days is consistent with what I've seen elsewhere.  :(
18:37 jdarcy Insert standard rant about directory-operation performance here.
18:40 * H__ makes up one then " boo for not reaching below 1ms rebalance performance ' :-D
18:42 jdarcy Yeah, I'm not going to offer a defense on that one.  It's just bad.
18:42 tc00per Attempting/testing failed server replacement per: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server my 'living' peer doesn't have an /etc/glusterd directory. Where do I/can I find the UUID of the failed peer?
18:43 H__ in the other peers' config directories ?
18:43 jdarcy tc00per: /var/lib/glusterd/peers perhaps?
18:43 * jdarcy thought it was still in /etc for 3.2 but could be wrong.
18:44 tc00per I'm using 3.3.1... howto on gluster.org is for 3.2.
18:45 jdarcy Ah, then it should definitely be in /var/lib instead.
18:45 H__ interesting, so /etc/glusterd/peers/ will move to /var/lib/glusterd/peers/ when I upgrade (3.2.5 here)
18:46 jdarcy Correct.  Fedora doesn't like polluting /etc and AFAIK other distros are coming along as well.
18:46 tc00per In that howto it suggests I will find the UUID in the peers directory when grep'ing for the failed hostname. No UUID is returned...
18:46 hagarth1 joined #gluster
18:48 tc00per OK... it's obviously because there is ONLY ONE PEER in my gluster cluster. The file in /var/lib/glusterd/peers/ is named after the peer. grepping for a wildcard does nothing. Nevermind...
18:48 Gilbs When I do a volume set <option> do I need to restart glusterd or the volume for the changes to take?
18:48 Technicool tc00per, cat /var/lib/glusterfs/glusterd.info
18:48 Technicool is the UUID of the node you are on
18:49 crashmag joined #gluster
18:50 Technicool oops
18:50 Technicool typoed
18:50 Technicool cat /var/lib/glusterd/glusterd.info
18:50 tc00per Yup... and /var/lib/glusterd/peers/ contains files with names matching existing peers and contents matching hostnames of those peers. Thanks.
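(Summarizing the commands from this exchange; the paths are as shipped with 3.3:

    cat /var/lib/glusterd/glusterd.info                  # UUID of the server you are on
    grep -r <failed-hostname> /var/lib/glusterd/peers/   # find the failed peer's UUID from a surviving peer

On 3.2 and earlier the same files live under /etc/glusterd instead.)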
18:52 tc00per I'm going to reinstall this 'failed' host and try to bring it back online now... please stand by a few minutes... :)
18:53 tc00per While I'm doing that... did anybody see my Q. from earlier this morning about files hanging around on bricks that had been removed? Expected or not? Data access seems fine... should I care?
18:54 kkeithley removing a brick from the cluster does not delete the files on the disk, that's normal. Is that what you're asking?
18:55 jdarcy Seems fine to me.  Deleting files takes time, and there's no reason to do it for files we no longer care about.
18:55 layer3 joined #gluster
18:56 sashko joined #gluster
18:56 tc00per kkeithley: Essentially yes. Looked at brick contents after remove-brick/commit to see that all was gone. MOST was but a 'handful' of files remained. Perhaps I shut down glusterd too soon to complete the 'cleanup'.
18:59 jdarcy I wonder if it's the "rebalance away" part that does that.
19:01 Nr18 joined #gluster
19:01 semiosis Gilbs: no
19:01 tc00per Rebalance was observed with 'watch ... status' until 'complete' on all peers/bricks. Files remained after that AND after 'Rebalance is complete' was written to the log.
19:11 Gilbs Thank you
19:23 oneiroi joined #gluster
19:27 layer3 joined #gluster
19:28 vimal joined #gluster
19:32 samppah @latest
19:32 glusterbot samppah: The latest version is available at http://goo.gl/TI8hM and http://goo.gl/8OTin See also @yum repo or @yum3.3 repo or @ppa repo
19:33 samppah @yum3.3 repo
19:33 glusterbot samppah: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
19:51 ankit9 joined #gluster
19:52 samppah @changelog
19:52 samppah @meh
19:52 glusterbot samppah: I'm not happy about it either
19:56 nightwalk joined #gluster
20:13 tc00per Reinstalled 'failed' node, re-added as a peer (same ip/hostname/UUID), bricks in place, glusterd restarted, attempt to trigger self-heal... nothing seems to be getting copied to peer. Ideas?
20:19 Nr18 joined #gluster
20:20 Gerben joined #gluster
20:38 johnmark commits for 3.3.1: http://www.gluster.org/community/documentation/index.php/GlusterFS_3.3.1
20:50 sashko joined #gluster
20:53 Technicool joined #gluster
20:54 akadaedalus joined #gluster
20:55 mooorten joined #gluster
21:01 Gerben joined #gluster
21:09 Pushnell_ joined #gluster
21:11 badone_home joined #gluster
21:19 Pushnell_ Hey all.  reading the various GSGs now, but wondering if anyone would be willing to help me figure out if gluster makes sense for us … we have a 30Gb write server being distributed to several read servers with rsync several times a day.  we want to keep the data out on the read servers for performance and redundancy (vs something like NFS) but it'd be nice to have write updates propagate automatically (and immediately)
21:19 Pushnell_ and to also be able to rebuild the write server, should it go down, using the data already available on the read servers
21:20 Pushnell_ does this sound like a cluster of 'servers' with no clients?
21:21 JoeJulian Sounds like you "write server(2)" would be gluster servers and your "read" sites would be either clients or georeplication, depending on latency, etc.
21:21 JoeJulian s/2/s/g
21:21 glusterbot What JoeJulian meant to say was: Sounds like you "write server(s)" would be gluster servers and your "read" sites would be either clients or georeplication, depending on latency, etc.
21:22 Matthaeus joined #gluster
21:22 nueces joined #gluster
21:22 JoeJulian where the hell did I get 2... I must be a little distracted...
21:24 tryggvil joined #gluster
21:24 tc00per Correction to ^^^.... glusterd was _only_ restarted on the failed/reinstalled peer. It would seem it must be restarted on _all_ peers for the self-heal to work. Self-heal now complete... no errors in the client volname-selfheal.log file.
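(On 3.3, triggering and checking the full self-heal described here looks roughly like this, assuming a volume named "myvol":

    gluster volume heal myvol full
    gluster volume heal myvol info

"heal ... full" walks the whole volume rather than only the entries already flagged as needing heal.)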
21:25 Pushnell_ JoeJulian: thanks.  (and oddly enough, I make the same '2 vs S' mental mistake fairly often … visually mirrored characters methinks)
21:26 semiosis JoeJulian: i thought you were referring to a syscall named server
21:26 semiosis but that didnt make sense
21:27 JoeJulian I'm not infrequently good at not making sense.
21:37 Matthaeus That sentence took me a full minute to parse.
21:52 semiosis wb Matthaeus
21:53 duerF joined #gluster
21:54 Matthaeus Thanks much!  No longer working at the university, so I haven't had much occasion to use gluster recently.
21:56 mooorten1 joined #gluster
22:02 elyograg tc00per: I am still wading through the backlog, saw your comments from a few hours ago.  WHen I did the remove-brick start/status/commit thing, some of the files got left behind.  After the commit, I spot checked a bunch of them that were still there, and they were accessible from the client mount with no problem.
22:04 tc00per elyograg: Thanks... that is also what I found.
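(The remove-brick sequence being referred to, sketched for a volume "myvol" and a hypothetical brick:

    gluster volume remove-brick myvol server2:/export/brick1 start
    gluster volume remove-brick myvol server2:/export/brick1 status
    gluster volume remove-brick myvol server2:/export/brick1 commit

start migrates data off the brick, status reports progress, and commit detaches the brick once migration is done; as noted here, a few stray files may still remain on the removed brick afterwards.)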
22:04 wiqd_ joined #gluster
22:05 wiqd joined #gluster
22:06 tc00per I've since completely reinstalled ALL peers from scratch one-by-one while keeping the data available the whole time. Dist-repl is definitely the way to go if this kind of capability is required.
22:06 ankit9 joined #gluster
22:06 elyograg tc00per: Awesome.  That's the kind of resiliency I want.
22:08 tc00per Granted... my 'testing' peers only have two 300GB SAS drives/bricks each right now and I only had 6GB of data live. Don't know if I could do the same so easily with a larger system other than simply waiting longer for the self-heals. By the sounds of things they _can_ get pretty lengthy.
22:08 elyograg A question to all: This morning I had an idea -- use xfs for all the left-hand brick servers and btrfs for all the right-hand brick servers.  do periodic (hourly or similar) snapshots on the btrfs bricks.  in the event of an extreme catastrophe (user error), old versions of the data will be available, but a little scattered.
22:09 elyograg Is that something any of you would try?
22:11 wiqd hey folks, http://download.gluster.org appears to be for me, are there any mirrors available ?
22:11 semiosis johnmark: ^^
22:11 semiosis johnmark: i was just going to ping you about that
22:12 semiosis wiqd: they were having some issues with it friday, idk the current status though
22:12 Technicool semiosis, wiqd  for RHEL based distros, you can use bits.gluster.com
22:12 semiosis hey eco
22:12 wiqd semiosis: thanks
22:12 Technicool afternoon
22:12 wiqd Technicool: actually after the deb's
22:12 semiosis wiqd: for what distro/version?
22:13 Technicool http://bits.gluster.com/pub/gluster/glusterfs/
22:13 glusterbot Title: Index of /pub/gluster/glusterfs (at bits.gluster.com)
22:13 Technicool wiqd, unfortunately don
22:13 Technicool oops
22:13 Technicool don't have access to those at the moment
22:13 wiqd Technicool: Debian Squeeze 6.0.5
22:13 Technicool to be overly cautious we disabled downloads of any bits from the server that was compromised
22:14 Technicool you can still build from source
22:14 wiqd always a good precaution :P
22:14 wiqd no problem, that sounds fine, thanks again
22:15 semiosis wiqd: i'm probably going to end up doing a debian squeeze build but it's going to be a day or few before i have that ready for you
22:15 semiosis we need debian packaging help :)
22:16 semiosis anyone know of a service like launchpad ppa's for debian?  with the build servers & everyhitng?
22:16 semiosis s/everyhitng/everything/
22:16 glusterbot What semiosis meant to say was: anyone know of a service like launchpad ppa's for debian?  with the build servers & everything?
22:17 wiqd semiosis: sounds good, I'd love to help but I'd need to do a lot of brushing up on current packaging requirements
22:19 semiosis ok thx anyway
22:20 tc00per ^^^ compromised?
22:26 tc00per RE: ^^^... Are you referring to a historical or current event?
22:26 wiqd semiosis: I couldn't find anything similar to Launchpad ppa's for Debian, so I just rolled my own with a commit-hook, reprepro and Amazon S3 :P
22:32 semiosis how about the build server part?
22:33 semiosis you know, building for amd64, i386 architectures, across the various releases of the distro?
22:33 semiosis i think pbuilder can do that, but wasn't able to get it to work right last time i tried
22:36 wiqd semiosis: yeah my Debian stuff is all Python based, so it's a little easier.  It wouldn't work for anything that requires builds for architectures...
22:37 wiqd pbuilder is quite fiddly
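(A rough sketch of the classic pbuilder workflow being discussed; the .dsc filename is hypothetical and the options may need adjusting per release:

    sudo pbuilder --create --distribution squeeze
    sudo pbuilder --build glusterfs_3.3.1-1.dsc

The first command prepares a clean squeeze chroot, the second builds the package inside it; separate chroots are needed per architecture and release, which is part of what makes this fiddly.)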
22:41 wushudoin joined #gluster
22:56 lanning joined #gluster
22:58 semiosis bug 825906
22:59 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=825906 urgent, medium, ---, kaushal, ASSIGNED , man pages are not up to date with 3.3.0 features/options.
23:00 semiosis i can haz man pagez?
23:01 JoeJulian I always feel a bit awkward when I forget the command line options to "touch" and have to read the man page...
23:09 semiosis barcelona's probably real nice this time of year
23:09 semiosis oh wait, that's next month
23:09 semiosis JoeJulian: you going?
23:09 JoeJulian I wish
23:11 semiosis me too
23:11 tc00per Are the existence of man pages a function of packaging or the source? My install is from kkeithley's repo and I _only_ have a man page for gluster.
23:11 semiosis for some strange reason, building packages make me yearn for a european vacation
23:12 semiosis tc00per: all of the above
23:12 semiosis if the source doesn't have man pages, the package sure wont either.  but if the source does have man pages, the package needs to install them into the system
23:15 tc00per May also have another little packaging problem as well. Logrotate is complaining on my client because two entries in /etc/logrotate.d want to rotate my glusterfs.log file. Also, the glusterfs rpm installed /etc/logrotate.d/glusterd which seems a bit odd.
23:15 tc00per s/problem/problems/
23:15 glusterbot What tc00per meant to say was: May also have another little packaging problems as well. Logrotate is complaining on my client because two entries in /etc/logrotate.d want to rotate my glusterfs.log file. Also, the glusterfs rpm installed /etc/logrotate.d/glusterd which seems a bit odd.
23:16 tc00per s/another/other/
23:16 glusterbot What tc00per meant to say was: May also have other little packaging problem as well. Logrotate is complaining on my client because two entries in /etc/logrotate.d want to rotate my glusterfs.log file. Also, the glusterfs rpm installed /etc/logrotate.d/glusterd which seems a bit odd.
23:16 tc00per s/I/give up/
23:16 glusterbot What tc00per meant to say was: Are the existence of man pages a function of packaging or the source? My install is from kkeithley's repo and give up _only_ have a man page for gluster.
23:16 semiosis glusterbot: awesome
23:16 glusterbot semiosis: ohhh yeeaah
23:16 JoeJulian I don't have a problem with logrotate
23:19 JoeJulian This is why I would want to go to Spain: https://plus.google.com/photos/113457525690313682370/albums/5613146957561803233/5799679532305077986?authkey=CJSC7ICewL7cAQ
23:19 penglish JoeJulian: cheap real estate there I hear
23:19 JoeJulian (btw, if you look closely you can see the Starbucks headquarters in that picture)
23:20 JoeJulian penglish: Can you guess where that picture is? :D
23:21 penglish That's the viaduct south of the city center.. I don't know the cross streets, but it must be near your office!
23:22 JoeJulian Yep, that's from our building.
23:23 tc00per Yum claims the file /etc/logrotate.d/glusterfs comes from glusterfs-client-2.0.9-2.el5.x86_64 which, according to /var/log/yum.log, has never been installed. Don't know how it got there. Getting rid of it...
23:23 JoeJulian D'oh!
23:24 JoeJulian I'd have to guess it came from anaconda.
23:26 semiosis apparently man pages are still an issue
23:27 semiosis not sure how to read doc/Makefile.in, but looks like only gluster.8 may be built
23:27 semiosis tc00per: which would seem to explain what you saw
23:28 JoeJulian ... he just can't leave it alone. No matter how many times I ask him not to comment on my professional posts, my dad just has to insist on embarrassing himself.
23:30 semiosis hahaha
23:32 vimal joined #gluster
23:40 ankit9 joined #gluster
23:59 tc00per Client cleaned of all "*glusterfs*" files/dirs and reinstalled. The offending /etc/logrotate.d/glusterfs file did not reappear. It either came from an older install (3.3.0 is the only thing I've successfully used) or operator error. The logrotate problem is gone.
