
IRC log for #gluster, 2013-04-05


All times shown according to UTC.

Time Nick Message
00:09 nueces joined #gluster
00:44 lh joined #gluster
00:44 lh joined #gluster
01:12 mooperd joined #gluster
01:17 bala1 joined #gluster
01:33 flrichar joined #gluster
01:41 bala1 joined #gluster
01:47 mooperd joined #gluster
02:03 portante joined #gluster
02:07 glusterbot New news from newglusterbugs: [Bug 948657] Fix failing unit tests <http://goo.gl/LbFby>
02:07 yinyin joined #gluster
02:21 kevein joined #gluster
02:41 Supermathie avati_: Nope, firewall is definitely off. I see nlockmgr actively refusing the locks.
03:08 tristanz joined #gluster
03:26 yinyin joined #gluster
03:49 sgowda joined #gluster
03:53 fidevo joined #gluster
03:55 fidevo left #gluster
04:02 lalatenduM joined #gluster
04:03 lala_ joined #gluster
04:31 aravindavk joined #gluster
04:31 mohankumar joined #gluster
04:32 sripathi joined #gluster
04:32 satheesh joined #gluster
04:38 deepakcs joined #gluster
04:41 yinyin joined #gluster
04:55 bala joined #gluster
04:56 hagarth joined #gluster
05:05 vpshastry joined #gluster
05:13 fidevo joined #gluster
05:16 sripathi1 joined #gluster
05:22 bala joined #gluster
05:27 piotrektt_ joined #gluster
05:27 saurabh joined #gluster
05:40 raghu joined #gluster
05:43 bala joined #gluster
05:52 fling joined #gluster
05:52 fling May I use ftp hosts as a backend somehow?
05:52 fling I have multiple ftp accounts with 100G space availabre
05:52 fling s/availabre/available/
05:52 fling I want to use it as a single filesystem
05:52 glusterbot fling: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
06:01 ngoswami joined #gluster
06:01 vshankar joined #gluster
06:02 NuxRo fling: it does not work with ftp, you need a proper filesystem as backend (xfs recommended)
06:09 fling NuxRo: I use xfs, ok
06:15 Nevan joined #gluster
06:17 sripathi joined #gluster
06:20 ollivera joined #gluster
06:22 jtux joined #gluster
06:31 jkroon joined #gluster
06:34 ricky-ticky joined #gluster
06:36 rastar joined #gluster
06:38 glusterbot New news from newglusterbugs: [Bug 918917] 3.4 Beta1 Tracker <http://goo.gl/xL9yF>
06:50 vimal joined #gluster
07:00 ctria joined #gluster
07:05 kevein joined #gluster
07:08 glusterbot New news from newglusterbugs: [Bug 948692] [FEAT] Make geo-rep start and stop distributed. <http://goo.gl/u6aMe> || [Bug 948698] [FEAT] Implement geo-rep start force and stop force <http://goo.gl/puCaY>
07:10 bala joined #gluster
07:20 twx joined #gluster
07:30 bala joined #gluster
07:34 andreask joined #gluster
07:40 ricky-ticky joined #gluster
07:51 vpshastry joined #gluster
07:52 camel1cz joined #gluster
07:57 dobber_ joined #gluster
08:08 zetheroo joined #gluster
08:11 camel1cz joined #gluster
08:13 shireesh joined #gluster
08:17 camel1cz left #gluster
08:29 redbeard joined #gluster
08:37 puebele1 joined #gluster
08:39 shylesh joined #gluster
08:55 puebele1 joined #gluster
08:59 Norky joined #gluster
09:00 DEac- joined #gluster
09:08 glusterbot New news from newglusterbugs: [Bug 948729] gluster volume create command creates brick directory in / of storage node if the specified directory does not exist <http://goo.gl/Og5Sf>
09:18 ollivera I need to retire a brick and move all the files from the retired brick to a specific brick.
09:18 ollivera Is there any tool available for this task? If not, I guess I need to know how the hash values are calculated, isn't it?
09:18 Chiku|dc what is Multi-master GeoReplication ?
09:18 Chiku|dc for 3.4
09:19 Chiku|dc you can use master-master for geo?
09:20 ProT-0-TypE joined #gluster
09:26 sripathi joined #gluster
09:29 H__ Chiku|dc: that's the plan
09:36 satheesh1 joined #gluster
09:38 glusterbot New news from resolvedglusterbugs: [Bug 927111] posix complaince test fails <http://goo.gl/Uy4OC> || [Bug 927112] posix compliance test fails <http://goo.gl/MBmCB>
09:38 glusterbot New news from newglusterbugs: [Bug 927109] posix complaince test fails <http://goo.gl/QDseS>
09:43 tryggvil joined #gluster
09:43 kevein joined #gluster
09:52 Norky ollivera, it sounds like a replace-brick operation will do what you want...
10:09 InnerFIRE left #gluster
10:09 karoshi joined #gluster
10:09 karoshi hi, what exactly is a gluster "GA release"?
10:10 Norky General Availability - http://en.wikipedia.org/wiki/Software_release_life_cycle#Release
10:10 glusterbot <http://goo.gl/a9h5w> (at en.wikipedia.org)
10:12 karoshi so, isn't the 3.2.x branch generally available?
10:12 Norky it is/was, but 3.3 supercedes it
10:12 karoshi ok thanks
10:14 shireesh joined #gluster
10:15 hagarth joined #gluster
10:18 jtux joined #gluster
10:40 jtux joined #gluster
10:46 manik joined #gluster
10:53 vpshastry joined #gluster
10:57 zwu joined #gluster
10:59 ollivera joined #gluster
11:08 jclift joined #gluster
11:13 ollivera Norky, will existing data be moved as part of rebalance (migrate-data)? I guess rebalance could take days and I want to avoid that
11:19 hagarth joined #gluster
11:20 aravindavk joined #gluster
11:21 satheesh joined #gluster
11:23 Norky ollivera, yes, just make sure you include the "start " option
11:23 Norky then monitor it with "status"
11:23 Norky it will take a long time if you have a lot of data
11:25 Norky https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Migrating.html
11:25 glusterbot <http://goo.gl/Pw9Ur> (at access.redhat.com)
11:27 ollivera Norky, thank you for the link.
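For reference, the replace-brick workflow Norky points to follows a start/status/commit pattern in the 3.2/3.3 CLI; a minimal sketch, with hypothetical volume and brick names:
    gluster volume replace-brick myvol server1:/export/old-brick server2:/export/new-brick start
    gluster volume replace-brick myvol server1:/export/old-brick server2:/export/new-brick status
    # once the migration shows as complete:
    gluster volume replace-brick myvol server1:/export/old-brick server2:/export/new-brick commit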
11:28 puebele2 joined #gluster
11:30 puebele3 joined #gluster
11:35 shireesh joined #gluster
11:35 ollivera Norky, yes, I have a lot of data. Is there anyway of avoiding the rebalance?
11:43 Norky well, no
11:44 Norky well, I suppose you could remove the brick straight away (without the "start" option), then the files on that brick would no longer be available in your volume. Then you could copy data from the brick-to-be-retired on to the fuse mountpoint of the entire volume
11:45 Norky you'd still want a rebalance at some point
11:45 Rydekull joined #gluster
11:46 Norky another option might be to stop the volume, move the data, including the .glusterfs hierarchy, from the old brick to the new and (assuming the old and new bricks are on the same server) give the new brick the mountpoint of the old
11:46 Rydekull joined #gluster
11:49 puebele1 joined #gluster
11:58 nitalaut_ joined #gluster
11:59 nitalaut_ Hello everyone, is it possible to add gluster 3.3 server into 3.2 pool ? I'd like to migrate from 3.2 to 3.3 but I can't afford big downtime.
12:00 H__ not possible
12:00 nitalaut_ so the only way is uninstall 3.2 and install 3.3 ?
12:00 H__ i'm doing a 3.2.5 -> 3.3.1 upgrade myself soon, with downtime but i scripted the whole thing with parallelism on all servers and clients to <60sec
12:01 nitalaut_ hmm
12:01 nitalaut_ ok, thanks
12:01 H__ from 3.3 onwards we're promised live upgrades again
12:03 nitalaut_ I hope )
12:03 kkeithley It should be the case for 3.3->3.4 at least.
12:04 andreask joined #gluster
12:08 niv left #gluster
12:12 Chiku|dc what about add gluster 3.4 server into 3.3 pool ?
12:13 Chiku|dc any error message when you add 3.2 server ? or OK message and it just crash your 3.3 pool  ?
12:14 dustint joined #gluster
12:18 aravindavk joined #gluster
12:19 vpshastry joined #gluster
12:26 18WAC8TXG joined #gluster
12:29 kkeithley I doubt we've tested mixed 3.3 and 3.4 pools. Given that, I expect we're not saying anything about it.
12:30 piotrektt_ joined #gluster
12:30 kkeithley And TBH, I'd guess that there could be some behavioral aspects that have changed that might make mixed 3.3 and 3.4 not play well together.
12:30 kkeithley But try it and see. ;-)
12:30 Rocky joined #gluster
12:40 Chiku|dc what is Multi-master GeoReplication for 3.4 ?
12:41 ollivera Norky, actually, I realized that I want to shrink the volume, not replace a brick. Sorry :)
12:41 ollivera Norky, https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Shrinking.html
12:41 glusterbot <http://goo.gl/TgIaG> (at access.redhat.com)
12:41 ollivera Norky, I am using Gluster 3.1 ... I dont think it is applicable to me :(
12:41 Norky <ollivera> I need to retire a brick and move all the files from the retired brick to a specific brick.  --- I'm confused then :)
12:42 Norky ahh, by "specific brick", do you mean one of the existing bricks?
12:42 ollivera Norky, yes
12:42 rubbs this is more of a theoretical question than an actual need, but is it possible to migrate the data off of a brick so that it can be safely removed?
12:43 Norky ollivera, well, it might be possible to put everything on that one brick, but in doing so, and not letting gluster do the rebalance, you're bypassing gluster's DHT mechanism
12:44 Norky rubbs, yes, see the "remove-brick" URL that ollivera pasted
12:44 rubbs oh, that's handy, thanks Norky
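The remove-brick shrink described in the linked guide follows the same start/status/commit pattern on GlusterFS 3.3 (ollivera's 3.1 lacks the data-migrating form); a sketch with hypothetical names:
    gluster volume remove-brick myvol server2:/export/brick1 start
    gluster volume remove-brick myvol server2:/export/brick1 status
    # when the status reports completed:
    gluster volume remove-brick myvol server2:/export/brick1 commit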
12:45 plarsen joined #gluster
12:51 aliguori joined #gluster
12:52 ProT-0-TypE joined #gluster
12:59 robos joined #gluster
13:08 JoeJulian @ping-timeout
13:08 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
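The timeout the factoid describes is tunable per volume if servers really do die often; a sketch, volume name hypothetical and value in seconds:
    gluster volume set myvol network.ping-timeout 42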
13:10 JoeJulian Hmm, there's one of the reasons I hate doing support over Q&A, email, or (now) Google Plus... no factoids.
13:13 kkeithley Multi-master geo-rep is, IIRC, among other things, bi-directional geo-rep. Where's johnmark? Are we still on track to deliver multi-master in 3.4?
13:14 JoeJulian Really? Are there commits toward that end? I didn't think that was going to be around in 3.4.
13:16 kkeithley Hence my question for johnmark about whether we're actually going to deliver.
13:16 bennyturns joined #gluster
13:26 JoeJulian Not sure how authoritative this is, but it's not listed on http://www.gluster.org/community/documentation/index.php/Features34
13:26 glusterbot <http://goo.gl/4MvOh> (at www.gluster.org)
13:28 mohankumar joined #gluster
13:30 msmith_ joined #gluster
13:30 plarsen joined #gluster
13:34 rwheeler joined #gluster
13:40 kevein joined #gluster
13:40 theron joined #gluster
13:43 zetheroo left #gluster
14:04 magnus^p joined #gluster
14:09 glusterbot New news from newglusterbugs: [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9> || [Bug 948041] rebase to grizzly (release-3.4) <http://goo.gl/77lzU>
14:10 magnus^p just followed the quickstart guide. Is it reasonable to see 25MB/s? tested with dd as in https://gist.github.com/anonymous/5319548
14:10 glusterbot Title: gist:5319548 (at gist.github.com)
14:13 Norky magnus^p, that says 62MB/s....
14:14 magnus^p woops, copypaste fail .. i promise, it really says 25MB/s :)
14:14 andreask if I have a distributed volume and lose one brick, will the volume still be available except for the files on the lost brick?
14:17 magnus^p local fs test with dd gives 60MB/s .. should be noted that im in a virtualized environment where disks are mounted over vmdk on a SAN
14:18 failshell joined #gluster
14:18 failshell i wish gluster peer status would list the local node too
14:19 plarsen joined #gluster
14:26 lpabon joined #gluster
14:28 Supermathie OK... my client is perfectly happy to lock files on a mounted NFS filesystem, unless the filesystem underneath the mount is gluster.
14:29 karoshi joined #gluster
14:29 samppah Supermathie: DNFS?
14:30 Supermathie Nope, I took that out of the equation for now.
14:30 Supermathie Testing from Python
14:30 Supermathie >>> fcntl.flock(g, fcntl.LOCK_SH)
14:30 Supermathie Traceback (most recent call last): File "<stdin>", line 1, in ?
14:30 Supermathie IOError: [Errno 37] No locks available
14:31 karoshi I'm trying to set up a simple 2-server gluster mirrored volume, one brick on each server, using puppet. Basically, I want that after running puppet everything be set up and running, without me having to do "peer probe", "volume create", "volume start" and all that. I understand I could probably create some configuration files, but I can't find which ones and where they are.
14:31 Supermathie I can flock directly on the glusterfs filesystem itself.
14:31 karoshi I see there is something under /var/lib/glusterd/vols/, but where is that documented?
14:33 plarsen joined #gluster
14:34 Supermathie Huh! Even stranger, looking at the traffic...
14:35 lpabon joined #gluster
14:35 Supermathie I see: TEST (lock) call, TEST (lock) GRANTED, LOCK (on same FH) call, LOCK NLM_DENIED_NOLOCKS
14:35 Supermathie so nlm thinks it's going to be able to grant the lock, but fails to actually do so.
14:35 bugs_ joined #gluster
14:42 Supermathie Any ideas?
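Two quick checks that fit the symptom being debugged here (nlockmgr granting TEST but refusing LOCK); the server hostname and mount point are hypothetical:
    rpcinfo -p glusterserver | grep nlockmgr     # is the lock manager registered on the NFS server?
    flock --shared /mnt/nfsmount/test.txt -c true && echo "lock granted" || echo "lock refused"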
14:45 vpshastry joined #gluster
14:45 JoeJulian andreask: If you brick gets loose, just tighten the screws. ;) No, if you are not using replication, the files on the missing brick will not be accessible.
14:47 kevein joined #gluster
14:48 JoeJulian failshell: Me too. Someone should probably file a bug report. Showing all peers, including itself, just seems logical. Probably with some indication of which one is "me".
14:48 glusterbot http://goo.gl/UUuCq
14:48 JoeJulian Supermathie: 3.3.1?
14:48 Supermathie JoeJulian: Yep.
14:49 Supermathie Really all I wanted was to get a capture of how Linux NFS handles FSINFO RPC, versus Gluster's NFSD (which can't process the FSINFO RPC)
14:50 bala joined #gluster
14:52 JoeJulian magnus^p: are you using that dd to write to a file within the virtualized image, or to a file via fuse mount? Are your server vm and client vm on the same hypervisor? And last and most importantly, does this evaluate your actual needs?
14:55 JoeJulian karoshi: That sounds like a disaster waiting to happen. There's no "reload" for the management daemon so changing files beneath it doesn't work. You should be generating custom facts and using conditional execs.
14:55 karoshi ah ok
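The underlying CLI sequence such a puppet module wraps is short; the work is in making it idempotent. A sketch, with hypothetical hostnames, volume name and brick paths:
    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol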
14:56 magnus^p JoeJulian, all machines on the same hypervisor and underlying LUN. Im writing over a fuse mount from a third machine. If it evaluates my needs? At the moment I guess 25MB/s is sufficient .. but from what I've seen from benchmarks (think one was published by gluster) performance was more than tenfold of what I'm getting.. ... more or less, I just want to assert that it is not unreasonable to get 25MB/s
14:56 karoshi do you have some pointer to what kind of custom facts are required?
14:56 magnus^p which I guess it is, since local fs writes give me ~60MB/s
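A comparison along the lines magnus^p is running, with the flush forced so page cache doesn't inflate the numbers; paths are hypothetical:
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fdatasync   # through the FUSE mount
    dd if=/dev/zero of=/data/local/ddtest bs=1M count=1024 conv=fdatasync       # local-filesystem baseline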
14:57 JoeJulian failshell: You can use the "--remote-host" flag to the gluster cli to find out what another peer has for a list, which will allow you to assemble a complete list: https://github.com/joejulian/python-gluster/blob/master/src/peer/probe.py
14:57 glusterbot <http://goo.gl/iqGUg> (at github.com)
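In shell terms the workaround looks like this (peer hostname hypothetical):
    gluster peer status                          # omits the local node
    gluster --remote-host=server2 peer status    # the same query from another peer's view, which does include this node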
14:57 redbeard joined #gluster
15:02 JoeJulian karoshi: I, and others that have dabbled in puppet modules for GlusterFS, haven't had a need for fully automating that. You'd need exported resources to know what servers you have. Facts to check and see if they're in the peer group. facts to see if the volume's defined and what bricks it has...
15:03 karoshi ok, so do you have your puppet code published somewhere to have a look?
15:03 JoeJulian Probably custom types and providers would be better. When you add/remove bricks, you'll want to rebalance.
15:04 daMaestro joined #gluster
15:04 JoeJulian All I do in puppet is make sure that the filesystems for the bricks are mounted, and that the clients have their volumes mounted. Search gluster on github to find some others. semiosis and purpleidea are two that I'm aware of.
15:07 purpleidea karoshi: ,,puppet
15:07 purpleidea karoshi: ,puppet
15:07 purpleidea karoshi: https://github.com/purpleidea/puppet-gluster
15:07 glusterbot Title: purpleidea/puppet-gluster · GitHub (at github.com)
15:07 purpleidea sorry i forget how the magic works... that's my module. there are now apparently 2 others.
15:07 purpleidea JoeJulian: I didn't know you had one
15:09 johnmark and also https://forge.gluster.org/puppet-gluster
15:09 glusterbot Title: puppet-gluster - Gitorious (at forge.gluster.org)
15:10 failshell any performance metrics exposed by Gluster that could be sent to Graphite for example?
15:11 purpleidea JoeJulian: what's the link to your puppet-gluster module ?
15:11 karoshi ok, many thanks to all
15:11 semiosis ,,(puppet)
15:11 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
15:11 purpleidea semiosis: i knew there were commas!
15:11 johnmark purpleidea: heh :)
15:12 johnmark you guys should sync up at some point
15:12 johnmark I'd like all puppet module activity to end up here: https://forge.gluster.org/puppet-gluster
15:12 glusterbot Title: puppet-gluster - Gitorious (at forge.gluster.org)
15:12 purpleidea johnmark: you sent out that email to everyone... i was sort of waiting for someone else to do some talking...
15:12 johnmark purpleidea: right. people have jobs - it will happen eventually
15:13 purpleidea johnmark: yeah no rush, just explaining why i haven't replied
15:13 johnmark purpleidea: no worries :)
15:14 tryggvil_ joined #gluster
15:17 mrwboilers1 joined #gluster
15:18 semiosis feel like i should manage some expectations here... i use my puppet module in production, but i'm not actively maintaining it
15:19 JoeJulian purpleidea: I don't have one published. The one that I use internally is way too specific to be used generally and I haven't had time to redo it.
15:19 semiosis and never intended it to be used as-is by anyone else, i mean there's code in it that just plain wont work without modification (nagios/logstash)
15:20 purpleidea JoeJulian: okay, are there features you're missing that i'd be able to hack into mine for you? (to eventually unify into one puppet module for all?)
15:21 semiosis and i have philosophical issues with puppet modules that try to be everything to everyone, but i digress
15:21 semiosis :)
15:21 purpleidea semiosis: same question, what do you think we should do about all this. I think my module tries to do "too much" for some people, but with good docs, you can make it clear that you only need to add the "extra" if you want it
15:21 purpleidea semiosis: good timing... i knew that comment was coming :)
15:21 semiosis hehe
15:21 johnmark semiosis: that's a good point. In this case, it might be impossible. To be clear, I don't want "one repo to rule them all"
15:22 johnmark but I do want collaboration and multiple repos to fall under an umbrella project
15:22 vpshastry joined #gluster
15:23 purpleidea johnmark: I mean, I fully agree with semiosis in that my module does more. And some people want that, and there are definitely some use cases that even I have, where you only want a subset. I do think they could all be one module though. But I don't care if this happens.
15:23 purpleidea Meaning, I'm happy to help if thats what people want, but i'm not trying to force it.
15:23 johnmark purpleidea: I hear you
15:23 JoeJulian Well, in my mind the only sensible reason to manage your volume(s) with puppet is if they're going to frequently have dynamic changes. Why would one need to have that frequency? Load balancing I would imagine, since you're not likely to need frequently dynamic changes in storage capacity.
15:23 rcheleguini joined #gluster
15:24 semiosis well i put mine out under the MIT license, very permissive, feel free to take whatever you want/can from it :)
15:24 JoeJulian So a puppet module should target that need.
15:25 johnmark well, if you have "everything else" under puppet control, and you frequently add more/take away resources, then I can see it being extremely useful
15:25 purpleidea JoeJulian: I want my module to be _able_ to do more, one reason: rapidly prototyping vm's for example for testing. All automated. If you don't want to manage the volume (a reasonable request) don't use the ::volume part of my code. It's modular of course.
15:25 JoeJulian Ah, good point.
15:26 hybrid512 joined #gluster
15:26 * JoeJulian has to look at razor at some point....
15:26 johnmark JoeJulian: you know all the new toys
15:26 JoeJulian hehe
15:27 JoeJulian not as many as I'd like.
15:27 * purpleidea hasn't even heard of razor yet
15:27 * semiosis neither
15:27 JoeJulian https://puppetlabs.com/solutions/next-generation-provisioning/
15:27 glusterbot <http://goo.gl/XXZqH> (at puppetlabs.com)
15:28 purpleidea semiosis: if you want me to add any features that my module is missing that yours accomplishes, i can do this if you like if it would help unify all this.
15:31 purpleidea JoeJulian: you're an expert of things puppet, can you solve this: http://www.fpaste.org/iCE5/
15:31 glusterbot Title: Viewing Paste #289773 (at www.fpaste.org)
15:32 semiosis purpleidea: thanks for offering, i appreciate it, but to be honest idk much about your module, and wouldn't know what to ask for... besides, i'm satisfied with my module for the time being
15:33 purpleidea semiosis: so i think the only person with an issue is johnmark. we've written our code, it's up to the redhat guy to figure out where he wants to go with all this ;)
15:33 lh joined #gluster
15:33 lh joined #gluster
15:33 semiosis hahaha
15:33 semiosis johnmark on the spot
15:33 purpleidea semiosis: have you looked at my code at all?
15:34 semiosis yes i have glanced over it in the past and just now, but haven't really taken the time to learn it :/
15:34 JoeJulian purpleidea: put hello in a class and it would work. As it is, my guess is that it's executing out of order.
15:34 purpleidea semiosis: specifically, the thing i want to improve upon is this sort of thing: https://github.com/purpleidea/puppet-gluster/blob/master/manifests/volume/property.pp
15:34 glusterbot <http://goo.gl/vWWdf> (at github.com)
15:37 semiosis well, that is a tricky one.
15:37 JoeJulian The "right" way would be to write types and providers in ruby. You've done amazingly well at working around that with what you have though. Not even a single custom fact... I'm impressed.
15:38 robos joined #gluster
15:38 purpleidea JoeJulian: haha, you're very right. I'm embarrassed to say that I've been late to incorporate custom facts, because i was avoiding ruby, and I like seeing all the code together, but it's still valid puppet, and not hacky. thanks!
15:39 purpleidea JoeJulian: what I don't have, is a gluster super expert like you to help me fill in some unknowns. If we had a one day hack fest, I could *finish* the module, and it would be perfect for everyone... It could add in the optimizations that newbies don't set, which will get them to complain less about performance, etc...
15:40 purpleidea JoeJulian: (and btw, putting hello in a class doesn't change anything :()
15:40 JoeJulian my "expert" opinion about setting optimizations is generally "don't".
15:41 purpleidea JoeJulian: okay well, or any other properties...
15:41 purpleidea like auth.allow for example
15:43 purpleidea JoeJulian: btw if you like puppet tricks, have a look at: https://github.com/purpleidea/puppet-gluster/blob/master/manifests/wrapper.pp
15:43 glusterbot <http://goo.gl/S7EZ1> (at github.com)
15:44 mrwboilers1 Anyone have any work arounds for this bug? https://bugzilla.redhat.com/show_bug.cgi?id=877522
15:44 glusterbot <http://goo.gl/YZi8Y> (at bugzilla.redhat.com)
15:44 glusterbot Bug 877522: medium, unspecified, ---, jdarcy, ON_QA , Bogus "X is already part of a volume" errors
15:44 mrwboilers1 I deleted my volume (I'm just in a testing phase) and tried to create a new one. But it tells me the brick or a prefix of it is already part of a volume.
15:44 glusterbot mrwboilers1: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
15:45 mrwboilers1 thanks, I'll try that
15:51 manik joined #gluster
15:52 hagarth joined #gluster
15:55 karoshi joined #gluster
15:55 karoshi well, I see in https://github.com/purpleidea/puppet-gluster/tree/master/templates some config files are managed, after all
15:55 glusterbot <http://goo.gl/c08GA> (at github.com)
15:55 Matthaeus joined #gluster
15:57 purpleidea karoshi: sorry, i don't understand your question. also, my module comes with no warranties, don't use on your data unless you're comfortable with it, or you only use the safe parts
15:58 karoshi purpleidea: I was referring to what JoeJulian told me earlier
15:58 purpleidea karoshi: uh, which was?
15:58 karoshi that management thorugh puppet should be done through custom facts and execs, rather than creating config files
15:58 karoshi *through
15:59 purpleidea karoshi: oh, not sure how to get around those two. feel free to suggest a patch... also if you don't prefix your messages with "purpleidea:", i won't see them
15:59 copec left #gluster
16:00 karoshi purpleidea: what do you mean "get around"? wouldn't gluster create those files itself?
16:00 purpleidea karoshi: my puppet module automatically "peers" you.
16:00 purpleidea (if you want)
16:01 karoshi purpleidea: who is "you" here?
16:01 purpleidea karoshi: the sysadmin
16:01 karoshi purpleidea: I though peering was between servers
16:01 JoeJulian It assumes the peer probe would have worked, and sets the state. I haven't gone through the whole thing. After mucking around with config files, do you restart glusterd?
16:02 purpleidea JoeJulian: me? yes.
16:02 JoeJulian Should work unless your network engineers screw up.
16:02 purpleidea JoeJulian: let me be clear though. The solution in my puppet module is *not ideal* but there isn't a better one if you want to manage that. Earlier Gluster worked GREAT because you could template 1 config file...
16:03 purpleidea JoeJulian: it does work.
16:03 * JoeJulian had been using it since 2.0, so I know what you mean.
16:04 JoeJulian Maybe I'll spend some time after work throwing together a prototype that uses type and provider for you to look at.
16:05 purpleidea JoeJulian: uhhm I guess. I'd rather solve things with XXX in my code though.
16:05 JoeJulian hehe
16:05 purpleidea JoeJulian: and at the moment, I don't have a devel environment, so I can't test :(
16:05 JoeJulian Pfft... Real sysadmins test in production anyway.
16:05 karoshi_ joined #gluster
16:06 purpleidea JoeJulian: yeah, I keep telling people that! Here's a story:
16:07 saurabh joined #gluster
16:07 purpleidea When I started at my last job, my rule was "testing" when all XXX are gone, and ideally most FIXME's for production. Turns out XXX code became production all the time :)
16:07 JoeJulian Although I say that jokingly, many of the rapid-deployment .com's that I know of do that. They figure it's easier to roll back than test.
16:07 purpleidea JoeJulian: sweet
16:09 karoshi_ purpleidea: also what's this about VIPs? I thought native gluster clients automatically failover if the server they're using fails
16:09 * JoeJulian goes back to trying to hack python-esque functionality into a php class...
16:09 purpleidea karoshi_: https://en.wikipedia.org/wiki/Very_Important_Person
16:09 glusterbot <http://goo.gl/5dHhf> (at en.wikipedia.org)
16:09 purpleidea just kidding...
16:10 purpleidea karoshi_: not exactly.
16:10 JoeJulian ?
16:10 bala joined #gluster
16:10 purpleidea the VIP presents a single ip which moves around so the volfile is always accessible
16:10 JoeJulian Ah, you're doing that instead of rrdns.
16:10 purpleidea JoeJulian: yes! It's BETTER
16:10 JoeJulian lol
16:11 purpleidea JoeJulian: ALSO: https://ttboj.wordpress.com/2012/08/23/how-to-avoid-cluster-race-conditions-or-how-to-implement-a-distributed-lock-manager-in-puppet/
16:11 glusterbot <http://goo.gl/iYo2g> (at ttboj.wordpress.com)
16:11 karoshi_ doesn't each server have a copy of the vol file?
16:11 purpleidea (i'm somewhat proud of this dlm design)
16:11 JoeJulian karoshi_: Yes.
16:11 purpleidea karoshi_: the VIP is for the clients
16:11 karoshi_ so...the client fails over to another server, which also has the volfile
16:11 purpleidea karoshi_: it can if it needs to.
16:13 purpleidea karoshi_: if you have N servers, and together they form a cluster which is always available (HA), but the client only has one place to ask for a volfile, you've got a new SPOF there. so use a VIP to get a < 3 sec failover when requesting a volfile, which is 0 seconds essentially, because that's only during the mount operation.
16:13 purpleidea (so you have a 3 second slow mount)
16:13 purpleidea worst case scenario
16:14 karoshi_ my understanding was that the client does NOT have only one place to ask for the volfile
16:14 JoeJulian This is how I handle that: http://fpaste.org/5Uw3/
16:14 glusterbot Title: Viewing Paste #289793 (at fpaste.org)
16:14 karoshi_ it receives a list of servers, doesn't it?
16:15 purpleidea JoeJulian: rrdns is perfectly acceptable, but my module benefits from the vip, and in theory it's a bit better... (a bit pedantic)
16:15 Norky the glusterfs-fuse module can take a list of additional volfile servers
16:15 Norky lnasilo0:/tid   /tid    glusterfs       backupvolfile-server=lnasilo1,backupvolfile-server=lnasilo2,backupvolfile-server=lnasilo3
16:15 purpleidea Norky: and you can have two VIPs in that case.
16:15 purpleidea Norky: although it's a bit overkill :P
16:15 Norky using that option doesn't require VIPs
16:16 purpleidea Norky: i didn't think you could have a List... i thought just one backup. is this new?
16:17 Norky pass
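For context, the fstab option Norky quotes corresponds to a mount invocation like the following; only a single backup server is shown since whether a list is honoured is exactly what's in question, and hostnames are hypothetical:
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol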
16:17 rubbs A friend of mine is using RHES and says that to serve VM images you should tune the volume with : "gluster volume set your-volume group virt" is this necessary on a 3.3.1 volume as well?
16:17 Norky I'm trying to find where this is documented
16:17 purpleidea JoeJulian: i'm off to bed. happy hacking
16:18 JoeJulian G'night
16:18 JoeJulian rubbs: Never heard of a "group" setting...
16:20 Norky purpleidea, hmm, damned if I can find a reference which confirms that, I hope I didn't make it up...
16:20 JoeJulian Hmm, it's in there...
16:20 rubbs JoeJulian: that's strange, I'm looking at a redhat Access.redhat.com document and it's recommending that you run that specific command on volumes serving virt images.
16:21 rubbs This is on RHS though, so I didn't know if that was not needed on newer gluster installs, since I know RH tends to be behind the community for stability reasons
16:22 rubbs we didn't have the budget for RHS so I'm building this all on community stuff right now
16:22 JoeJulian I'm looking through the source trying to find out what that does.
16:22 rubbs thanks.
16:23 rubbs JoeJulian: here's where I'm reading it from if it helps: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
16:23 glusterbot <http://goo.gl/YAQcM> (at access.redhat.com)
16:23 rubbs you don't need access for that one.
16:25 JoeJulian Interesting... it appears that's just a way to have a set of options that you can apply to multiple volumes.
16:25 andreask joined #gluster
16:26 rubbs I'm guessing these settings are recommended for vm image serving then?
16:26 JoeJulian That would be my guess as well.
16:26 jbrooks joined #gluster
16:27 rubbs alright then. I'll probably do that.
16:27 rubbs thanks for your help!
16:27 JoeJulian Hmm, eager-lock-enable... I guess that makes sense since you won't have multiple clients trying to lock the same image simultaneously.
16:27 hagarth joined #gluster
16:28 rubbs nod
16:29 gbrand_ joined #gluster
16:30 JoeJulian Hehe, "you /must/ not use the volume for any other storage purpose". Well, not exactly true, but those settings would break simultaneous write access to files from multiple clients so I would change "must" to "recommend against".
16:30 johnmark JoeJulian: they sure would :)
16:31 Mo_ joined #gluster
16:31 JoeJulian From a support standpoint they have to say "must". I generally try to put more trust in my peers and hope that they would understand what they're doing before causing obvious breakage.
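The command rubbs quotes applies a named profile of volume options in one step; if the group keyword or profile file isn't present on a community 3.3 install, the options can be set individually instead. A hedged sketch, volume name hypothetical, using the eager-lock option JoeJulian mentions above as the example:
    gluster volume set myvol group virt                  # as given in the RHS quick-start guide
    gluster volume set myvol cluster.eager-lock enable   # one of the options such a profile applies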
16:31 rubbs Is there a way for me to contribute this information back to the current documentation? I couldn't find anything about this sort of thing in the community docs
16:32 JoeJulian Just the wiki.
16:32 kkeithley rubbs: on gluster.org? It's a wiki. Create an account and add/update
16:32 kkeithley Be bold, it's a wiki
16:32 rubbs kkeithley: JoeJulian will do thanks
16:32 rubbs I'll get to it this afternoon
16:33 JoeJulian We need to either get the docs into the wiki again, or convert them to asciidoc (and separated out from the source tree) so they can be easily edited.
16:34 johnmark JoeJulian: hold that thought. I'm in the process of rolling out an asciidoc-awestruct-git doc site this weekend
16:35 johnmark I'm really really tired of mw
16:35 johnmark but don't hold back on updating the wiki - I'll migrate any "fresh content" to the new site anyway
16:37 JoeJulian johnmark: Is the source doc going to be on the forge?
16:37 chirino joined #gluster
16:38 chirino hi
16:38 glusterbot chirino: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:39 rubbs JoeJulian: johnmark: any suggestions on what page I should add it to, or just create a new page about tips for hosting VM images?
16:39 johnmark JoeJulian: yup - the docs repo will be there
16:39 chirino ok.  so does|will glusterfs have a quorum read option?
16:40 johnmark rubbs: there's a howto page with links to tips and tricks and a "20 questions style" page
16:40 johnmark rubbs: there's this guy - http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
16:40 glusterbot <http://goo.gl/7m2Ln> (at www.gluster.org)
16:40 johnmark and this howto links page - http://www.gluster.org/community/documentation/index.php/HowTo
16:40 glusterbot <http://goo.gl/0Y2v2> (at www.gluster.org)
16:41 JoeJulian chirino: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type
16:41 glusterbot <http://goo.gl/dZ3EL> (at gluster.org)
16:42 chirino JoeJulian: that says nothing about quorum reads.
16:42 rastar joined #gluster
16:43 rubbs johnmark: awesome thanks
16:44 rubbs I'll look at them and potentially have something up today
16:44 JoeJulian chirino: Ah, true. What're you trying to accomplish? It may be doing it already...
16:46 chirino I want to avoid getting an inconsistent read from an out of date partition.
16:47 JoeJulian Ok, that's what I thought. That's already handled. There's a self-heal check done on lookup() and if a heal is necessary, you read from the clean copy.
16:48 chirino so the lookup and heal fail if there is no quorum?
16:49 JoeJulian As long as your replicas are not split-brain, you'll get a good read. Split-brain will cause a read failure, which is why you can establish a write quorum.
16:52 JoeJulian Gotta go feed the munchkin. bbl.
16:52 chirino so when does the lookup() occur? when you first open the file?
16:52 JoeJulian open, or stat
16:52 chirino what happens if you keep a file open, and then nodes go down?
16:53 JoeJulian When the server (we don't use the ambiguous word "node" here) comes back online, the client has to re-establish the open fd's (and locks). The lookup happens on that server and the self-heal is done.
16:54 chirino BTW there's no way to get a notification if you lose a file lock due to losing write quorum, right?
16:54 JoeJulian Ok... I either have to feed her or let her type... Monitor your logs.....
16:54 chirino I guess writes would start failing.
16:54 chirino ok! go on ahead!
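The option under discussion is the write quorum on replicated volumes; it does not gate reads, which is what chirino's later test exercises. A sketch with a hypothetical volume name:
    gluster volume set myvol cluster.quorum-type auto
    # 'auto' permits writes only while more than half of the replica set
    # (or exactly half including the first brick) is reachable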
16:56 lpabon joined #gluster
17:00 zaitcev joined #gluster
17:08 hagarth joined #gluster
17:08 ramkrsna joined #gluster
17:08 ramkrsna joined #gluster
17:11 jbrooks joined #gluster
17:12 \_pol joined #gluster
17:17 ash13 joined #gluster
17:30 hagarth joined #gluster
17:45 bet_ joined #gluster
18:03 tristanz joined #gluster
18:23 jskinner_ joined #gluster
18:30 ramkrsna joined #gluster
18:47 failshell joined #gluster
18:47 failshell any way to reconfigure gluster so that any user can run the gluster command?
18:58 eightyeight joined #gluster
19:01 JoeJulian Yes, but the easiest is to add a rule to sudoers.
19:01 JoeJulian failshell: ^
19:03 chirino joined #gluster
19:03 failshell that's what im doing for my Sensu check in the end
19:03 JoeJulian failshell: Otherwise, grant write rights to /var/log/glusterfs/cli.log and set allow-insecure on for all your volumes.
19:03 failshell yeah ok, ill use sudo : )
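A minimal sudoers rule of the kind being settled on, added via visudo; the monitoring user, the gluster path (as on RPM installs) and the allowed subcommands are assumptions:
    sensu ALL=(root) NOPASSWD: /usr/sbin/gluster volume status *, /usr/sbin/gluster peer status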
19:07 samppah @latest
19:07 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
19:09 samppah @qa release
19:09 glusterbot samppah: I do not know about 'qa release', but I do know about these similar topics: 'qa releases'
19:09 samppah @qa releases
19:09 glusterbot samppah: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
19:11 nueces joined #gluster
19:16 \_pol joined #gluster
19:30 puebele1 joined #gluster
19:39 bennyturns joined #gluster
19:40 jskinner_ joined #gluster
19:42 andreask joined #gluster
20:01 jskinn___ joined #gluster
20:10 stat1x left #gluster
20:24 \_pol joined #gluster
20:38 jbrooks joined #gluster
20:46 chirino JoeJulian: around?  I just did some testing and I can get into a situation where I get inconsistent reads.
20:51 eightyeight joined #gluster
20:53 jskinner_ joined #gluster
21:05 chirino JoeJulian: never mind.. I must have done something wrong.
21:10 chirino JoeJulian: actually, yeah I get inconsistent reads after 10 seconds.
21:10 chirino before that I get read errors which is what I was hoping for.
21:11 JoeJulian chirino: Please file a bug report and include your test procedure.
21:11 glusterbot http://goo.gl/UUuCq
21:13 tristanz joined #gluster
21:24 chirino JoeJulian: done: https://bugzilla.redhat.com/show_bug.cgi?id=949096
21:24 glusterbot <http://goo.gl/dGVOc> (at bugzilla.redhat.com)
21:24 glusterbot Bug 949096: high, unspecified, ---, pkarampu, NEW , Inconsistent read on volume configured with cluster.quorum-type auto
21:26 logstashbot` joined #gluster
21:28 plarsen joined #gluster
21:30 JoeJulian test fail. glusterd is not one of the brick ,,(processes)
21:30 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
21:30 logstashbot Title: Question: What processes does glusterfs run during normal operation? (at goo.gl)
21:30 * JoeJulian raises an eyebrow at logstashbot....
21:30 Supermathie logstashbot: mprime
21:30 logstashbot Supermathie: Error: "mprime" is not a valid command.
21:31 semiosis what?!
21:31 semiosis logstashbot: part
21:31 logstashbot left #gluster
21:31 JoeJulian hehe
21:31 semiosis i thought i fixed that already
21:31 Supermathie lol
21:31 JoeJulian chirino: test fail. glusterd is not one of the brick ,,(processes)
21:31 glusterbot chirino: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
21:31 Supermathie I was just wondering if I could get these bots stuck in an auto-reply loop.
21:32 * Supermathie whistles innocently...
21:32 JoeJulian No, glusterbot will punish other bots.
21:32 semiosis haha
21:33 chirino JoeJulian: service glusterd stop kills 'em all
21:33 JoeJulian chirino: Also, besides the test being invalid due to that, reads will succeed with quorum set. Only something that would invalidate the data will fail.
21:33 JoeJulian chirino: not on any EL or Fedora distro
21:34 chirino I'm on FC 17
21:34 JoeJulian Like I said...
21:35 chirino see: http://pastie.org/pastes/7333102/text
21:35 glusterbot Title: #7333102 - Pastie (at pastie.org)
21:35 chirino so any change we can get quorum reads?
21:35 chirino chance even
21:36 chirino I'd rather have reads fail than return old data.
21:36 JoeJulian YOU WON'T GET OLD DATA!
21:36 JoeJulian If a call would invalidate the data IT WILL FAIL.
21:36 JoeJulian That means the data CAN NEVER BE OLD!
21:37 chirino did you see my test?
21:38 chirino it's returning old data.
21:38 chirino I write 'update 1' to file, then 'update 2', then read the file and says 'update 1'
21:38 chirino to me that's old data.
21:40 chirino how can I get gluster to fail that read?
21:40 chirino is there something I can configure?
21:41 chirino Are consistent reads just not supported yet?
21:41 glusterbot New news from newglusterbugs: [Bug 949096] Inconsistent read on volume configured with cluster.quorum-type auto <http://goo.gl/dGVOc>
21:42 semiosis chirino: are you doing these writes through a client mount point?
21:42 semiosis nfs or fuse?
21:42 chirino fuse.
21:42 semiosis hmm, what version of glusterfs?
21:42 chirino it's all in the script.
21:42 chirino in the bug
21:42 semiosis heh ok will read up
21:42 semiosis my bad :)
21:43 chirino https://bugzilla.redhat.com/show_bug.cgi?id=949096
21:43 glusterbot <http://goo.gl/dGVOc> (at bugzilla.redhat.com)
21:43 glusterbot Bug 949096: high, unspecified, ---, pkarampu, NEW , Inconsistent read on volume configured with cluster.quorum-type auto
21:44 ash13 hi, I'm relative new to gluster. I have a question I've been trying to figure out:
21:45 ash13 many fuse clients are capable of uid-mapping.  The beauty of this of course is if I have a workstation where i'm UID=500 and a laptop where I'm UID=502, I can access the server from both w/out issue.  I've read that gluster has uid-mapping, but is it not on the client side? how does it work if so?  Or is that some of the HekaFS bits that haven't been integrated yet?
21:45 semiosis chirino: could you retry your test with an extra cat test.txt after you restart glusterd on 21?
21:46 semiosis chirino: i suspect maybe you killed the other two so soon after starting the first that the proactive self heal didnt have a chance to work
21:46 semiosis not sure what the timings are on that, but it seems like a possibility
21:46 chirino that would fix it since it would heal
21:46 semiosis doing a read will force an immediate selfheal check
21:46 chirino the point is I never want my app to get old data.
21:46 semiosis ah hm
21:47 chirino if it does, it means you're providing eventual consistency, which won't work for my app.
21:47 semiosis ok another point, you kill the two servers that had the latest write.  what if you let one of those survive?
21:48 semiosis your failure scenario is a bit contrived imho
21:48 chirino your saying it could never happen in real life?
21:48 semiosis hah no you've clearly demonstrated it can happen, this is real life after all :)
21:48 chirino it's a totally possible sequence of events.
21:48 semiosis yes of course
21:49 chirino I don't like building an app on something that works 99% of the time.
21:49 semiosis but how could 21 ever know that the file has been updated while it was dead, if it hasn't had a chance to sync up with the other two?
21:49 semiosis i mean, what system could possibly satisfy that?
21:49 chirino semiosis: it needs to have a read quorum when it opens the file.
21:49 chirino or heals the file.
21:49 chirino if it does not, then it should fail.
21:50 semiosis so currently quorum means go read-only if you're all alone, you'd rather have it go totally offline?  seems like a reasonable request
21:51 chirino eys
21:51 chirino yes
21:51 semiosis now that i understand :)
21:52 chirino that will give you a strong guarantee that if the read succeeds it's the latest thing written to the file right?
21:53 semiosis seems right
21:54 chirino BTW.. I think going offline would be a better default than going read only.
21:57 semiosis i just commented on your ticket adding what i hope is a clarifying remark
21:57 chirino thx
21:57 semiosis yw
21:58 semiosis i doubt that bug will be noticed by the core devs until monday
22:00 mrwboilers1 left #gluster
22:04 chirino semiosis: when an app does an fsync, do you know if that blocks until the brick servers have all the writes in memory, or if it also waits for the brick servers to fsync the file to disk?
22:07 semiosis chirino: see performance.flush-behind option, that behavior should be configurable
22:07 semiosis ,,(options)
22:07 glusterbot http://goo.gl/dPFAf
22:07 chirino thx!
22:07 semiosis @forget options
22:07 glusterbot semiosis: The operation succeeded.
22:07 a2 chirino, fsync() is strictly returned after data of all replicas reaches stable storage
22:08 semiosis @learn options as see the old 3.2 options page here http://goo.gl/dPFAf or run 'gluster volume set help' on glusterfs 3.3 and newer
22:08 glusterbot semiosis: The operation succeeded.
22:08 semiosis chirino: yw
22:11 glusterbot New news from newglusterbugs: [Bug 928631] Rebalance leaves file handler open <http://goo.gl/3Xruz>
22:32 jbrooks joined #gluster
22:53 jbrooks_ joined #gluster
22:55 fidevo joined #gluster
23:21 alex88_ joined #gluster
23:22 maxiepax joined #gluster
