
IRC log for #gluster-dev, 2013-04-11


All times shown according to UTC.

Time Nick Message
00:15 portante joined #gluster-dev
00:24 hagarth joined #gluster-dev
00:41 glusdev_ joined #gluster-dev
00:42 sghosh joined #gluster-dev
00:44 hagarth joined #gluster-dev
01:08 bala1 joined #gluster-dev
01:15 hagarth joined #gluster-dev
01:52 mohankumar joined #gluster-dev
02:10 jbrooks joined #gluster-dev
02:26 mohankumar__ joined #gluster-dev
03:40 pai joined #gluster-dev
04:02 bharata joined #gluster-dev
04:30 pai joined #gluster-dev
04:38 badone joined #gluster-dev
04:59 rastar joined #gluster-dev
05:02 kshlm|AFK joined #gluster-dev
05:03 kkeithley joined #gluster-dev
05:40 rgustafs joined #gluster-dev
06:30 hagarth joined #gluster-dev
06:34 ollivera joined #gluster-dev
06:50 hagarth joined #gluster-dev
07:19 ndevos semiosis: changes related to the hardening compile flags have been upstreamed - http://review.gluster.org/4311
07:30 glusdev_ joined #gluster-dev
08:16 gbrand_ joined #gluster-dev
08:20 gbrand___ joined #gluster-dev
09:08 rastar joined #gluster-dev
09:18 lalatenduM joined #gluster-dev
09:21 hagarth joined #gluster-dev
10:42 ollivera joined #gluster-dev
11:09 edward1 joined #gluster-dev
12:07 kkeithley1 joined #gluster-dev
13:19 a2 joined #gluster-dev
13:22 hagarth joined #gluster-dev
13:32 H__ Can one lower the source brick IOPS load that follows from a replace-brick by rsyncing the source brick directly to the destination brick before starting replace-brick?
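[Sketch: the pre-seeding H__ asks about might look like the following, with hypothetical hostnames and brick paths; the rsync flags preserve hardlinks, ACLs, and extended attributes, though whether GlusterFS's trusted.* xattrs should be copied along is exactly the detail to verify before trying this on real data:]

    # on the source server, ideally while the volume is quiet (paths hypothetical)
    rsync -aHAX --numeric-ids /bricks/brick1/ new-server:/bricks/brick1/
    # then kick off the normal migration
    gluster volume replace-brick myvol old-server:/bricks/brick1 \
        new-server:/bricks/brick1 start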
13:46 hagarth joined #gluster-dev
14:04 hagarth joined #gluster-dev
14:11 semiosis ndevos: thats great!
14:19 gbrand_ joined #gluster-dev
14:19 msvbhat_ joined #gluster-dev
14:22 hagarth joined #gluster-dev
14:30 rastar joined #gluster-dev
15:03 jdarcy joined #gluster-dev
15:13 hagarth joined #gluster-dev
15:33 hagarth joined #gluster-dev
15:58 awheeler_ Is this week's release going to be 3.4 Beta or 3.4 Alpha 3?
16:01 jclift_ joined #gluster-dev
16:49 jclift_ Gah.
16:50 jclift_ Is it a known thing that GlusterFS screws with name resolving?
16:57 bala joined #gluster-dev
17:36 johnmark jclift_: oh? how so?
17:36 johnmark awheeler_: alpha3
17:36 awheeler_ johnmark: Thank you
17:37 jclift_ johnmark: For some unknown reason, all lookups for hosts that would otherwise not resolve are instead getting resolved to the www.gluster.org IP address.
17:37 jclift_ johnmark: NFI why.
17:38 jclift_ johnmark: There's a chance it's the router from the ISP doing something funky.
17:38 jclift_ johnmark: I'll look into it later.
17:38 jclift_ ... but it's only happening for boxes that GlusterFS is installed on. :/
17:39 johnmark jclift_: ha! that's... weird
17:40 johnmark oh right, forgot about this section of the code: if (!resolve) goto http://www.gluster.org/
17:40 johnmark woah, talk about a cool marketing trick!
17:40 johnmark imagines the number of hits we can generate ;)
17:40 jclift_ It could be something due to my boxes having names like "mybox1.uk.gluster.org", and so the ISP-provided router might be doing something just like that. :p
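[Sketch: a quick way to test the wildcard-DNS hypothesis from a shell; the first hostname is deliberately made up, so getting an answer for it points at a wildcard record or ISP NXDOMAIN redirection rather than at GlusterFS:]

    dig +short no-such-host-xyz.uk.gluster.org
    dig +short www.gluster.org
    # identical addresses suggest wildcard DNS or ISP redirection; re-running
    # the first query against a specific resolver (dig @8.8.8.8 ...) would
    # isolate whether the ISP's router is the culprit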
17:41 johnmark oy, ok
17:41 johnmark yeah, I can see how that might happen
17:41 jclift_ Wonder what "router in a microwave" looks like?
17:41 johnmark but now I'm obsessed with devious ideas
17:41 johnmark haha
17:41 jclift_ Might be appropriate.
17:41 jclift_ :)
17:41 jclift_ Anyway, back to generating logs...
17:52 H__ Does 3.3.1 have a tunable that can help alleviate this -> I see about a factor-10 slowdown for rsync crawling a gluster tree after an upgrade from 3.2.5->3.3.1
19:10 kkeithley| johnmark: and???
19:13 johnmark kkeithley|: gah. he's been locked up all day. trying to get a spare moment
19:13 portante johnmark: has anybody spoken to you further about moving the Gluster/Swift code (calling it UFO is a real misnomer in my mind, so I am not doing that) into a separate repo?
19:21 kkeithley| portante: Vijay has said yes...   I believe he said once things slow down a bit after the release.
19:22 portante I am of the mind that we should do this now: break the gluster/swift integration out of Big Bend and give it its own releases, its own cadence, and its own deliverables after the split
19:23 portante there is no good time to do this, now is just as good as any other time
19:24 kkeithley| It's not about good time or bad time, it's about having the cycles to actually do the work
19:24 portante I think we need to make it happen, and burn the cycles now
19:28 jbrooks joined #gluster-dev
19:54 johnmark portante: +1000
19:55 johnmark w00t
19:55 johnmark portante: how can I help?
19:55 johnmark portante: also, what are we doing wrt glance? are we pushing to have gluster-swift back Glance?
19:56 portante We should talk with Avati, but most likely we'll need a way to suck out of the existing repo the current state, or reference it or something (not sure what is the best thing to do here)
19:56 portante with regard to glance, I am not sure, somebody else on the gluster team might know that
19:57 johnmark portante: ah, ok
20:06 portante johnmark: I think we want to have a repo upstream that contains the current gluster/swift integration subtree of glusterfs (pulled out and independent of glusterfs), and a full fork of the upstream swift tree as a submodule (used to stage upstream swift requests)
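[Sketch: roughly the layout portante describes, with hypothetical repo URLs; git submodule add pins the swift fork at a specific commit inside the new integration repo:]

    git init gluster-swift && cd gluster-swift
    # copy the gluster/swift integration tree in here, then:
    git submodule add git://example.org/gluster/swift.git swift   # hypothetical fork URL
    git add -A && git commit -m "integration tree plus swift fork as submodule"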
20:06 portante but I don't have the git knowledge to make all that happen smoothly
20:08 johnmark portante: hrm. ok.
20:08 portante is hrm short for hrumph?
20:09 portante ;)
20:09 johnmark no - "hmmmm"  :)  yeah, sounds like it's just a matter of having two repos in the UFO project
20:09 johnmark doesn't sound insanely complicated
20:09 portante no, but we have to decide about how to track the history.
20:09 johnmark portante: which history? swift's?
20:10 portante do we just say we have "forked" from a certain point in the glusterfs tree for the gluster/swift code? Or do we try to move the entire history into that new repo?
20:10 johnmark portante: would we still need to patch against the swift source? or are we beyond that now?
20:10 kkeithley| why would swift be in the gluster-swift, a.k.a. ufo, git repo?
20:11 portante just as a way to organize changes we would want to propose upstream to swift, so there is a common point for those in the project
20:11 kkeithley| we have to deal with both RDO (?) and openstack-swift as upstreams
20:11 johnmark kkeithley|: I'm getting a headache
20:11 portante I don't know how RDO fits in here
20:12 portante anybody got any aspirin?
20:12 kkeithley| yeah, but why not just a clone, or use the openstack-swift gerrit?
20:12 johnmark :0
20:12 johnmark kkeithley|: sure, we could do that
20:12 johnmark we could easily clone it - we don't need to track each checkin, just the releases
20:12 a2 portante, do you think using a swift clone is more ideal than starting with an empty git?
20:13 a2 that way, the "diffs" can be "patches" in the gluster branch
20:13 johnmark a2: that's what I assumed we did anyway
20:13 a2 and a rebase to grizzly would be creating a new branch and forward porting them
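[Sketch: the branch-per-release workflow a2 outlines, assuming hypothetical tag and branch names in a swift clone:]

    git clone git://example.org/openstack/swift.git && cd swift
    git checkout -b gluster/folsom 1.7.4             # hypothetical folsom tag
    # ...commit the gluster patches on this branch...
    git checkout -b gluster/grizzly 1.8.0            # hypothetical grizzly tag
    git cherry-pick gluster/folsom~2..gluster/folsom # forward-port the patches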
20:13 a2 johnmark, hmmm?
20:13 portante a2: not sure I understand the three lines of discussion
20:14 a2 portante, I understand the requirement is to move the stuff in ufo/ directory out of glusterfs.git
20:14 johnmark a2: so we would branch for each swift codebase
20:14 johnmark er release
20:14 portante a2: k
20:14 a2 portante, right?
20:14 portante a2: right
20:14 johnmark a2: that's what we're discussing, yes
20:14 a2 so, the question is, move it out where..
20:14 a2 option a) into an empty git
20:15 a2 option b) into a clone of the upstream swift git
20:15 kkeithley| I still don't know what problem we're solving by extracting ufo from the gluster.git repo. Why does it matter which repo has the source?
20:15 johnmark kkeithley|: because it's essentially a separate project - does it depend on changes to glusterfs.git?
20:15 a2 instead of committing modified code and diff into glusterfs.git, you would have different branches in this swift clone
20:16 portante option c) into a clone of the upstream glusterfs git tree, from which we then strip out all of the glusterfs stuff, leaving just ufo
20:16 a2 kkeithley|, it is more natural to have UFO (sorry peter) code (both patches to upstream swift and gluster module) as patches against the swift repo
20:16 portante a2: but I don't think we want them as patches against swift
20:17 portante I think we want them as a module that layers on top of swift
20:17 a2 that's fine..
20:17 portante Assuming the upstream swift LFS patch is coming, then our code could be entirely separate
20:17 portante which is ideal, if the interfaces are defined well enough
20:18 kkeithley| johnmark: "more natural" and "it's a separate project" just sound like semantics to me. But whatever, I'm not going to stand in the way. I'm just going to raise my eyebrow and wonder.
20:18 johnmark portante: kkeithley| fair enough :)
20:18 portante I have to chat with my boss now, biab
20:18 johnmark kkeithley|: oops, that was to you, not portante
20:18 a2 portante, ok, ttyl
20:19 johnmark kkeithley|: can you think of an instance where UFO could make a separate release that shouldn't depend on glusterfs-core?
20:19 johnmark and vice versa?
20:19 a2 johnmark, without dependency? that doesn't make sense
20:19 johnmark a2: given the state of swift, I can see
20:19 kkeithley| We did make a release of UFO in the middle of 3.3.x
20:20 a2 glusterfs does not depend on ufo at all.. but ufo depends on gluster
20:20 johnmark a time where we would want to release a new version of UFO
20:20 a2 you can separate out their release cycles
20:20 johnmark a2: right
20:20 a2 yes, which is exactly why it's natural to have it in a separate repo
20:20 johnmark a2: I guess the main question is, does separating the release cycles help or hurt development?
20:20 johnmark a2: perfect
20:20 kkeithley| We declared UFO 1.1, we tagged the tree (I hope) and we made a tarball.
20:21 a2 i think it helps, it is mostly separate code (python), does not link with the rest of glusterfs.git, can be released separately, so the right thing is to move it out to a separate repo
20:21 kkeithley| And with 3.4.0 we'll release 1.2. It's merely coincidental that we're doing them at the same time. It just happened to work out that way.
20:22 johnmark kkeithley|: there's also the issue of lots of developers and users not even knowing that we have this swift piece at all
20:22 a2 we have UFO against folsom, grizzly, etc. ideally each of those versions would be separate branches in this new repo, rather than "replaced code" in glusterfs.git
20:22 kkeithley| Right. Like I said, I'm not going to stand in the way.
20:22 johnmark a2: awesome - and what you would do is carry a set of patches in a separate repo
20:22 johnmark a2: got it
20:22 kkeithley| set of patches?
20:23 kkeithley| We've been trying to get away from patches
20:23 a2 today in glusterfs.git you can have only ONE version of ufo, either against grizzly, or folsom, or whatever, we cannot maintain two versions at the same time -- which is a limitation
20:23 johnmark kkeithley|: true.
20:23 johnmark a2: good point
20:23 johnmark the main question for me is one of presentation - how exactly do we structure the repo
20:24 a2 when we do a UFO rebase against grizzly-next, we don't want to "replace" code anywhere, just create a new branch in the new repo
20:24 kkeithley| Of lead, follow, or get out of the way, usually I lead, but this time I'm getting out of the way ;-)
20:25 johnmark I think portante's idea of having a repo of swift, overlain with UFO is the best possibility
20:25 a2 kkeithley|, what are your thoughts?
20:25 kkeithley| a2: that's the most compelling reason I've heard.
20:25 johnmark kkeithley|: very admirable :)
20:26 johnmark so for every swift release, we pull down a clone, create a branch, and add our special bits?
20:26 johnmark then it would be super easy to release tarballs and such
20:26 a2 johnmark, not necessarily.. whether we want to clone upstream swift or not - I don't have a preference
20:26 johnmark ok
20:27 johnmark a2: I think that would make it easier for someone new to UFO or GlusterFS to find their footing
20:27 kkeithley| I expect the real question is: once we cut over to havana-based UFO, and whatever comes after havana down the road, how much will we bother to maintain the previous release?
20:28 * kkeithley| thinks it hasn't been hard to make ufo tarballs thus far
20:28 a2 kkeithley|, we have seen situations where deployed versions keep getting fixes and backports even if upstream has progressed..
20:28 a2 so preserving old branches will likely remain a requirement
20:28 a2 (think "pdq")
20:29 kkeithley| a2: right, that's the first compelling argument anyone has made. "natural" and "it's a separate project" didn't seem particularly compelling to me, from a technical standpoint.
20:30 johnmark kkeithley|: we speak different languages :)
20:31 johnmark speaking of different languages, I'm having a go at packstack
20:31 a2 it actually is a separate project :-) ufo just "depends" on glusterfs, is not a part of glusterfs..
20:31 kkeithley| yeah, you speak marketing or some strange dialect
20:31 johnmark how hard would it be to wrap in UFO or our cinder integration with packstack?
20:32 johnmark kkeithley|: lol
20:32 johnmark exactly
20:32 johnmark engineers are from Mars...
20:32 johnmark marketeers are from... alpha centauri?
20:32 kkeithley| A whole other galaxy
20:32 kkeithley| so far removed from this one...
20:33 kkeithley| ;-)
20:33 a2 ufo, qemu, samba-vfs -- all the same pattern, they just depend on glusterfs (either a gluster mount or libgfapi)
20:33 a2 each of them is closer to their respective upstreams than to glusterfs
20:33 johnmark a2: that reminds me, I need to ask our LTC brethren if they have an interest in the forge
20:34 a2 ideally the respective integration code would be *completely* upstream (like ufo), or in a separate interim repo till they eventually make it upstream (like ufo)
20:34 a2 ideally the respective integration code would be *completely* upstream (like qemu), or in a separate interim repo till they eventually make it upstream (like ufo)
20:35 johnmark I knew what you meant :)
20:35 a2 kkeithley|, do you have an alternative point of view?
20:36 kkeithley| Do you mean, e.g., trying to get openstack-swift to take ownership of the UFO bits?
20:37 a2 yep, and also samba-vfs to make it into source3/modules/vfs_glusterfs.c
20:37 kkeithley| We're already running into issues in Fedora where oVirt/vdsm shipped a package in F19 that wants glusterfs-3.4.0.
20:38 kkeithley| When Samba and openstack-swift each have their own release schedules, it makes me wonder what can of worms we're going to open.
20:38 kkeithley| Going that route.
20:38 a2 kkeithley|, peeling out ufo into a separate repo/release/project would only fix that dependency mess, methinks.. glusterfs-ufo-X.Y release just depends on "glusterfs" (any version)
20:39 johnmark kkeithley|: plus, if we can get these pieces into their upstream codebases
20:39 johnmark then it will be up to them to make sure they work with each release
20:39 kkeithley| johnmark: lol. That worked so well with oVirt/vdsm
20:40 johnmark I mean, we'll also be working with them, but it will add a dependency upstream that takes some load off of us
20:40 johnmark kkeithley|: ha... ovirt is not always the right project to follow :)
20:41 johnmark kkeithley|: that was an example of overaggressive packagers - they didn't need to release packages with a hard dependency on 3.4
20:41 a2 kkeithley|, we should push for the gluster integration bits to trickle down from the respective upstream project releases, that's the cleanest with no "mess"
20:41 johnmark "down from" or "up into"?
20:41 kkeithley| Look, I'm not arguing against it. Avati's supposition that we may need branches is the only compelling technical argument I've heard, though.
20:41 johnmark kkeithley|: that's a fair statement
20:43 johnmark in other news, packstack is pretty cool
20:43 * johnmark looks into adding a GlusterFS bundle
20:47 kkeithley| And with that I'd say it's decided? We'll have a separate repo.
20:47 kkeithley| ?
20:47 a2 i think peter had some questions/concerns
20:48 johnmark kkeithley|: sounds good to me.
20:48 johnmark a2: yeah, I'll wait until he comes back
20:48 a2 about carrying over the history of the current patches
20:48 johnmark although I liked where he was going
20:48 a2 (i personally don't think that's a "big" issue.. the history is always there in glusterfs.git)
20:48 kkeithley| I think we want the complete revision history of all the files.
20:49 a2 ok
20:49 johnmark kkeithley|: right - that's easy to do
20:49 a2 johnmark, i don't think it is trivial
20:49 johnmark a2: to include a complete history? git clone --mirror into a new repo
20:49 kkeithley| So now I'm going to play my own semantic sugar card and say if they're separate projects with separate repos then no one should ever have to look at the gluster.git repo for revision history ;-)
20:49 johnmark and then strip out the parts you don't want
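[Sketch: a history-preserving variant of the clone-and-strip idea johnmark describes; git filter-branch, available at the time, rewrites the clone so only commits touching ufo/ remain, with ufo/ promoted to the repo root. The clone URL is hypothetical:]

    git clone git://example.org/glusterfs.git gluster-swift
    cd gluster-swift
    git filter-branch --subdirectory-filter ufo -- --all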
20:50 johnmark kkeithley|: I agree :)
20:50 kkeithley| sometimes we do speak the same language
20:50 johnmark NNNOOOOOOOO
20:51 a2 most of the UFO patches involve some "side effects", like editing configure.ac, glusterfs.spec etc. so that would make pulling out the UFO commits into a separate repo not a zero-touch operation
20:51 johnmark oh.
20:52 kkeithley| The glusterfs.spec is pretty modular. We can take out all the swift and ufo bits. (The swift bits were going to come out eventually anyway.)
20:52 a2 or as someone (kkeithley?) suggested, clone glusterfs.git, and git rm away all non-ufo stuff
20:52 johnmark a2: that's what I was thinking
20:52 kkeithley| I'm not sure what's in configure.ac that's swift or ufo related other than the subdirs in the source.
20:53 kkeithley| git clone --mirror wasn't my idea.
20:53 a2 kkeithley|, if we can do a git log ufo/, shortlist the commits, and cherry-pick them out to a new empty repo, resolving conflicts manually on the way (which might be trivial, but still conflicts), that should be sufficient?
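[Sketch: a2's cherry-pick route, assuming a local glusterfs clone sits at ../glusterfs; cherry-pick needs an existing commit to build on, hence the empty seed commit:]

    git init gluster-swift && cd gluster-swift
    git commit --allow-empty -m "seed"
    git remote add glusterfs ../glusterfs && git fetch glusterfs
    git log --oneline --reverse glusterfs/master -- ufo/   # shortlist the ufo commits
    git cherry-pick <sha>   # repeat per commit, resolving conflicts by hand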
20:55 kkeithley| probably. I was better at svn, still don't know all of git's bells and whistles.
20:55 a2 it's conceptually the same as you would do in svn too
20:55 a2 i think the biggest piece would be glusterfs.spec.in
20:56 kkeithley| right
20:56 a2 ufo was just adding a "section" into the spec file, now it would need to have a new independent spec file of its own
20:56 a2 and that's where i suspect most of the conflicts would be
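[Sketch: the independent spec file a2 anticipates might start from a skeleton like this; every field below is a placeholder, not what was eventually shipped:]

    Name:      glusterfs-ufo
    Version:   1.2
    Release:   1%{?dist}
    Summary:   GlusterFS object storage (UFO) on OpenStack Swift
    License:   Apache-2.0
    Requires:  glusterfs
    Requires:  openstack-swift

    %description
    Unified File and Object layer for GlusterFS, built on OpenStack Swift.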
20:56 portante a2, johnmark, kkeithley|: can one of you summarize all this? I am heading out with our group to BBC for some wings, can pick this up tomorrow
20:57 johnmark portante: I can summarize for you at BBC, if you'll buy me a beer :)
20:57 portante done
20:57 johnmark actually, if you read the last 5 lines, you'll get the gist of it
20:58 johnmark portante: I think a2 wanted to hear from you how you prefer to do it
20:58 kkeithley| right, but that's what I meant when I said the glusterfs.spec file is pretty modular. It should be mostly straightforward to cut out the swift and ufo bits, leaving core gluster in the glusterfs.spec file
20:59 johnmark or if you had concerns with what he proposed - "if we can do a git log ufo/, shortlist the commits, and cherry-pick them out to a new empty repo, resolving conflicts manually on the way (which might be trivial, but still conflicts), that should be sufficient?"
20:59 a2 kkeithley|, got it!
21:02 johnmark portante: I'll come out for a brief appearance, and then I'll have to head home
22:56 jbrooks joined #gluster-dev
23:05 H__ joined #gluster-dev
