
IRC log for #gluster-dev, 2013-03-26


All times shown according to UTC.

Time Nick Message
01:00 yinyin joined #gluster-dev
03:01 vshankar joined #gluster-dev
03:54 bulde joined #gluster-dev
04:06 sgowda joined #gluster-dev
04:13 rastar joined #gluster-dev
04:20 bala1 joined #gluster-dev
04:23 sripathi joined #gluster-dev
04:24 bala1 joined #gluster-dev
04:30 vshankar joined #gluster-dev
04:36 hagarth joined #gluster-dev
04:57 deepakcs joined #gluster-dev
05:15 lalatenduM joined #gluster-dev
05:23 sripathi joined #gluster-dev
05:25 mohankumar joined #gluster-dev
05:26 raghu joined #gluster-dev
05:56 rastar joined #gluster-dev
06:12 sgowda joined #gluster-dev
06:14 sripathi joined #gluster-dev
06:44 sahina joined #gluster-dev
06:57 hagarth joined #gluster-dev
07:09 sgowda joined #gluster-dev
07:10 jules_ joined #gluster-dev
07:25 xavih joined #gluster-dev
07:26 badone can anyone give me a brief description of what a fd_lk_ctx_node is used for ?
07:27 badone I imagine the first part is file descriptor lock but what does the ctx_node represent?
07:44 hagarth joined #gluster-dev
09:20 _ilbot joined #gluster-dev
09:28 test_ joined #gluster-dev
09:36 puebele joined #gluster-dev
09:46 test_ joined #gluster-dev
09:48 xavih joined #gluster-dev
10:33 sahina joined #gluster-dev
10:43 jclift joined #gluster-dev
10:51 sgowda joined #gluster-dev
11:16 sgowda joined #gluster-dev
11:23 hagarth joined #gluster-dev
11:51 kkeithley1 joined #gluster-dev
11:57 hagarth joined #gluster-dev
12:11 lalatenduM joined #gluster-dev
12:23 hagarth joined #gluster-dev
12:46 jdarcy joined #gluster-dev
12:48 bala joined #gluster-dev
12:51 awheeler_ joined #gluster-dev
12:58 bala joined #gluster-dev
12:58 mohankumar joined #gluster-dev
12:58 kkeithley1 what are the numbers for the readiness call?
13:00 jdarcy I thought it was an IRC "call"
13:01 kkeithley1 oh, okay
13:03 jdarcy Heh.  There's an Apple employee named Sam Sung.  Really.
13:05 johnmark greetz
13:05 johnmark IRC baby
13:06 johnmark unless you guys want to do phone
13:06 johnmark sorry, was a bit late driving in. @ robbins rd
13:07 jdarcy Should we be here, or #gluster-meeting?
13:07 kkeithley1 IRC is fine for me
13:07 johnmark eh, doesn't matter to me. here is fine
13:07 johnmark hagarth: you around?
13:07 johnmark avati_: you asleep? :)
13:08 kkeithley1 btw, vdsm shipped in f19 with Requires: glusterfs >= 3.4.0 and people are complaining
13:08 johnmark shit
13:08 jdarcy How is it even possible for something to ship in a release with a dependency that's not also in the release?
13:08 johnmark kkeithley1: that's retarded
13:09 johnmark oh but f19 won't be out for a couple of months, right?
13:09 johnmark jdarcy: 3.4 should be in f19. I think
13:09 hagarth johnmark: just got off a call now
13:09 kkeithley1 well, f19 isn't really real yet either and broken dependencies aren't that uncommon.
13:10 jdarcy OK, so it's still a problem for rawhide users, but not others?
13:10 johnmark kkeithley1: right. I'm not going to worry too much about it :)
13:10 johnmark hagarth: howdy!
13:10 hagarth johnmark: doing good, thanks!
13:10 johnmark hagarth: cool :)
13:10 kkeithley1 yes, I was planning on updating 3.4.0 into f18 and f19
13:10 johnmark kkeithley1: excellent
13:10 johnmark ok, ding ding ding
13:11 johnmark so it *sounds* like we have a couple of patches in the review queue outstanding?
13:11 jdarcy More than a couple.
13:11 johnmark jdarcy: doh
13:11 hagarth johnmark: more than a couple.
13:11 johnmark jdarcy: elaborate please
13:11 * johnmark checks the tracking bug
13:12 hagarth jdarcy: please go ahead, I will follow up with my list
13:12 jdarcy I don't think we've even added the ext4- and XFS-related fixes to that list, and they're critical.
13:12 johnmark jdarcy: right. and those are the ones I was thinking of
13:12 hagarth +1
13:12 jdarcy Plus we don't even have bugs for all of the glusterd race conditions that we know we'll find.
13:12 johnmark jdarcy: srsly?
13:12 hagarth I am planning to run some tests with glusterd this week
13:13 johnmark hagarth: thank you. can you post which tests you plan to run and how?
13:13 hagarth but am not sure if I will be able to catch all races
13:13 johnmark I think we can get more testing if we start to publish our test plans
13:13 jdarcy We've recognized a fundamental danger related to the changes that have gone in recently, but it's impossible to gauge its true impact without a lot more testing.
13:13 johnmark jdarcy: ok
13:13 hagarth johnmark: sure, will do that.
13:13 kkeithley1 jdarcy: wrt vdsm and 3.4.0, rawhide is f20 now, fwiw
13:14 johnmark kkeithley1: ah, ok
13:14 johnmark hagarth: thank you. is there anyone who can help with that? KP?
13:14 johnmark that sounds like something an intern/flunky/scrub could help with
13:15 hagarth johnmark: I think I can reach out to KP
13:15 jdarcy Until *at least* the ext4/XFS problems and the glusterd issues have been resolved, I think it's safe to say that we are definitively *not* ready for another release - even another alpha.
13:15 johnmark hagarth: ok.
13:15 johnmark jdarcy: agreed
13:15 johnmark I think we should target the release for summit
13:15 hagarth johnmark: how about planning a test day next week?
13:15 johnmark because that's how long it will take
13:15 johnmark hagarth: +1. I can arrange that
13:16 jdarcy Summit is ten weeks away.
13:16 hagarth we can do 24 hour test day - I can take care of 12 hours
13:16 johnmark I'm also going to start bringing in jclift for these things, because he can be super helpful
13:16 johnmark hagarth: excellent
13:16 hagarth johnmark: let us aim for an earlier release
13:16 johnmark jdarcy: do you think that's aggressive?
13:16 jdarcy Can we realistically do betas at, say, two and six weeks from today?
13:16 johnmark hagarth: agreed, but let's  be realistic
13:16 johnmark jdarcy: possibly?
13:16 hagarth jdarcy: I think we can
13:17 johnmark jdarcy: depends on how quickly we fix the major issues
13:17 hagarth that gives me enough time to start writing about new features in 3.4 too :)
13:17 johnmark for the blocker bugs, we're at least a week away from having patches reviewed and merged, I'm guessing
13:17 johnmark hagarth: excellent :)
13:17 jdarcy I'd say we're closer on the ext4/XFS stuff.
13:18 hagarth yeah, we're close there. we need rdma and op-version to be knocked off as well.
13:18 johnmark hagarth:
13:18 johnmark ok, so looking at 895528
13:19 jdarcy hagarth: Should we schedule a meeting tomorrow for the glusterd issues?  I don't think we even have a common understanding of the issues yet, let alone consensus on a way forward.
13:20 hagarth jdarcy: that sounds like a good idea. Can we do it 8:30 am your time?
13:20 * johnmark notes time
13:20 hagarth jdarcy: We can rope in Krishnan as well for that meeting.
13:20 jdarcy https://bugzilla.redhat.com/show_bug.cgi?id=918917
13:20 glusterbot Bug 918917: unspecified, unspecified, ---, vbellur, NEW , 3.4 Beta1 Tracker
13:20 johnmark hagarth: perfect
13:21 jdarcy hagarth: I'm fine with that, but that's 5:30 Avati's time.
13:21 hagarth oops, missed that!
13:21 johnmark jdarcy: how late can you attend meetings?
13:21 johnmark kkeithley1: ^^^
13:21 kkeithley1 as late as needs be
13:21 johnmark because I'm guessing we're going to have to time-shift by 12 hours
13:22 kkeithley1 as long as I don't have something that conflicts
13:22 jdarcy johnmark: This is important.  I can do middle of the night if I have to.
13:22 johnmark so if we do 9 or 10pm, that will be ok?
13:22 * johnmark looks at his world clock
13:22 kkeithley1 yes
13:22 jdarcy johnmark: Easy.
13:22 hagarth early morning works for me too
13:22 johnmark hagarth: give me a range that works for you
13:22 johnmark and to pull in kp and possibly pranith, as needed
13:23 hagarth 9:30 PM EST suits me, 10:00 PM would work better for kp I guess.
13:23 hagarth can we target 10:00 PM tomorrow night your time?
13:24 hagarth I will have KP lined up on my Thursday morning for that.
13:24 jdarcy Later is slightly better for me, actually.  Better chance that my wife will have gone to bed by then, so I'll have downstairs to myself.
13:24 johnmark hagarth: that works for me. I could even do 10:30 pm
13:24 johnmark jdarcy: +1 :)
13:24 hagarth great, so shall we target 10:30 PM ?
13:24 johnmark ok, so 10:30pm EDT, 8am IT, right?
13:24 jdarcy LGTM
13:25 johnmark hagarth: perfect
13:25 hagarth is it EDT now? I always get confused between ST & DT.
13:25 johnmark hagarth: yeah, we "sprang forward"
13:25 johnmark like the spring chickens we are
13:25 jdarcy Spring forward, fall back.
13:25 hagarth ah ok :)
13:25 johnmark you are lucky to not worry about that :)
13:25 johnmark cool. will send out the invite - that is 13 hours from now, just so everyone is aware :)
13:26 jdarcy I guess I should update the tracker.  As always, feel free to add stuff yourself or send me email.
13:26 hagarth I will update the tracker with my list later in the day.
13:26 kkeithley1 10pm tonight. That might just be what gets me back in this time zone ;-)
13:27 jdarcy 918917, not 895528, just to be clear.
13:27 hagarth jdarcy: ok
13:27 johnmark kkeithley1: that's 10:30, not 10 :)
13:27 jdarcy If I don't see an invite within an hour, I'll send one myself, just so we can let Zimbra handle all the conversions.  ;)
13:27 johnmark jdarcy: awesome
13:28 kkeithley1 for the purpose of getting back in this time zone it's all the same
13:28 johnmark and I'm guessing that we can safely plan a test day for middle of next week, possibly Thursday
13:28 johnmark kkeithley1: :)
13:28 johnmark poor kaleb
13:28 johnmark ok then!
13:29 johnmark there are other items that we can discuss later, like who should maintain the 3.3.x release
13:29 johnmark nominations welcome
13:29 hagarth johnmark: sure - I have been working on a proposal on governance of the repository which I hope to be done by end of this week.
13:29 jdarcy Maybe I'll write something to let us simulate power-fail testing with VMs, for the xattrop/XFS problem.
13:30 hagarth I will run that through you all after I'm done.
13:30 hagarth jdarcy: that would be great!
13:30 johnmark hagarth: fantastic
13:31 johnmark jdarcy: hoo ah!
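(jdarcy's tooling isn't shown in the log. As a rough illustration of the kind of power-fail test he describes, one could hard-kill a VM mid-write with libvirt and inspect the brick after reboot. A minimal sketch only; the domain name, SSH target, brick path, and check commands are placeholders, not the script jdarcy later wrote.)

    import subprocess, time

    VM = "gluster-test-vm"                      # placeholder libvirt domain name
    SSH = ["ssh", "root@gluster-test-vm"]       # placeholder guest address

    def run(cmd):
        print("+ " + " ".join(cmd))
        return subprocess.call(cmd)

    # 1. start a write workload against the brick inside the guest
    writer = subprocess.Popen(SSH + ["dd if=/dev/urandom of=/bricks/b1/powerfail.dat bs=1M count=4096"])

    # 2. let it run briefly, then simulate a power failure (hard power-off, no guest shutdown)
    time.sleep(5)
    run(["virsh", "destroy", VM])
    writer.terminate()

    # 3. bring the guest back and inspect the file and its xattrs after the XFS log replay
    run(["virsh", "start", VM])
    time.sleep(60)                              # crude wait for boot; a real harness would poll
    run(SSH + ["ls -l /bricks/b1/powerfail.dat; getfattr -d -m . -e hex /bricks/b1/powerfail.dat"])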
13:31 jdarcy I nominate Vijay or Kaleb for 3.3.  I feel bad about that, but I would (a) do a terrible job and (b) burn out even faster.
13:32 jdarcy (crickets)
13:32 hagarth jdarcy: no problem, am used to that
13:34 hagarth ok, anything else for today?
13:34 johnmark hagarth: I think we're done
13:34 hagarth ok here go the AIs:
13:35 hagarth 1) Meeting in ~ +13 hours around glusterd
13:35 * jclift is here
13:35 jclift Meeting still in progress?
13:35 jdarcy Should we discuss how to decide what gets backported to 3.3, or just leave that to the discretion of whoever maintains it?
13:36 hagarth Can we have voting for that in Trello?
13:36 hagarth jclift: we are winding up.
13:36 jclift hagarth: Heh, next one then. :)
13:37 johnmark hagarth: agreed
13:37 hagarth 2) Everybody to add dependencies to the blocker - 918917
13:37 johnmark hagarth: voting in trello, that is
13:37 hagarth 3) Aim for the first beta in 2 weeks from now.
13:37 johnmark jclift: will share with you the ugly details
13:37 hagarth anything more from today?
13:37 johnmark sounds about right
13:38 kkeithley1 Yes, voting in trello
13:38 johnmark hagarth: if we have a test day next week, we'll need fresh QA builds to do it on
13:38 hagarth johnmark: let us have that. I hope to have more patches in by then.
13:39 johnmark hagarth: ok
13:39 hagarth 4) Voting in trello for backports
13:39 hagarth 5) Test Day next week and one more QA build by then.
13:39 johnmark hagarth: let me help you get trello in shape
13:39 hagarth johnmark: cool!
13:40 johnmark I think there are topics there that need to be managed, purged, added, etc.
13:40 kkeithley1 what's the blocker BZ?
13:40 johnmark 918917
13:40 johnmark beta tracker
13:40 johnmark https://bugzilla.redhat.com/showdependencytree.cgi?id=918917
13:40 johnmark glusterbot: ping
13:40 glusterbot pong
13:42 * hagarth sets off on the adventure ride that leads him home.
13:42 jclift Did we get that BZ fixed where alpha2 doesn't work with swift folsom, only with swift grizzly?
13:42 * jclift goes looking for the BZ
13:42 hagarth later folks
13:42 jclift :)
13:43 johnmark leter
13:43 johnmark er later
13:43 johnmark hagarth: thanks!
13:43 johnmark jclift: not sure. if you find the BZ, post it here
13:44 kkeithley1 jclift: dunno, I wasn't aware of a problem. the/my alpha2 RPMs seems to work okay when I (minimally) tested them.
13:44 jclift 923228
13:44 kkeithley1 my alpha2 rpms on download.gluster.org
13:44 jclift Apparently it's a dup, but BZ is refusing to load the dup for me.
13:45 johnmark ok, zimbra invite sent
13:45 jclift kkeithley1: Hmmm, it's marked as a dup of 923580, but the reporter of "923228" is saying it shouldn't be.
13:47 jclift Yeah, that "NOTABUG" thing doesn't make sense.  It sounds like this is an actual problem, potentially serious.
13:47 jclift awheeler_: ping, you around?
13:48 awheeler_ jclift: Yes
13:48 jclift awheeler_: What's your take on 923228 BZ?
13:49 awheeler_ I find it odd, and possible, but the explanation is incomplete if so.
13:49 jclift awheeler_: It seems to have been marked as a dup, but reading the dup I'm not sure why working only with the upstream master branch of OpenStack, instead of the one RH also provides to the community, is considered "notabug".
13:49 awheeler_ You'll note I added some comments to 923580
13:52 awheeler_ jclift: Mine is def a bug.
13:52 johnmark jclift: what do you mean "doesn't work with folsom"?
13:53 jclift awheeler_: ^^^
13:53 johnmark if you mean, it doesn't work when you have swift installed and you try to install gluster-swift
13:53 johnmark then yes, that's a known issue
13:53 johnmark and requires that you use either one or the other
13:54 johnmark I don't know that it's something we can fix, except to make sure that you can't install both at the same time with dependency checks
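(One conventional way to express that mutual exclusion at the packaging level is an explicit Conflicts tag in the UFO subpackage's spec. A sketch only; the subpackage and conflicting package names here are assumptions, not the actual change.)

    # hypothetical fragment of the glusterfs spec file, in the UFO/swift subpackage
    %package ufo
    Summary:   GlusterFS Unified File and Object Storage
    Conflicts: openstack-swift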
13:54 kkeithley1 sounds like a patch didn't get applied, or a patch was reversed
13:54 johnmark kkeithley1: hrm
13:55 kkeithley1 as a function of building the rpm that is
13:55 jclift My point is that we should get someone that knows this area of stuff to look at it, as if there really is a problem there it'll affect more people once they start using it.
13:55 johnmark awheeler_: I recommend you describe your bug to kkeithley1 or portante - they've spent the most time with it
13:55 awheeler_ Do the swift tests get run by the regression tests, or just the gluster tests?
13:55 * jclift butts out now
13:55 awheeler_ kkeithley1: https://bugzilla.redhat.com/show_bug.cgi?id=923228
13:55 glusterbot Bug 923228: medium, unspecified, ---, junaid, CLOSED DUPLICATE, 3.4 Alpha 2 Breaks swift file posting
13:57 awheeler_ kkeithley1: Summary: The 1.7.4 call to file.mkstemp is expecting to be yielded 2 args, but only 1 is yielded
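(A minimal stand-alone reproduction of the arity mismatch being described, with hypothetical names; the real code is swift 1.7.4's object server unpacking two values from DiskFile.mkstemp while the UFO version yields only the fd.)

    from contextlib import contextmanager
    import os, tempfile

    @contextmanager
    def mkstemp_yielding_one():
        # stand-in for a DiskFile.mkstemp that yields only the fd, no tmppath
        fd, path = tempfile.mkstemp()
        try:
            yield fd
        finally:
            os.close(fd)
            os.unlink(path)

    try:
        # the caller unpacks two values, roughly: with file.mkstemp() as (fd, tmppath):
        with mkstemp_yielding_one() as (fd, tmppath):
            pass
    except TypeError as exc:
        print("unpacking failed: %s" % exc)   # a bare fd can't be unpacked into two names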
13:57 * johnmark looks at 923580
13:58 johnmark awheeler_: when you installed the alpha2 rpms, UFO wouldn't work at all?
13:58 johnmark did you also have openstack installed?
13:58 awheeler_ It didn't work at all, I used only the Alpha2 RPMs
13:59 johnmark oh!
13:59 awheeler_ I only have openstack-utils installed
13:59 johnmark awheeler_: I don't think we were able to reproduce that
13:59 johnmark awheeler_: if you pull down the bits from master, do you see the same thing?
14:00 johnmark or from the 3.4 branch
14:00 awheeler_ I haven't tried -- I like RPMs.
14:00 * jclift wonders if there's a way to check how many args mktemp() wants, then feed it the appropriate ones.  i.e. work for both situations
14:00 johnmark awheeler_: agreed :)
14:00 kkeithley1 I'm pretty sure I know what it is. I can only work on one thing at a time though, and right now I've got three going. ;-)
14:00 jclift awheeler_: In the meantime: http://www.gluster.org/community/documentation/index.php/CompilingRPMS
14:00 awheeler_ kkeithley1: Heh, I know what you mean.
14:01 jclift awheeler_: I've gone through that doc a _bunch_ of times with Fedora 16-18 and CentOS 6.4 over the weekend and yesterday.  That doc is streamlined to a fine art now and will only take about 10 mins of time to create rpms from start to finish.
14:01 jclift (cut-n-paste only the whole way)
14:01 jclift Just saying. :D
14:02 awheeler_ Cool, I'll give that a whirl in a few.
14:02 jclift kkeithley1: Should we reopen that BZ ?
14:03 johnmark jclift: only if it hasn't been fixed in master or the 3.4 branch
14:03 kkeithley1 yeah, sure.
14:04 awheeler_ What version of swift does the master use: 1.7.4, or 1.7.6?
14:04 kkeithley1 we're still using 1.7.4
14:05 kkeithley1 we want to use the 1.7.6 rpm in koji, but that's only available for f19 and f20 atm
14:05 awheeler_ Then this is still a bug, looking at the code -- unless there's a patch that changes the number of args expected or yielded
14:07 awheeler_ https://github.com/gluster/glusterfs/blob/master/ufo/gluster/swift/common/DiskFile.py#L327
14:07 awheeler_ https://github.com/openstack/swift/blob/1.7.4/swift/obj/server.py#L550
14:08 kkeithley1 I haven't merged the spec file sync work into 3.4. To build the 3.4 alpha rpms I used the fedora spec file and I probably omitted one of the patches.
14:08 awheeler_ I tried converting the function to return 2 values, but it broke down in other ways.
14:08 awheeler_ Ah, and I'm using CentOS
14:09 awheeler_ Could it be that it works for the Fedora build and not CentOS, or is it likely missing for all of them?
14:09 jclift awheeler_: Try the compile rpms instructions and see if they work.  Go on. :D
14:09 kkeithley1 it's not any different on CentOS. I use the same spec file for the epel rpms that are on download.gluster.org
14:09 sgowda joined #gluster-dev
14:10 awheeler_ kkeithley1: Excellent.  BTW, on a completely different topic, I have Keystone working perfectly with UFO now -- there was a race condition in the memcache fd access.
14:10 portante awheeler_: oh, do tell ...
14:10 kkeithley1 oh, that's excellent. Maybe you could write up what you did
14:12 awheeler_ They have patched it somewhat upstream, but I took a different tack and replaced the python-memcached calls with pylibmc which has a pool option.
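(The pooled-client change awheeler_ describes isn't pasted here. A minimal sketch of the pylibmc pool API he's referring to, assuming pylibmc.ClientPool; this is not the actual auth_token.py patch.)

    import pylibmc

    # one libmemcached client, wrapped in a pool so concurrent requests don't share a socket
    mc = pylibmc.Client(["127.0.0.1"], binary=True)
    pool = pylibmc.ClientPool()
    pool.fill(mc, 4)                      # four pooled clients

    with pool.reserve() as client:        # each thread/greenlet checks out its own client
        client.set("token-cache-key", "validated", time=300)
        print(client.get("token-cache-key"))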
14:13 portante awheeler_: can't you point me at the upstream patch?
14:13 awheeler_ That, and I realized (unrelated to Keystone) that I didn't have rpcbind running.  Things became more stable after I did that (wasn't using NFS, so didn't realize it had to be running anyway)
14:14 awheeler_ portante: They moved the auth_token.py from openstack-keystone into the python-keystoneclient: https://github.com/openstack/python-keystoneclient/tree/master/keystoneclient/middleware
14:14 johnmark awheeler_: interesting!
14:16 awheeler_ portante: And here's their memcache replacement/wrapper: https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/openstack/common/memorycache.py
14:17 * awheeler_ back in a bit
14:18 maxiz joined #gluster-dev
14:22 wushudoin joined #gluster-dev
14:26 rastar joined #gluster-dev
15:19 bala joined #gluster-dev
15:22 lpabon joined #gluster-dev
15:29 * awheeler_ back
15:30 awheeler_ jclift: Cutting-and-pasting your build docs.
15:31 awheeler_ kkeithley1: I can write up what I did.
15:35 awheeler_ This is the keystone bug for the race condition: https://bugs.launchpad.net/keystone/+bug/1020127
15:37 awheeler_ And here's the work they're doing on patching it: https://review.openstack.org/#/c/12356/
15:37 awheeler_ But that's not the direction they seem to be going anymore, which is why I went my own way.
15:43 bgpepi joined #gluster-dev
15:47 awheeler_ jclift: Docs seems to be working well -- any docs on how to do the swift unit tests?
15:48 puebele1 joined #gluster-dev
15:52 hagarth joined #gluster-dev
15:56 awheeler_ jclift: Works like a champ, now to test...
16:10 jclift awheeler_: No idea how to do swift unit tests, so no docs for it yet.
16:11 jclift awheeler_: I'm hitting Cinder first (right now), and going to doc that.
16:12 awheeler_ Ah, ok.  BTW, my new gluster RPMs can access the volume created by the 3.3.1-11 RPMs.
16:12 awheeler_ Well, they can, but there's an error
16:12 jclift Doh
16:12 awheeler_ and the backend file is fine, so weird.
16:12 jclift awheeler_: As a thought, if you figure out the swift unit test running steps, can you chuck that in a wiki page or etherpad or something?
16:13 awheeler_ Sure
16:13 jclift awheeler_: Just thinking they'd be super useful (at least for me), when I come to looking at swift properly. :D
16:13 awheeler_ Agreed
16:16 awheeler_ Yup -- deleted the volume and re-created it and now it's fine.
16:21 awheeler_ The bug exists in git right now -- just rebuilt according to your docs and seeing the exact same error -- bug 923228
16:21 glusterbot Bug http://goo.gl/nec4f medium, unspecified, ---, junaid, ASSIGNED , 3.4 Alpha 2 Breaks swift file posting
16:21 jclift Damn
16:23 awheeler_ I fully expected that, as the git repo doesn't show any patches being applied to fix it.
16:23 jclift awheeler_: Yeah
16:23 awheeler_ Now I guess I can figure out how to run the tests and then write one that breaks.  :-)
16:24 jclift :)
16:28 sgowda joined #gluster-dev
16:32 92AAAA2SQ joined #gluster-dev
16:32 johnmark awheeler_: hrm. so you're saying that UFO doesn't work at all?
16:32 * johnmark checks the release-3.4 branch
16:32 johnmark because if you can't post an object, that's pretty broken
16:33 johnmark awheeler_: did you re-open the bug?
16:33 awheeler_ Agreed.  I can auth, but not post.  And I'm not sure it's limited to posting -- just running swift-bench which starts with the post
16:33 awheeler_ Yes, it is re-opened.
16:33 awheeler_ bug 923228
16:33 glusterbot Bug http://goo.gl/nec4f high, high, ---, junaid, ASSIGNED , 3.4 Alpha 2 Breaks swift file posting
16:33 johnmark awheeler_: ok
16:37 awheeler_ So, a GET appears not to work either, but it doesn't err; it just doesn't return a listing of files.
16:39 awheeler_ I've not used swift-init all start before, I've been using swift-init main start.  What's with the all vs main?
16:40 awheeler_ Is that glusterfs vs openstack?
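(For reference, this is upstream Swift behavior rather than anything Gluster-specific: swift-init's "main" group manages only the four WSGI servers, while "all" also manages the background consistency daemons, which UFO generally doesn't need since GlusterFS handles replication. Roughly:)

    swift-init main start   # proxy-server, account-server, container-server, object-server
    swift-init all start    # the above plus replicators, auditors, updaters, reaper, etc.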
16:51 puebele1 joined #gluster-dev
16:53 rastar joined #gluster-dev
17:00 awheeler_ Looks like there are some differences in the alpha1 spec file, so rebuilding with that spec instead (kojibuild is 1, for example)
17:07 vshankar joined #gluster-dev
17:13 rastar joined #gluster-dev
17:16 awheeler_ Nah, no dice.
17:18 jclift awheeler_: Hmmm, don't suppose you can just patch the broken mktemp() function so it only uses the right amount of args?
17:18 jclift awheeler_: Or maybe adjust the spec file to pull in swift 1.7.6 instead?
17:18 awheeler_ Tried, and then I run into a metadata error
17:18 * jclift sighs
17:18 jclift Damn
17:19 awheeler_ I thought alpha1 worked, so I'm checking to see if that's the case, and how it's different.
17:20 awheeler_ Nope, it has the same bug.  I must not have checked it.
17:21 sghosh joined #gluster-dev
17:25 awheeler_ Ok, if I take the v3.4.0qa5 version of DiskFile.py, the PUT now works -- but GET now fails. lol
17:27 awheeler_ Ok, fixed that as well.
17:27 awheeler_ So, I do have a working Alpha2 system now.
17:29 bala joined #gluster-dev
17:45 awheeler_ Ok, got it all fixed up.
17:48 awheeler_ But now delete is broken, lol
17:50 awheeler_ Actually, now my other bug is showing itself: bug 924792
17:50 glusterbot Bug http://goo.gl/Smv7Z medium, unspecified, ---, junaid, ASSIGNED , Gluster-swift does not allow operations on multiple volumes concurrently.
17:50 awheeler_ Basically, it's trying to access the wrong volume
17:50 jclift awheeler_: Please turn the fixes into patches? :)
17:51 awheeler_ I'm going to post the patch to the bug, and someone else can decide (kkeithley1 perhaps?) if it's the correct fix.
17:51 awheeler_ As it reverts changes.
17:59 awheeler_ Done.
18:00 awheeler_ Patch is attached to bug 923228
18:00 glusterbot Bug http://goo.gl/nec4f high, high, ---, junaid, ASSIGNED , 3.4 Alpha 2 Breaks swift file posting
18:01 awheeler_ portante: This appears to be related to your patch: https://github.com/gluster/glusterfs/commit/1338ad168a4a468636909322dace9dc9f750dd13
18:02 awheeler_ I have only reverted 2 lines
18:04 awheeler_ s/reverted/changed/
18:08 kkeithley1 I think we have more work to do on ufo and I'm thinking I jumped the gun by putting the current ufo code in the alpha and alpha2 rpms.
18:10 awheeler_ Well, aside from the mulit-volume issue, 3.3.1-11 seems to work quite well.
18:10 awheeler_ s/mulit/multi/
18:10 kkeithley1 per se, ufo releases are not tightly coupled to the glusterfs releases; the fact that they're in the same git repo notwithstanding
18:10 kkeithley1 yes
18:11 JoeJulian_ Should ufo be untied from glusterfs git repo?
18:11 kkeithley1 portante has suggested it in the past
18:12 JoeJulian_ Who do we need to make that happen? hagarth?
18:13 kkeithley1 I suppose
18:14 JoeJulian_ It makes sense to me. It doesn't seem to have the same production cycle focus nor the same goals.
18:14 kkeithley1 reverting to ufo-1.1, which is the same as what's in 3.3.1-x, appears to work. I'm inclined to respin the 3.4.0alpha2 rpms using that.
18:16 kkeithley1 And maybe we can get ufo-1.2alpha into shape in time for the beta.
18:16 hagarth JoeJulian_: we have been thinking of making this better operationally, will converge on an approach in some time.
18:20 JoeJulian joined #gluster-dev
18:27 awheeler_ Well, I just updated my patch for that bug, and so far, it looks like the multi-volume issue isn't plaguing me anymore.
18:27 awheeler_ I've done two rounds of swift-bench, with 1000 puts, 10,000 and 1000 gets, and 1000 DEL with no error.
18:29 awheeler_ (Using keystone)
18:29 kkeithley1 okay, well, maybe I'll hold off on a respin with the ufo-1.1 then.
18:29 kkeithley1 does using keystone require changes in either swift or ufo code?
18:31 awheeler_ no, just a 'fixed' auth_token.py file
18:32 awheeler_ Looks like I submitted the wrong patch...
18:33 awheeler_ Ok, fixed it: https://bugzilla.redhat.com/attachment.cgi?id=716667&action=diff
18:34 awheeler_ Third time testing and no errors, so it's looking like 3.4Alpha2 with my patch is stable.
18:37 kkeithley1 good, then I won't respin 3.4.0alpha2 with ufo-1.1. Will just proceed with ufo1.2alpha and add your fix. thanks
18:39 kkeithley1 <keen eye for the obvious mode>that's what the alpha was for after all</keen eye for the obvious mode>
18:40 awheeler_ :-) cool
18:48 awheeler_ Ok, I spoke too soon, the multi-volume issue remains.  :(
18:48 kkeithley1 but everything else works?
18:48 awheeler_ I just got lucky.  There seems to be some randomness to it.
18:49 awheeler_ But, yes, all else works.  If you only use one volume, all is well.
18:49 awheeler_ With the gluster-swift-gen-builders
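(The ring-generation step being referred to isn't shown in the log. Assuming the script takes one or more volume names and builds a ring device per volume, which is unverified here, a multi-volume setup would be generated with something like:)

    gluster-swift-gen-builders myvol1 myvol2   # hypothetical invocation; volume names are placeholders
    swift-init main restart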
19:17 jclift Hmmm, doing a Google for "gluster" and "anaconda" doesn't show much in the way of people trying to set a Gluster volume as the root device, for network booting.
19:17 jclift i.e. in same vein as iSCSI booting, etc.
19:17 * jclift was just curious
19:20 JoeJulian Last time I remember seeing anything about a glusterfs root, fuse didn't support mmap, thus preventing the loading of dynamic libraries through a fuse mount. That was years ago though.
19:28 jclift Was just curious anyway. :)
19:30 sghosh joined #gluster-dev
19:32 kkeithley1 whew
19:41 kkeithley1 well, that wasn't what I had planned on doing today
19:43 johnmark kkeithley1: eh?
19:59 awheeler_ jclift: Should be possible to do a diskless boot using the NFS client for a gluster volume.
20:00 jclift awheeler_: Interesting thought.  Yeah, hadn't thought of that.
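(Sketching awheeler_'s suggestion: the kernel's nfsroot support pointed at Gluster's built-in NFS server, plus whatever NFSv3-over-TCP options the kernel's nfsroot supports, since Gluster's NFS server doesn't speak NFSv2/UDP. The server address and volume name below are placeholders and this is untested.)

    root=/dev/nfs nfsroot=192.168.1.10:/myvol ip=dhcp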
20:08 kkeithley1 backport glusterfs.spec.in sync and ufo
20:08 kkeithley1 for 3.4
20:32 jclift johnmark: Ahhh, it's good you got to this already: http://serverfault.com/questions/491116/gluster-whats-a-brick-vs-a-node
20:32 jclift johnmark: Google Alerts popped it up for me yesterday, but I got distracted. :(
21:09 awheeler_ Here's a quick-and-dirty doc on GlusterFS and Keystone, possibly missing some steps: https://gist.github.com/awheeler/5249137
21:09 awheeler_ It is not at all polished.  I'll rewrite it and test it more thoroughly as time permits.
21:14 a2 kkeithley, is the glusterfs.spec backport good for commit?
21:14 a2 or still requires review?
21:31 johnmark awheeler_: that's awesome - thanks!
21:32 awheeler_ johnmark: You're welcome.  Let me know if you run into any issues.
22:05 bgpepi joined #gluster-dev
22:20 wushudoin joined #gluster-dev
23:38 jclift awheeler_: I'd like to copy that gist into a wiki page and link it into the How To's.  But I'm not sure what to do with the patch file.  Hmmm....
23:50 yinyin joined #gluster-dev
23:58 jclift awheeler_: First cut.  It looks ok, but could use your eye over it to make sure I didn't nuke anything when copying-n-pasting and formatting.
23:58 jclift http://www.gluster.org/community/documentation/index.php/GlusterFS_Keystone_Quickstart
