IRC log for #gluster-dev, 2013-04-24

All times shown according to UTC.

Time Nick Message
00:45 portante|ltp joined #gluster-dev
01:12 H__ joined #gluster-dev
01:47 bala joined #gluster-dev
01:51 bharata joined #gluster-dev
02:14 nickw joined #gluster-dev
02:57 nickw joined #gluster-dev
03:07 vshankar joined #gluster-dev
03:42 aravindavk joined #gluster-dev
04:00 sgowda joined #gluster-dev
04:01 itisravi joined #gluster-dev
04:01 anmol joined #gluster-dev
05:03 itisravi joined #gluster-dev
05:06 bala joined #gluster-dev
05:18 bulde joined #gluster-dev
05:23 hagarth joined #gluster-dev
05:38 lalatenduM joined #gluster-dev
05:58 raghu joined #gluster-dev
06:01 bulde1 joined #gluster-dev
06:15 rastar joined #gluster-dev
06:23 77CAAZ3QU joined #gluster-dev
06:42 puebele joined #gluster-dev
06:49 ndevos kkeithley, a2: yes, centos6 has issues running the latest fedora in mock :-/
06:53 hagarth joined #gluster-dev
06:56 ndevos kkeithley: building in epel5 is only interesting so far, in that the buildrequires must be available - no epel6 or newer versions
07:03 bulde joined #gluster-dev
07:20 hagarth joined #gluster-dev
07:21 bala joined #gluster-dev
07:47 rastar joined #gluster-dev
09:52 bala joined #gluster-dev
10:18 hagarth joined #gluster-dev
10:20 bala joined #gluster-dev
10:22 bulde joined #gluster-dev
10:33 yinyin joined #gluster-dev
10:50 sgowda joined #gluster-dev
11:19 sgowda joined #gluster-dev
11:19 lpabon joined #gluster-dev
11:36 edward1 joined #gluster-dev
12:51 jclift_ joined #gluster-dev
12:55 lala_ joined #gluster-dev
13:04 mohankumar joined #gluster-dev
13:25 bulde joined #gluster-dev
14:02 kkeithley ndevos: so, as a heads up, you'll be able to review soon, now that I've figured out a fix for rpm.t; the epel-5 rpmbuild doesn't like the OCF resource agents as a noarch rpm. Works fine in epel-6 though.
14:04 ndevos kkeithley: oh, interesting - maybe there is something about that in the EPEL packaging guidelines too?
14:07 ndevos kkeithley: right, http://fedoraproject.org/wiki/EPEL:Packaging says this about noarch subpackages:
14:07 ndevos EL 5 and earlier do not support noarch subpackages. If your build fails due to unpackaged debuginfo files ensure that the BuildArch: noarch is wrapped in an if to make sure it's not used on EL-5 and earlier.
14:07 kkeithley ahah
14:08 ndevos did you solve that in a similar way?
14:09 kkeithley atm, no.  I haven't built the 3.4.0 alpha* release for el5 in koji/bodhi, and thus far I've simply removed the epel-5 mock build in the rpm.t
14:10 kkeithley thus my query about the merits of even building epel-5 in mock
14:11 kkeithley and that aside, I've tripped over something in TAP and Prove. It seems to do a very rudimentary "parse" of the test file to figure out how many tests you had "planned"
14:11 kkeithley if the number you ran is less than the plan, you get an error.
14:12 ndevos oh, and I guess you broke the loop in rpm.t
14:12 kkeithley Since there are five occurrences of TEST in rpm.t, removing epel-5 means that with the do loop only running epel-6, there are only four runs of TEST
14:13 ndevos I don't remember exactly how that was done, but there is a counter somewhere and rpm.t fakes tests, I think
14:14 kkeithley Now I'm worried that if I were to keep my "fix" for the TEST in the do loop, but add back the epel-5 mock build, I'll get an error for running more tests than were in the "plan"
14:14 kkeithley damned if I do, damned if I don't. ;-)
14:15 kkeithley have to try it and see.
14:15 ndevos oh, I've been there!
14:21 wushudoin joined #gluster-dev
14:23 kkeithley fwiw, I don't see anything in rpm.t that looks like it's "faking" TEST
14:25 kkeithley but if you want to look at the change I made in my latest trial at http://review.gluster.org/4876 (note that I will try to put the epel-5 build back in after wrapping the noarch in glusterfs.spec.in.)
14:25 kkeithley see what you think
14:28 kkeithley and I just rfc'd a new changeset with epel-5 back in, but still look at the change to the TEST inside the do loop
14:36 nicolasw joined #gluster-dev
14:50 ndevos kkeithley: ah, right, http://review.gluster.org/#/c/4876/5/tests/basic/rpm.t makes it pretty clear what is happening, maybe just revert that?
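For readers following along, the change under discussion looks roughly like this (a simplified sketch of the loop in tests/basic/rpm.t, not the actual code; the chroot names and the SRPM variable are assumptions, MOCKCMD is the name used in the review):

    # simplified sketch: rebuild the source rpm in each mock chroot
    for dist in epel-6 epel-5; do
        MOCKCMD="mock -r ${dist}-x86_64 --rebuild ${SRPM}"
        # previously an if/else here carried one TEST per branch; with a
        # single TEST the static keyword count (the TAP "plan") no longer
        # matches the number of TESTs actually executed across the passes
        TEST ${MOCKCMD}
    done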
14:55 mohankumar how do I enable the trace xlator?
14:56 mohankumar gluster volume set <volname> trace on says successful, but I could not see anything related to the trace xlator in the volume file
14:58 aravindavk joined #gluster-dev
15:02 kkeithley ndevos: huh?
15:06 kkeithley that works. It works with only one test (epel-6) in the do loop.
15:07 ndevos kkeithley: well, the loop has 2 passes, and the if/else inside the loop both have a TEST that gets counted - that matches up, changing that to one 'TEST $MOCKCMD' misses one TEST
15:08 kkeithley the loop has two passes when both epel-5 and epel-6 are run
15:08 kkeithley with just epel-6, one pass, and TAP whines about running fewer tests than "the plan"
15:09 ndevos and that is with your MOCKCMD introduction?
15:09 kkeithley that's what happens before I made that change
15:10 kkeithley with that change, TAP doesn't whine about fewer tests
15:10 ndevos yeah, TAP does not parse the for loop - it only greps for TEST
15:10 kkeithley I think TAP is counting the occurrences of TEST. There are five occurrences, and as it happens with the existing test, there are five invocations of TEST
15:11 ndevos correct, I think that logic is in tests/include.rc
15:12 kkeithley ah, that's where that is
15:12 ndevos I've introduced SKIP_TESTS for faking tests, you could add that below the for-loop
15:13 ndevos it's used early in rpm.t as well, in case there is no need to try and build the rpms
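A rough guess at what that helper does (a hypothetical sketch, not the actual include.rc code; $t as the running test number is an assumption, $testcnt is the planned total discussed below):

    # hypothetical sketch: emit a result for every remaining planned test so
    # the final tally still matches the plan, then stop the test script
    function SKIP_TESTS ()
    {
        while [ "$t" -le "$testcnt" ]; do
            echo "ok $t # SKIP"
            t=$((t + 1))
        done
        exit 0
    }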
15:13 kkeithley let me see what happens with my MOCKCMD and both epel-5 and epel-6 mock builds.
15:13 kkeithley should know in a couple of minutes
15:16 ndevos I think you get passed 6 expected 5 - or something similar :D
15:17 kkeithley yes, planned 4, ran 5
15:17 kkeithley :-(
15:18 kkeithley but if I revert my MOCKCMD change, then things only work when there are two mock builds.
15:20 ndevos correct, maybe there should be a function in tests/include.rc that increases the test-count - and you can call that function in the loop
15:20 kkeithley yes, I'm just looking for that now
15:21 kkeithley but there isn't one. But I can hack it in rpm.t
15:22 ndevos looks like you just need to increase $testcnt - by the number of loop passes minus 1
15:22 kkeithley correct
15:23 ndevos oh, wait, maybe write a TEST_IN_LOOP function for tests/include.rc and call TEST_IN_LOOP $MOCKCMD
15:24 ndevos that TEST_IN_LOOP should not get counted in the initial $testcnt, it would just do "$testcnt++ && TEST $@" or similar
15:25 kkeithley that's a good idea
15:30 lalatenduM joined #gluster-dev
15:31 kkeithley the grep is ^[[:space:]]*TEST. I think TEST_IN_LOOP will match that.
15:32 kkeithley maybe change it to ^[[:space:]]*TEST[[:space:]]?
15:32 kkeithley and the other keywords of course
15:34 kkeithley ^[[:space:]]*(EXPECT|TEST|EXPECT_WITHIN|EXPECT_KEYWORD)[[:space:]]    ?
15:38 ndevos looks good to me, kkeithley
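For context, the plan-counting in tests/include.rc amounts to something like this (a sketch, not the actual code; variable names are approximate). With the trailing [[:space:]] in the pattern, a TEST_IN_LOOP call would not be counted in the plan:

    # derive the TAP plan by counting test keywords in the test script ($0)
    testcnt=$(egrep -c '^[[:space:]]*(EXPECT|TEST|EXPECT_WITHIN|EXPECT_KEYWORD)[[:space:]]' "$0")
    echo "1..$testcnt"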
15:39 rastar joined #gluster-dev
15:42 jbrooks joined #gluster-dev
15:47 * kkeithley is sad: function TEST_IN_LOOP { ... TEST ... }      ./tests/basic/../include.rc: line 170: TEST: command not found
15:47 kkeithley back to the drawing board
15:50 kkeithley _TEST !
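So the helper ends up looking roughly like this (a sketch; as noted at 15:47, calling TEST from inside include.rc fails with "command not found", hence the delegation to _TEST):

    # bump the run-time test count so it matches the extra loop pass,
    # then run the command through the usual test machinery
    function TEST_IN_LOOP ()
    {
        testcnt=$((testcnt + 1))
        _TEST "$@"
    }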
15:53 * jclift_ wonders if anyone will create a "Glush" project.  "Gluster Translators written in sh" :)
15:55 kkeithley funny guy
15:55 jclift_ s/funny guy/sometimes you're an idiot justin/
15:55 jclift_ :)
15:56 jclift_ Hmm... and for efficiency s/sometimes/
15:57 kkeithley I bet you could actually do it with the Korn shell
16:00 jclift_ How much do you bet?  Show me? :D
16:01 ndevos glawk?
16:02 ndevos kkeithley: yeah, _TEST :D
16:06 awheeler_ Just write a python translator that spawns bash.  Done!  ;-p
16:15 johnmark jclift_: ha!
16:15 jclift_ Heh
16:16 * jclift_ is pretty sure that looking into this Glupy thing is how Alice fell into the rabbit hole
16:16 jclift_ Rough conceptual understanding now in head
16:16 jclift_ But it needs a bunch more work to the Glupy code to be useful.
16:17 jclift_ Atm it implements only the file lookup() and create() operations.  The rest need to be added to the Glupy codebase itself so other things can leverage that.
16:17 jclift_ That being said... looking into how hard/easy adding a few more of the basic operations might be.
16:17 jclift_ Mixture of C and Python coding needed.  Let's see what I get working by tomorrow. :/
16:17 jclift_ 50/50
16:18 jclift_ This might be a boondoggle for someone that's just a newbie to python, ctypes, etc.
16:18 jclift_ (or an amazingly good learning experience :>)
16:18 jclift_ Meh, what's life without a bit of challenge?
16:44 awheeler_ portante: Ping
16:46 jclift_ Interesting.  The glupy translator I'm working on isn't being called when used through an NFS mount.
16:46 johnmark jclift_: haha! wow, you go
16:46 jclift_ Ahh, of course.
16:46 jclift_ Idiot justin again.
16:47 jclift_ So far I've been manually wedging the translator into the client fuse vol file, and mounting with glusterfs -f xxx
16:47 jclift_ That doesn't help for nfs mounts, which don't use glusterfs -f xxx
16:54 johnmark jclift_: what's xtina's irc nick again?
16:55 johnmark haha :P
16:59 jclift_ christina on RH IRC
17:00 jclift_ johnmark: Is this the Google Hangouts one, or the IRC one, or dial in one?
17:00 jclift_ For the meeting
17:01 johnmark jclift_: was doing a call
17:01 johnmark because there are hangout issues
17:01 jclift_ np, just asking
17:01 johnmark jclift_: no worries :)
17:03 * jclift_ in
17:11 hagarth joined #gluster-dev
17:24 mohankumar a2: avati: reminder to review the multi-brick patches
17:24 mohankumar a2: avati: also I posted an RFC mail to gluster-devel
17:34 johnmark jclift_: does christina ever show up in these parts?
17:34 jclift_ RH internal IRC
17:34 jclift_ Definitely on #london channel ;)
17:35 portante awheeler_: pong
17:35 awheeler_ Looking at the reviews for the multi-vol submission, I'm unclear on how to move forward.
17:36 portante for 3.3 or 3.4?
17:37 awheeler_ Both, and with respect to 3.3, your comment about ufo being in 3.3 is a little confusing, since ufo is indeed in the 3.3 release.
17:38 portante awheeler_: let me take a second look and get back to you ... need to get lunch before the cafe closes
17:38 awheeler_ portante: Ok
17:46 jclift_ kkeithley: Hmmm, is your new fedora spec sync patch applicable to this? https://bugzilla.redhat.com/show_bug.cgi?id=954190
17:46 glusterbot Bug 954190: medium, medium, ---, amarts, NEW , /etc/init.d/glusterfsd missing from upstream git compiled rpms
17:46 * jclift_ suspects it might fix the problem there too
17:48 jclift_ Actually no.  Seems to touch completely different part of the spec.  Oh well.
17:48 jclift_ Ignore that then. :)
17:49 kkeithley not per se, no. The Fedora glusterfs packaging that I inherited has a bunch of stuff in it that's not in upstream. Two different upstreams.
17:49 aravindavk joined #gluster-dev
17:49 kkeithley My patch only applies to synching the specs between the two. 954190 is another step toward overall synch.
17:50 kkeithley although extras/LinuxRPM pulls the fedora packaging bits and should have an /etc/init.d/glusterfsd in the rpms that are built there
17:56 kkeithley I hope that was somewhat clearer than mud
17:56 jclift_ Pretty sure it "used" to create the /etc/init.d/glusterfsd a few weeks ago
17:57 jclift_ Meh, other things more important atm
18:17 jclift_ Interestingly, it does turn out to be possible to start NFS server manually with hacked volfile.
18:17 jclift_ glusterfs -s localhost --volfile-id gluster/nfs --debug -p /var/lib/glusterd/nfs/run/nfs.pid -S /var/run/8a8ed19a0555768dc7670fd63e686283.socket
18:18 jclift_ Having hacked the glupy translator into gluster/nfs/ dir's volfile.
18:42 lpabon joined #gluster-dev
18:44 JoeJulian joined #gluster-dev
18:50 portante awheeler_: back, but leaving for a meeting in 10 minutes
18:51 * portante checks 3.3 ...
18:53 portante awheeler_: I don't see a ufo directory in 3.3, do you? See: https://github.com/gluster/glusterfs/tree/release-3.3
18:54 portante What I see is the swift/1.4.8/ directory hierarchy which is Essex based, and so pre-swift.diff refactoring
18:54 portante which means the backport of the ring changes does not apply to that branch
18:57 portante awheeler_: I have marked the 3.3 submits with "-2", as they should not be submitted there at all.
18:57 kkeithley oh, er, maybe it's coming back to me. Didn't we do folsom in the Fedora packaging, independent of upstream glusterfs
18:57 awheeler_ portante: Interesting -- they were in the tree that I checked out, but then perhaps checkout doesn't clean everything up?
18:57 portante Note that, pre-swift.diff refactoring, the Ring files are *not* consulted for the mapping of account to gluster volume. That work was done in order to remove the swift.diff file.
18:58 awheeler_ So, where should that cherry-pick have been submitted? Because 3.3 does indeed have swift capability, even if not a UFO dir.  (that explains some weird things I was seeing)
18:59 kkeithley ndevos: still around?
18:59 awheeler_ how do I switch between different git branches and have it delete incorrect subdirs, such as when I switch between 3.4 and 3.3?
19:00 portante See commit: https://github.com/portante/glusterfs/commit/e8d95655d5e73462723799d20e59bc4f21bdf973
19:01 kkeithley ndevos: still missing something. tests 4 and 5 run and pass in the first part, but at the end the Summary Report says 4 and 5 failed and "Bad plan.  You planned 3 tests but ran 5"
19:01 kkeithley any thoughts
19:04 kkeithley portante, awheeler_: okay, looking at the %changelog in the "fedora" packaging of 3.3.1 (rpms in my fedorapeople.org yum repo): 3.3.1-3 was essex, 3.3.1-4 was essex minus most of the swift.diff, 3.3.1-5 switched to folsom.
19:07 kkeithley So I think we get awheeler_'s fix for master and release-3.4 accepted upstream. Then we backport the fix to the fedora 3.3.1-13 source to create a patch for 3.3.1-14.
19:07 kkeithley awheeler_ should abandon his release-3.3 changes
19:08 kkeithley as for how to switch between git branches and delete incorrect subdirs, that's more git black magic I'm afraid.
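For the record, the usual (destructive) approach, assuming there is no uncommitted or untracked work worth keeping, is:

    # switch branches, then delete untracked files and directories left over
    # from the previous branch; add -x to also remove ignored build artifacts
    git checkout release-3.3
    git clean -fd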
19:12 kkeithley sorry for the fire drill
19:12 * kkeithley has too many irons in the fire
19:57 jclift_ Yay OSX.  Get the return type wrong in a callback in the NFS server, and both terminal + Finder hang.
19:57 * jclift_ sighs
19:58 jclift_ Dammit I think I have to restart the desktop.  OSX is sucking today. :/
20:02 jclift_ Is there a reference page somewhere of all the Gluster API calls?
20:02 * jclift_ went looking and didn't see anything.
20:02 jclift_ Which makes it damn friggin hard to know what return codes things expect. :(
20:03 jclift_ Maybe I just didn't see something that's obvious... ?
20:50 jclift_ Well, my first "translator" seems to function ok now.
20:51 jclift_ Good thing it's super simple and only needed to do lookup()'s
20:51 jclift_ https://github.com/justinclift/glupy/blob/master/negative.py
20:51 jclift_ Yes, directly ripped from jdarcy's code and modified to suit.
20:53 johnmark sweet
21:16 awheeler_ kkeithley: Ok, I shall abandon the 3.3 branch changes.
21:17 awheeler_ Abandoned
21:25 H__ Question : I need to make space on some nearly filled bricks fast. Would this work -> read files from the nearly-full brick, erase those on the gluster volume and then reinsert them in the gluster volume. Would this effectively rebalance them over all bricks ?
21:29 jclift_ semiosis: ^^^ ?
21:30 jclift_ Actually, portante might be around and know the answer too. :)
21:30 semiosis jclift_: H__ asked in #gluster too he's getting help there.  imho it's not a -dev question :)
21:30 semiosis getting help from JoeJulian btw
21:31 jclift_ semiosis: Ahhh, good point. :)
21:31 jclift_ Yeah, not -dev
21:35 johnmark semiosis: indeed :)
21:39 jbrooks joined #gluster-dev
23:12 * jclift_ wonders what it would take to add dedup functionality to xfs itself
23:13 jclift_ More effort than just me. :/
23:21 jclift_ Hmmmm, starting the glusterd service regenerates the nfs volfile.
23:21 jclift_ Anyone know if there's a way to tell that to not happen?
23:21 jclift_ i.e. to not overwrite changes manually done?
23:21 jclift_ ... or is there a better way to inject custom translators into things?
23:28 jclift_ Ugh... could manually hack the translator into glusterfsd/src/glusterfsd.c ... but that seems pretty non-optimal too.
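One workaround along the lines of the 18:17 experiment above (a sketch; the paths are illustrative): keep the edited graph outside /var/lib/glusterd so a glusterd restart cannot clobber it, and start that graph by hand with -f:

    # copy the generated NFS volfile somewhere glusterd will not rewrite,
    # edit the extra translator into the copy, then run the graph from it
    cp /var/lib/glusterd/nfs/nfs-server.vol /root/nfs-glupy.vol
    # (edit /root/nfs-glupy.vol to add the custom xlator)
    glusterfs -f /root/nfs-glupy.vol --debug -p /var/run/nfs-glupy.pid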
