
IRC log for #gluster-dev, 2013-03-27


All times shown according to UTC.

Time Nick Message
00:06 * jclift kicks the "Upload File" page on the website
00:06 jclift Apparently "plain text" files aren't on the list of ok uploads.  (ie .patch files).
00:06 jclift Even in god mode.
00:06 * jclift kicks it again :)
00:10 JoeJulian iirc, that's extension specific. Perhaps .txt?
00:15 jclift JoeJulian: The wiki seems to do file type detection, and text/plain isn't on the list.
00:15 jclift "File extension ".patch" does not match the detected MIME type of the file (text/plain)."
00:15 jclift "Permitted file types: png, gif, jpg, jpeg, doc, xls, mpp, pdf, ppt, tiff, bmp, docx, xlsx, pptx, ps, odt, ods, odp, odg."
00:16 jclift JoeJulian: Hmmm, I might try ".odt" extension, then see if I can rename it after.
00:17 jclift No go.
00:17 jclift wikipedia--
00:18 jclift Emailed admin. Let's see what happens.
00:23 jclift Sleep time.  'nite all
00:25 lpabon joined #gluster-dev
00:35 yinyin joined #gluster-dev
00:53 yinyin joined #gluster-dev
01:09 jules_ joined #gluster-dev
01:12 bulde joined #gluster-dev
01:27 kkeithley1 a2: re: is the glusterfs.spec backport good for commit? or still requires review?
01:27 a2 kkeithley, ?
01:27 kkeithley1 It's a straight git cherry-pick from master. It was reviewed when it was merged to master.
01:28 a2 ok great
01:28 kkeithley1 I'd say it's good for commit
01:28 kkeithley1 passed Verification/regression
01:28 a2 merged
01:30 * a2 gets on webex with intuit
01:31 bala joined #gluster-dev
02:05 avati joined #gluster-dev
02:10 jules_ joined #gluster-dev
02:18 krishnan_p joined #gluster-dev
02:29 avati__ joined #gluster-dev
02:29 johnmark fyi, for call: http://download.gluster.org:9252/p/glusterd
02:33 jdarcy joined #gluster-dev
02:39 hagarth joined #gluster-dev
02:47 bulde joined #gluster-dev
02:48 krishnan_p joined #gluster-dev
02:51 bharata joined #gluster-dev
02:53 jdarcy joined #gluster-dev
02:54 bulde :O
03:10 vshankar joined #gluster-dev
03:21 aravindavk joined #gluster-dev
03:44 bala joined #gluster-dev
03:47 bulde joined #gluster-dev
04:11 rastar joined #gluster-dev
04:16 bala joined #gluster-dev
04:16 krishnan_p joined #gluster-dev
04:18 hagarth joined #gluster-dev
04:20 mohankumar joined #gluster-dev
04:25 rastar joined #gluster-dev
04:30 avati joined #gluster-dev
04:30 rastar1 joined #gluster-dev
04:34 sripathi joined #gluster-dev
04:34 sgowda joined #gluster-dev
04:35 lalatenduM joined #gluster-dev
04:37 bulde joined #gluster-dev
04:38 bala joined #gluster-dev
04:48 deepakcs joined #gluster-dev
04:51 badone can anyone give me a brief description of what a fd_lk_ctx_node is used for ?
04:56 badone joined #gluster-dev
05:12 test joined #gluster-dev
05:18 bala joined #gluster-dev
05:19 pai joined #gluster-dev
05:20 krishnan_p badone, let me try.
05:21 krishnan_p badone, the fd_lk_ctx_node is an element in the list of locks 'clients' (fuse and nfs) maintain.
05:22 krishnan_p This list is maintained so that on a reconnect with the server, the clients would reclaim the posix locks from the server(s) that went down and came back.
05:26 avati__ joined #gluster-dev
05:41 raghu joined #gluster-dev
05:55 yinyin joined #gluster-dev
05:57 bulde joined #gluster-dev
06:02 sripathi1 joined #gluster-dev
06:07 bala joined #gluster-dev
06:23 krishnan_p joined #gluster-dev
06:30 rastar joined #gluster-dev
06:46 bulde raghu: around?
06:47 bulde raghu: regarding http://review.gluster.org/4702, for now make the argp option parsing valid only on GNU/Linux
06:48 bulde that way we will not have disasters on other operating systems till we have support for them
07:09 krishnan_p joined #gluster-dev
07:15 sripathi joined #gluster-dev
07:35 sripathi joined #gluster-dev
07:35 badone krishnan_p: on a per file basis?
07:35 badone krishnan_p: and thanks :)
08:25 krishnan_p badone, it is per fd. On every successful lk fop on an fd, the 'client' adds/removes locks to/from the list in fd_lk_ctx
08:25 badone krishnan_p: ok, so could add up to quite a few then
08:26 badone krishnan_p: thanks again and good to cross paths with you again :)
08:29 krishnan_p The no. of elements in the list in fd_lk_ctx (of an fd), is proportional to no. of locks taken on that fd.
08:30 krishnan_p It is proportional and not necessarily equal, since we merge locks from the same owner that are 'contiguous', in terms of their extents.
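
To make the bookkeeping concrete, here is a toy Python model of the merge behavior krishnan_p describes. The real structures (fd_lk_ctx, fd_lk_ctx_node) are C code in libglusterfs; everything below is an illustrative sketch, and the class name and tuple layout are invented for the example:

    # Toy model: a per-fd lock list that merges extents from the same
    # owner when they touch or overlap, so the list length is
    # proportional to (not equal to) the number of locks taken.
    class LockList:
        def __init__(self):
            self.locks = []  # (owner, start, end) triples

        def add(self, owner, start, end):
            merged = (owner, start, end)
            kept = []
            for o, s, e in self.locks:
                # same owner and contiguous/overlapping extents: merge
                if o == owner and s <= merged[2] + 1 and merged[1] <= e + 1:
                    merged = (owner, min(s, merged[1]), max(e, merged[2]))
                else:
                    kept.append((o, s, e))
            kept.append(merged)
            self.locks = kept

    ll = LockList()
    ll.add("owner-1", 0, 99)
    ll.add("owner-1", 100, 199)  # contiguous with the first: merged
    ll.add("owner-2", 0, 99)     # different owner: kept separate
    assert sorted(ll.locks) == [("owner-1", 0, 199), ("owner-2", 0, 99)]
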
08:31 krishnan_p badone, good to hear again from you :-)
08:32 badone krishnan_p: I'm interested in how much memory we could leak if we are leaking fd_lk_ctx_nodes
08:33 badone krishnan_p: like this https://bugzilla.redhat.com/show_bug.cgi?id=921770
08:33 glusterbot Bug 921770: is not accessible.
08:38 bulde badone: it depends on number of calls made to the mount point
08:39 badone bulde: sure
08:39 bulde for example in 5mins, you could leak 280MB, if you are running the test program only (corrected bug: https://bugzilla.redhat.com/show_bug.cgi?id=834465)
08:39 glusterbot Bug 834465: urgent, medium, 3.4.0, rabhat, ON_QA , Memory leak in fuse-bridge
08:40 badone bulde: thank you for that
08:41 badone bulde: yes, I have seen evidence of it and it seems considerable
08:43 badone bulde: given the fd_lk_ctx_node contains at least one other struct it's not surprising. Even a small leak at high repetition is going to be substantial
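
Back-of-envelope, using bulde's figure above (280 MB in 5 minutes) and an assumed per-node footprint of around 100 bytes; the actual size depends on the struct layout, so treat the node count as illustrative only:

    # Rough leak-rate arithmetic for the numbers quoted above.
    leaked_bytes = 280 * 1024 * 1024   # 280 MB over 5 minutes
    seconds = 5 * 60
    node_size = 100                    # assumed bytes per leaked node

    print(leaked_bytes / seconds / 1024)       # ~956 KB leaked per second
    print(leaked_bytes / node_size / seconds)  # ~9800 nodes per second
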
08:45 badone bulde, krishnan_p: thanks guys :)
09:20 rastar1 joined #gluster-dev
09:27 rastar joined #gluster-dev
09:48 maxiz joined #gluster-dev
09:49 hagarth joined #gluster-dev
09:55 bala joined #gluster-dev
10:20 test_ joined #gluster-dev
10:27 bala joined #gluster-dev
10:36 pai joined #gluster-dev
10:50 krishnan_p joined #gluster-dev
10:51 maxiz joined #gluster-dev
11:01 rastar joined #gluster-dev
11:54 krishnan_p joined #gluster-dev
12:00 rastar1 joined #gluster-dev
12:10 jclift joined #gluster-dev
12:25 maxiz joined #gluster-dev
12:47 hagarth joined #gluster-dev
12:54 dorkyspice joined #gluster-dev
12:55 lalatenduM joined #gluster-dev
12:57 awheeler_ jclift: Thanks for putting that in the wiki, it looks good.  I have made some edits, and created a github repo to hold the patch, and other Gluster related scripts I've written.  I'm not familiar with the openstack-db command.
13:01 mohankumar joined #gluster-dev
13:03 jclift awheeler_: Excellent. :)
13:04 awheeler_ jclift: Does the mysql database created by that command get used by keystone to hold its auth stuff, or?
13:04 jclift awheeler_: I'll try and draw some attention to it appropriately.  Prob email gluster-users and gluster-dev for feedback. :)
13:05 jclift awheeler_: I'm not really sure.  I haven't looked under the covers with the OpenStack stuff much yet.  More just followed through instruction guides to get to point X and then worked out stuff from there for whatever I'm doing.
13:05 jclift So, knowledge isn't much past surface level yet.
13:05 awheeler_ got
13:05 awheeler_ s/got/got it/
13:21 hagarth joined #gluster-dev
13:30 bulde joined #gluster-dev
13:56 jclift portante kkeithley: You guys need to figure out which swift versions we're supporting in 3.4. :D
13:56 portante Yes
13:57 portante 1.7.6 at least
13:57 jclift portante: Just to be super clear about it.  The RPM building process for 3.4 automatically downloads and pulls in 1.7.4, for Folsom.
13:58 kkeithley hey, it's my job to have the keen eye for the obvious. ;-)
13:58 jclift So, it sounds like we'd better figure out if we want to pull down 1.7.6 automatically instead.
13:58 jclift kkeithley: Sorry, just practising. :D
13:58 kkeithley yes, we were just discussing that in #rhs
13:58 jclift Aha.  Another channel to know about.
13:59 kkeithley upstream/downstream, but yeah
14:00 kkeithley I'll abandon those patches. As our colleagues in Bangalore might say, "I'll do the needful thing"
14:00 jclift Heh
14:01 * portante smiles quietly to myself
14:03 kkeithley So, for the record, it's the case that our 3.4 (and HEAD) are (have been) ready for the switch to 1.7.6 for some time, so we should do that. Speak now or forever hold your peace.
14:05 kkeithley And in doing that it also means that for f19 and f20 we may be able to use the openstack-swift rpms instead of rolling our own.
14:05 portante I'll second that motion
14:06 kkeithley s/may be able/should be able/s
14:08 awheeler_ Sounds good to me as well -- does that mean, that without applying my patch, pulling in 1.7.6 should just work?
14:08 awheeler_ Or, I guess as you just said, we might be able to use their 1.7.6 rpm.
14:09 wushudoin joined #gluster-dev
14:14 kkeithley pulling in 1.7.6 and not applying the patches we have for 1.7.4 etc.
14:15 kkeithley Or for f19 and f20/rawhide not pulling in 1.7.6 and using the openstack-swift rpms instead
14:16 puebele joined #gluster-dev
14:21 kkeithley awheeler: to be clear, with 1.7.6 your patch isn't needed. Or it shouldn't be. Until I try it for myself I can't confirm or deny anything.
14:33 awheeler_ Makes sense.  So, will you be re-rolling Alpha2 then?
14:34 kkeithley I'm inclined to say no. I think we just chalk this up as a bug against alpha and alpha2 and say it'll be fixed in the beta, which will be out soon.
14:35 kkeithley because there's more involved than just pulling in 1.7.6
14:54 puebele1 joined #gluster-dev
14:56 mohankumar joined #gluster-dev
15:12 jclift|afk And it would confuse people.
15:12 kkeithley agreed
15:12 awheeler_ Sounds good.  On a different topic, how do I run the ufo unittests.sh?
15:17 kkeithley cd ufo && ./unittests.sh I believe
15:17 awheeler_ It wants an /etc/swift/test.conf among other things
15:17 kkeithley hmmm. portante ^^^
15:17 awheeler_ are there docs?
15:18 jclift|relocatin awheeler_: Does this help? http://docs.openstack.org/developer/swift/development_saio.html
15:18 jclift awheeler_: Asking because it mentions the /etc/swift/test.conf file in it.
15:19 awheeler_ right, seems like it.  We have no such test.conf in the ufo dir.  So, these are really swift tests, not ufo tests then?
15:19 jclift If so, it was the first match for "/etc/swift/test.conf" in Google.  Just saying. :D
15:19 awheeler_ :)
15:19 jclift Try it out I guess. :D
15:21 awheeler_ I'll do that, right after I figure out why my replace brick is failing.
15:21 awheeler_ using 3.4 Alpha2 with the patch.
15:23 johnmark kkeithley: +1 to 1.7.6
15:24 kkeithley yeah, that was always the plan. The Gods have decreed that it will happen sooner
15:24 puebele1 joined #gluster-dev
15:24 johnmark kkeithley: praise the gawds!
15:25 rastar joined #gluster-dev
15:25 jclift Cthulhu grants you a wish?
15:25 jclift Argh.  I need to head into office.
15:26 kkeithley Winter is coming
15:27 awheeler_ lol
15:28 awheeler_ Seeing this error on a failed replace brick: 0-system-replace-brick: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
15:28 awheeler_ Any thoughts?
15:30 jclift|travel iptables still enabled?
15:31 awheeler_ yes, and disabling it didn't immediately help.  I've rebooted, so it's back on.
15:31 jclift|travel Dammit, I really have to head out.
15:43 puebele1 joined #gluster-dev
15:49 rastar1 joined #gluster-dev
15:53 bala joined #gluster-dev
16:05 hagarth joined #gluster-dev
16:29 awheeler_ Looks like the issue is new to 3.4, don't have it with 3.3.1-11.
16:29 awheeler_ 3.4 Alpha2 that is.
18:03 lalatenduM joined #gluster-dev
18:10 awheeler_ Based on the lack of a test.conf file in the glusterfs git repo, I'm guessing there is no automatic testing being done for the ufo code as there is for the glusterfs code.  What can we do to make that happen?
18:12 kkeithley portante: ^^^
18:13 kkeithley there's no test.conf in the swift-1.7.x.tar.gz file either
18:13 portante awheeler_: test.conf file?
18:14 portante do you mean unit tests?
18:14 awheeler_ yes
18:14 awheeler_ the test.conf is in the swift git repo.
18:17 portante but the glusterfs rpms are not swift, they are gluster ufo code, so why can't we run the unit tests?
18:17 portante Has somebody tried and it failed?
18:50 lpabon joined #gluster-dev
18:57 awheeler_ I guess what I'm suggesting is that they could be automatically run, just like tests are for glusterfs.
18:58 awheeler_ Not all of the pieces needed to run those tests are in the glusterfs ufo repo, and there's no documentation that I could find on how to run them.
19:01 portante awheeler_: the ufo tests are run by ufo/unittests.sh from the root of the glusterfs tree, and they require openstack swift RPMs for 1.7.6, which should already be specified in the spec file.
19:02 awheeler_ The rpm build scripts in the git repo pull in 1.7.4, not 1.7.6.  Are you saying that 1.7.6 places test.conf in /etc/swift?
19:02 portante you don't need test.conf in /etc/swift to run unit tests
19:02 portante and I believe Kaleb is fixing things so that the RPMs pull in 1.7.4
19:02 portante sorry
19:02 portante 1.7.6
19:03 awheeler_ Ok, well, when I run unittests.sh, I get an error which traces back to a missing conf file, which needs a unit_test section, and the default is to look for that in /etc/swift/test.conf
19:04 portante the unit tests will warn about the missing file, but that does not prevent them from running
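
The pattern portante describes (warn about a missing config file, then fall back to defaults so the run can proceed) looks roughly like the sketch below. This is an illustration of the idea, not Swift's actual test bootstrap code; the default values and helper name are invented, and it assumes Python 3's configparser naming:

    # Sketch: read the [unit_test] section of /etc/swift/test.conf if
    # the file exists; otherwise warn and run with built-in defaults.
    import os
    from configparser import ConfigParser

    def get_test_config(path="/etc/swift/test.conf"):
        defaults = {"auth_host": "127.0.0.1", "auth_port": "8080"}
        if not os.path.exists(path):
            print("Unable to read test config %s - file not found" % path)
            return defaults
        parser = ConfigParser()
        parser.read(path)
        conf = dict(defaults)
        if parser.has_section("unit_test"):
            conf.update(dict(parser.items("unit_test")))
        return conf
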
19:04 awheeler_ So, if I run unittests.sh from the root, rather than from ufo, it might work better?
19:04 portante can you share the errors you are seeing?
19:04 portante I don't think it matters, as that script just cds the proper location in the tree to run the testes
19:04 portante tests
19:04 awheeler_ Sure, it'll take me a few to get it set up.
19:05 portante k thx
19:05 kkeithley Someone's back from PyCon ;-)
19:05 awheeler_ And, specifically, I am following this doc: http://www.gluster.org/community/documentation/index.php/CompilingRPMS
19:06 portante kkeithley: ;)
19:10 awheeler_ portante: Here are the first two lines after I install python-nose, and run unit-test:
19:10 awheeler_ that is ufo/unittests.sh
19:10 awheeler_ # ./unittests.sh
19:10 awheeler_ nose.plugins.cover: ERROR: Coverage not available: unable to import coverage module
19:10 awheeler_ Unable to read test config /etc/swift/test.conf - file not found
19:11 kkeithley dum dedum dedum, I'll be so glad when I'm not packaging swift as part of gluster-ufo
19:11 portante those errors are not preventing the test run
19:11 awheeler_ ..................................ERROR:root:Unlink failed on /tmp/tmpWNbriD/vol0/bar/z err: None
19:12 awheeler_ I'll make a pastebin
19:12 awheeler_ or gist
19:12 portante are you running this as root?
19:13 portante does the output report a final number of tests run?
19:13 awheeler_ http://pastebin.com/m8WaMigN
19:13 awheeler_ 127 tests
19:13 awheeler_ that paste is a subset.
19:13 * awheeler_ back in a few
19:19 awheeler_ back
19:20 awheeler_ So, what is being assumed here?   I have no volumes.
19:21 portante but what is shown at the end of the run, does it say success or failure?
19:23 awheeler_ Here's the full output: http://pastebin.com/d55bEzyD
19:23 portante yes, the tests ran fine. :)
19:23 portante No errors
19:24 awheeler_ Umm, I have a hard time accepting that with all of the stack traces.
19:24 portante what you are seeing in the output is the exceptions that happen during the test run, which the tests are actually checking for
19:24 portante unfortunately, the nose test framework does not always hide them for some reason
19:24 awheeler_ Ok, perhaps this isn't the type of testing I am looking for then.
19:24 portante what testing are you looking for?
19:25 portante these are unit tests of the individual modules
19:25 awheeler_ I was hoping to find a framework that would actually test swift, and specifically accessing accounts and such.
19:25 portante that is the swift functional tests which are also not run as part of the installation
19:25 awheeler_ So, that the gluster swift integration works.  Writing test cases that break when bugs aren't fixed.
19:26 portante that is why we have the existing swift functional tests
19:26 portante no?
19:26 awheeler_ So, for the issue we are having right now with 1.7.4 being bundled instead of 1.7.6, how do I write a test case that fails to reflect that it is in fact broken?
19:26 portante Just run those against a gluster installation
19:26 awheeler_ Where can I find those tests?
19:26 portante just run the existing swift functional tests
19:27 awheeler_ nm, I see them in the swift repo.  The problem I alluded to is a gluster git issue, and we should be able to write a test that will fail in the case we are in now.
19:28 portante one could write a unit test that will do that today, it just is not written yet
19:32 awheeler_ So, it seems like it would be valuable for code review if all of the swift functional tests were also run.
19:32 awheeler_ There's a bug  I can
19:33 awheeler_ can't submit a test for as a result.
19:33 awheeler_ bug 924792
19:33 glusterbot Bug http://goo.gl/Smv7Z medium, unspecified, ---, junaid, ASSIGNED , Gluster-swift does not allow operations on multiple volumes concurrently.
19:33 portante yes, that would be great to get fixed
19:34 awheeler_ And the gluster build system builds and tests RPMs, I think.  Would love to leverage that capability for the UFO stuff.
19:35 awheeler_ It's a funky error, and requires two volumes to be created for two different accounts to be seen.
19:35 portante yes, we already run the unit tests in the gluster build system, we just need to write a test to show this specific problem as a unit tests
19:35 portante test
19:36 portante wait, how did we get on to this bug from the unit tests errors?
19:36 awheeler_ I didn't, it's unrelated.
19:36 portante oh
19:36 portante sorry
19:36 awheeler_ I just wanted to write a test for it that would fail, and couldn't figure out how to get the tests to run (so I thought) to find out if I could write one.
19:37 portante I think we can do that in the unit tests, but it will require mocking out some low-level routines to make that happen
19:38 awheeler_ The bug is somewhere between the ring generation, the proxy server, and the container-server
19:38 awheeler_ Can the unittest test that kind of scenario?
19:38 awheeler_ Seems like it would be possible, actually.  Hmmm
19:39 awheeler_ Somehow the drive is defaulting to the first one in the ring, rather than the one associated with the account.
19:39 awheeler_ But only for container-server
19:40 awheeler_ Or rather, container-server receives a request for the wrong drive, from somewhere.   Not sure who talks to container-server.
19:42 awheeler_ Hmm, once I install nose-cov, all looks well on those tests.
19:42 awheeler_ No ugly mess, nice and pretty.
19:42 portante cool
19:43 awheeler_ even without test.conf.  :)
19:46 awheeler_ Thank you portante
19:48 awheeler_ So, I guess the challenge is testing against the swift stuff as it integrates with ufo code.
20:04 kkeithley basically yes. But the swift unit tests run against the actual swift code. Since UFO is "the Swift API" it seems like it should be possible to refactor the swift unit tests to run against UFO.
20:04 awheeler_ Agreed.  So, what service calls container-server?
20:04 kkeithley proxy-server
20:05 kkeithley IIRC
20:05 portante also the object-server at times
20:05 awheeler_ Well, but that's just the intermediary, right?  Or does it actually make decisions?
20:06 awheeler_ If you look at the three lines of output in my bug report, the first two are for the correct drive, but the last is for the wrong drive, and it always seems to be the container server getting the wrong drive.
20:06 awheeler_ So, assuming that's from the container server, whoever is talking to it is the one that provided the wrong info.
20:08 portante it is because gluster did not have a way to map account to device as stored in the rings
20:10 awheeler_ Not quite sure I followed you there. Can you restate that?
20:14 portante sec, on a call
20:29 portante so the bug was that gluster used the ring structure to add one or more devices, but did not have a way to later take an account and match it to the proper device in the rings. It was just taking the first, I believe, or something like that.
20:50 awheeler_ Yes, that's what the bug is about.  There is a patch that was submitted for this bug that partially corrected this, but I found a situation where it doesn't work.
20:51 awheeler_ Which I'd like to write a test for.
20:52 awheeler_ The patch included a unit test, which isn't thorough enough to catch this situation.
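
The regression test awheeler_ is after would look something like the sketch below: put two devices in the ring (one per gluster volume) and assert that the second account resolves to its own device instead of the first one in the ring. FakeRing and its get_nodes() are hypothetical stand-ins for the UFO ring wrapper, not the actual gluster-swift code:

    # Hypothetical unit test for bug 924792: with two volumes in the
    # ring, each account must map to its own device, not devices[0].
    import unittest

    class FakeRing(object):
        """Stand-in ring where account name == volume name == device
        name, mirroring UFO's one-volume-per-account layout."""
        def __init__(self, devices):
            self.devices = devices

        def get_nodes(self, account):
            # Correct behavior: pick the device matching the account.
            # The buggy code effectively returned self.devices[0]
            # regardless of which account was requested.
            matches = [d for d in self.devices if d["device"] == account]
            return matches if matches else [self.devices[0]]

    class AccountToDeviceTest(unittest.TestCase):
        def test_second_volume_gets_its_own_device(self):
            ring = FakeRing([{"id": 0, "device": "vol0"},
                             {"id": 1, "device": "vol1"}])
            # The case the patch's own unit test missed: an account
            # other than the first volume in the ring.
            self.assertEqual(ring.get_nodes("vol1")[0]["device"], "vol1")

    if __name__ == "__main__":
        unittest.main()
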
22:38 awheeler_ Figured out the multi-volume problem
23:21 awheeler_ posted a patch to bug 924792
23:21 glusterbot Bug http://goo.gl/Smv7Z medium, unspecified, ---, junaid, ASSIGNED , Gluster-swift does not allow operations on multiple volumes concurrently.
