
IRC log for #gluster-dev, 2013-07-02


All times shown according to UTC.

Time Nick Message
01:12 bala joined #gluster-dev
02:13 semiosis is it normal/expected for glusterfs to fail to compile when configured with --disable-shared?
02:19 jclift_ kkeithley: ^^
02:19 jclift_ semiosis: I don't think so.
02:23 semiosis jclift_: thanks i'll ask again tmrw when kkeithley & co are around and file a bug if appropriate
02:23 jclift_ np
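For context, a minimal sketch of the build semiosis is describing, assuming a GlusterFS source checkout; the comment about why it fails is an assumption drawn from the discussion, not a confirmed diagnosis:

    ./autogen.sh
    ./configure --disable-shared    # otherwise default options
    make                            # reported to fail; GlusterFS builds its xlators as shared
                                    # objects, so disabling shared libraries plausibly breaks
                                    # the link step (assumption)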
02:44 vshankar joined #gluster-dev
02:59 shubhendu joined #gluster-dev
03:06 bharata-rao joined #gluster-dev
03:45 mohankumar joined #gluster-dev
03:48 bharata-rao joined #gluster-dev
04:51 bala joined #gluster-dev
04:52 krishnan_p joined #gluster-dev
04:59 bala joined #gluster-dev
05:01 hagarth joined #gluster-dev
05:06 raghu joined #gluster-dev
05:19 shubhendu joined #gluster-dev
05:22 bharata-rao joined #gluster-dev
05:46 bala joined #gluster-dev
05:48 shubhendu joined #gluster-dev
05:49 rastar joined #gluster-dev
05:53 hagarth joined #gluster-dev
05:58 rgustafs joined #gluster-dev
06:01 JoeJulian joined #gluster-dev
06:44 shubhendu joined #gluster-dev
06:54 deepakcs joined #gluster-dev
07:09 raghu joined #gluster-dev
07:13 bharata-rao joined #gluster-dev
07:16 kanagaraj joined #gluster-dev
07:45 rgustafs joined #gluster-dev
07:51 hagarth joined #gluster-dev
08:21 bulde joined #gluster-dev
08:50 bharata-rao joined #gluster-dev
09:52 bharata-rao joined #gluster-dev
10:28 hagarth joined #gluster-dev
10:32 edward1 joined #gluster-dev
10:52 kkeithley1 joined #gluster-dev
11:24 hagarth joined #gluster-dev
11:54 bulde joined #gluster-dev
12:07 krishnan_p joined #gluster-dev
12:09 hagarth joined #gluster-dev
12:28 bulde joined #gluster-dev
12:45 hagarth joined #gluster-dev
12:57 deepakcs joined #gluster-dev
13:03 edward1 joined #gluster-dev
13:16 awheeler_ joined #gluster-dev
13:23 blues-man joined #gluster-dev
13:24 awheeler_ Am I correct there are no RPMs (Alpha/Beta/QA) yet?
13:24 awheeler_ for g4s that is
13:25 semiosis awheeler_: i just had glusterbot send you a link in #gluster to the qa releases site, where you can find rpms
13:25 awheeler_ Thanks semiosis:  That has gluster-for-swift RPMs?
13:26 semiosis idk
13:27 awheeler_ Ah, that's my specific question.  Meant to say that in the original question and editing removed it, lol.
13:27 semiosis oh ok, doesnt look like there are swift rpms
13:29 awheeler_ right.  I know it's split out now: http://www.gluster.org/2013/06/glusterfs-3-4-and-swift-where-are-all-the-pieces.
13:30 awheeler_ And kkeithley doesn't mention if the new g4s will be versioned based upon swift rather than glusterfs, which is what I had thought would be the case.
13:31 kkeithley_ Correct. On RHEL6 and F19 use glusterfs-* + openstack-swift-* + glusterfs-ufo
13:32 * semiosis afk
13:32 kkeithley_ On F18 and earlier use glusterfs-* + glusterfs-swift-* + glusterfs-ufo-*
13:32 awheeler_ ah, so it will be called glusterfs-ufo?  Not g4s?
13:33 kkeithley_ At some point there'll be $something-g4s, yes, but for now it's still glusterfs-ufo
13:33 awheeler_ ok, cool.  and no RPMs yet?
13:34 kkeithley_ There are 3.4.0beta4 rpms in http://download.gluster.org/pub/gluster/glusterfs/qa-releases/
13:35 kkeithley_ All based on 1.8.0 grizzly FWIW
13:35 awheeler_ Right, my bad, it was the naming that threw me.  So versioning with swift will happen later?
13:35 kkeithley_ Probably at the same time we switch to g4s naming
13:37 awheeler_ Got it.  Cool.  So do you know if an upgrade from a 3.3.1-15 cluster will support an online upgrade?  So far 3.4 volumes don't seem compatible with 3.3 ones.
13:38 kkeithley_ I use the same volumes in my testing for 3.3 and 3.4. I'm pretty sure the on-disk format between 3.3 and 3.4 hasn't changed.
13:38 kkeithley_ What are you seeing that doesn't work?
13:46 lpabon joined #gluster-dev
13:50 jclift joined #gluster-dev
13:51 hagarth joined #gluster-dev
14:06 wushudoin joined #gluster-dev
14:13 puebele joined #gluster-dev
14:16 jclift hagarth johnmark: Any objection to adding an iozone run to the GlusterFS perf testing framework?
14:17 jclift hagarth: Not sure how to process its results into a PASS/FAIL thing atm.  Probably more useful as a data logging thing (especially if we can upload to central repo).
14:17 jclift s/hagarth:/hagarth johnmark:/
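An illustrative iozone run of the sort being proposed, against an assumed FUSE mount point; the flags are standard iozone options, not taken from the perf framework itself:

    iozone -a -g 2G -i 0 -i 1 \
           -f /mnt/glusterfs/iozone.tmp \
           -Rb /var/tmp/iozone-results.xls   # -R/-b write a spreadsheet-style report that could be
                                             # uploaded to a central results repo for later comparison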
14:27 Technicool joined #gluster-dev
14:39 awheeler_ kkeithley: It was a few months ago, but there was a new default value in the 3.4 volumes that made the 3.3 volumes not work.  Read behind or something.
14:41 semiosis so i tried compiling 3.4beta4 configured with --disable-shared and the compile failed.  should I file a bug about that?
14:47 kkeithley_ awheeler: I guess I don't push hard enough to trip over that or something.
14:56 mohankumar joined #gluster-dev
14:59 awheeler_ kkeithley:  I'll do some testing.  So, to your knowledge, 3.3->3.4 should be no downtime?
15:03 semiosis i got an email from someone using my ubuntu packages and he said it was a smooth upgrade
15:03 semiosis all he did was install the newer package & reboot on each server/client
15:06 jclift semiosis: reboot on each server client... does that mean the gluster was operating ok with some of the peers as 3.3 and others at 3.4 at the same time (whilst upgrading & rebooting hosts)
15:06 jclift s/server client/server/
15:06 semiosis not sure
15:06 jclift k
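A rough sketch of the upgrade path described above, on Ubuntu; the PPA name is an assumption and the package set depends on each node's role:

    # on each server/client, one machine at a time (PPA id is a guess, not confirmed in the log)
    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    sudo apt-get update
    sudo apt-get install glusterfs-server    # glusterfs-client on client-only machines
    sudo reboot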
15:08 awheeler_ Seems like that could not work since the ports changed from 3.3 to 3.4, right?
15:10 semiosis hmm depends on whether or not client will ask glusterd for the port number when trying to reconnect to the brick
15:11 jclift semiosis: If it doesn't, then maybe a useful patch
15:11 semiosis +1
15:11 awheeler_ But the nodes talk to each other on the new ports, don't they?
15:11 semiosis glusterd is still 24007 afaik, for client-server & server-server comms
15:12 awheeler_ so the per-brick port is only used by clients, not by the intra-node communication?
15:12 semiosis not sure with shd & stuff like that
15:13 awheeler_ gluster volume status from a 3.3 box would not see the bricks as live for 3.4 nodes -- right?
15:13 semiosis no idea
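For reference, a quick way to see which ports are actually in play; the volume name is hypothetical and the port ranges in the comments are the commonly cited defaults, not verified here:

    gluster volume status myvol        # shows the port each brick is listening on
    ss -tlnp | grep glusterfsd         # confirm locally what the brick processes are bound to
    # glusterd management traffic stays on TCP 24007 in both releases;
    # 3.3 bricks typically start at 24009+, while 3.4 moved brick ports into the 49152+ range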
15:14 kkeithley_ awheeler: yes, AFAIK, 3.3->3.4 should be no downtime
15:18 * kkeithley_ tries to find in the scrollback where he thought we were discussing the on-disk format of the bricks between 3.3 and 3.4 being compatible.
15:20 awheeler_ kkeithley_: Right, both are questions for me.  I'll do some testing today of the latter and see if it's still an issue for me.
15:21 blues-man joined #gluster-dev
15:30 bala joined #gluster-dev
15:50 _Bryan_ joined #gluster-dev
15:54 mohankumar avati: a2: review for my BD v2 patches please
16:05 edward1 joined #gluster-dev
16:40 jclift Is anyone around that know the gluster testing framework at all?
16:40 jclift I can figure out what a "TEST xxx" does.  But what does an "EXPECT xxx" do?
16:41 * jclift is looking for docs, and no finding anything
16:41 jclift s/no/not/
16:42 hagarth joined #gluster-dev
16:42 jclift Aha, looks like there is a website for this TAP stuff
16:44 jclift No, I take that back.  The website has no useful information either.
16:44 jclift kkeithley: Any idea where to find useful info about this TAP thing we're using?
16:55 _Bryan_ joined #gluster-dev
17:06 jclift Ahhh, now I get it.
17:06 jclift We're not actually using any of the existing TAP frameworks.
17:06 jclift We've written our own using shell scripting, and we're _calling_ it a TAP framework since it looks somewhat similar to other TAP stuff.
17:06 jclift :(
17:07 jclift ... and we haven't written any doc. :(
17:07 jclift *grumble*
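For anyone hitting the same wall, a bare-bones sketch of what the home-grown .t tests look like; volume and brick names are hypothetical, and the semantics of TEST and EXPECT (TEST passes if the command exits 0, EXPECT compares its first argument against the output of the rest of the line) are inferred from include.rc rather than documented anywhere:

    #!/bin/bash
    . $(dirname $0)/../include.rc      # provides TEST, EXPECT, cleanup, $CLI, $V0, $H0, $B0 (assumed)

    cleanup;

    TEST glusterd                                   # passes if glusterd starts (exit status 0)
    TEST $CLI volume create $V0 $H0:$B0/${V0}1      # hypothetical single-brick volume
    EXPECT 'Created' volinfo_field $V0 'Status'     # volinfo_field is one of the framework's helpers
    TEST $CLI volume start $V0
    EXPECT 'Started' volinfo_field $V0 'Status'

    cleanup;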
17:26 awheeler_ kkeithley_: Just upgraded from 3.3 to 3.4, and didn't see the filesystem problems I saw before.
17:29 jclift Anyone know how to skip tests in our framework?
17:29 jclift It doesn't seem to be accepting "# SKIP", as per the specs
17:39 ndevos jclift: there is a SKIP_TESTS that will skip all remaining tests
17:39 jclift ndevos: Yeah, saw that.
17:39 ndevos well, all remaining tests in your test case
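Going by ndevos' description, usage would look something like the following; the guard condition is hypothetical and the exact semantics of SKIP_TESTS are an assumption:

    if ! command -v ibv_devinfo >/dev/null 2>&1; then   # e.g. no RDMA stack on the box (hypothetical check)
        SKIP_TESTS      # skip whatever tests remain in this .t file
        exit 0
    fi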
17:40 jclift I'm just having trouble with basic understanding of how our tests are written, as it seems to be something we've developed ourselves adhoc and not documented anywhere. :(
17:40 ndevos jclift: I think that is correct :-/
17:41 jclift ... and it's written in .sh, so I can't even put a debugging on the damn stuff and trace it
17:41 jclift <insert many swear words here>
17:41 jclift :)
17:41 jclift s/debugging/debugger/
17:42 ndevos that is indeed an issue I have been through too...
17:43 ndevos best is to build the test-environment on a local machine/vm and run the scripts there, but that is not always trivial
17:43 jclift Heh.  If test environments could simulate RDMA, that might work. ;)
17:45 ndevos oh, but I am not sure if the build machine has ib hardware...
17:47 jclift ndevos: The first thing I'm doing is adding the _possibility_ for RDMA testing to occurr
17:47 jclift occur
17:47 jclift So, adding a "PROTOCOL" line to include.rc
17:47 jclift Then updating the various "volume create foo" lines to include  "protocol $PROTOCOL"
17:48 jclift Having PROTOCOL = tcp as the default (in theory) should let existing things keep on working as they've already done
17:48 jclift But, anyone wanting to test RDMA stuff should then be able to by setting it to "rdma"
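A sketch of the change jclift describes; note that the gluster CLI keyword for picking tcp vs rdma at volume-create time is "transport", and the plumbing below is an assumption about how include.rc would carry the setting:

    # tests/include.rc: default to tcp so existing runs behave exactly as before
    PROTOCOL=${PROTOCOL:-tcp}

    # an individual .t test's create line would then become something like
    TEST $CLI volume create $V0 transport $PROTOCOL $H0:$B0/${V0}1

    # and an RDMA run would simply be
    PROTOCOL=rdma prove -v tests/basic/mount.t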
17:49 jclift Unfortunately, although that sounds good in theory, it's not working in practice with mount.t, pump.t, and bd.t.
17:49 jclift So, I'm just trying to figure out wtf. :D
17:50 jclift Once this is fixed, that should make these "RDMA Test Days" better
17:50 * jclift thinks we should add multi-machine support to this test framework too
17:50 ndevos why would that not work? you just updated all tests to include "protocol $PROTOCOL" and you're done?
17:52 jclift Yeah, I thought it would be that easy too.
17:53 jclift But, it's not working according to that theory, so I'm now trying to figure out what's wrong.  It doesn't *seem* to be anything dumb on my part (eg typo)
17:57 ndevos hmm, I dont really see why there would be an issue, include.rc provides other variables too
17:57 ndevos got a patch?
18:01 jclift To show the brokenness?  Sure, give me a sec
18:01 * jclift needs to temporarily remove a shitload of "wtf is this doing?"-type cruft
18:03 ndevos oh, anything raw is fine
18:09 jclift ndevos: Hmmm, it looks like the problem might be simpler than this.  I just tried the untouched tree (eg without my PROTOCOL addition) and it's barfing in the same spots.
18:10 jclift Probably just means my test system isn't setup the way it needs to be atm.
18:10 * jclift fixes this first
18:10 ndevos ah, okay, let me know (tomorrow) if you have any other issues with the tests, I'm happy to look at them
18:11 * jclift wonders if a patch to re-implement the test framework in Python would be accepted
18:13 ndevos well, you will also need to rewrite all tests...
18:18 jclift So, it seems like the Python world uses a different framework based upon unittest, so there's not been any real adoption of TAP
18:18 * jclift really isn't up for getting into this too much more
18:19 jclift I think I get why we've just been extending our own framework now
18:19 jclift :)
18:40 kkeithley_ awheeler_: Re: Just upgraded from 3.3 to 3.4, and didn't see the filesystem problems I saw before.    ---- excellent news
18:42 awheeler_ kkeithley_: Agreed.  I did have to manually uninstall glusterfs-swift in order to do the upgrade though.  I had hoped that something would deprecate that so it wouldn't be necessary.
18:44 kkeithley_ I don't have control over the openstack-swift packaging. "We" should open a bugzilla asking them to add an Obsoletes: glusterfs-swift*.
18:44 awheeler_ kkeithley_:  It seems, since glusterfs-ufo is depending upon openstack-swift, it could also deprecate glusterfs-swift.
18:45 kkeithley_ yeah, hmm, now that you mention it I thought I did that. Let me check.
18:47 kkeithley_ Nope, just my imagination.... running away with me.
18:48 awheeler_ That would be a pretty clean solution IMHO.
18:49 kkeithley_ yup, albeit with more %if fedora >= 19 && rhel >= 6 spaghetti
18:49 kkeithley_ yum
18:50 kkeithley_ but I already have that so...
18:51 awheeler_ well, no need to do that really is there?  It's the same package being deprecated across all distros yes?  Maybe a different one being added though?
18:55 kkeithley_ We still need glusterfs-swift-* for Fedora 18 and earlier.   If we use the RDO openstack-swift packages that are available for F18 (they're really f19 packages in an f18 yum repo) then it would be only for 17, and that's going to EOL before too long, so———
19:04 kkeithley_ It's not a big deal, I've already got %if spaghetti for a Requires: openstack-swift, I'll just add a matching Obsoletes: glusterfs-swift.
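The spec-file change being discussed would look roughly like this; the conditional and the version bound are assumptions, not the real glusterfs.spec:

    %if ( 0%{?fedora} >= 19 || 0%{?rhel} >= 6 )
    Requires:  openstack-swift
    # one Obsoletes per retired glusterfs-swift subpackage, e.g.:
    Obsoletes: glusterfs-swift < 3.4.0
    %endif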
20:24 edward1 joined #gluster-dev
20:41 blues-man joined #gluster-dev
21:00 badone joined #gluster-dev
21:11 badone joined #gluster-dev
21:14 awheeler_ kkeithley_: excellent, thank you
22:01 jclift Interesting, every time I run the prove test with bug-861015, I come back a few minutes later and the box has rebooted
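For reference, a single regression test like that is normally run on its own with prove; the path is assumed from the usual tests/bugs layout:

    prove -v tests/bugs/bug-861015.t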
