
IRC log for #gluster-dev, 2013-05-10


All times shown according to UTC.

Time Nick Message
00:04 yinyin joined #gluster-dev
00:49 bala joined #gluster-dev
00:59 hagarth joined #gluster-dev
01:55 mohankumar avati_: a2_ ping
02:22 bharata joined #gluster-dev
03:47 shubhendu joined #gluster-dev
03:58 robert7811 joined #gluster-dev
03:58 robert7811 Does anyone know if bug 812515 in the bug tracker was resolved for release v3.3.1?
03:58 glusterbot Bug http://goo.gl/Y7CQJ unspecified, unspecified, 3.3.0beta, pkarampu, VERIFIED , [glusterfs-3.3.0qa34]: self-heal operation makes brick processes CPU usage very high
04:31 deepakcs joined #gluster-dev
04:40 deepakcs kshlm, Hi, wanted to understand the operational version stuff... is it there so that older versions of glusterd can work with newer ones, or is there more to it?
04:44 bala joined #gluster-dev
05:13 bala joined #gluster-dev
05:22 avati_ mohankumar: ping
05:29 deepakcs bala, hi
05:35 rastar joined #gluster-dev
05:41 fabien joined #gluster-dev
05:45 raghu joined #gluster-dev
06:00 aravindavk joined #gluster-dev
06:02 rgustafs joined #gluster-dev
06:03 bala joined #gluster-dev
06:04 rastar joined #gluster-dev
06:16 lalatenduM joined #gluster-dev
07:03 bulde joined #gluster-dev
07:43 rgustafs joined #gluster-dev
08:15 hagarth joined #gluster-dev
08:16 rgustafs joined #gluster-dev
08:41 shubhendu_ joined #gluster-dev
08:54 avati joined #gluster-dev
09:00 xavih joined #gluster-dev
09:00 avati_ joined #gluster-dev
09:14 bala1 joined #gluster-dev
09:15 lala_ joined #gluster-dev
09:17 bala1 joined #gluster-dev
09:21 JoeJulian_ joined #gluster-dev
09:22 kkeithley joined #gluster-dev
09:23 bharata joined #gluster-dev
09:47 rgustafs joined #gluster-dev
09:48 vshankar joined #gluster-dev
10:24 aravindavk joined #gluster-dev
10:25 shubhendu_ joined #gluster-dev
10:45 aravindavk joined #gluster-dev
10:53 rastar joined #gluster-dev
11:55 mohankumar joined #gluster-dev
12:04 fabien__ joined #gluster-dev
12:06 shubhendu joined #gluster-dev
12:21 rastar joined #gluster-dev
13:39 rastar joined #gluster-dev
13:45 portante kkeithley: you there?
13:46 portante Can you review http://review.gluster.org/#/c/4966/?
13:47 kkeithley yes
13:48 portante thanks
13:48 kkeithley didn't I already review it?
13:48 kkeithley oh, that was patch set 1
13:49 portante yes, I just made changes based on the review.
13:58 wushudoin joined #gluster-dev
14:21 nickw joined #gluster-dev
14:22 Supermathie a2_: can't remember if I responded - without write-behind it still fails in the same way
14:34 jclift joined #gluster-dev
14:58 bala joined #gluster-dev
15:10 mohankumar joined #gluster-dev
15:20 edward1 joined #gluster-dev
15:37 portante kkeithley: thanks for the review
15:37 portante avati: you around?
15:37 portante question about getting jenkins job to add +1 verified for a UFO unit test run
15:49 avati portante: if jenkins is doing -1, gerrit won't let you merge
15:50 avati Supermathie: do you have a 'thread apply all bt full' without write-behind?
15:59 Supermathie avati: no
15:59 Supermathie wait i have a coredump... oh right. no symbols
15:59 lalatenduM joined #gluster-dev
16:06 Supermathie YEEEEEEEEEESH
16:06 Supermathie 86.28    4935.78 us       4.00 us 57046149.00 us       16601252       WRITE
16:12 awheeler_ joined #gluster-dev
16:13 awheeler_ portante: ping
16:18 awheeler joined #gluster-dev
16:27 a2_ Supermathie ?
16:30 awheeler portante: wondering what the status of getting the tox build changes into 3.4 is.  Looks like it's looking for the tox.ini file now, but that's not in 3.4 yet: http://review.gluster.org/#/c/4947/
16:31 portante avati, awheeler: I believe 3.4 is just going to use latest gluster-swift upstream, so none of that is necessary.
16:32 portante avati: can you confirm?
16:32 portante avati: regarding jenkins +1 verified, isn't that a good thing?
16:33 portante kkeithley: regarding ^^^, are we at a point where we can get the upstream 3.4 packaging to use gluster-swift repo directly?
16:34 awheeler portante: Ok, then we still need to get the multi-volume changes integrated into 3.4, as 3.3 has multi-volume support (even before the ring stuff)
16:36 awheeler and ufounit is still not running the tests.
16:36 awheeler portante: Oh, hmm, so all the open review stuff I have is now obsolete?  Have the equivalents been opened for gluster-swift?
16:37 kkeithley swimming as far upstream as possible, the Fedora (and EPEL) builds of 3.4.0-0.x.beta1 use upstream openstack-swift on f20/rawhide, f19, and el6.
16:38 kkeithley Those are the fedora/epel openstack-swift RPMs.
16:40 kkeithley In a perfect world the next step would be to do the same for f18 using the upstream RDO openstack-swift RPMs. That's a little trickier because nothing in Fedora-land can have a dependency on 3rd party repos like the RDO repo.
16:40 kkeithley In the mean time f18 and f17 use _our_ glusterfs-swift RPMs.
16:40 kkeithley And I need to submit the changes to the glusterfs.spec.in to sync with what's in Fedora.
16:41 a2_ kkeithley, portante: so will there be a backport of #4970 to release-3.4?
16:42 a2_ portante, on #4970 I had a comment on the change to dht
16:43 kkeithley wrt to #4970, we need to carefully coordinate that for release-3.4
16:44 kkeithley Note too that the Fedora and EPEL builds above are what I put in http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta1/ for testing.
16:44 awheeler_ joined #gluster-dev
16:46 a2_ kkeithley, yes.. it will be a relief if we can complete the glusterfs / glusterfs-swift split by 3.4 GA
16:46 kkeithley that's the goal, yes
16:47 awheeler_ kkeithley: And what of the outstanding 3.4 backports?
16:47 fabien joined #gluster-dev
16:47 a2_ awheeler, all UFO patches on glusterfs.git will be abandoned.. you will need to resubmit them to glusterfs-swift.git
16:47 a2_ (master, release-3.4, all)
16:49 jclift kkeithley: That sounds like it could make my integration of Gluster + Swift into RDO packstack a tad more complicated than it otherwise could be?
16:50 kkeithley jclift: how so?
16:50 jclift It sounds like there are different versions of swift in use by different sources of openstack?
16:50 kkeithley jclift: it's all grizzly, 1.8.0
16:51 jclift k.  Hopefully all good. :D
16:51 kkeithley My quick-and-dirty test on F19alpha using openstack-swift-1.8.0 (instead of glusterfs-swift-1.8.0) worked
16:53 kkeithley Looks to me like the RDO RPMs are the Fedora RPMs, and the only strange thing is that the RDO F18 repo has fc19 and fc20 RPMs in it, no actual fc18 RPMs.
16:54 portante a2_, kkeithley, awheeler: are we all aligned on what is to happen then?
16:54 a2_ portante, i have the open question on the dht change in #4970
16:55 portante yes, which I am hopefully going to answer shortly
16:56 portante do you know why the dht change is there in the first place?
16:57 portante I don't see that key name being used in the gluster-swift code
16:57 portante we use "user.test.key1"
16:57 portante Personally, that dht code should just be removed
16:57 portante a2_: ^^^
17:04 bulde joined #gluster-dev
17:04 a2_ portante, yes, please remove it
17:06 jclift That's weird, Gluster nfs server isn't working again for me from OSX desktop.
17:07 fabien can I try ?
17:07 jclift k, at least this time I've got time to figure out wtf is causing it
17:07 jclift fabien: Hmmmm... that could be complicated.  The gluster instance is in a VM running on my desktop.
17:08 fabien I see
17:08 jclift fabien: The weird thing is that it reliably worked yesterday when I stopped the vm, rolled it back to a git snapshot from about a month ago, and rebuilt from there.
17:08 jclift fabien: Which OSX version(s) did you try yesterday?
17:09 fabien latest 10.8.3
17:09 fabien but with command line mount
17:09 a2_ jclift, can you clear portmap/rpcbind from both client and server and restart?
17:09 fabien (I guess Apple-K is using nfs4 by default ?)
17:10 jclift fabien: I'm on 10.7.5.  Maybe that has something to do with it?
17:10 fabien my problem was nfs-common installed but portmap not started
17:10 fabien on server.
17:10 jclift Yeah, checked that this time.  Port 111 is listening on the server, and I can make a connection to it from the client using telnet.
17:11 jclift Definitely no iptables running at all on the server
17:11 fabien I remember that with 10.6 or 10.7, you couldn't mount nfs AT ALL with Apple-K
17:11 a2_ jclift, i mean, what about registratinos?
17:11 jclift What's Apple-K?
17:11 a2_ *registrations
17:11 fabien (saying something like not-supported)
17:11 jclift a2_: Any idea how to check that?
17:11 portante a2_, kkeithley: removed, resubmitted with updated bug ID as well
17:11 a2_ rpcinfo -p
17:11 jclift On client or server?
17:12 fabien keys apple+K for Connect To Server
17:12 a2_ server
17:12 portante a2_, kkeithley: can we get this committed today then?
17:12 jclift a2 fabien: http://fpaste.org/11536/13682059/
17:12 kkeithley AFAIK I'm just waiting for the split to happen. Once that happens it just means I get my g4s/ufo tarball from gluster-swift.git instead of glusterfs.git
17:12 fabien so, with 10.6 or 10.7 (not sure), you couldn't mount it like you do with afp or smb. But it was possible in Disk Utility (File/Mount NFS).
17:13 fabien I was surprised to see it's possible again in the same way we do smb/afp
17:13 jclift fabien: Ahhh.  Now I get what you mean.  Apple-K is the name of the keystroke.
17:13 jclift fabien: Yeah, that's what I'm doing
17:13 jclift nfs://192.168.1.83/firstvol
17:13 jclift Normally works fine
17:14 a2_ portante, i'm good with the patch
17:14 fabien now NFS mounts (even mounted by command line) are shown as usb-drive or samba share
17:14 jclift Interesting
17:14 jclift Oh well, I've finished up the last main task I was on, and was just about to start a new one.
17:14 jclift Might as well take some time to figure out wtf is the problem with this first
17:15 fabien maybe it was in 10.6 that you couldn't mount nfs that way anymore
17:15 jclift As it seems directly related to the git commit I'm building from
17:15 a2_ jclift,  100003    3   tcp  38467  nfs is the problem
17:15 a2_ stale registration.. we changed the port number to 2049
17:15 a2_ not sure why that is not reflected in portmap
17:15 jclift a2_: Ahhh.
17:15 jclift What should it look like?
17:15 jclift Ahhh, 2049 instead
17:15 fabien anyway, with 10.8, nfs://IP/cloud works, but then finder complains about a problem and proposes to disconnect. also, it's impossible to LIST folders.
17:15 jclift Got it
17:15 a2_ s/38467/2049/
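[Editor's note: the stale-registration check a2_ walks jclift through above can be scripted. A minimal sketch, assuming the function name is made up and the host in the usage comment is a hypothetical example; it filters `rpcinfo -p` output for the nfs program number (100003) and the expected port (2049):]

```shell
# check_nfs_port: read `rpcinfo -p` output on stdin; exit non-zero if the
# nfs program (100003) is still registered on any port other than 2049,
# i.e. a stale portmap registration like the 38467 one seen in the log.
check_nfs_port() {
  awk '$1 == 100003 && $4 != 2049 { stale = 1 }
       END { exit stale ? 1 : 0 }'
}

# live usage (hypothetical host): rpcinfo -p gluster-server | check_nfs_port
```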
17:15 fabien with the command line I sent you, it works
17:16 fabien Yesterday I tried git builds from before and after the port 2049 change; both worked (well, the 2nd one I tried after seeing the portmap problem)
17:16 jclift With 10.7, the Apple-K command works.  mount -t nfs from command line also works.
17:16 a2_ portante, backport to release-3.4[3?] will also be most appreciated!
17:17 jclift OSX finder does "weird shit" though, which results in all kinds of stupid crap behaviour when bulk deleting files.
17:17 portante certainly, once it is accepted upstream ... ;) hint hint
17:17 jclift I'll try just rebooting the server, and see if that lets the portmap registration work properly
17:17 fabien I don't have as many problems with 10.8 when mounting from the command line
17:18 fabien problem is that there are many changes, but no change log at apple :(
17:18 fabien same for samba...
17:18 jclift Yeah, they're a pain in that way
17:18 jclift fabien: Any idea if the NFS/SMB mounting part of OSX is in their released source code?
17:19 fabien no idea
17:19 jclift Well, they put most of their non UI code online: http://opensource.apple.com/release/mac-os-x-1083/
17:19 fabien I haven't tried 10.7 with gluster, but from what you say, it seems it is really not the same as my 10.8
17:20 jclift No idea :)
17:20 jclift Ahh, here we go: http://opensource.apple.com/source/NFS/NFS-73/
17:20 fabien I see a mount_nfs there
17:20 jclift That's the NFS related source for 10.8.3
17:20 fabien great
17:21 jclift Now if/when you have problems with it, there's at least some chance you can figure out what it might be doing wrong
17:22 fabien good news, thanks
17:22 fabien I was talking about this on 10.6 : http://kampmeier.com/chris/blog/?p=43
17:22 jclift a2_: Heh, rebooting the gluster server and it now works fine.  rpcinfo -p shows the right port number now too.
17:23 jclift Surprised I didn't try the reboot test yesterday, but it looks like I didn't.
17:23 jclift a2_: Thanks btw. :)
17:23 fabien as they say at microsoft : "when in doubt : reboot"
17:23 fabien ;)
17:23 jclift :D
17:24 a2_ jclift, no problem :)
17:26 fabien jclift: and you have no listing or disconnect problems when mounting in 10.7 using the finder menu?
17:27 jclift Hmmm, depends on what you mean by "problem"?
17:27 jclift A lot of the time, if I try to disconnect an NFS share shortly after mounting it (ie within a minute), then I'm told I can't do so because it's in use by Chrome.
17:28 * jclift has no idea wtf chrome is doing, and isn't real impressed by that
17:28 jclift The _main_ problem I have with NFS from OSX, is when deleting files.
17:28 fabien for me, mounting via the menu (without any options), listing a directory is impossible (it takes forever), and after a few minutes of use, finder complains about an unexpected error and asks if I want to ignore it or disconnect/unmount. in fact, if I choose ignore, it's still mounted fine (I can use it in the terminal, and listing directories works there too)
17:29 jclift I have a test directory structure with about 30k small files in it spread across a few hundred directories.
17:29 jclift Interesting.  That's not a problem I've hit
17:29 fabien the only way to use it in finder was to mount it in the terminal with the options I sent you
17:29 jclift Interesting.  Those options just made things slower for me, but didn't "improve" the experience.
17:30 fabien so I can browse directories, copy / delete stuff and no more error window showing
17:30 jclift I'm able to list directories fine and stuff. With the weird exception that some subdirectories with many files in them only show the first 50 or so.
17:30 fabien browsing directories is a good experience for the users ;)
17:30 fabien try with 10.8 ;)
17:30 jclift Yeah, being able to see your files is good for people :D
17:31 jclift Nah, I've avoided 10.8 so far, and no real desire to change
17:31 fabien I understand
17:31 jclift Guess I'll have to when 10.9 is out
17:31 fabien I could give you a teamviewer if you wanna see what it does
17:32 jclift Nah.  I actually tried it once.  Seemed ok at the time, but didn't grab me as a "must upgrade" type of thing
17:32 Supermathie a2_: PONG
17:32 fabien seems the nfs part is not the same (again)
17:33 a2_ Supermathie, do you have 'thread apply all bt full' from NFS without write-behind when it was choking?
17:33 jclift fabien: Have you looked at your /etc/nfs.conf and done "man nfs.conf" ?
17:33 jclift Seems to be a bunch of options in there
17:34 jclift According to the man page at least
17:34 Supermathie a2_: Nope, I have a corefile but no symbols for it. I think.
17:34 fabien no, I'll see that. Thanks.
17:35 fabien I was trying
17:35 fabien as a mac user
17:35 fabien and when finder had problem with listing, ls or ls -l was working !
17:36 jclift Yeah, that sounds about right. OSX Finder has the bad rep for NFS, but command line tools don't
17:38 a2_ Supermathie, uh, does the core at least give the function names?
17:40 portante a2_: do you have time for a Jenkins mystery which I think is affecting the "smoke" test Jenkins job?
17:41 Supermathie a2_: sorry no
17:41 lpabon joined #gluster-dev
17:45 deepakcs joined #gluster-dev
17:46 portante lpabon, a2_: I think we need to sort out this Jenkins mystery
17:47 lpabon how urgent is it?
17:47 lpabon is the current 'hack' not working?
17:49 portante the hack is working, and it turns out that the hack is also present in the "regression" jenkins job for glusterfs
17:50 portante but it is not present for rh-bugid or smoke jenkins jobs, as far as I can tell
17:50 portante And, according to the documentation for Jenkins/Git/Gerrit integration, we should not need to do that.
17:50 portante to do that hack
17:51 lpabon good point.  Maybe we need to setup a temp Gerrit / jenkins to play around with until we understand what the issue is.
17:51 lpabon that is probably the easiest way
17:52 kkeithley jclift: I spent a bit of time last night looking at glusterfs on Mac OS some more. I found osxfuse, the successor to macfuse. And as I looked I eventually remembered that we (glusterfs) don't use the system's <fuse.h> anyway.
17:52 kkeithley We carry our own contrib/fuse-include/fuse_kernel.h, or for Mac OS X,  fuse_kernel_macfuse.h.
17:52 kkeithley But....
17:53 jclift Interesting
17:53 portante lpabon: perhaps
17:53 portante though that sounds like a lot of work
17:53 kkeithley fuse_kernel.h has moved forward, we've not got readdirplus and notify. But fuse_kernel_macfuse.h has been rotting.
17:53 lpabon probably, but at least it gives us a point of reference. We can then apply this knowledge to the current build system.
17:54 kkeithley s/not/now/
17:54 kkeithley weve now got readdirplus and notify.
17:54 portante lpabon: okay
17:55 lpabon I just think if it was easy, it would have been fixed by now. I looked at the configuration of the current job and it is set up as described in the Jenkins documentation. So I guess all there is to do is play around until we get it right
17:55 lpabon just my $0.002
17:55 kkeithley lpabon: that much huh? Cheapskate. ;-)
17:55 portante ;)
17:55 a2_ lpabon, portante: was jenkins upgraded?
17:55 lpabon lol,
17:55 portante a2_: yes
17:55 a2_ the plugin?
17:55 a2_ *gerrit plugin
17:55 portante yes
17:55 portante yes
17:55 jclift kkeithley: Hmmm, any idea what we should do?
17:55 a2_ ok
17:56 portante take a look at what the regression jenkins job does; it explicitly checks out the code to test
17:56 portante whereas smoke does not
17:56 a2_ last ditch - try setting up a job "from scratch" without copying another job?
17:56 portante we did that
17:56 a2_ portante, that is for a reason.. regression's patch is specified as an input field
17:57 jclift kkeithley: Any idea why we carry our own fuse?  Was it something needed from a while ago that might not be needed any more?
17:57 a2_ whereas smoke and rh-bugid's patches come in through the "Trigger"
17:58 portante I'll create a new job to try to show what I mean ...
17:58 a2_ i understand what you mean.. and i acknowledge the mystery
17:58 kkeithley I asked this a long time ago, and luckily a2_ is here to correct me after I misremember the answer he gave me some time back. IIRC it comes down to controlling which version of the fuse interface we get across all the different distros.
17:58 a2_ but regression is "supposed" to checkout the patch because of the way the job is implemented
17:58 kkeithley Or something like that.
17:59 jclift kkeithley: No worries.
17:59 a2_ kkeithley?
18:00 jclift a2_: The version of fuse that we carry in tree doesn't seem to be OSX friendly.  So, I was wondering why we carry our own version anyway, and if it's still needed, etc.
18:00 kkeithley a2_: ^^^ why we have our own fuse_kernel.h rather than use /usr/include/fuse.h
18:00 kkeithley /usr/include/fuse/fuse.h
18:00 jclift It's pretty common for us to get requests from OSX users wanting at least the client to work on OSX for connecting to Gluster servers.
18:00 a2_ jclift, ah.. we use our own libfuse, because we mount "specially" (for e.g, in an selinux compatible way, which the stock libfuse does not)
18:01 jclift (especially since the NFS client implementation on OSX is pretty shit :>)
18:01 jclift k
18:01 a2_ as part of carrying our own libfuse, we need to carry all the headers
18:01 jclift Yeah
18:01 * jclift wonders how much effort it would take to make our fuse OSX compatible
18:01 jclift Sounds like a major pita
18:02 a2_ and also at one point in the past, libfuse versions were waaay too divergent in various distros, carrying our own libfuse normalized the situation a bit too
18:02 jclift Yeah, that makes sense
18:02 a2_ jclift, it is serious porting work
18:02 jclift Yeah
18:03 jclift Hmmm, people have mentioned that the libgfapi stuff is a better approach than fuse.  So we'll probably be moving our codebase to use that instead of fuse at some future point
18:03 jclift That might make for easier OSX compatibility at that point then
18:04 jclift Doesn't sound like we'll have a solution any time soon.  Oh well. ;)
18:04 a2_ ??
18:04 a2_ libgfapi will not help you mount anything
18:04 kkeithley libgfapi doesn't give us native client mounts
18:04 a2_ so if you want a mountable filesystem in os/x, gfapi is the wrong thing to expect from
18:05 jclift Heh, this is what happens when I mis-understand someone's talk about things
18:05 jclift I suppose I'll understand it better when later on I look into libgfapi properly and figure out what it does :D
18:05 a2_ there are two options - fix fuse-bridge to work well with osxfuse, or use NFS/CIFS
18:06 a2_ (i.e, to have something "mounted")
18:06 jclift It's ok.  I understand. :D
18:07 awheeler_ portante: Not clear.  Where is the new gluster-swift git repo?
18:07 kkeithley libgfapi lets you bypass the fuse mount and write clients that talk native protocol direct to a glusterfsd
18:07 jclift Ahhh
18:08 kkeithley ssh://git.gluster.com/gluster-swift.git
18:08 kkeithley awheeler_: ^^^
18:08 jclift kkeithley: Ahh, now I get it
18:08 kkeithley I don't suppose we have a github mirror of gluster-swift yet.
18:08 a2_ github mirror would be nice
18:09 lpabon afaik there will be mirrors on Forge.gluster.org and github
18:10 fabien a2_: cifs was not working with Finder (it hangs when trying to copy .DS_Store files and (i guess) their extended attributes). Fuse would be great; NFS works with Finder if I mount by hand in the terminal.
18:10 awheeler_ kkeithley: Got it. Since the repo is changing, I'm not clear on how cherry-picking is going to work here.
18:10 fabien for now, NFS is the only way I can have a quite usable connection to gluster volume
18:10 awheeler_ kkeithley: All of my dependencies have already been accepted into master, so I guess I can cherry-pick out of master?
18:11 fabien from macosx that all my lusers use ;)
18:11 kkeithley awheeler_: the commit-Id will be different, but it should work the same. cherry-pick from master
18:11 awheeler joined #gluster-dev
18:12 portante lpabon: I'm fairly sure the forge.gluster.org mirror is already set up.
18:13 portante I don't have access to the glusterfs github account to set that up, does anybody else have that access?
18:13 portante a2_, lpabon: I think I solved the mystery
18:13 lpabon O.o
18:13 portante cute
18:13 portante raised eyebrows, I suppose?
18:14 kkeithley portante gets the Ellery Queen Award
18:14 lpabon lol
18:14 lpabon X-D
18:14 kkeithley Unless he would prefer the Sherlock Holmes award
18:14 portante kkeithley: do expound on that
18:14 kkeithley You solved a mystery!
18:14 a2_ lpabon, portante, kkeithley: https://github.com/gluster/gluster-swift
18:15 portante a2_: nice!
18:15 a2_ portante, what was the mystery?
18:16 portante basically, when setting up the repo to reference, one has to set the following string in the refspec field: "refs/changes/*:refs/changes/*"
18:16 portante then, in the "Branches to build" field, one puts $GERRIT_REFSPEC
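[Editor's note: a sketch of the Jenkins job's Source Code Management fields this describes; the repository URL is an assumed example, not taken from the log:]

```
Repository URL    : ssh://build@review.gluster.org/glusterfs.git   (assumed example)
Refspec           : refs/changes/*:refs/changes/*
Branches to build : $GERRIT_REFSPEC
```

With the refspec fetching all Gerrit change refs and the branch set to the ref the Gerrit Trigger passes in, the job builds the exact patch set under review rather than origin/master.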
18:16 a2_ what about smoke and rh-bugid?
18:17 kkeithley portante: you've never heard of Ellery Queen?
18:18 portante go figure, this is documented at: https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger#GerritTrigger-UsagewiththeGitPlugin
18:18 portante kkeithley: no, sorry
18:18 portante would love to know the reference
18:19 portante a2_: If you want, I can modify smoke and rh-bugid to fix this
18:19 a2_ portante, i meant, smoke and rh-bugid don't seem to need that fix..
18:19 portante technically, it does not really affect rh-bugid
18:19 portante really?
18:19 portante how does smoke get the right code?
19:19 a2_ smoke and rh-bugid *DO* pick up the patch to be tested (somehow)
18:20 a2_ portante, it's already getting the right code.. rh-bugid already does fail jobs which have bad bug-id
18:21 portante unless the smoke scripts are checking out the code explicitly, I don't think they are getting the right code
18:21 portante rh-bugid is a bit different though
18:22 portante it works off of the commit message, which is not in the checkout code
18:22 a2_ i'm very certain smoke tests the proper patch..
18:22 a2_ or at least it has been testing proper patches till a couple weeks ago
18:22 portante hmmm
18:23 portante take a look at the demo jobs
18:23 portante job #4 was run after I changed it to work just like the smoke configuration
18:24 a2_ maybe because smoke and rh-bugid have been around since before the upgrades of jenkins/plugin/gerrit (don't know which matters), maybe they are working in some "backward compatibility" mode.. and newly created jobs need to specify the above refspec
18:24 portante I had changed the README file to add a line: "this is the real real change", and a cat of the file does not show it
18:24 a2_ portante, did you try that on glusterfs.git smoke test?
18:24 portante I have not tried that there, but can if you want me to. I did not want to disrupt glusterfs work
18:25 portante I can do that now, if you would like
18:26 portante take a look at the parameters link, which lists the commit id that changed, and the console output from a given smoke test, and you'll see that the id checked out and the id in the parameters are different (at least in the one I looked at)
18:27 a2_ smoke.sh picked up the proper commit id (seen in console log) while ufo-unit was picking wrong commit id
18:27 portante smoke.sh?
18:27 a2_ oops
18:27 a2_ smoke test
18:27 portante really?
18:27 portante can you give me a smoke job id in which that is true?
18:28 a2_ pretty much every smoke job.. let me get you one
18:29 a2_ http://build.gluster.org/job/smoke/2345/console
18:30 portante really, I see it checking out master only
18:30 portante the patch revision for that change is: dd9f65b5a564135c2676e37775f38e01e9e27547
18:31 a2_ Commencing build of Revision a56dca94c3b174637074be46e9a537ba0ca02c4b (origin/master)
18:31 a2_ Checking out Revision a56dca94c3b174637074be46e9a537ba0ca02c4b (origin/master)
18:31 a2_ ah
18:31 a2_ wait
18:31 portante that is not the right patch id
18:32 portante a56 was Junaid's accepted commit on the tree
18:33 a2_ yes! this was not how things were behaving :O
18:33 portante yes
18:33 a2_ we have always gotten compile failures in smoke test when the patch had compile issues
18:33 portante yes
18:33 portante so let me fix those jobs and we'll have to retrigger smoke tests
18:33 a2_ yes please!
18:34 a2_ did the jenkins upgrade "change" this behavior?
18:34 awheeler there doesn't appear to be a release-3.4 associated with gluster-swift
18:34 a2_ awheeler, there won't be
18:35 awheeler sooo, when 3.4 releases, where will the code come from?
18:35 a2_ gluster-swift branches will be according to swift releases and branching, decoupled from glusterfs releases and branches
18:35 a2_ a gluster-swift release will work with any glusterfs release
18:35 a2_ there has never been a dependency between them
18:35 awheeler so are we starting with the code that was in 3.4 or master?
18:36 awheeler If master, then I don't need to backport anything.
18:36 a2_ master, i think.. kkeithley? portante?
18:36 awheeler so, shall I just abandon all of my code reviews then?
18:37 awheeler that were backports to 3.4.
18:37 a2_ yes.. thanks
18:37 a2_ if anything is missing in glusterfs-swift, submit it there
18:38 awheeler so, do we have a sense of a release date/time for gluster-swift?  Or is that a function of grizzly release?
18:38 portante a2_, awheeler: yes, you do not need to backport anything
18:38 portante we are working on that currently, I'll defer to lpabon for messaging on that
18:39 portante a2_: all jobs fixed now
18:39 portante jenkins jobs, that is
18:39 portante we should be good
18:39 portante I can help retrigger jobs if you want
18:40 awheeler portante: Then I'll be abandoning 6 reviews now.
18:40 portante k
18:41 a2_ portante, awesome!
18:43 portante I started with 4970, so that we can put that ufo removal in the can and get it submitted
18:43 portante ;)
18:43 awheeler portante: I've abandoned more than was approved by a wide margin, lol.
18:45 a2_ portante, 4970 is already in
18:45 a2_ merged it sometime back
18:48 portante awheeler: ;) but I don't think anybody is tracking such metrics, since these would have been valid items to do had we not pulled gluster-swift out into its own repo and release cadence
18:48 portante a2_: cool, but you can see now that the patch set id and checkout now match here: http://build.gluster.org/job/smoke/2346/console
18:51 a2_ i'm wondering why rh-bugid is fetching the complete repo everytime
18:51 a2_ and why that is taking so long
18:52 portante I changed that one as well, which should not have made such a difference
18:53 a2_ wow.. gerrit is burning a full core to fulfill a git clone
18:53 portante I bet that new way to do things is the culprit
18:54 portante it is probably pulling all the changes
18:55 a2_ Last Built Revision: Revision 40d026e10013f533c4fee33b87dabc4ca11c94b3 (refs/changes/70/4970/4)
18:55 a2_ Commencing build of Revision dd9f65b5a564135c2676e37775f38e01e9e27547 (refs/changes/69/4969/3)
18:55 a2_ Checking out Revision dd9f65b5a564135c2676e37775f38e01e9e27547 (refs/changes/69/4969/3)
18:55 a2_ all this looks good.. it seems to be doing the "correct" thing now, at least!
18:59 a2_ woaw
18:59 a2_ something is seriously b0rk3n in rh-bugid
18:59 a2_ Commencing build of Revision 6a2680f8eb1e3ffab5bfc2dd33633ae943b764a6 (refs/changes/69/4969/3)
19:00 a2_ and 6a2680f8eb1e3ffab5bfc2dd33633ae943b764a6 is a patch from 2011 !!!
19:01 a2_ for some reason, refs/changes/69/4969/3 is fetching 6a2680f8eb1e3ffab5bfc2dd33633ae943b764a6 instead of dd9f65b5a564135c2676e37775f38e01e9e27547
19:01 a2_ :|
19:02 a2_ in smoke test, it says :
19:02 a2_ Checking out Revision dd9f65b5a564135c2676e37775f38e01e9e27547 (refs/changes/69/4969/3)
19:03 a2_ whereas in rh-bugid test:
19:03 a2_ Checking out Revision 6a2680f8eb1e3ffab5bfc2dd33633ae943b764a6 (refs/changes/69/4969/3)
19:04 a2_ different commit ids, same refspec :O
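[Editor's note: one way to settle "different commit ids, same refspec" is to ask the remote what it serves for that ref, bypassing any cached local refs. A minimal sketch; the function name is made up and the remote URL in the usage comment is an assumed example:]

```shell
# resolve_change REMOTE CHANGE_SUFFIX
# Prints the commit id the remote currently serves for refs/changes/<suffix>,
# querying the remote directly so no stale local ref can interfere.
resolve_change() {
  git ls-remote "$1" "refs/changes/$2" | awk '{ print $1 }'
}

# usage against the review server (assumed URL):
#   resolve_change ssh://review.gluster.org/glusterfs.git 69/4969/3
```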
19:04 portante let me see
19:09 jclift Hmmmm, I'm going to have to find a way to code a faster UDP json message receiver.
19:09 jclift Current one keeps hitting the thread limit when sending more than a couple hundred json msgs a second from the gluster server.
19:10 jclift At least it's udp, so nothing backs up.  Just drops the overflow on the floor.
19:10 jclift Non-optimal, but makes it resilient. :D
19:11 lpabon jclift: you using a queued non-blocking concurrent model (node.js like) or a threaded model?
19:11 jclift lpabon: threaded model unfortunately
19:11 jclift lpabon: I'm a novice with python, so I copied generic example code from the net
19:12 jclift lpabon: It was "good enough" for the time, but it's now hitting the limits where I'll have to think about it more and learn a better way
19:12 lpabon ah, I've never done concurrent models in python, but i know you can .. i think swift uses something called eventlet?
19:12 jclift Interesting.
19:13 jclift I did some initial looking into stuff before deciding it was all too complicated (then went with the super simple example code)
19:13 jclift "eventlet" was one of the things people mentioned
19:13 lpabon may want to try python twisted UDP server.. i think that is concurrent also
19:13 jclift Yeah, that's actually what I'm thinking
19:14 jclift Most of this seems to be pain caused by Glupy based translators not holding a connection open between calls to it
19:14 lpabon O.o not good
19:14 jclift So, I can't just open one tcp connection and send messages through it.  Forced to open a new connection with every call to the translator
19:14 jclift Yeah
19:14 jclift So, with tcp connections it fills the connection table pronto
19:14 lpabon ouch
19:15 jclift At least udp is stateless, so there's no connection table problem.  The client side (ie the gluster server) is then workable
19:15 jclift But, the problem then shows up on the json receiver as it can't open threads fast enough
19:15 jclift That twisted udp server might be the next thing to try. :D
19:17 lpabon yeah
19:20 jclift Doesn't look too complicated: http://twistedmatrix.com/docume​nts/current/core/howto/udp.html
19:26 lpabon worth a try
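[Editor's note: the single-threaded, event-driven shape lpabon and jclift are converging on can be illustrated with stdlib `asyncio` (Python 3.7+). Twisted's `DatagramProtocol`/reactor API differs in names but not in pattern: one event-loop thread handles every datagram, so no per-message threads and no "can't create thread" failures. A minimal sketch, not glusterflow's actual receiver:]

```python
import asyncio
import json

class JSONReceiver(asyncio.DatagramProtocol):
    def __init__(self, messages):
        self.messages = messages

    def datagram_received(self, data, addr):
        # Runs on the single event-loop thread: no thread per message.
        self.messages.append(json.loads(data.decode("utf-8")))

async def main():
    loop = asyncio.get_running_loop()
    messages = []
    # Bind the receiver on an ephemeral loopback port.
    recv, _ = await loop.create_datagram_endpoint(
        lambda: JSONReceiver(messages), local_addr=("127.0.0.1", 0))
    host, port = recv.get_extra_info("sockname")
    # Loopback demo: send ourselves one message.
    send, _ = await loop.create_datagram_endpoint(
        asyncio.DatagramProtocol, remote_addr=(host, port))
    send.sendto(json.dumps({"op": "open"}).encode("utf-8"))
    await asyncio.sleep(0.1)
    send.close()
    recv.close()
    return messages

msgs = asyncio.run(main())
```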
19:37 Supermathie jclift: Just so you're aware, python itself is single-threaded. If you want to use >1 core, you need to use >1 process.
19:37 jclift Supermathie: This could get complicated :/
19:37 Supermathie I might use a very lightweight receiver that just pushes messages to a redis queue for processing or something.
19:38 jclift I'm just using the json receiver to stick the received messages in a database backend
19:38 jclift Was thinking the database would be the slow part
19:38 jclift But, it seems fine so far
19:38 jclift Though, the first thing I did was turn off automatic commit to disk for each transaction
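[Editor's note: the "no commit per transaction" tweak can be sketched with sqlite — the actual glusterflow backend may differ. Batching many inserts into one transaction means the database syncs once per batch rather than once per message:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (fop TEXT, path TEXT)")

# One commit for the whole batch instead of one per message:
# the `with conn:` block wraps the inserts in a single transaction.
with conn:
    conn.executemany("INSERT INTO events VALUES (?, ?)",
                     [("open", "/a"), ("lookup", "/b")])

rows = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```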
19:39 jclift But, trying out the twisted thing now.  If it does turn out to need more than 1 process, then I guess I'll figure it out :)
19:49 awheeler joined #gluster-dev
19:55 portante a2_: rh-bugid seems better now
19:55 portante I tried a different tack, let me see if that works with smoke now
19:59 jclift lpabon Supermathie: Converting the code to use Twisted's UDP "reactor" was straightforward.  Haven't seen any "can't create thread" issues so far
19:59 jclift So, probably keep using this Twisted code for now :)
19:59 lpabon awesome!
20:00 portante jclift: what are you working on? Just curious ...
20:00 jclift portante: https://github.com/justinclift/glusterflow
20:00 jclift It has a screenshot there too
20:01 jclift The screenshot looks a bit basic, as the web app isn't really clickable yet
20:01 lpabon Sweeet!! OpenStack summit finally put up the videos
20:02 lpabon I mean.. OpenStack.org finally put up the videos from the summit
20:02 jclift portante: The gist of it is to send a json message to a central data collector for every gluster file operation. eg open(), create(), rm(), lookup(), etc.
20:03 jclift portante: And then have an interactive web based reporting/analysis thing which lets people see what's happening in their cluster both right now and at a point in the past
20:03 portante sounds like a pcp integration would also be a good way to go (performance co-pilot)
20:04 jclift portante: So, let's say something went weird with a glusterfs cluster at 2am; people can select a time range and see what was actually happening then
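[Editor's note: a minimal sender for the kind of per-fop JSON event described above might look like this — field names are illustrative, not the real glusterflow wire format:]

```python
import json
import socket
import time

def send_fop_event(sock, collector, fop, path):
    # Fire-and-forget one event over UDP. The event schema here
    # (fop/path/ts) is a hypothetical example.
    event = {"fop": fop, "path": path, "ts": time.time()}
    sock.sendto(json.dumps(event).encode("utf-8"), collector)

# Loopback demo: a receiver socket stands in for the collector.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_fop_event(send, recv.getsockname(), "open", "/exported/file")
msg = json.loads(recv.recvfrom(65535)[0])
send.close()
recv.close()
```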
20:04 * jclift hasn't seen pcp yet
20:04 jclift Google time :)
20:05 fabien jclift: glusterflow looks interesting !!
20:05 jclift Thanks.  Yeah, I think it was good potential
20:05 jclift s/was/has/
20:08 edward1 joined #gluster-dev
20:22 portante a2_: take a look at this page: http://build.gluster.org/job/smoke/changes
20:22 portante You can see the new behavior pattern started on April 2nd
20:23 lpabon portante: i submitted my first change to gerrit.
20:23 portante nice!
20:45 portante a2_: oh my, yet another discovery that might be related
20:47 portante there was another difference between rh-bugid and smoke: see https://gist.github.com/portante/5557271
20:47 portante lpabon: see above
20:47 portante avati: ^^^
20:53 Supermathie I have a tool that anybody testing NFS may be interested in: https://github.com/NetDirect/nfsshell
20:56 fabien thanks
20:59 Supermathie It's what I adapted to make a simple test case for https://bugzilla.redhat.com/show_bug.cgi?id=955753
20:59 glusterbot Bug 955753: high, unspecified, ---, vraman, NEW , NFS SETATTR call with a truncate and chmod 440 fails
21:06 a2_ portante, what is the specific difference?
21:06 portante the trigger method
21:07 portante that gist is the output of grepping for the buildChooser setting
21:07 portante rh-bugid and release had that set
21:07 portante that is why rh-bugid works
21:07 a2_ well, rh-bugid is failing
21:07 portante set that on smoke, gluster-swift-unit-tests and it works
21:07 a2_ smoke works
21:08 portante last three rh-bugids worked, no?
21:08 a2_ let me check
21:08 a2_ ah, you did some change?
21:09 a2_ so everything is working now?
21:09 portante should
21:09 portante I am retriggering jobs one at a time to make sure
21:13 portante a2_: now the clones don't take a long time, and it is choosing the correct version
21:15 a2_ excellent!
21:15 a2_ that's a good sign
21:16 a2_ not sure how rh-bugid and smoke got different triggers
21:16 a2_ anyways, fixed now!
21:17 portante well, release is using that trigger, and regression could actually benefit from that trigger
21:17 portante you could get rid of that cherry pick code in regression with that
21:17 portante but, it works today, so we'll know what to do to fix it later
21:18 portante do we keep a wiki page or docspace article on the jenkins setup?
21:26 a2_ nope :S
22:08 portante okay, just retriggered all the builds since April 2nd, which appear to have been affected by this environmental issue
22:08 portante a2_: before April 2nd, the work is probably dead anyways
22:09 a2_ i see a lot of queued up jobs..
22:09 a2_ thanks!
22:10 portante so I changed the priority of regression jobs to be higher than smoke, so that they'll get sorted properly, but lower than rh-bugid, which should run really quickly
22:25 lbalbalba joined #gluster-dev
22:25 lbalbalba left #gluster-dev
22:25 lbalbalba joined #gluster-dev
22:26 lbalbalba w00t: 1st gcov/lcov results of the glusterfs test suite: http://lbalbalba.x90x.net/lcov/glusterfs/ !
22:26 lbalbalba sorry, couldn't resist :P
22:32 a2_ hmm!
23:32 jclift Ugh, that's weird.
23:32 jclift I have a CentOS 6 system where Gluster git builds rpms fine.
23:32 jclift And a RHEL6 one that isn't, with this strange error: rpmbuild --define '_topdir /home/jc/glusterfs/extras/LinuxRPM/rpmbuild' -bs rpmbuild/SPECS/glusterfs.spec
23:32 jclift error: line 132: Unknown tag:     %filter_provides_in /usr/lib64/glusterfs/3git/
23:33 * jclift suspects a missing package or something similar
23:33 jclift Anyone seen this before?
23:35 jclift Google is being absolutely useless with assistance for it. :/
23:37 jclift Aha.  "redhat-rpm-config" was the missing package
23:39 badone joined #gluster-dev