
IRC log for #gluster-dev, 2013-01-24

| Channels | #gluster-dev index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:27 Technicool joined #gluster-dev
02:58 bharata joined #gluster-dev
03:38 harshpb joined #gluster-dev
03:39 hagarth joined #gluster-dev
03:55 mohankumar joined #gluster-dev
04:19 sahina joined #gluster-dev
04:44 pai joined #gluster-dev
04:53 sgowda joined #gluster-dev
04:55 sripathi joined #gluster-dev
05:02 sripathi1 joined #gluster-dev
05:22 hagarth joined #gluster-dev
05:23 bharata joined #gluster-dev
05:32 vpshastry joined #gluster-dev
06:02 avati :O
06:04 raghu joined #gluster-dev
06:09 deepakcs joined #gluster-dev
06:23 vpshastry joined #gluster-dev
06:23 bharata joined #gluster-dev
06:23 sripathi joined #gluster-dev
07:22 shireesh joined #gluster-dev
07:46 deepakcs joined #gluster-dev
07:46 ndevos pong a2
08:10 vpshastry joined #gluster-dev
08:29 inodb joined #gluster-dev
08:31 vpshastry joined #gluster-dev
08:38 gbrand_ joined #gluster-dev
08:45 mohankumar joined #gluster-dev
09:00 sripathi joined #gluster-dev
09:15 mohankumar joined #gluster-dev
09:20 sripathi joined #gluster-dev
09:20 sgowda joined #gluster-dev
09:23 vpshastry joined #gluster-dev
09:43 pai_ joined #gluster-dev
09:44 pai_ left #gluster-dev
10:08 mohankumar joined #gluster-dev
10:14 bulde joined #gluster-dev
10:14 sgowda joined #gluster-dev
10:33 hagarth joined #gluster-dev
10:58 shireesh joined #gluster-dev
10:59 spn joined #gluster-dev
11:22 hagarth joined #gluster-dev
11:28 vpshastry joined #gluster-dev
11:44 hagarth joined #gluster-dev
12:00 bulde1 joined #gluster-dev
12:14 polfilm_ joined #gluster-dev
12:22 kkeithley1 joined #gluster-dev
12:27 edward1 joined #gluster-dev
12:49 pai left #gluster-dev
13:18 bulde joined #gluster-dev
13:25 venkat joined #gluster-dev
13:27 mohankumar joined #gluster-dev
13:31 deepakcs joined #gluster-dev
13:57 polfilm_ joined #gluster-dev
13:59 johnmark xavih: ping
14:11 vpshastry joined #gluster-dev
15:01 jbrooks joined #gluster-dev
15:21 wushudoin joined #gluster-dev
15:22 inodb_ joined #gluster-dev
15:30 sripathi joined #gluster-dev
15:52 hagarth joined #gluster-dev
16:03 xavih how do I create an enhancement request on bugzilla ?
16:03 hagarth xavih: set the priority to enhancement
16:03 xavih it doesn't allow me to select the bug type, only once it is created, I can change the bug type to enhancement
16:04 xavih no priority field either :(
16:04 xavih at least on the creation page
16:05 hagarth you can even add [RFE] in the bugzilla headline
16:05 xavih ok, I'll do that :)
16:05 xavih thanks hagarth
17:03 venkat joined #gluster-dev
17:15 vpshastry joined #gluster-dev
18:28 jdarcy Anyone around who has the keys to the Jenkins machine?
18:29 jdarcy It looks like one regression test hung, then it didn't abort properly so the next one hung too.
18:42 Technicool joined #gluster-dev
19:04 kkeithley1 jdarcy: build.gluster.org?
19:51 jdarcy Yep.
19:55 kkeithley1 did the problem solve itself or? You have an account on the machine, or what do you want me to do?
20:00 jdarcy Do I have a regular account on that machine?
20:01 kkeithley1 yes
20:01 kkeithley1 looks like you never gave avati your pub.key though
20:03 kkeithley1 give me your rsa_id.pub and I'll set up your .ssh/authorized_keys file for you
20:04 kkeithley1 er id_rsa.pub
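[editor's note] The manual key install kkeithley describes comes down to a few commands. A hedged sketch (the key string is a placeholder, not a real key, and a throwaway directory stands in for the user's home):

```shell
# Sketch of installing a contributor's public key by hand, with the
# modes sshd expects. "home" is a temp dir standing in for ~jdarcy.
home=$(mktemp -d)
key='ssh-rsa AAAAB3NzaC1yc2E...placeholder... jdarcy@example'
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"
printf '%s\n' "$key" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
```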
20:09 jdarcy On its way.
20:13 kkeithley1 you should be set
20:16 jdarcy Wow, I hate debugging ssh problems.
20:16 kkeithley1 yup
20:17 kkeithley1 not letting you in?
20:18 kkeithley1 are you  66.187.233.206
20:19 kkeithley1 If so that's /etc/hosts.allow that's bouncing you out
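[editor's note] The bouncing comes from tcp_wrappers: when hosts.allow lists hostname patterns, it reverse-resolves the client IP and, if the forward lookup of that name does not map back to the same address, treats the host as unknown and logs the "host name/address mismatch" warning seen later in this log. The actual file on build.gluster.org is not shown; a hypothetical entry of the kind likely in place:

```
# /etc/hosts.allow (hypothetical sketch, not the real file)
sshd : .redhat.com : ALLOW
sshd : ALL : DENY
```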
20:20 jdarcy I'd say it's NAT, but then you'd have the same problem.
20:21 kkeithley1 but I'm sshing from a machine in the office, not my home machine
20:21 jdarcy No, that's one of Red Hat's NAT addresses in RDU.
20:21 jdarcy nat-pool-12-rdu.redhat.com
20:21 jdarcy Maybe it just doesn't recognize that as one of ours.
20:21 kkeithley1 It let me in from that address
20:23 kkeithley1 maybe I shouldn't sign out now, in case I can't get back in .
20:24 jdarcy OK, failed from home too.
20:24 kkeithley1 seems like something's messed up with dns since I signed on. ping -c 1 nat-pool-12-rdu.redhat.com gives
20:25 kkeithley1 PING nat-pool-12-rdu.redhat.com.gluster.org (198.61.169.132) ...
20:25 jdarcy ?
20:26 Technicool kkeithley is this one of the new gluster.org boxes or one of the old ones?
20:26 kkeithley1 It's the new jenkins machine that Avati set up three months ago. build.gluster.org
20:28 Technicool thats the 66.x.x.x addy from ^^ ?
20:29 kkeithley1 er, what?
20:29 kkeithley1 s/thats/whats/ ?
20:30 kkeithley1 the 66.x.x.x address should be  nat-pool-12-rdu.redhat.com.gluster.org
20:30 kkeithley1 It's also the address that I'm signed on from
20:30 a2 jdarcy, kkeithley still working on ssh login issue?
20:30 kkeithley1 %who
20:30 Technicool i have a 184 addy for build.gluster.{org,com}
20:30 kkeithley1 kkeithle pts/0        2013-01-24 12:03 (66.187.233.206)
20:30 kkeithley1 root     pts/1        2013-01-24 11:12 (98.207.206.65)
20:31 a2 it's probably because jdarcy has /bin/false as his shell in /etc/passwd?
20:31 a2 jdarcy, try now
20:31 Technicool a2+1
20:32 kkeithley1 actually it was /usr/bin/passwd. Did we both just change it at the same time?
20:32 a2 kkeithley, i just did
20:32 a2 jdarcy, added you to sudoers as well
20:33 a2 jdarcy, the problem with jenkins is, if a job hangs and you abort it from web UI, the script continues in the background :(
20:33 a2 yet to figure out a way to clean up "completely" on a job abort from web UI
20:33 Technicool glad to pretend to help, all   ;-)
20:33 a2 so if you have a hung test, safest is to killall -9 glusterfs glusterfsd
20:34 a2 and let the job fail "gracefully" (and vote -1 and all)
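[editor's note] a2's recovery recipe is a one-liner; a sketch (the `|| true` guards against killall's non-zero exit when no matching process exists):

```shell
# Kill hung gluster daemons so the regression script fails on its own
# and Jenkins records the -1 naturally, rather than aborting from the
# web UI (which leaves the script running in the background).
killall -9 glusterfs glusterfsd 2>/dev/null || true
```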
20:34 a2 learnt this the hard way
20:34 a2 brb lunch
20:36 jdarcy Still being denied, but maybe it doesn't matter.
20:38 kkeithley1 still getting the  warning: /etc/hosts.allow, line 11: host name/address mismatch: 66.187.233.206 != nat-pool-12-rdu.redhat.com   in /var/log/secure
20:38 kkeithley1 but it's let me in 2+ times in the last five minutes
20:38 kkeithley1 from 66.187.233.206
20:42 kkeithley1 you don't have a .ssh/config that's pointing at some other pub key for *.gluster.org do you?
20:43 jdarcy That's the same key I use all the time for my Gerrit stuff.
20:48 jdarcy So you come in from 66.187.233.206 and it works, but I come in from 66.187.233.206 and get the "name/address mismatch" message.  Seems like the message is a bit misleading.
20:57 kkeithley1 yup
20:57 kkeithley1 strange
20:58 semiosis sounds like address is matched to kkeithley1, when jdarcy tries to come in from there, name doesnt match
20:58 semiosis some kind of user/ip tracking ssh server?
21:02 kkeithley1 it's just a centos 6.3 box with the bundled ssh. It lets me in multiple times, concurrently, from that IP
21:03 kkeithley1 jdarcy: I tried `restorecon -R -v .ssh` (on ~jdarcy/.ssh/) See if that made a diff
21:03 bfoster xavih: FYI. I got some interesting profile results on bug 903175, I'd be curious if you reproduce similar numbers.
21:03 glusterbot Bug http://goo.gl/dCDwb unspecified, unspecified, ---, bfoster, ASSIGNED , Possible performance issue with md-cache and quick-read
21:04 jdarcy Yeah, I was just trying something like that on another machine.
21:04 jdarcy Nope.
21:08 gbrand_ joined #gluster-dev
21:10 kkeithley1 found a blog that claims CentOS 6 selinux (http://blog.firedaemon.com/2011/07/27/passwordless-root-ssh-public-key-authentication-on-centos-6/) I did their fix on your .ssh/ and .ssh/authorized_keys2 file.
21:10 kkeithley1 s/selinux/selinux is borken/
21:13 kkeithley1 well, wtf
21:14 jdarcy I definitely suspect it's something like that.  Lots of my remote logins suddenly became problematic a month or two ago.
21:19 kkeithley1 the box has selinux disabled
21:20 kkeithley1 so it wasn't that
21:24 jdarcy Interestingly, I have the same problem with kibblesnbits.
21:24 jdarcy Perfectly valid keys, can't be used on that host.
21:25 jdarcy Screw it.  How about if you just "kill -9 -r gluster" so regressions can run again, and we'll solve this some other day?
22:02 kkeithley1 -r ?
22:03 kkeithley1 earlier today I was watching and there were gluster processes starting and stopping. Whatever is running now has been running for a while though.
22:07 jdarcy BTW, I just figured out a similar-symptom problem on kibblesnbits.  Turns out my home directory (not .ssh) had mode 775.  Change to 755 and voila!
22:08 jdarcy (For those not at Red Hat, kibblesnbits is one of our generic shell machines in Westford.  They're all named after brands of dog food.  Get it?)
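[editor's note] The kibblesnbits fix works because sshd's StrictModes check (on by default) refuses public-key auth when the home directory is group- or world-writable. A sketch of the tightening, against a throwaway directory rather than a real home:

```shell
# StrictModes rejects keys under a group-writable (775) home dir;
# 755 on the home dir and tight modes beneath it satisfy the check.
home=$(mktemp -d)                  # stand-in for a real home directory
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 755 "$home"                  # was 775 in jdarcy's case
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
```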
22:08 a2 oh
22:08 a2 :p
22:09 a2 so are you or not able to login to build.gluster.org?
22:09 a2 or don't you just care anymore :)
22:09 jdarcy Still failing.
22:09 jdarcy Some day I guess we'll have to fix this so I can fix regression failures.
22:09 a2 who added the entries to /etc/hosts.allow?
22:10 * Technicool doesn't use hosts.{allow,deny}
22:10 a2 have purged hosts.* entries.. can you try one last time?
22:11 jdarcy Same thing.  Can you check the permission on ~jdarcy to make sure it's not the same problem?
22:12 a2 your authorized_keys was perm 700
22:12 jdarcy Ssh's error messages are worse than ours, apparently.
22:12 jdarcy That should be OK.  It's what I normally use.
22:12 a2 there are spurious newlines and whitespaces in your authorized_keys
22:13 jdarcy That could be a problem.
22:14 a2 fixed
22:14 jdarcy And it works!
22:15 a2 yeah, that was it.. it was as though your authorized_keys was s#\ #\n#
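[editor's note] A wrapped key is easy to detect, since every authorized_keys entry must occupy exactly one line. A hypothetical check (assuming keys of the common ssh-rsa form; the key material below is fake):

```shell
# Demo of the breakage a2 found: the same key intact, then split
# across lines. One line per key, or the file is garbled.
good=$(mktemp); printf 'ssh-rsa AAAAB3Nza key1@host\n' > "$good"
bad=$(mktemp);  printf 'ssh-rsa AAAAB3\nNza key1@host\n' > "$bad"

check() {   # line count must equal key count
    [ "$(grep -c '' "$1")" -eq "$(grep -c '^ssh-' "$1")" ]
}
check "$good" && echo "good file ok"
check "$bad"  || echo "bad file looks wrapped"
```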
22:15 jdarcy So, do we want to kill the daemons, or kill the job from Jenkins first?
22:15 a2 daemons first
22:15 a2 let jenkins always complete naturally
22:16 a2 (till we figure out how to do a proper cleanup via "abort job")
22:16 jdarcy OK, here goes.
22:16 a2 wow, there are three instances of "prove" running
22:17 a2 a killall prove would also help, I guess!
22:18 jdarcy OK, let's see if it clears.
22:18 jdarcy Thanks for fixing the login issue.
22:18 a2 no prob
22:19 jdarcy I'll fire up another run just to make sure we're clean.
22:19 a2 you can do a test run of master HEAD by leaving change_id as 0
22:19 jdarcy Yep.  Off and running.
22:21 a2 i see stale gluster daemons still running
22:21 a2 they should get cleaned up in cleanup() i guess
22:31 jdarcy Looks like "prove" can already run failed tests first, or slowest last.  Just need to figure out where it saves that state so we can restore it run to run.
22:35 jdarcy No way to make it stop when a test fails, though.  Lame.
22:36 jdarcy Ideal would be to run recently-failed tests first, then others, both sets in increasing-time order, and bail after the first failure.
22:37 a2 yeah, looks like it purposely wants to continue even if one script fails and give you a full report of everything that fails
22:38 a2 and we probably need to make rpm.t more intelligent.. if no new files were added or removed, and all changes are limited to .c and .h files, then just exit rpm.t real early
22:39 a2 ndevos, and we probably need to make rpm.t more intelligent.. if no new files were added or removed, and all changes are limited to .c and .h files, then just exit rpm.t real early -- the reason I pinged you yesterday :-)
22:40 a2 rpm.t alone is a solid 5+ minutes
22:40 jdarcy Right.  That's a significant percentage of the total test time.
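[editor's note] The early-exit a2 proposes is a small filter; a hedged sketch where the changed-file list is a hypothetical sample standing in for `git diff --name-only` output:

```shell
# Let rpm.t exit early when every changed file is .c or .h -- such a
# change cannot affect packaging, so the 5+ minute rpm build is waste.
changed='xlators/cluster/afr.c
libglusterfs/src/common-utils.h'          # hypothetical sample
if printf '%s\n' "$changed" | grep -qvE '\.[ch]$'; then
    echo "non-source files touched: run the full rpm.t"
else
    echo "only .c/.h changed: rpm.t can exit early"
fi
```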
22:42 jdarcy My wife is grilling little bits of our home-made paneer.  Tasty.
22:42 a2 spiced?
22:43 jdarcy Not the paneer itself, but she made a saag sauce as well.  And naan.
22:43 jdarcy Finally got a chance to use some of that cardamom.
22:43 a2 ah!
22:44 a2 I use cardamom everyday in tea (chai)
22:44 jdarcy Yeah, I guess I've probably had it that way too.  Never looked too closely at what was in the various chai blends I've used.
22:45 jdarcy Now that I think about it, that's probably one of the major flavors in most of them.  (BTW test is still running.)
22:46 a2 yep, cardamom and crushed ginger are the most essential ingredients of masala chai
22:49 jdarcy Test succeeded.  :)
22:49 jdarcy So we're good again, until the next not-so-perfect patch.  ;)
22:49 jdarcy And that means it's dinner time.  See you tomorrow!
22:50 a2 see ya
23:11 jbrooks joined #gluster-dev
23:19 xavih bfoster: I've read it. Tomorrow I'll try to find time to do some tests
23:36 foster xavih: cool, no rush. thoughts on the potential md-cache option appreciated as well :)
