
IRC log for #gluster-dev, 2014-01-15


All times shown according to UTC.

Time Nick Message
01:09 hagarth joined #gluster-dev
01:25 shyam joined #gluster-dev
02:33 bharata-rao joined #gluster-dev
02:42 bala joined #gluster-dev
06:32 ilbot3 joined #gluster-dev
06:32 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
06:40 bala joined #gluster-dev
06:43 pk joined #gluster-dev
06:59 bala joined #gluster-dev
08:22 hagarth joined #gluster-dev
08:44 shyam joined #gluster-dev
08:46 hagarth joined #gluster-dev
09:06 raghu joined #gluster-dev
09:28 kanagaraj joined #gluster-dev
10:00 bala joined #gluster-dev
10:22 sac joined #gluster-dev
10:59 xavih joined #gluster-dev
11:02 kshlm joined #gluster-dev
11:53 shyam joined #gluster-dev
11:59 itisravi_ joined #gluster-dev
12:03 edward2 joined #gluster-dev
12:20 ppai joined #gluster-dev
12:34 pk left #gluster-dev
12:54 hagarth joined #gluster-dev
13:11 hagarth 3.5beta1 is on its way out
13:19 kkeithley got the email, but the tarball isn't there
13:19 kkeithley are we waiting for an rsync or something?
13:21 hagarth let me check
13:25 hagarth done, there was a small problem with the release script - fixed now.
13:25 kkeithley yup, I see it now. Thanks
13:30 ira joined #gluster-dev
14:02 mohankumar joined #gluster-dev
14:57 kdhananjay joined #gluster-dev
14:58 pk joined #gluster-dev
14:59 johnmark :O
15:28 bala joined #gluster-dev
15:30 jclift Aaargh.  Just noticed that the over-the-wire compression for volumes has super misleading option naming. :(
15:31 jclift "gluster volume set <vol_name> compress on"
15:31 jclift That directly sounds like on disk volume compression. :(
15:31 * jclift thinks "transfer_compression" or similar would have been better.
15:31 ndevos jclift: doesn't the "gluster volume set help" output explain it?
15:31 jclift Oh well
15:32 jclift ndevos: Hmmm, hadn't thought to check it.  Just noticed it in the list of 3.5 changes. :)
15:33 jclift ndevos: That might be a save.  Non-optimal, but semi-forgivable.
15:33 jclift Maybe I'm being too black-and-white today... :)
15:33 ndevos well, be quick and file a bug for it, as long as there is no GA release it can be changed!
15:34 jclift Good thinking.
15:34 * jclift goes off and does that
15:37 jobewan joined #gluster-dev
15:44 jclift Done.
15:44 jclift https://bugzilla.redhat.com/show_bug.cgi?id=1053670
15:44 glusterbot Bug 1053670: high, unspecified, ---, kaushal, NEW , "compress" option name for over-wire-compression is extremely misleading and should be changed
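For reference, a rough sketch of the command pair under discussion; "myvol" is a placeholder volume name, and the option is shown with the name quoted above from the 3.5 beta, which is exactly what the bug asks to change:

    # Enable the over-the-wire compression translator on a volume
    gluster volume set myvol compress on
    # Check the built-in description before assuming it means on-disk compression
    gluster volume set help | grep -A 3 compress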
15:45 hagarth1 joined #gluster-dev
15:47 ndevos jclift: you put the 3.5 blocker in there too?
15:47 jclift ndevos: Didn't think of that.
15:47 jclift ndevos: Liking irssi btw.  That plus screen seems like a win.
15:48 ndevos jclift: it definitely is for me :)
15:48 jclift :)
15:51 jclift ndevos: k, I'm stumped.  How do I put in the 3.5 blocker thing?  Are we doing it via keyword, dependson bz, or ?
15:51 * jclift is looking at other Gluster BZ's but hasn't seen anything like it yet
15:52 ndevos jclift: you open your bug, and add a 'blocks' bug
15:52 hagarth https://bugzilla.redhat.com/show_bug.cgi?id=1049981 is the 3.5 tracker bug
15:52 glusterbot Bug 1049981: unspecified, unspecified, ---, vbellur, NEW , 3.5.0 Tracker
15:52 jclift Excellent guys, thanks. :)
15:52 ndevos jclift: when you've done that the tracker should have been updated with a 'depends on' bug (yours)
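The same blocks/depends-on link can also be set without the web UI; a minimal sketch using the Bugzilla REST API, assuming a Bugzilla version that exposes /rest and an API key in $BZ_API_KEY (the bug numbers are the ones from this log):

    # Mark bug 1053670 as blocking the 3.5.0 tracker bug 1049981
    curl -s -X PUT -H "Content-Type: application/json" \
      -d '{"blocks": {"add": [1049981]}}' \
      "https://bugzilla.redhat.com/rest/bug/1053670?api_key=$BZ_API_KEY"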
15:53 jclift ndevos hagarth: Cool, looks like that worked. :)
15:54 hagarth jclift: cool :)
15:55 ndevos jclift: looks like you're the first :D
15:55 jclift Heh
15:55 jclift And huge typo in my problem description.  s/encryption/compression/  *sigh*
15:56 jclift Well I got the point across anyway
15:58 pk left #gluster-dev
16:00 jclift kkeithley: Maybe power it off, backup the vm, and then do a full yum update after powering it on again?
16:00 jclift Though, a non-updated 6.3 sounds very exploitable. :(  Is a new rebuild overkill?
16:00 kkeithley I don't have access to the host/hypervisor
16:00 kkeithley no doubt
16:01 kkeithley no doubt exploitable. Who has cycles to do a rebuild? Or even just a backup-and-update?
16:01 jclift Well, we know that the regression tests no longer kernel panic modern kernels.
16:01 jclift So, my vote is to just update it when you log in.
16:02 jclift "see what happens"... and fix anything that's broken
16:02 kkeithley lol
16:02 jclift That's from a "minimal effort" perspective tho. :(
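A minimal sketch of that "just update it and see" path on an EL6 box, assuming console access is available in case it does not come back cleanly:

    # Keep a record of the current package set, then update everything including the kernel
    rpm -qa | sort > /root/rpm-list-before-update.txt
    yum -y update
    # The new kernel only takes effect after a reboot
    reboot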
16:03 kkeithley I've had bad luck with things not coming back properly when I've rebooted it in the  past
16:03 jclift If we want to backup something first, and you can get a login, then maybe rsync /etc/ and /var/ to another box for safekeeping/reference first
16:03 jclift k
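A rough sketch of that safekeeping copy; "backup-host" and the destination path are placeholders, and -aAX preserves permissions, ACLs and extended attributes:

    # Pull config and state off the box before touching anything
    rsync -aAXv /etc /var backup-host:/srv/build-gluster-org-backup/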
16:03 jclift Who has access to the host?
16:04 kkeithley not sure. Avati and hagarth maybe
16:04 hagarth jclift: a2_ has, I don't.
16:05 jclift If we're wanting to play it absolutely safe, we're going to need a backup before changing anything major.
16:05 kkeithley right
16:06 jclift a2_: You around?  Able to give host level access to build.gluster.org to ppl, or maybe do a vm backup? :D
16:08 kkeithley anyway, besides having empty iptables, I can ssh from some machines here in the office but not others. Everything looks to b.g.o like it's coming from the same place, i.e. redhat.com NAT gateway, confirmed with wireshark. The SYN arrives, but b.g.o just never ACKs certain ones. very strange.
16:10 kkeithley anyway, not wanting to change anything major. The kernel might be major, but it shouldn't impact anything else.
16:10 kkeithley might be major in some people's eyes
16:11 kkeithley but I'm not getting any warm fuzzies from anyone else about doing that.
16:12 pk1 joined #gluster-dev
16:12 pk1 left #gluster-dev
16:17 jclift kkeithley: Is time on the box not set correctly?
16:17 hagarth joined #gluster-dev
16:18 kkeithley dunno, let me check
16:18 jclift The "some hosts can use encryption between it, but others can't" is a red flag that generally the time setting is ~5 mins out (or some other threshold)
16:19 kkeithley it's about 2 min fast
16:20 jclift Hmmmm.  2 mins doesn't sound enough.
16:20 jclift I thought the threshold for "ssh won't connect" was 5
16:21 jclift Maybe try ntpdating it anyway, so it's in sync with reality, and see what happens then?
16:21 jclift Either that, or maybe the host is doing something strange with IP packets.  Broken physical card on host maybe? (wouldn't explain inconsistent behaviour from different other boxes tho)
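A quick way to check and fix the clock, as suggested above; ntpdate is the usual RHEL 6 era tool and the pool hostname is just an example:

    # Show the offset without changing anything
    ntpdate -q pool.ntp.org
    # Step the clock (use -u if ntpd is already bound to port 123), then keep it synced
    ntpdate -u pool.ntp.org
    service ntpd start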
16:28 kkeithley on the advice of an eng-ops guy here I tried disabling all the off-load on the card. Didn't make any difference
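For reference, the offload settings in question are normally toggled with ethtool; a sketch assuming the interface is eth0:

    # Show the current offload settings
    ethtool -k eth0
    # Turn off the common offloads (TSO, GSO, GRO and checksum offload)
    ethtool -K eth0 tso off gso off gro off tx off rx off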
16:30 shyam joined #gluster-dev
16:37 kkeithley so far the only thing I see in common is all the hosts that can't connect, i.e. that don't get the SYN ACKed, are vms.  E.g. I can ssh from gqa-hv, the RHEL host. I can't ssh from qga-build2, the NetBSD guest running on gqa-hv.
16:38 kkeithley I can ssh from my fedora20 desktop. I can't ssh from any of the vm guests I have running.
16:38 kkeithley But those vm guests can all ssh to other machines, e.g. download.gluster.org, my home gateway, etc., etc.
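The "SYN arrives but never gets ACKed" pattern can be confirmed with a capture on the server side; a minimal sketch where eth0 and the client address 10.0.0.5 are placeholders:

    # Watch the ssh handshake attempts from one of the failing guests
    tcpdump -nni eth0 'tcp port 22 and host 10.0.0.5 and tcp[tcpflags] & (tcp-syn|tcp-ack) != 0'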
16:45 shyam1 joined #gluster-dev
16:47 jclift Hmmm...
16:47 jclift It would be good if a2_ was around.
16:48 jclift a2_: Ping Ping Ping
16:51 jclift Without access to the host, we have limited options.  If the vm doesn't come back up... [ugh]
16:51 kkeithley well, he's in California; it's still pretty early there
16:51 jclift Ahhh.
16:51 kkeithley I haven't had problems with the vm coming back up, it's all the Jenkins stuff
16:52 jclift k.  I was just thinking "update the kernel" and see what happens.
16:52 jclift The "see what happens" thing tho...
16:53 kkeithley yeah
16:53 kkeithley I may just do it anyway
17:01 ndk joined #gluster-dev
17:20 mohankumar joined #gluster-dev
17:23 lpabon_ joined #gluster-dev
17:37 lpabon joined #gluster-dev
17:50 mohankumar a2_: could you review my patches?
17:53 mohankumar a2_: http://lists.nongnu.org/archive/html/gluster-devel/2013-12/msg00156.html
18:22 lpabon is build.gluster.org down?
18:27 glusterbot` joined #gluster-dev
18:34 kkeithley lpabon: yes
18:43 johnmark ugh
18:45 avati joined #gluster-dev
18:45 avati :O
18:45 avati jclift: ping
18:52 johnmark VM appears to be down
18:53 johnmark avati: can you log into the main server and reboot the VM?
18:53 johnmark speaking of, what is the main server? dev.gluster.org? git.gluster.org?
18:53 johnmark avati: ^^^
19:10 avati which vm appears to be down? all of them seem to be up and have high enough uptimes to indicate they haven't been down in the last few months..
19:10 avati [root@build ~]# uptime 09:36:20 up 81 days,  9:17,  4 users,  load average: 0.49, 0.15, 0.05
19:11 avati [root@dev ~]# uptime 11:11:17 up 617 days, 21:08,  2 users,  load average: 0.00, 0.00, 0.00
19:11 johnmark build.gluster.org
19:11 johnmark kkeithley: ^^^
19:12 johnmark avati: [johnmark@jmw-f18 ~]$ ping build.gluster.org
19:12 johnmark PING build.gluster.org (184.107.76.12) 56(84) bytes of data.
19:12 avati oh, it stopped working just now for me too
19:13 avati i could ssh and check uptime just a couple mins ago!
19:16 avati somebody shutdown/rebooted build.gluster.org in the last few mins?
19:16 kkeithley yes, me
19:16 kkeithley half an hour ago
19:17 avati booted it
19:17 kkeithley dunno how you were able to sign on, I was signed off almost immediately after I issued the 'init 6'
19:17 avati what had gone wrong?
19:18 avati did it need a reboot?
19:18 kkeithley I updated the kernel to see if it resolves an issue I've been having trying to set up a reverse ssh tunnel
19:19 kkeithley reverse ssh tunnel to jenkins slaves here in the westford lab
19:19 kkeithley do you want the gory details?
19:19 avati nope :)
19:19 avati reverse ssh needs kernel update?
19:20 kkeithley not per se, no.
19:20 avati anyways.. waiting for xfs mount to finish.. still booting
19:20 kkeithley tcp needs a kernel update. Maybe. We'll see once it's back up
19:21 avati xfs takes a hell of a long time to mount!
19:22 kkeithley yeah, old versions. It should be better in newer distros. My 4TB xfs volume (with vmguest images) used to take over an hour to mount. I had dchinner in the kernel fs team look at it. Now it's much faster
19:30 johnmark ok - so do I need to have Carlos let us into the portal, or are we good?
19:30 johnmark kkeithley: ^^^
19:31 kkeithley machine is on its way up.
19:31 johnmark ok
19:31 kkeithley is up
19:47 kkeithley we should still have someone in this TZ who can get into the portal, onto the console, to boot/reboot it when necessary
19:53 kkeithley 3.5.0beta1 RPMs available for testing. RPMs are in the YUM repos at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta1/
19:53 kkeithley Watch for the announcement of the "Gluster Test Weekend" coming this  weekend, and remember, you heard it here first, unless you heard it here third. And feel free to test it  without waiting for the announcement.
19:54 kkeithley That's RPMs for Fedora, RHEL6, and CentOS6.
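A rough sketch of pulling those packages onto a CentOS 6 test box; the exact directory layout under the qa-releases URL is an assumption here, so check the URL above for the real path before copying the baseurl:

    # Hand-written repo definition pointing at the beta packages
    cat > /etc/yum.repos.d/glusterfs-350beta1.repo <<'EOF'
    [glusterfs-350beta1]
    name=GlusterFS 3.5.0beta1 QA builds
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta1/CentOS/epel-6/x86_64/
    enabled=1
    gpgcheck=0
    EOF
    yum install glusterfs glusterfs-server glusterfs-fuse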
23:18 ilbot3 joined #gluster-dev
23:18 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
23:18 semiosis joined #gluster-dev
23:24 semiosis joined #gluster-dev
23:27 semiosis_ joined #gluster-dev
23:29 semiosis joined #gluster-dev
