
IRC log for #gluster-dev, 2015-03-06


All times shown according to UTC.

Time Nick Message
00:14 wushudoin joined #gluster-dev
00:57 _ndevos JustinClift, misc: it happened again on build.gluster.org: review.gluster.org[0: 10.3.129.11]: errno=No route to host
01:11 topshare joined #gluster-dev
01:15 hagarth joined #gluster-dev
01:40 rafi joined #gluster-dev
02:12 rjoseph joined #gluster-dev
02:13 ndevos JustinClift, misc: it works when git uses http as transport to get to review.gluster.org, the error happens with the git:// url
02:14 ndevos I've adjusted the nightly build env for now, but I think it's something you will want to look at
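
For reference, the workaround ndevos describes (making git fall back to http instead of git:// for review.gluster.org) can be done with a URL rewrite rule in the git config; a minimal sketch using the URLs mentioned above:

    # Rewrite git:// URLs for review.gluster.org to http://, so clones and
    # fetches use the working transport while port 9418 is unreachable.
    git config --global url."http://review.gluster.org/".insteadOf \
        "git://review.gluster.org/"

    # Existing git:// remotes now transparently fetch over http:
    git clone git://review.gluster.org/glusterfs
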
02:45 anoopcs joined #gluster-dev
02:54 rjoseph joined #gluster-dev
03:43 nishanth joined #gluster-dev
04:18 bala joined #gluster-dev
04:30 nkhare joined #gluster-dev
04:33 anoopcs joined #gluster-dev
04:37 rafi joined #gluster-dev
04:45 kshlm joined #gluster-dev
04:47 ndarshan joined #gluster-dev
04:53 gem joined #gluster-dev
04:55 dlambrig joined #gluster-dev
04:58 ppai joined #gluster-dev
04:58 kdhananjay joined #gluster-dev
04:59 jiffin joined #gluster-dev
05:09 kdhananjay left #gluster-dev
05:09 Apeksha joined #gluster-dev
05:10 spandit joined #gluster-dev
05:11 deepakcs joined #gluster-dev
05:17 kdhananjay joined #gluster-dev
05:23 kdhananjay joined #gluster-dev
05:25 rjoseph joined #gluster-dev
05:37 aravindavk joined #gluster-dev
05:56 lalatenduM joined #gluster-dev
06:04 bala joined #gluster-dev
06:17 raghu joined #gluster-dev
06:36 overclk joined #gluster-dev
06:50 vimal joined #gluster-dev
07:53 Debloper joined #gluster-dev
08:53 bala1 joined #gluster-dev
08:56 lalatenduM hchiramm++
08:56 glusterbot lalatenduM: hchiramm's karma is now 19
08:56 hchiramm lalatenduM++
08:56 glusterbot hchiramm: lalatenduM's karma is now 71
08:58 tigert hello
08:58 tigert preparing an update to the community page listing the meeting etc
09:08 anrao joined #gluster-dev
09:20 rafi joined #gluster-dev
09:27 tigert should go live soon, let me know if it has some issues ;) (best way to get feedback: push it live!)
09:27 tigert but it's hard to make what's there now any worse
09:40 kshlm All netbsd-regression runs are hanging.
09:40 kshlm I noticed some earlier today, and there are more now.
09:41 kshlm All of them seem to be hung at afr/resolve.t
09:44 tigert http://glustermm-tigert.rhcloud.com/community/ < this is how it will be once the site syncs and cache flushes
09:52 kshlm Actually it's the test after basic/afr/resolve.t, that would be basic/afr/root-squash-self-heal.t
10:42 misc tigert: what about gluster-infra ?
10:43 rjoseph joined #gluster-dev
10:50 tigert misc: will let them know too
10:50 tigert misc: it's just that I have asked for feedback and not received much (everyone being busy with their stuff)
10:50 tigert so I will try a new approach :-)
10:51 tigert misc: autodeployment worked this time btw
10:51 tigert seems it went live fine now
10:52 misc tigert: I was not clear in fact, what about the gluster-infra ml on the page :)
10:52 misc and yeah the autodeploy works for middleman
10:52 misc I have more trouble for the awestruct one
10:53 misc ( in fact, i also have trouble not duplicating code )
10:55 firemanxbr joined #gluster-dev
11:14 anrao joined #gluster-dev
11:21 Debloper joined #gluster-dev
11:36 anrao joined #gluster-dev
12:30 ira joined #gluster-dev
12:51 anoopcs joined #gluster-dev
13:23 dlambrig joined #gluster-dev
13:39 ppai joined #gluster-dev
13:44 JustinClift ndevos: Interesting.
13:46 JustinClift ndevos: Sounds like we need to open another firewall port on build.gluster.org.  Guessing it'll be the port for the git daemon.
13:46 * JustinClift looks
13:46 lalatenduM JustinClift, you are on on #centos-devel ?
13:46 lalatenduM not on*
13:47 JustinClift lalatenduM: Nope
13:47 JustinClift Oh, meeting time?
13:47 lalatenduM JustinClift, nope
13:48 lalatenduM JustinClift, so I forgot to send a reminder mail yesterday abt the meeting. Now I am thinking of cancelling the meeting today as others might not have seen the mail about it
13:49 lalatenduM JustinClift, what do you think?
13:49 JustinClift lalatenduM: Send the mail, but ask if it's ok to still have it and who can attend
13:50 JustinClift lalatenduM: Only cancel it if you don't get a useful response, or people say they can't attend
13:50 JustinClift ndevos: Easy to replicate the problem btw.  git clone git://review.gluster.org/glusterfs
13:51 lalatenduM JustinClift, I sent a mail about 12 hours ago. But a few SIG members have requested that the reminder be sent at least 24 hours in advance
13:52 JustinClift lalatenduM: Ahhh
13:52 JustinClift In that case, yeah, rescheduling to next Friday maybe?
13:52 lalatenduM JustinClift, yeah
13:52 JustinClift And 24+ hour previous reminder sounds like people's preference then too :)
13:54 misc JustinClift: port 9418 if you want to open something
13:54 misc I cannot connect, my isp blocks me :)
13:55 JustinClift misc: Thx, just opened it, and tested.
13:55 JustinClift Working ok now.
13:55 JustinClift ndevos: k, NOW it should be fixed.  And yeah, was another firewall rule thing.
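
In other words, the git daemon's port (9418/tcp) had to be allowed through the firewall on build.gluster.org. A sketch of what that looks like on an iptables-based EL host; the exact rule used is an assumption:

    # Allow inbound connections to the git daemon (TCP 9418)
    iptables -I INPUT -p tcp --dport 9418 -j ACCEPT
    service iptables save    # persist across reboots on EL6-style hosts

    # Re-test from outside the host:
    git clone git://review.gluster.org/glusterfs
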
13:56 misc I guess having a bit heavier monitoring could have detected it sooner
13:57 misc even if it was fixed fast, so no big deal
13:57 lpabon joined #gluster-dev
13:58 vipulnayyar joined #gluster-dev
14:10 shaunm joined #gluster-dev
14:11 JustinClift misc: slave23 is acting weird.  ssh connection succeeds, but immediately closes.  Both from jenkins trying it, and from me trying at the command line.  Any ideas?
14:13 JustinClift misc: And slave29 isn't responding to pings.  Want to look into it?  If not, I'll just rebuild it. :)
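
A quick way to dig into a connection that succeeds and then immediately closes (the slave23 symptom above) is the ssh client's verbose mode; a sketch with an illustrative hostname:

    # -vvv shows the full handshake; if authentication completes and the
    # session still drops at once, the cause is usually server-side.
    ssh -vvv jenkins@slave23 exit
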
14:19 shyam joined #gluster-dev
14:19 misc JustinClift: ok, looking
14:20 kkeithley wtf, I updated a patch that previously passed the build test. All I did was add comments to a bash script and now it's failing the el5 and el6 devrpms
14:20 misc mhh slave23 is not in salt
14:20 misc slave29 too
14:23 misc JustinClift: slave29 was killed by the /etc/passwd bug
14:23 misc I'll reboot and fix it
14:26 misc JustinClift: slave23 has an iptables segfault
14:32 misc Mar  6 04:19:54 slave23 kernel: nf_conntrack: table full, dropping packet.
14:32 jiffin joined #gluster-dev
14:32 misc lhhh
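
That kernel message is the likely culprit for the dropped ssh sessions: once the netfilter connection-tracking table is full, new connections are silently dropped. A common mitigation, sketched with an illustrative value (the sysctl path varies slightly by kernel version):

    # Compare current conntrack usage against the limit
    cat /proc/sys/net/netfilter/nf_conntrack_count
    cat /proc/sys/net/netfilter/nf_conntrack_max

    # Raise the limit (value is illustrative); add to /etc/sysctl.conf
    # to make it persistent.
    sysctl -w net.netfilter.nf_conntrack_max=262144
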
14:34 ndevos kkeithley: welcome to regression roulette!
14:34 bala joined #gluster-dev
14:35 kkeithley ndevos: indeed, but this isn't even the regression tests.
14:35 ndevos kkeithley: oh, yeah, but those fail sometimes too, although not *that* often
14:36 kkeithley oh, reading back through the gerrit submits, it's actually been failing all along.
14:36 ndevos lol
14:36 kkeithley ganesha-ha scripts and configs were failing in prior commits. Let me fix the .spec file
14:37 ndevos hmm, I've left a comment about that at least twice already :-/
14:37 kkeithley yep, I see those ;-)
14:38 kkeithley maybe we should have a glusterfs-ganesha subpackage?
14:39 ndevos yeah, I guess that makes sense, makes it easy to pull in any requirements too
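
A glusterfs-ganesha subpackage along the lines discussed would be declared in the .spec file roughly as below. This is only a sketch: the file paths and the nfs-ganesha dependency are assumptions, not the actual glusterfs.spec contents.

    %package ganesha
    Summary:   NFS-Ganesha HA scripts and configuration for GlusterFS
    Requires:  %{name}-server%{?_isa} = %{version}-%{release}
    Requires:  nfs-ganesha

    %description ganesha
    Scripts and sample configuration for integrating GlusterFS with
    NFS-Ganesha, including the ganesha-ha setup.

    %files ganesha
    %{_sysconfdir}/ganesha/ganesha-ha.conf.sample
    %{_libexecdir}/ganesha/*
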
14:51 misc JustinClift: so slave23 likely faced some kind of corruption
14:51 misc S.5..UG..    /sbin/ifconfig
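
That line is rpm verify output: each non-dot position marks a failed check (S = size, 5 = MD5 digest, U = user, G = group), meaning /sbin/ifconfig no longer matches what the package shipped, a classic sign of tampering. The same check can be run like this:

    # Verify one file against the package that owns it
    rpm -Vf /sbin/ifconfig

    # Or verify everything installed and look for modified binaries
    rpm -Va | grep '^..5'
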
15:04 misc JustinClift: ok, slave23 is broken beyond repair I think
15:05 misc JustinClift: I'll try one last thing
15:08 deepakcs joined #gluster-dev
15:09 misc JustinClift: ok, correction, the server was likely rooted
15:10 JustinClift misc: Rooted as in haxxor style rooted?
15:10 JustinClift misc: Or Rooted as in "busted" ?
15:11 JustinClift misc: If haxxor style, should we get the RH security team to take a look?  These VMs are generally pretty much patched up properly
15:13 misc JustinClift: rooted as haxor style
15:14 misc I am taking it offline and doing a first pass
15:16 JustinClift misc: Cool.
15:17 JustinClift I've just killed most of the backed up NetBSD7 regression tests, and rebooted the NetBSD 7 slaves they were running on.
15:17 JustinClift (after asking Manu)
15:19 JustinClift misc: If someone's managed to compromise our Jenkins pw, it's kinda strange they didn't take out the rest of the slaves too.
15:19 JustinClift But, yeah, see what you can find. :)
15:28 misc JustinClift: ok yeah, was my fault
15:28 misc forgot to reset the root password while I diagnosed it
15:29 JustinClift Heh, so it had no root password or something? ;)
15:29 JustinClift And was then pwned in like 30 seconds? ;)
15:30 misc it was toto
15:30 misc and it took a few hours to pwn it
15:30 JustinClift Heh
15:31 JustinClift So, nuke the VM and make a new one then yeah?
15:32 JustinClift misc: So, I guess that means the private key to communicate back to Jenkins master was potentially compromised too
15:33 JustinClift misc: Are you able to tell if they got into the Jenkins master from there?
15:34 wushudoin joined #gluster-dev
15:34 misc JustinClift: I suspect they didn't do anything, this was an automated worm
15:36 JustinClift misc: *whew*
15:37 JustinClift k, we'll go with that then. ;)
15:37 JustinClift misc: Am I ok to nuke that VM?
15:37 misc JustinClift: let me check a bit more
15:37 misc in fact, even the ddos failed since the tool failed to launch
15:38 JustinClift Heh.  Yeah, it's not even close to a full desktop install of rpms on these things.
15:40 kkeithley fix the problem, not the blame
15:40 johnmark kkeithley: werd
15:41 JustinClift kkeithley: ?
15:42 kkeithley fix the vuln, don't pin the blame
15:43 JustinClift It was the internet that did it :)
15:43 misc wow
15:43 misc the worm was 32 bits, that's why it failed
15:43 JustinClift Wow
15:43 JustinClift That's pretty old school
15:44 misc statically linked, for GNU/Linux 2.6.9,
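
Those details read like output from file(1), which is a safe first look at a suspicious binary; a hedged sketch (the path is hypothetical):

    # Identify the binary without executing it
    file /tmp/suspect.bin
    # e.g.: ELF 32-bit LSB executable, Intel 80386, statically linked,
    #       for GNU/Linux 2.6.9, not stripped

    # Dump printable strings to look for indicators (IPs, paths, C2 hosts)
    strings /tmp/suspect.bin | less
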
15:44 JustinClift Well, I guess if you want to set up a honey pot, you'll need to make it a 32 bit one. :D
15:45 misc I did in the past
15:45 JustinClift ;)
15:45 JustinClift misc: I'll leave you to kill the VM whenever you've lost interest in it.
15:46 JustinClift misc: slave29's still offline too, if you want to take a look at that?
15:46 kkeithley Anyway, "Fix the problem, not the blame" came from Michael Crichton's book "Disclosure" in 1994. Probably there were other sources of that before 1994, but that's where I learned it.
15:46 JustinClift Cool
15:47 misc JustinClift: thought I'd saved slave29 :/
15:47 JustinClift misc: 1/2 our Rackspace VMs are just idling at the moment, so no urgency
15:47 JustinClift misc: Hmmm, it's back online
15:47 JustinClift Ok, you probably did and I didn't notice :)
15:47 JustinClift misc: Yep, it's happy again.  Jenkins was able to do stuff with it.
15:48 misc same /etc/passwd bug
15:51 JustinClift misc: Cool.  Hopefully we see fewer of these now that the patch is merged into master
15:52 JustinClift misc: How'd the beers er... yesterday go?
15:54 misc JustinClift: i do not remember :/
15:54 JustinClift :D
15:55 ira joined #gluster-dev
16:08 JustinClift kkeithley: Your latest regression for 9538 is going to barf with a spurious failure again in a few minutes
16:09 JustinClift I'll retrigger it to run again, unless you get to it first ;)
16:09 kkeithley okay
16:23 misc JustinClift: so I killed slave23, you can restore it
16:23 JustinClift misc: Just saw the mailing list thread... looks like a login from India.
16:23 misc JustinClift: it was an automated worm, it tried for a few hours
16:23 JustinClift Cool. ;)
16:23 * JustinClift nukes the VM
16:24 hagarth joined #gluster-dev
16:25 misc https://blog.avast.com/2015/01/06/linux-ddos-trojan-hiding-itself-with-an-embedded-rootkit/
16:26 misc or https://www.fireeye.com/blog/threat-research/2015/02/anatomy_of_a_brutef.html
16:40 ira_ joined #gluster-dev
16:44 bala joined #gluster-dev
16:50 vipulnayyar joined #gluster-dev
17:19 ira_ joined #gluster-dev
17:19 shyam joined #gluster-dev
17:53 misc mhh so I managed to find a bug in salt ...
18:02 lalatenduM joined #gluster-dev
18:31 shyam joined #gluster-dev
18:34 shyam Is there a place where one could download nightly RPMs (or equivalent) built from the master branch for Gluster?
18:35 shyam I guess it is here, http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/ still poking around... :) so shout in case I am not looking at the right place
18:36 ndevos shyam: that is the right place
18:36 ndevos if it's missing, check fedora copr devos/glusterfs
18:37 shyam ndevos: Thanks... found what I was looking for (in the above link)
18:40 ndevos shyam: the package names have the git commit of what was built in their version, check if that is recent enough
18:41 ndevos there were some recent issues with the nightly builds, maybe the packages are a bit too old?
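
Since the nightly builds carry the git commit in their version, the freshness of a package can be checked straight from the rpm metadata; a sketch (the exact version layout shown is an assumption):

    # Installed package: the release typically embeds the build date and
    # short git hash, e.g. 0.20150306git1a2b3c4
    rpm -q --queryformat '%{NAME} %{VERSION}-%{RELEASE}\n' glusterfs

    # A downloaded rpm file, before installing:
    rpm -qp --queryformat '%{VERSION}-%{RELEASE}\n' glusterfs-*.rpm
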
19:12 wushudoin| joined #gluster-dev
19:29 _shaps_ joined #gluster-dev
20:37 vipulnayyar joined #gluster-dev
21:07 dlambrig left #gluster-dev
21:14 dlambrig joined #gluster-dev
21:17 jobewan joined #gluster-dev
21:19 vipulnayyar joined #gluster-dev
21:24 dlambrig left #gluster-dev
21:45 _shaps_ joined #gluster-dev
21:48 vipulnayyar joined #gluster-dev
21:50 badone_ joined #gluster-dev
22:07 gem joined #gluster-dev
23:22 bala joined #gluster-dev
23:28 badone_ joined #gluster-dev
23:33 badone joined #gluster-dev
23:54 hagarth joined #gluster-dev
