
IRC log for #gluster-dev, 2015-03-02


All times shown according to UTC.

Time Nick Message
00:59 Guest37017 joined #gluster-dev
01:17 bala joined #gluster-dev
02:39 hagarth joined #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:53 kshlm joined #gluster-dev
03:28 hagarth joined #gluster-dev
03:54 itisravi joined #gluster-dev
04:03 shubhendu joined #gluster-dev
04:24 ndarshan joined #gluster-dev
04:31 badone_ joined #gluster-dev
04:31 spandit joined #gluster-dev
04:36 anoopcs joined #gluster-dev
04:40 atinmu joined #gluster-dev
04:44 nkhare joined #gluster-dev
04:48 jiffin joined #gluster-dev
04:51 soumya joined #gluster-dev
04:54 jiffin1 joined #gluster-dev
04:56 jiffin joined #gluster-dev
04:59 jiffin1 joined #gluster-dev
05:05 ppai joined #gluster-dev
05:15 Apeksha joined #gluster-dev
05:17 Manikandan joined #gluster-dev
05:17 Manikandan_ joined #gluster-dev
05:23 prasanth_ joined #gluster-dev
05:29 vimal joined #gluster-dev
05:31 kdhananjay joined #gluster-dev
05:32 rjoseph joined #gluster-dev
05:34 bala joined #gluster-dev
05:39 hagarth joined #gluster-dev
05:46 raghu joined #gluster-dev
05:48 overclk joined #gluster-dev
06:12 deepakcs joined #gluster-dev
06:44 anrao joined #gluster-dev
06:46 bala joined #gluster-dev
07:12 hagarth joined #gluster-dev
07:19 bala joined #gluster-dev
07:21 lalatenduM joined #gluster-dev
07:32 soumya joined #gluster-dev
07:40 dev-zero joined #gluster-dev
08:35 ndarshan joined #gluster-dev
08:41 pranithk joined #gluster-dev
08:50 hagarth joined #gluster-dev
09:15 RaSTar joined #gluster-dev
09:25 aravindavk joined #gluster-dev
09:28 atinmu joined #gluster-dev
09:51 ndarshan joined #gluster-dev
09:54 hagarth joined #gluster-dev
10:03 kshlm I just killed and retriggered 6 rackspace-regression jobs which were going on for between 19 hours and over 4 days.
10:03 kshlm No two were stuck at the same place.
10:11 ppai joined #gluster-dev
10:16 ira_ joined #gluster-dev
10:56 firemanxbr joined #gluster-dev
11:05 hchiramm lalatenduM++
11:05 ndarshan joined #gluster-dev
11:05 glusterbot hchiramm: lalatenduM's karma is now 69
11:08 kanagaraj joined #gluster-dev
11:14 ndarshan joined #gluster-dev
11:36 rsf joined #gluster-dev
11:54 tigert hmm
12:00 ppai joined #gluster-dev
12:49 ppai joined #gluster-dev
12:49 lalatenduM joined #gluster-dev
12:54 hagarth joined #gluster-dev
13:07 bfoster joined #gluster-dev
13:19 ppai joined #gluster-dev
13:42 kshlm joined #gluster-dev
13:55 raghu left #gluster-dev
14:01 shubhendu joined #gluster-dev
14:06 soumya joined #gluster-dev
14:10 overclk joined #gluster-dev
14:14 hchiramm hagarth, I am not able to hear anything on the bridge..
14:19 tigert joined #gluster-dev
14:30 shyam joined #gluster-dev
15:00 lalatenduM joined #gluster-dev
15:32 dlambrig joined #gluster-dev
15:32 _Bryan_ joined #gluster-dev
15:49 JustinClift Hmmm, some slaves need fixing.
15:49 ira joined #gluster-dev
15:50 bala joined #gluster-dev
15:50 ira joined #gluster-dev
16:09 shyam joined #gluster-dev
16:17 deepakcs joined #gluster-dev
16:21 johnmark joined #gluster-dev
16:41 wushudoin joined #gluster-dev
16:49 vipulnayyar joined #gluster-dev
17:02 jobewan joined #gluster-dev
17:14 shyam joined #gluster-dev
17:18 vipulnayyar joined #gluster-dev
17:24 dlambrig anyone having trouble connecting to review.gluster.org ?
17:41 firemanxbr dlambrig, same for me.
17:42 firemanxbr dlambrig, on the infra mailing list JustinClift mentioned a problem with Rackspace + Xen; I believe he is investigating that issue.
17:44 dlambrig ok
17:52 shyam left #gluster-dev
17:53 firemanxbr dlambrig, gerrit is alive, please re-try
17:55 lalatenduM joined #gluster-dev
17:56 JustinClift dlambrig: Sorry, it was a firewall issue
17:56 dlambrig thanks :)
17:56 JustinClift dlambrig: I've been adjusting the rule set, and it seems like something modified iptables directly and never committed the changes to disk
17:56 JustinClift dlambrig: So, when I did stuff... bang, no web access ;)
17:56 JustinClift Fixed now
18:02 shyam joined #gluster-dev
18:03 JustinClift dlambrig: For those two CR's of yours with hanging regression tests... I'm not sure what to do
18:03 JustinClift dlambrig: Should I leave them hanging so someone can log in and investigate, or should I kill the jobs, and put the slaves back in service?
18:04 JustinClift dlambrig: We have some spare capacity atm (whew :>), so it's not urgent to get them running productively right away
18:05 dlambrig I do not know how soon the AFR folks will get back to you.
18:05 JustinClift dlambrig: I kind of suspect they won't, because it's not one of their CR's, and it's not on a release nor master branch
18:06 JustinClift dlambrig: So, they could say "ask the CR owner"
18:06 JustinClift Which is you. ;)
18:06 dlambrig Justin: you could probably return them to the pool, not sure how my code would have broken a perl script in an AFR test (if I recall your email correctly)
18:06 JustinClift Sure, no worries :)
18:07 dlambrig Justin: ok, thanks :)
18:17 JustinClift ndevos: Your Python reboot-vm script, is that in Git?
18:37 dlambrig justin: is port 22 open in your firewall rules? I just started getting "ssh: connect to host review.gluster.org port 22: Connection refused" when I try to "git clone ssh://dlambrig@review.gluster.org/glusterfs.git"
18:39 JustinClift dlambrig: Gah
18:39 * JustinClift checks
18:39 shyam joined #gluster-dev
18:40 JustinClift dlambrig: Yeah, definitely open
18:40 JustinClift firemanxbr: ^^^
18:40 dlambrig Justin: yeah I just ran nmap, I see it is open
18:40 JustinClift firemanxbr: Is Gerrit not listening on 22 any more?
18:40 firemanxbr JustinClift++
18:41 glusterbot firemanxbr: JustinClift's karma is now 35
18:41 JustinClift firemanxbr: It seems to be listening on 29418 now
18:41 JustinClift tcp        0      0 ::ffff:10.3.129.11:29418    :::*                        LISTEN      27184/GerritCodeRev
18:41 firemanxbr JustinClift, yep, for default 29418
18:41 JustinClift How to change that back to 22?
18:42 firemanxbr JustinClift, hummm, I'm checking in my gerrit.config
18:42 wushudoin joined #gluster-dev
18:42 JustinClift Hmmm, slave20 died
18:42 * JustinClift rebuilds that too
18:43 firemanxbr JustinClift, in our gerrit.config >
18:43 firemanxbr JustinClift, [sshd]
18:43 firemanxbr for example: listenAddress = gerrit.ped.datacom.ind.br:29418
18:43 JustinClift k, I'll look here
18:43 JustinClift 1 min
18:43 JustinClift :)
18:43 firemanxbr JustinClift, Ok :D
18:44 JustinClift [sshd] advertisedaddress = git.gluster.org listenAddress = git.gluster.org:29418
18:44 JustinClift Hmm, bad line wrapping
18:44 JustinClift But yet, it has 29418
18:45 JustinClift I wonder why it normally listens on 22 instead
18:45 firemanxbr JustinClift, my example here: http://ur1.ca/ju2vw
18:45 JustinClift Ahhh... proxy of some sort
18:45 JustinClift Well, I'm guessing
18:45 * JustinClift looks
18:45 firemanxbr JustinClift, you can change in gerrit.config directly: 29418 => 22
18:46 firemanxbr JustinClift, and restarting gerrit :D
18:46 JustinClift I could
18:46 JustinClift But I think that if we normally have a proxy of some sort in the way... I'd better not change 29418 => to 22 without fixing that first
18:46 JustinClift I'm going to restart http on the server and see if that helps
18:47 firemanxbr JustinClift, I believe it is much better to use the default port; for example, oVirt uses 29418.
18:48 JustinClift firemanxbr: Sure.  Next version of this server should
18:48 JustinClift But, this is our Production machine
18:48 JustinClift There's no way we should change it right now without telling everyone, updating our scripts, etc
18:49 firemanxbr JustinClift, yep :D
18:49 JustinClift I can't see how port 22 was being listened on before
18:49 JustinClift Restarting httpd didn't do anything
18:50 JustinClift I wonder if someone was doing something with iptables, and THAT was remapping 29418 to 22 ?
18:50 JustinClift If that's the case... it should be in a startup script somewhere
18:50 * JustinClift looks
18:50 firemanxbr JustinClift, hummm it's possible
18:52 JustinClift It would explain why 80 and 443 were already open, but not in the /etc/sysconfig/iptables file
18:52 firemanxbr JustinClift, I'm at work and all outbound ports are closed :(, I wish I could help you more right now :(
18:52 JustinClift It's ok
18:52 firemanxbr JustinClift, strange, hummm is the 'firewalld' service running?
18:53 JustinClift Normally right now, I'd reboot and Nah
18:53 JustinClift Nah
18:53 JustinClift This is EL5
18:53 JustinClift It's never heard of firewalld :)
18:53 firemanxbr JustinClift, humm okay
18:53 * firemanxbr lol :D
18:53 JustinClift Normally, I'd reboot now and see if the "automatic start bits" get it right
18:53 firemanxbr JustinClift, /etc/rc.d/rc.local ?
18:53 JustinClift Except...
18:53 JustinClift # uptime 10:52:33 up 1028 days, 20:49,  2 users,  load average: 2.11, 2.83, 2.98
18:53 JustinClift ~3 years
18:53 JustinClift Nah, better not reboot it
18:53 firemanxbr me UALLLLL
18:53 * firemanxbr cool :D
18:54 JustinClift This is why we want to migrate off it
18:54 JustinClift firemanxbr: k, I'm going to change the config in Gerrit, and restart it there
18:55 * JustinClift is pretty sure it's NOT the right approach, but screw it
18:55 firemanxbr JustinClift, perfect, the faster the better :D
18:56 firemanxbr JustinClift, do not worry, we will adjust all of this later.
18:56 JustinClift ;)
18:56 JustinClift Starting Gerrit Code Review: FAILED
18:56 JustinClift Awesome
18:56 firemanxbr JustinClift, ouch....
18:57 firemanxbr JustinClift, please stop gerrit and check port 22: netstat -nap | grep 22
18:57 firemanxbr JustinClift, that port could be opened in /etc/ssh/sshd_config
18:57 JustinClift Already
18:57 JustinClift Looking that is
18:58 JustinClift Nope
18:58 JustinClift sshd == 21 only
18:58 * JustinClift looks for a log file
18:58 JustinClift Ahhh
18:58 JustinClift I wonder if it's because "review" user is trying to open a low number port
18:58 firemanxbr JustinClift, reverse proxy ?
18:59 JustinClift I just don't know what proxy we use for this
18:59 JustinClift Hmmm
18:59 * JustinClift looks through httpd config
18:59 JustinClift Nope
18:59 JustinClift That does the web interface bit
19:00 firemanxbr JustinClift, humm, yep, the default port is 29418; you can redirect with iptables: open localhost:29418 in gerrit.config, then in iptables map external port 22 to it.
19:00 JustinClift Aha
19:00 JustinClift Found it
19:00 JustinClift /etc/rc.local
19:00 JustinClift iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 29418
19:00 JustinClift I'll change the config back to 29418 and run that
19:01 firemanxbr JustinClift, perfect
19:01 firemanxbr JustinClift, configure gerrit.config to open 29418 on localhost
19:01 firemanxbr JustinClift, should work
19:02 firemanxbr listenAddress = localhost:29418
19:02 firemanxbr after [sshd] container
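
For reference, the gerrit.config [sshd] section Justin pasted earlier, reflowed onto separate lines as Gerrit expects it; whether the listen address should be the host name or localhost depends on how the iptables redirect delivers the traffic, so treat this only as a sketch of the layout:

    [sshd]
        advertisedaddress = git.gluster.org
        listenAddress = git.gluster.org:29418

Both keys are standard Gerrit sshd options; advertisedaddress only changes what Gerrit reports to clients, it does not open any port by itself.
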
19:02 JustinClift Yep
19:02 JustinClift Trying now
19:03 JustinClift Starting Gerrit Code Review: OK
19:03 firemanxbr JustinClift, but web gui: Service Temporarily Unavailable
19:03 firemanxbr JustinClift, go back :D
19:04 JustinClift Gah
19:04 firemanxbr JustinClift, perfect, congrats :D you master :]
19:04 JustinClift dlambrig: Try the git clone again?
19:04 dlambrig justin: ok..
19:06 dlambrig justin: different this time
19:06 dlambrig justin: freezes
19:06 JustinClift *slam*
19:06 JustinClift Headdesk
19:06 JustinClift (me, not you)
19:06 dlambrig justin: :)
19:06 JustinClift dlambrig: Ok, lets pretend we're a Jenkins slave
19:06 JustinClift What would a jenkins slave dooo
19:06 JustinClift I know
19:07 JustinClift dlambrig: Try again?
19:07 JustinClift :)
19:07 JustinClift I'm hoping the freeze is the same freeze that we see occasionally causing failures, and that it works a 2nd or 3rd time around
19:07 dlambrig justin: sorry, no go :)
19:08 JustinClift firemanxbr: Well, at least now I know what was doing the port 22 setup for Gerrit.  That always confused me before
19:08 JustinClift dlambrig: No worries.  Looking into it
19:08 hagarth joined #gluster-dev
19:08 dlambrig justin: I do not see port 22 open any longer (using nmap)
19:09 firemanxbr JustinClift, :]
19:10 JustinClift Yeah
19:10 firemanxbr JustinClift, best practice is to move all iptables config from /etc/rc.local to /etc/sysconfig/iptables (the default location)
19:10 JustinClift firemanxbr: Fully agree
19:11 firemanxbr JustinClift, for me the http clone works okay, both anonymous and registered :D
19:11 JustinClift However, the /etc/sysconfig/iptables on this host is managed by system-config-securitylevel-tui
19:11 JustinClift Which I've not used until today
* JustinClift is more familiar with editing the iptables file directly too
19:11 JustinClift connect to host review.gluster.org port 22: No route to host
19:11 * JustinClift kicks it
19:12 JustinClift firemanxbr: Yeah, I'm going to try merging it into sysconfig/iptables
19:13 firemanxbr JustinClift, cloning okay: http://ur1.ca/ju340
19:13 JustinClift Actually, found something else
19:13 JustinClift firemanxbr: http://fpaste.org/192357/32361514/
19:13 JustinClift It looks like Gerrit is listening on IPv6 only?
19:14 JustinClift No
19:14 JustinClift ::ffff:10.3.129.11:29418
19:14 firemanxbr JustinClift, yes, it looks at /etc/hosts -- does an ipv6 entry exist in there ?
19:15 JustinClift http://fpaste.org/192358/14253237/
19:15 firemanxbr JustinClift, http://ur1.ca/ju354
19:15 firemanxbr JustinClift, please remove line 5
19:16 firemanxbr JustinClift, and restarting gerrit
19:16 JustinClift Trying
19:16 JustinClift :)
19:16 firemanxbr JustinClift, it checks if ipv6 exists in your /etc/hosts (line 5)
19:18 JustinClift Done that, and restarted Gerrit
19:18 JustinClift It's still listening in the same place
19:18 JustinClift 1 sec
19:18 firemanxbr JustinClift, it's okay ? :D
19:21 JustinClift Trying hard coding IP address in gerrit config
19:22 JustinClift No difference
19:23 JustinClift I suspect that it really is listening on IPv4 interface
19:23 JustinClift If I do this:
19:23 JustinClift telnet 10.3.129.11 29418
19:23 JustinClift (from the host itself)
19:23 JustinClift Then I connect to a Gerrit SSH daemon
19:23 shyam joined #gluster-dev
19:24 JustinClift So, it's likely the iptables thing after all
19:25 aravindavk joined #gluster-dev
19:25 JustinClift firemanxbr: I'm going to try merging that port mapping rule into /etc/sysconfig/iptables after all
19:26 firemanxbr JustinClift, hummm
19:26 aravindavk unable to push patch to review.gerrit, "ssh: connect to host git.gluster.org port 22: Connection timed out"
19:27 aravindavk any workaround?
19:27 JustinClift aravindavk: Yeah, it's something wrong with our iptables config
19:27 JustinClift aravindavk: Working on it atm
19:28 JustinClift aravindavk: Any idea if pushing via http works?
19:28 * JustinClift is unsure if Gerrit can do that
19:30 aravindavk JustinClift: nope "fatal: unable to access 'http://aravindavk@review.gluster.org/glusterfs/': The requested URL returned error: 503"
19:30 firemanxbr JustinClift, I believe it would be much better to use the default port and send an e-mail to everyone so they use the new port.
19:34 JustinClift I think I found the problem
19:34 JustinClift 1 min
19:34 JustinClift Nope
19:35 JustinClift firemanxbr: We then have to change ALL of our scripts and .ssh/config files to use the new port
19:35 JustinClift Not practical
19:35 firemanxbr JustinClift, okay, I understood
19:36 firemanxbr JustinClift, the other option is to remove the iptables redirect and set port 22 in gerrit.config
19:36 firemanxbr JustinClift, but gerrit doesn't permit that option
19:37 JustinClift firemanxbr: I'm asking misc for help
19:37 JustinClift firemanxbr: Hopefully he's around
19:38 firemanxbr JustinClift, me too :D
19:38 lalatenduM joined #gluster-dev
19:47 firemanxbr JustinClift, I'm reading through this documentation:
19:47 firemanxbr http://gerrit.googlecode.com/svn/documentation/2.0/config-gerrit.html#sshd
19:47 firemanxbr please try using this option:
19:48 firemanxbr reuseAddress = true
19:48 firemanxbr JustinClift, I believe great option
19:48 JustinClift Pretty sure that's for something completely different :)
19:49 firemanxbr JustinClift, I don't use this option, but the documentation says something about the port for push and sshd
19:50 JustinClift I'll try the reuseAddress option
19:50 JustinClift Doesn't hurt
19:50 JustinClift I hope :)
19:50 firemanxbr JustinClift, me too :D
19:53 dlambrig left #gluster-dev
20:06 misc joined #gluster-dev
20:06 misc JustinClift: I think the easiest way to merge is to run iptables script
20:06 misc then run the rules by hand, then iptables-save
20:07 misc ( the rules in /etc/rc.local )
20:07 misc alternatively, I did set up the xinetd trick I proposed
20:07 misc JustinClift: in fact, in the long term, maybe having a bastion for ssh access would be a good idea :)
20:08 ndevos JustinClift: the reboot script will land in git when you merge it: https://review.gerrithub.io/213881
20:08 JustinClift ndevos: Ahhh.  I'd forgotten about GerritHub.  I never got around to figuring out how to get things to merge through it properly
20:09 JustinClift ndevos: Do you have skill with iptables?
20:09 ndevos JustinClift: log in and press 'review' +2 and click 'submit'
20:09 JustinClift Later
20:09 JustinClift ndevos: More pressing issue is the iptables on review.gluster.org
20:09 ndevos iptables? thats the past, firewall-cmd tfw!
20:10 ndevos *ftw
20:10 JustinClift Good luck with that on EL5
20:10 ndevos JustinClift: whats the issue?
20:10 JustinClift We have a iptables rule in /etc/rc.local that does this:
20:10 ndevos yuck!
20:11 JustinClift iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 29418
20:11 JustinClift However, when I enable iptables service, it blows away that rule
20:11 ndevos okay, so connections on port 22 are redirected to port 29418
20:11 JustinClift Yeah
20:11 JustinClift 29418 == gerrit
20:11 JustinClift So, in effect, when people connect to port 22, they're getting Gerrit
20:11 ndevos yeah, /etc/sysconfig/iptables<something> should contain the rules
20:12 JustinClift That's my thought too
20:12 ndevos 'service iptables save' should have been used to save the rules
20:12 JustinClift I made a backup before I screwed with anything (it's our prod system after all)
20:12 ndevos you can add rules later, by adding them to /etc/rc.local, but YUCK!
20:12 JustinClift And there's no sign of it in there
20:12 JustinClift So, yeah, it wasn't saved in there
20:13 JustinClift I've merged that rule into the /etc/sysconfig/iptables file manually, without really understanding everything
20:13 JustinClift (ugh)
20:13 JustinClift And ... no joy
20:13 JustinClift Mapping not working
20:13 ndevos well, setup the rules how you need then manually, 'service iptables save' and inspect the /etc/sysconfig file
20:14 ndevos fpaste that /etc/sysconfig/iptables for me?
20:14 JustinClift http://fpaste.org/192391/27258142/
20:14 JustinClift Um, I can't get it working from the command line either, so the save thing doesn't work in this case
20:14 JustinClift I'm hoping it's just something dumb on my part ;)
20:15 ndevos and fpaste the output of 'iptables-save' ?
20:16 JustinClift http://fpaste.org/192392/53273811/
20:16 JustinClift ndevos: I can give you remote root if you want? :)
20:16 JustinClift ndevos: If you don't have it already ;)
20:17 ndevos JustinClift: I dont have it, and dont want it :)
20:17 JustinClift :)
20:18 JustinClift If I telnet to either port locally, it works:
20:18 JustinClift telnet 10.3.129.11 22
20:18 JustinClift From the box itself
20:18 JustinClift But from external, no joy
20:19 ndevos JustinClift: what strikes me as odd, is that there is no rule that allows access to port 29418 ?
20:19 ndevos or 22 for that matter
20:19 JustinClift k, so if I enable an incoming rule to port 22, that might work?
20:19 ndevos yeah, I think so
20:19 ndevos not sure if you need 22 or 29418
20:19 ndevos +1
20:19 JustinClift I'll try both :)
20:20 ndevos you might need that
20:20 JustinClift Yep, that worked
20:20 JustinClift Now, to figure out which port was needed
20:20 JustinClift 1 min
20:21 JustinClift ndevos: Opening up 29418 worked
20:21 JustinClift Opening up 22 didn't
20:22 ndevos JustinClift: I guess prerouting applies before input (duh! of course)
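
Pulling the pieces together, a sketch of what ends up needing to live in /etc/sysconfig/iptables for this to work, assuming the RH-Firewall-1-INPUT chain layout that system-config-securitylevel generated (the ACCEPT has to sit before that chain's final REJECT, hence the -I):

    # nat table: rewrite incoming port 22 to Gerrit's sshd on 29418 (the rule from /etc/rc.local)
    iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 29418
    # filter table: PREROUTING runs before INPUT, so the redirected port is what
    # needs to be accepted -- opening 22 alone does nothing here
    iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 29418 -j ACCEPT
    # persist both tables so a service restart doesn't silently drop them (EL5)
    service iptables save
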
20:24 ndevos JustinClift: so, got any more questions about that?
20:24 JustinClift ndevos: Nope, it's all working now.
20:24 JustinClift Thanks. :)
20:24 * JustinClift just commented the iptables file like it should have been
20:24 JustinClift ;)
20:25 JustinClift Next poor bastard will at least be able to see what things are for
20:25 misc so ?
20:25 ndevos JustinClift: DONT USE RC.LOCAL FOR IPTABLES RULES!!!!!!
20:25 ndevos }
20:25 JustinClift misc: We needed to open up tcp 29418, so things to get to the remapped port
20:26 ndevos I'm confident that the mapping/redirection worked just fine :)
20:26 misc gosh, who wrote that iptables file ?
20:27 JustinClift The one that's on there now?
20:27 JustinClift That's my dodgy hack version
20:27 JustinClift With comments
20:27 misc well, the one that redirects FORWARD and INPUT to the same chain :)
20:27 JustinClift misc: Feel free to improve it
20:27 JustinClift misc: Which line?
20:28 misc -A INPUT -j RH-Firewall-1-INPUT
20:28 misc -A FORWARD -j RH-Firewall-1-INPUT
20:28 misc that's a bit surprising
20:28 hchiramm__ joined #gluster-dev
20:28 ndevos misc: I think thats system-config-firewall for you :)
20:28 JustinClift misc: Yeah, that's how it was generated by system-config-securitylevel
20:29 misc ndevos: I didn't really need more reason to drink...
20:29 JustinClift misc: Today's changes are in iptables_yyyymmdd_description files in the same dir
20:30 JustinClift misc: So, you can see what's been done (to minute resolution)
20:30 ndevos misc: I'm only guessing... but cheers!
20:30 misc JustinClift: well, your changes are ok
20:30 misc but comeon, open port 631 by default ?
20:30 misc and i guess that's also system-config-securitylevel who opened port 5353
20:32 JustinClift Well it wasn't me :)
20:32 misc yeah
20:32 JustinClift I was trying to work with what I was given, etc
20:32 lpabon joined #gluster-dev
20:32 JustinClift Dammit, missed the 1st Gluster PR meeting
20:33 misc JustinClift: oh, I do not doubt, I did see the state of the debian :)
20:33 JustinClift ?
20:33 misc the old servers
20:34 JustinClift Ahhh
20:34 misc so I have no doubt that stuff were here from before
20:34 JustinClift k, I've emailed the mailing lists to let them know things should be ok again
20:34 misc ( and that's always the same issue with startup, you learn as you go, and there is nothing bad with it )
20:36 JustinClift ndevos: Thanks for your help with this.  Got us over the line. :)
20:36 ndevos JustinClift: sure, np, next time you should find something more difficult
20:37 ndevos JustinClift: oh, and will you merge that script now?
20:37 * JustinClift looks
20:39 JustinClift Hmmm, no Merge button
20:39 JustinClift "Needs Verified"
20:39 JustinClift Looking for a Review button
20:39 JustinClift Aha.  Reverted to old screen layout.  This looks more familiar
20:40 JustinClift ndevos: k, done
20:40 JustinClift Merged
20:40 JustinClift ndevos: When you submitted that patch, do you remember the URL you submitted it to?
20:41 JustinClift ndevos: Was it to GerritHub directly, or GitHub, or ?
20:42 ndevos JustinClift: I submitted it to gerrithub, I assume you have a github repository backing that
20:42 JustinClift Yep
20:42 JustinClift ndevos: Cool.  I think I tried submitting stuff directly to the GitHub URL, and that didn't work.
20:42 JustinClift ndevos: I'll try submitting stuff through GerritHub, and see if that works :)
20:42 ndevos JustinClift: when you submit it, it should land at https://github.com/justinclift/glusterfs_patch_acceptance_tests
20:43 ndevos JustinClift: just click those buttons in gerrithub :)
20:43 JustinClift Yep
20:43 JustinClift Hmmmm, I'd been hoping for a solution that integrates with the GitHub Issues section, but GerritHub doesn't look like it does
20:43 JustinClift It does seem to use GitHub as a backend successfully tho
20:44 * JustinClift wonders if we can bodge something up so that GitHub issues get auto moved to GerritHub...
20:44 JustinClift misc: If you want to get xinetd set up to autostart Gerrit on review.gluster.org, you're welcome to
20:44 ndevos JustinClift: gerrit is not a bug tracker?
20:44 JustinClift ndevos: It's not that
20:45 JustinClift ndevos: It's just that people who are used to GitHub are used to submitting stuff *through* GitHub itself
20:45 misc JustinClift: I can take a look, but I think that's not how it works
20:45 ndevos JustinClift: I was wondering about something similar, how can I send a pull request, and attach that to a github issue?
20:45 JustinClift I'd been hoping GerritHub would capture that
20:46 misc have you seen the discussion from mozilla on using github as a way to submit patches, but only that ?
20:46 ndevos JustinClift: ah, well, you want to create a pull request in github and then see it in gerrithub?
20:46 JustinClift ndevos: Yeah, it's completely fine if the GitHub issue is migrated automatically to GerritHub + closed - along with clear pointers and URL to the issue in GerritHub, so the submitter understands what's going on
20:46 ndevos JustinClift: just use ./rfc.sh or git-review and explain that in the how-to-contribute docs
20:46 JustinClift ndevos: Yeah, guess we'll have to ;)
20:47 JustinClift misc: Nope, haven't seen that
20:47 JustinClift misc: URL?
20:47 ndevos JustinClift: you can add a .gitreview file and git-review can do most of the things then, seems easy
20:47 ndevos JustinClift: a little like this https://review.gerrithub.io/217589
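
For anyone following along, a .gitreview is just a tiny INI file in the repo root; a minimal sketch for this repo might look like the following (port 22 assumed, since that is what the iptables redirect exposes externally):

    [gerrit]
    host=review.gluster.org
    port=22
    project=glusterfs.git
    defaultbranch=master

With that in place, `git review` pushes to refs/for/<branch>, much like ./rfc.sh does.
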
20:49 misc JustinClift: http://gregoryszorc.com/blog/2015/01/12/utilizing-github-for-firefox-development/
20:50 misc https://github.com/servo/servo/wiki/Github-challenges might also be interesting
20:50 misc ( from a thread on osas@ )
20:52 JustinClift Thanks, I'll read that in a bit.
20:52 JustinClift Kind of brain dead atm and need food break
20:52 JustinClift :)
20:52 misc just finished the pizza myself
20:57 JustinClift On the plus side, working out how Gerrit was being connected to port 22 was one of the 2 last remaining things
20:58 JustinClift I was unsure of, for migrating to a new box
20:58 JustinClift So, that's now sorted
20:58 JustinClift The last thing we need to figure out, is how to migrate from the current H2 database, to MySQL or PostgreSQL or similar
20:59 ndevos an H2 database? whats that?
20:59 JustinClift Exactly
20:59 JustinClift Inbuilt java-based database that no-one's ever heard of
20:59 JustinClift Seems kinda like the java equivalent of SQLite
21:00 JustinClift eg embedded db, "suitable for test deployments"
21:00 ndevos hmm, yeah, "H2, the Java SQL database."
21:00 JustinClift And the Gerrit docs clearly state not to use it for production workloads ;)
21:00 ndevos the "java" part in there scares me
21:00 ndevos lol
* JustinClift nods vigorously
21:01 ndevos but, its SQL, and we have someone that likes databases, isn't that right, JustinClift?
21:01 JustinClift That was 2002
21:02 JustinClift Well, maybe through to 2005
21:02 JustinClift But that's still 10 years ago ;)
21:03 ndevos I'm sure you can get a dump of the data with http://h2database.com/html/tutorial.html#command_line_tools
21:03 dlambrig joined #gluster-dev
21:03 ndevos then, import that into a PostgreSQL for Gerrit with the same version (data only, set up the schema beforehand)
21:03 JustinClift dlambrig: Should be working now
21:03 ndevos when the data is in the db, BACKUP
21:04 JustinClift ndevos: Yeah, I think it'll mostly be doing a few trial runs to ensure the data is ok
21:04 ndevos and do a Gerrit update :D
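
A rough sketch of that export/import path, assuming the stock Gerrit 2.x site layout; the paths, database names and credentials below are illustrative, not taken from the server:

    # dump the embedded H2 ReviewDB to plain SQL with H2's Script tool
    java -cp h2.jar org.h2.tools.Script \
        -url jdbc:h2:/path/to/review_site/db/ReviewDB \
        -user sa -script reviewdb-dump.sql
    # after creating the schema with a matching Gerrit version and adjusting
    # the dump for PostgreSQL syntax, load the data:
    psql -U gerrit -d reviewdb -f reviewdb-dump.sql

And, as noted above, back up the result before pointing Gerrit at it.
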
21:04 dlambrig justin: I see the port is open and ‘git clone ssh://dlambrig@review.gluster.org/glusterfs.git' works again. Thanks!
21:04 JustinClift It'll probably be MySQL tho, as firemanxbr will more likely be the maintainer and I don't think he knows PgSQL
21:04 JustinClift dlambrig: ndevos got us over the line :)
21:05 dlambrig iptables problem?
21:05 JustinClift Yeah
21:05 JustinClift There was a dirty hack in /etc/rc.local for doing port mapping, and whatever else they needed had been done manually and never committed to a file
21:05 ndevos JustinClift: isnt PostgreSQL more open source friendly? and I doubt it makes a big difference for Marcelo
21:06 JustinClift So, when I updated an iptables rule earlier today (using the "documented" utility), it all fell apart
21:06 JustinClift ndevos: Definitely.  PG is way more open source friendly
21:06 ndevos JustinClift: that settles it then?
21:06 JustinClift ndevos: I don't want to push my previous preferences onto the guy that volunteered to do the work
21:07 JustinClift So, lets check with him first? :)
21:08 ndevos JustinClift: sure, check with him, but also nudge him in the right direction :)
21:08 ndevos tigert: btw, gluster-devel@nongnu.org is not used anymore, you should delete that from your addressbook and use gluster-devel@gluster.org
21:12 misc also, nudge to salt /o\
21:15 ndevos misc: yes, do!
21:16 ndevos misc: what do I need to do to get bugs.cloud.gluster.org in salt?
21:16 misc ndevos: https://www.gluster.org/community/documentation/index.php/Saltstack_infra
21:18 ndevos misc: got a good intro/doc for salt too? how does a config/state look like?
21:18 misc ndevos: a good doc, no :/
21:18 misc there is the doc upstream but I would not call it "good"
21:18 ndevos misc: can a ansible .yml get imported or something?
21:19 misc ndevos: nope, it can't
21:19 ndevos or, got an example for one of the other sites?
21:19 misc ( still having on the backburner my ansible over salt plugin )
21:19 misc there is a shitload of example
21:19 misc like https://github.com/python/psf-salt
21:20 misc and https://github.com/saltstack-formulas
21:20 misc ( even if I didn't really check whether these correspond to good practice or not )
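
To give a feel for the format ndevos asked about, a salt state (.sls) is just YAML mapping a name to states; something like this minimal sketch would cover the nginx part of that setup (package and service names assumed):

    # nginx/init.sls -- minimal sketch
    nginx:
      pkg.installed: []
      service.running:
        - enable: True
        - require:
          - pkg: nginx
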
21:27 misc ndevos: what is bugs.cloud.gluster.org, btw ?
21:28 ndevos misc: it provides a website for tracking open bugs
21:28 misc ndevos: so nothing that will make JustinClift have a heart attack :)
21:28 ndevos misc: basically it is a centos7 with nginx and https://github.com/gluster/gluster-bugs-webui
21:30 JustinClift ndevos: Should we get a httpd cert for it, and disable http?
21:30 JustinClift ndevos: Can be done
21:31 ndevos misc: http://paste.fedoraproject.org/192436/42533179 is my amateur/sysadmin ansible file
21:31 JustinClift Actually, I think we have a wildcard gluster.org cert that could be installed on it
21:31 ndevos JustinClift: I dont really care about it, its a read-only page :)
21:31 JustinClift ;)
21:31 JustinClift np
21:32 misc ndevos: as long as it works :)
21:32 JustinClift k, that's slave20 rebuilt
21:32 misc using the manual method, or salt-assisted ?
21:33 ndevos manual, it's just trying out if I can get something out of ansible :)
21:34 misc ndevos: no, for slave20 :)
21:34 ndevos hah, yeah, just read that line from justin :D
21:34 misc I wrote 75% of the automation of http://www.gluster.org/community/documentation/index.php/Jenkins_setup
21:34 JustinClift Manual method
21:34 JustinClift Well, mostly automated, some finishing bits still manual
21:35 misc I guess I should finish the last 25%
21:35 JustinClift I could script up the last bits too, but can't be bothered since it'll be salted soon :/
21:36 JustinClift The only bits I don't have scripted are the setting of the jenkins password (should maybe be a public key instead), getting the jenkins ssh key over to the host + perms on it, and the correct name setup in /etc/sysconfig/network and /etc/hosts
21:36 JustinClift Oh, and the initial git checkout from Jenkins user, to verify things work end to end for that
21:37 JustinClift Oh, and the DNS setup
21:37 JustinClift Heh
21:39 misc so in fact, you already scripted the same things as me :)
21:39 JustinClift :)
21:57 JustinClift ndevos: Hmmm, the reboot-vm script keeps on failing.  I'm getting tempted to hardcode the endpoint server's IP address in /etc/hosts
21:58 misc which vm is down ?
21:59 JustinClift 22 was
21:59 JustinClift Reboot worked finally
21:59 JustinClift Had to rebuild 20 and 21
21:59 JustinClift Going to go through the rest and get them ok, so they're ready for tomorrow's onslaught ;)
22:01 misc why 20 and 21 were rebuilt ?
22:01 JustinClift Didn't come back after reboot
22:02 misc wow
22:02 JustinClift 23 looks like it's not coming back either
22:02 misc no idea why ?
22:02 JustinClift Nope, and didn't really care
22:02 misc slave23 is answering to salt ping
22:02 glusterbot misc: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
22:03 JustinClift misc: Cool.  Yours.
22:03 misc JustinClift: well, it did 5 minutes ago :/
22:03 JustinClift Yeah
22:03 ndevos JustinClift: yeah, I dont know how a proxy or dns redirect is setup on build.gluster.org (or its network)
22:03 JustinClift That was prior to reboot
22:03 JustinClift misc: ^
22:03 ndevos JustinClift: when that is fixed/removed, the script should be more reliable
22:04 JustinClift ndevos: Yeah.  I don't know much about the RH "gluster.org" DNS layer
22:05 JustinClift misc: If you want to look into slave23 not coming back, you're definitely welcome to
22:05 JustinClift misc: If not, I'll just rebuild it
22:05 misc JustinClift: I will look
22:05 JustinClift :)
22:05 misc at least, see the console
22:08 ndevos JustinClift: I do not think there is much 'RH' in that, I suspect it came with the 'Gluster Inc.' package
22:09 JustinClift Could be
22:09 misc what is the issue ?
22:09 misc ( as I know the dns layer )
22:10 JustinClift Bad dns lookups
22:10 JustinClift Example:
22:10 JustinClift http://build.gluster.org/job/reboot-vm/47/console
22:11 JustinClift Instead of the reboot-vm.py script getting the correct IP address for the Rackspace host to connect to, it gets the www.gluster.org server IP address
22:11 misc mhhh
22:11 JustinClift misc: Which naturally, doesn't have the right page on it... so 404
22:11 misc JustinClift: yeah, make sense
22:12 JustinClift misc: Note - the slave not coming back problem is likely more important
22:12 JustinClift My dodgy hack solution :), to this reboot-vm problem would be to add the IP address for the Rackspace host to /etc/hosts, along with verbose comments so people know what/why
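
That hack would just be an explicit /etc/hosts pin so the lookup can never fall through to the bad DNS answer; a sketch, with both the address and the endpoint name as placeholders since neither appears in the log:

    # /etc/hosts -- pin the Rackspace endpoint the reboot-vm script talks to
    # (placeholder address and hostname)
    203.0.113.10    rackspace-endpoint.example.com
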
22:12 misc JustinClift: well, I try to connect as root, and it fail
22:13 misc it kinda looks like my laptop I just saved
22:13 JustinClift misc: So, rebuild it?
22:13 misc JustinClift: I rather disect it
22:13 JustinClift Go for it
22:13 JustinClift You should be able to put it into recovery mode, and mount the disk and figure out wtf is wrong :)
22:13 misc grmbl
22:13 misc as soon as I said "reboot", the prompt appeared
22:14 JustinClift Heh
22:14 * misc shake fist at java and the cloud
22:21 misc mhhh
22:21 misc JustinClift: ok, my fault
22:21 misc I disabled eth0 instead of eth1
22:21 misc so don't reboot anything while I fix this
22:22 badone_ joined #gluster-dev
22:23 misc ( and fixed )
22:25 misc JustinClift: for the DNS
22:25 misc *               IN        A        198.61.169.132
22:25 misc I suspect this could be the reason why
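
Since that `*` record is a wildcard, any name under gluster.org with no explicit record resolves to 198.61.169.132 (the www host), which would quietly send a mistyped or unexpected hostname to the wrong server; an easy way to check (the hostname below is made up):

    dig +short no-such-host.gluster.org
    198.61.169.132
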
22:37 JustinClift misc: Ahhh, for the DNS problem above?
22:38 JustinClift slave23 seems to be back online btw
22:38 JustinClift Ahhh, that's the "disabled eth0 instead of eth1"
22:38 JustinClift ?
22:38 misc yes
22:38 JustinClift Heh, so that's prob why 20 and 21 got rebuilt too... ;)
22:38 misc yep
22:38 misc guess I should make sure there is some peer review when i touch config
22:38 JustinClift Cool. Thanks for fixing :)
22:39 misc and for the DNS, that's just a hypothesis
22:39 JustinClift I think I woulda suspected something a bit more generic a few more hosts in
22:39 JustinClift What could we do to fix the DNS?
22:39 misc well, remove the wildcard ?
22:39 JustinClift So that it either fails properly, or finds the right box
22:40 JustinClift Yeah, lets try that, and see if more or less stuff breaks :)
22:40 misc I am gonna change the DNS
22:41 misc http://paste.fedoraproject.org/192462/42533611 current dns
22:42 misc nothing missing ?
22:50 * JustinClift looks
22:52 JustinClift misc: No need to keep icescrum
22:52 JustinClift Nor rack really, since we use cloud more
22:53 JustinClift Then again..
22:53 JustinClift I do screw around with rack sometimes...
22:53 JustinClift So yeah, keep that after all
22:53 JustinClift icescrum can go :)
23:07 JustinClift Meh, brain dead atm.
23:08 JustinClift I'll go through the rest of the hosts tomorrow and figure out what needs fixing for them.
23:08 JustinClift 'nite all
23:12 misc nite
23:30 wushudoin joined #gluster-dev
23:40 ndevos JustinClift: http://build.gluster.org/job/rackspace-regression-2GB-triggered/4812/consoleFull seems like an issue on slave22?
23:47 misc mhh that's me
23:48 misc ndevos: was a transient error
23:48 misc maybe I should follow the path of JustinClift and also go to bed
23:48 misc ( or restrict myself to not pushing )
23:58 vipulnayyar joined #gluster-dev
