IRC log for #gluster-dev, 2013-05-17

All times shown according to UTC.

Time Nick Message
00:32 yinyin joined #gluster-dev
00:52 ndevos joined #gluster-dev
00:52 avati_ joined #gluster-dev
00:52 hagarth__ joined #gluster-dev
00:52 semiosis joined #gluster-dev
00:52 johnmark joined #gluster-dev
00:52 awheeler_ joined #gluster-dev
00:52 hagarth joined #gluster-dev
00:52 jbrooks joined #gluster-dev
00:52 badone joined #gluster-dev
00:52 92AAAM1YK joined #gluster-dev
00:52 glusterbot joined #gluster-dev
00:52 kkeithley joined #gluster-dev
00:52 nixpanic_ joined #gluster-dev
00:52 avati joined #gluster-dev
00:52 xavih joined #gluster-dev
00:52 glusdev joined #gluster-dev
00:52 mohankumar joined #gluster-dev
00:52 lkoranda joined #gluster-dev
00:52 jclift_ joined #gluster-dev
00:52 foster_ joined #gluster-dev
00:52 Guest3022 joined #gluster-dev
00:52 _Bryan_ joined #gluster-dev
00:59 yinyin joined #gluster-dev
01:20 awheeler joined #gluster-dev
01:59 awheeler joined #gluster-dev
02:06 yinyin joined #gluster-dev
02:44 bharata joined #gluster-dev
02:46 awheeler joined #gluster-dev
02:58 bala joined #gluster-dev
03:16 shubhendu joined #gluster-dev
03:46 yinyin joined #gluster-dev
03:52 awheeler joined #gluster-dev
03:59 awheeler joined #gluster-dev
04:40 hagarth joined #gluster-dev
04:51 deepakcs joined #gluster-dev
04:56 raghu joined #gluster-dev
05:09 bulde joined #gluster-dev
05:09 yinyin joined #gluster-dev
05:09 awheeler joined #gluster-dev
05:14 kshlm joined #gluster-dev
05:39 krishnan_p joined #gluster-dev
05:55 lalatenduM joined #gluster-dev
06:10 puebele joined #gluster-dev
06:29 vshankar joined #gluster-dev
06:29 puebele joined #gluster-dev
06:59 jules_ joined #gluster-dev
07:03 krishnan_p joined #gluster-dev
07:51 krishnan_p joined #gluster-dev
08:06 puebele joined #gluster-dev
09:26 puebele joined #gluster-dev
09:46 puebele joined #gluster-dev
09:52 mohankumar hagarth: hagarth__ ping
10:17 vshankar joined #gluster-dev
10:25 hagarth mohankumar: pong
10:33 mohankumar hagarth: sent a private message
10:56 lpabon joined #gluster-dev
10:59 puebele1 joined #gluster-dev
11:01 rastar joined #gluster-dev
11:12 edward1 joined #gluster-dev
11:43 puebele1 joined #gluster-dev
11:55 bala1 joined #gluster-dev
12:03 puebele1 joined #gluster-dev
12:08 vshankar joined #gluster-dev
12:54 awheeler joined #gluster-dev
13:05 awheeler_ joined #gluster-dev
14:01 sghosh joined #gluster-dev
14:04 wushudoin joined #gluster-dev
14:06 portante|ltp joined #gluster-dev
14:06 mohankumar joined #gluster-dev
14:31 kkeithley per a conversation with lpabon the other day: given that openstack-swift-1.8.0 is available for f18+ and rhel6 from the fedora updates{,-testing} repos and RDO (http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/) _and_ that f17 is reaching EOL, I'm inclined to stop producing the "transitional" glusterfs-swift{,-*} packages for f17 for the next beta release(s).
14:31 kkeithley Maybe even not produce f17 packages at all.
14:32 kkeithley Maybe even not produce _any_ packages at all for f17.
14:32 kkeithley anyone feel strongly about it?
14:50 rastar joined #gluster-dev
15:01 ndevos kkeithley: no, I like that plan :)
15:13 portante` joined #gluster-dev
15:16 portante joined #gluster-dev
16:30 hagarth joined #gluster-dev
17:02 lpabon joined #gluster-dev
17:06 bulde joined #gluster-dev
17:43 johnmark kkeithley: +1
17:43 johnmark hagarth: avati: ping
17:43 hagarth johnmark: pong
17:43 johnmark hagarth: heya - you know what I'm going to ask :)
17:43 johnmark hagarth: re: beta 2
17:45 hagarth johnmark: heya, we seem to be tracking towards early next week for that :)
17:45 hagarth johnmark: maybe 05/21?
17:46 johnmark ok, cool
17:46 johnmark I'll lock in a testing day
17:47 hagarth johnmark: great
17:47 johnmark :)
18:07 yinyin_ joined #gluster-dev
18:42 yinyin_ joined #gluster-dev
19:27 portante joined #gluster-dev
19:28 awheeler_ joined #gluster-dev
19:55 portante avati_, avati
19:55 portante you there?
20:12 avati portante, in a call
20:15 portante avati: sorry
20:15 portante Have been enjoying life now that other folks interface with customers. ;)
20:32 avati portante, still around?
20:33 portante yes
20:33 portante thanks
20:33 portante luis and I have noticed that some of these regression jobs take quite a while, and sometimes hang
20:33 portante we just killed one that took 2 hours, and had only run a few tests
20:34 portante did you make any progress on investigating the other VM option for jenkins?
20:35 portante and my second question is regarding the "Unable to access X. You need to run the web container in the headless mode. Add -Djava.awt.headless=true to VM"
20:35 portante message
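
The error portante quotes is the JVM complaining that it has no X display. A minimal sketch of the fix he alludes to, assuming the RPM-packaged Jenkins that sources /etc/sysconfig/jenkins at startup (the path and variable here are assumptions; a WAR deployed under another container would take the flag in that container's JVM options instead):

    # add the headless flag unless it is already there, then restart so the
    # running JVM actually picks it up
    grep -q 'java.awt.headless' /etc/sysconfig/jenkins || \
        echo 'JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true"' | sudo tee -a /etc/sysconfig/jenkins
    sudo service jenkins restart
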
20:36 avati i think there is one spare IP left
20:36 avati we could host a second build server VM on it
20:37 portante that would be great
20:37 portante I was thinking that we could run all the quick jobs on one, and the long running regressions on the other
20:37 avati if jenkins can support slave jenkins instances without need for a public IP, that would be even better
20:37 avati yes.. that would be good
20:37 portante lpabon: you listening?
20:37 lpabon yeah
20:38 lpabon it does.. i can even run a vm slave from my laptop
20:38 lpabon as long as the slave can connect to the master, that is all that is needed
20:38 portante from the build.gluster.org jenkins server?
20:38 lpabon the slave should have communication to: jenkins master, gerrit, and git.. that is it
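
A minimal sketch of the slave setup lpabon describes: a JNLP slave dials out to the master, so only the master needs a public IP. The node name "laptop-slave" is hypothetical and would first be defined on the master under Manage Nodes; depending on the master's security settings a -secret argument may also be required:

    # fetch the agent jar from the master itself, then connect outbound to it
    wget http://build.gluster.org/jnlpJars/slave.jar
    java -jar slave.jar -jnlpUrl http://build.gluster.org/computer/laptop-slave/slave-agent.jnlp
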
20:39 portante so how do you make sure it is the only one that runs the jobs?
20:39 avati so in the interest of minimizing co-ordination, i'll plan on moving glusterfs regression to a new VM.. and let glusterfs smoke and swift jobs continue in the current build server
20:39 portante great
20:39 portante I'd also like to do a full restart of jenkins to see if it will pick up the headless change
20:39 portante if you look here:
20:39 lpabon that works for the near term; if we can increase the executors, that would help even more
20:40 portante http://build.gluster.org/job/gluster-swift-unit-tests-pre-commit/
20:40 portante I installed the libX11-devel yum package, which should resolve that problem, but we need to restart jenkins entirely.
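
One way to do that full restart without clobbering in-flight jobs, sketched with the Jenkins CLI (the jar is served by the master itself; safe-restart waits for running builds to finish before bouncing the JVM):

    wget http://build.gluster.org/jnlpJars/jenkins-cli.jar
    java -jar jenkins-cli.jar -s http://build.gluster.org/ safe-restart
    # or, more bluntly, on the build server itself:
    #   sudo service jenkins restart
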
20:40 avati yes, once we move regression out, we will lose the exclusion dependency between smoke and regression, and we can then have more executors
20:41 lpabon sweet!
20:41 portante nice!
20:42 portante avati: have you looked at the new jenkins job that luis created?
20:42 avati nope
20:42 portante gluster-swift-builds?
20:42 avati what's that?
20:42 portante He has it archiving the resulting build rpms
20:42 avati oh!
20:42 avati archiving long term?
20:42 portante so that when a commit is made, it runs the unit tests and archives the RPMs from the resulting build
20:43 portante right now it keeps the last 100 commits
20:43 lpabon last 200 builds.. when we release formally we can move them to download.gluster.org
20:43 portante settable
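
For reference, that retention setting lives in the job's config.xml on the master. A sketch of inspecting it, assuming a default JENKINS_HOME layout (the path is an assumption):

    grep -A 3 '<logRotator>' /var/lib/jenkins/jobs/gluster-swift-builds/config.xml
    # expected output, roughly:
    #   <daysToKeep>-1</daysToKeep>
    #   <numToKeep>200</numToKeep>      <- the "last 200 builds" lpabon mentions
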
20:43 avati ok
20:43 avati do we have enough free space for the archives?
20:43 portante the RPMs are pretty small, so hopefully yes
20:44 lpabon each rpm is ~50k
20:44 avati oh ok
20:44 portante what is the /d disk?
20:44 portante on build.gluster.org?
20:44 avati scratch space
20:44 avati using that for archving should be fine
20:45 lpabon yeah, they can always be rebuilt
20:45 portante great, so we have room to move if we end up running out
20:45 portante nice
20:45 avati it would be good if you create the archives within /d rather than in /
20:46 portante lpabon: can you take that?
20:46 avati stay away from /d/backends and /d/build
20:46 avati they get deleted in every job run
20:46 avati you could use /d/pub if you want them published
20:47 portante sounds good to me, lpabon?
20:47 avati http://build.gluster.org/pub/ == /d/pub
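
A sketch of a job shell step following avati's pointers, publishing the archived RPMs under /d/pub (the subdirectory name is hypothetical; /d/backends and /d/build stay off-limits because every job run deletes them):

    mkdir -p /d/pub/gluster-swift-rpms
    cp build/*.rpm /d/pub/gluster-swift-rpms/
    # now served at http://build.gluster.org/pub/gluster-swift-rpms/
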
20:47 lpabon I'm not sure I follow, since I don't know your setup... The archived rpms are copied to the Jenkins area where the logs are saved
20:47 lpabon the workspace is deleted
20:47 lpabon or can be
20:48 lpabon http://build.gluster.org/job/gluster-swift-builds/lastSuccessfulBuild/artifact/build/glusterfs-openstack-swift-1.8.0-4.noarch.rpm is one of the links.. for example.. handled by jenkins web server
20:48 avati oh ok
20:48 avati hmm.. jenkins is running in /var (which is in /)
20:48 avati wondering if it would be useful to just move jenkins home into /d
20:48 portante yes
20:49 portante that would probably be worth it
20:49 lpabon prob.. that is pretty easy.
20:49 avati it's about a gig
20:49 lpabon that's all backend jenkins config, so the links and archives will work once moved
20:50 avati i will cp -a /var/lib/jenkins to /d/jenkins (preserving symlinks and perms), change jenkins config to use /d/jenkins as the new home, and leave /var/lib/jenkins as-is
20:51 lpabon may want to rename /var/lib/jenkins to /var/lib/jenkins.old just to make sure no jenkins links or jobs are still going there
20:51 avati ack
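
The migration avati and lpabon settle on, sketched end to end; it assumes the RPM-packaged Jenkins, where JENKINS_HOME is set in /etc/sysconfig/jenkins, and that no job is running when it starts:

    sudo service jenkins stop
    sudo cp -a /var/lib/jenkins /d/jenkins         # -a preserves symlinks and perms
    sudo sed -i 's|^JENKINS_HOME=.*|JENKINS_HOME="/d/jenkins"|' /etc/sysconfig/jenkins
    sudo mv /var/lib/jenkins /var/lib/jenkins.old  # surfaces anything still using the old path
    sudo service jenkins start
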
20:51 avati oh some job is running
20:51 avati i will move it in idle time
20:52 lpabon yeah, the 2h+ regressions :=)
20:52 lpabon i'm guessing you can kill those
20:52 avati "started 16mins ago"
20:53 portante kkeithley has been starting those
20:53 portante we should check with him
20:53 avati kkeithley, ping?
23:01 badone joined #gluster-dev
23:41 awheeler joined #gluster-dev
