
IRC log for #gluster-dev, 2014-09-24


All times shown according to UTC.

Time Nick Message
00:50 Aylin_Pagac22 joined #gluster-dev
01:32 glusterbot` joined #gluster-dev
01:40 aviksil joined #gluster-dev
01:45 Asa_Sawayn joined #gluster-dev
02:05 shyam joined #gluster-dev
03:48 Adeline_Glover joined #gluster-dev
03:50 itisravi joined #gluster-dev
04:06 shubhendu joined #gluster-dev
04:17 ndarshan joined #gluster-dev
04:28 kanagaraj joined #gluster-dev
04:31 nishanth joined #gluster-dev
04:39 anoopcs joined #gluster-dev
04:45 Rafi_kc joined #gluster-dev
04:45 rafi1 joined #gluster-dev
04:49 deepakcs joined #gluster-dev
05:00 spandit joined #gluster-dev
05:02 Rafi_kc joined #gluster-dev
05:08 aviksil joined #gluster-dev
05:17 hagarth joined #gluster-dev
05:18 pranithk joined #gluster-dev
05:20 aravindavk joined #gluster-dev
05:27 kshlm joined #gluster-dev
05:28 jiffin joined #gluster-dev
05:41 Gaurav__ joined #gluster-dev
06:00 bala joined #gluster-dev
06:01 soumya__ joined #gluster-dev
06:05 atalur joined #gluster-dev
06:07 RaSTar joined #gluster-dev
06:23 ppai joined #gluster-dev
06:37 aviksil joined #gluster-dev
06:52 hagarth joined #gluster-dev
07:13 raghu joined #gluster-dev
07:18 aravindavk joined #gluster-dev
07:43 lalatenduM joined #gluster-dev
08:00 aravindavk joined #gluster-dev
08:01 hagarth joined #gluster-dev
08:04 deepakcs joined #gluster-dev
08:37 jiffin1 joined #gluster-dev
08:52 hagarth joined #gluster-dev
09:19 vikumar joined #gluster-dev
09:22 lalatenduM bala, here is the bug for master branch https://bugzilla.redhat.com/show_bug.cgi?id=1145989
09:22 glusterbot Bug 1145989: high, high, ---, lmohanty, NEW , package POSTIN scriptlet failure
09:23 lalatenduM bala, also here is the one for 3.6 branch https://bugzilla.redhat.com/show_bug.cgi?id=1145992
09:23 glusterbot Bug 1145992: high, high, ---, barumuga, NEW , package POSTIN scriptlet failure
09:50 ndarshan joined #gluster-dev
10:10 shyam joined #gluster-dev
10:19 spandit joined #gluster-dev
10:29 shyam left #gluster-dev
10:29 hagarth joined #gluster-dev
10:48 kdhananjay joined #gluster-dev
11:01 bala lalatenduM: fixes for both bzs are submitted for review
11:01 lalatenduM bala, yeah saw that, thanks bala++
11:01 glusterbot lalatenduM: bala's karma is now 1
11:02 lalatenduM bala, I think you forgot to add a changelog entry in the spec file
11:02 lalatenduM bala, plz add that
11:03 kanagaraj joined #gluster-dev
11:05 bala lalatenduM: will do that
11:18 ppai joined #gluster-dev
11:43 spandit joined #gluster-dev
12:01 itisravi joined #gluster-dev
12:01 JustinClift *** Weekly GlusterFS Community Meeting starting in #gluster-meeting on irc.freenode.net ***
12:02 JustinClift Unless I've got my timezones wrong...
12:09 hagarth joined #gluster-dev
12:17 kkeithley1 joined #gluster-dev
12:35 ppai joined #gluster-dev
13:00 tdasilva joined #gluster-dev
13:09 shyam joined #gluster-dev
13:16 ndevos lalatenduM: could you send an email about that ldconfig issue to the fedora packaging list?
13:17 ndevos lalatenduM: I've posted that question, link and email address in my 1st comment on http://review.gluster.org/8836 too
13:23 kkeithley_ ndevos: A Fedora Proven Packager made that change IIRC, without discussing it or testing it. But I don't remember it being a problem until recently
13:25 ndevos kkeithley_: yes, it is common practice to call ldconfig like that, it must be a recent breakage
13:26 ndevos kkeithley_: the packaging guidelines contain the "-p /sbin/ldconfig" snippet as example, if that is not supposed to be working anymore, the guidelines/snippets need to be updated too
13:26 ndevos and, probably many packages?
13:34 misc yep
13:34 misc a change in rpm ?
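
For reference, the packaging-guidelines snippet ndevos quotes registers ldconfig as the whole scriptlet; in a spec file it looks roughly like this (a minimal sketch, the subpackage name is illustrative):

    # run ldconfig as the entire %post/%postun scriptlet for a
    # subpackage that ships shared libraries
    %post libs -p /sbin/ldconfig
    %postun libs -p /sbin/ldconfig
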
13:45 * lalatenduM was not at desk, reading logs
13:45 lalatenduM ndevos, did you mean you have already sent the mail to fedora-devel?
13:45 ndevos lalatenduM: no, I did not send the email, I was asking if you could :)
13:46 lalatenduM ndevos, will do that
13:46 lalatenduM on it
13:46 ndevos lalatenduM++ thanks!
13:46 glusterbot ndevos: lalatenduM's karma is now 26
13:47 ndevos kkeithley_: you want to +1 or +2 http://review.gluster.org/8836 too?
13:47 * ndevos can merge the change now...
13:48 ndevos ah, wait that's the 3.6 branch one, http://review.gluster.org/8835 needs to go first
13:48 deepakcs joined #gluster-dev
13:48 kkeithley_ lala already +2'd both of them
13:49 lalatenduM ndevos, I have tested them too
13:49 kkeithley_ why does the build system fail 8835 though
13:49 ndevos it's running again: http://build.gluster.org/job/rackspace-regression-2GB-triggered/1754/console
13:49 kkeithley_ okay good, I don't need to retrigger it then. ;-)
13:50 ndevos because of this sporadic failure, I can not merge it yet :-/
13:50 ndevos and, the more +1 or +2's, the better :D
13:51 kkeithley_ +2 + +2 + +2 = +2? ;-)
13:51 ndevos yes, something like that!
13:53 JustinClift Hmmm, rackspace only supports F20 atm, no F21 alpha
13:53 JustinClift I guess we could try an upgrade in place, and see if it still boots
13:54 * JustinClift figures if we can get NetBSD working from originally being a CentOS 6.5 VM, we should be able to do F21 ;)
13:54 kkeithley_ can't fedup to f21alpha?
13:54 JustinClift Dunno
13:54 JustinClift Will find out
13:55 kkeithley_ yeah, I'd think anyone who can morph a centos vm into NetBSD should be able to morph f20 to f21 pretty easily. And part the Red Sea, and maybe even walk on water. ;-)
13:57 kkeithley_ does rackspace do FreeBSD even? Ramnode does FreeBSD, not that that helps us any
13:58 kkeithley_ speaking of which...
13:58 kkeithley_ misc: any news on getting our server+RAID into PHX2 colo?
13:59 deepakcs joined #gluster-dev
13:59 kkeithley_ with that we could stand up pretty much anything we want without jumping through hoops
13:59 kkeithley_ when we want something rackspace doesn't support
14:01 kkeithley_ but morphing FreeBSD into NetBSD might be a wee bit easier than morphing CentOS into NetBSD.
14:02 misc kkeithley_: I did ask them, they told me they would look
14:02 misc seems they didn't, will ping them again
14:03 lalatenduM ndevos, kkeithley I have sent the mail to fedora devel
14:06 kkeithley_ misc++
14:06 glusterbot kkeithley_: misc's karma is now 6
14:09 ndevos Humble: I've updated http://www.gluster.org/community/documentation/index.php/How_to_clone a little, but didn't add screenshots ;)
14:19 wushudoin| joined #gluster-dev
14:28 lalatenduM kkeithley, ndevos , got a reply to my question on fedora-devel; the cause mentioned in the reply is surprising to me, do you guys agree with it?
14:29 lalatenduM kkeithley, the changes you did for the glusterfs-server issue (the %ghost issue), IMO we should send the change to master and 3.6
14:30 ndevos lalatenduM: oh, lol, the %post is not empty (comments only), and an empty %post is needed for "-p /sbin/ldconfig" to work correctly
14:31 lalatenduM ndevos, we can't put comments after "-p /sbin/ldconfig"?
14:31 ndevos lalatenduM: yeah :-/
14:31 lalatenduM ndevos, what the **** :)
14:32 ndevos lalatenduM: I think you can file a bug against rpm for that, it should probably interpret the comments as empty :)
14:33 lalatenduM ndevos, yeah , why not :)
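
The failure mode uncovered above, sketched: with -p, the scriptlet body must be truly empty, because rpm hands any body (comments included) to the named program, and ldconfig fails on it. A minimal sketch of the broken and working forms (the subpackage name is illustrative):

    # broken: the comment line makes the scriptlet body non-empty, so
    # it gets handed to /sbin/ldconfig and the POSTIN scriptlet fails
    %post libs -p /sbin/ldconfig
    # update the runtime linker cache

    # works: keep the body genuinely empty ...
    %post libs -p /sbin/ldconfig

    # ... or drop -p and call ldconfig yourself; comments are fine here
    %post libs
    # update the runtime linker cache
    /sbin/ldconfig
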
14:33 ndevos lalatenduM: I normally run the nightly builds on my test systems, there has not been a %ghost issue there
14:33 kkeithley_ lalatenduM: yes, all the changes in the dist-git glusterfs.spec need to go into upstream glusterfs.spec.in
14:34 lalatenduM ndevos, thats interesting :)
14:34 lalatenduM kkeithley, ndevos should http://review.gluster.org/#/c/7199/ have the changes together?
14:35 ndevos lalatenduM: huh? what changes are you talking about?
14:36 lalatenduM glusterfs.spec in fedora dist-git master vs upstream glusterfs.spec.in
14:36 kkeithley_ wrt %ghost changes... Builds are fine. Your nightly builds do or don't install though? The %ghost issue only affects installing glusterfs-server
14:37 ndevos kkeithley_: I think I regularly install the nightly builds, at least I used those for testing with ganesha...
14:37 ndevos (from the master branch, glusterfs-3.7....)
14:37 * kkeithley_ wonders how that could have worked then
14:39 ndevos kkeithley_: /var/lib/glusterd/hooks/1/add-brick/post is not marked as %ghost in glusterfs.spec.in, see commit 4cf348fc
14:40 ndevos great, I think I debugged that issue that time too, see http://review.gluster.org/7645
14:40 kkeithley_ But there were other %ghosts in the spec file besides that one that were breaking the install
14:41 * ndevos wonders about that
14:41 kkeithley_ at least in the glusterfs.spec that Humble used for the initial 3.6.0beta1 build
14:42 kkeithley_ hopefully nobody will feel compelled to completely restructure the glusterfs.spec{,.in} again. ;-)
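
For readers following the %ghost thread: %ghost lists a file in the package manifest so that rpm owns it (and removes it on erase) without shipping its contents; the file is expected to appear at runtime or via a scriptlet. A sketch using the hook file named above:

    # tracked by rpm but not installed from the package; per commit
    # 4cf348fc, glusterfs.spec.in ships this file normally instead
    # of ghosting it
    %ghost /var/lib/glusterd/hooks/1/add-brick/post
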
14:43 ndevos oh, well, that is not used for the nightly builds, they use the 'pure' glusterfs.spec.in
14:44 ndevos we can restructure again and place the comments in different spots, then we can use "-p /sbin/ldconfig" again!
14:44 lalatenduM ndevos, did not get you regarding "they use the 'pure' glusterfs.spec.in"
14:44 kkeithley_ well, the two should be almost exactly the same. The refactoring broke that.
14:44 ndevos lalatenduM: the nightly builds do not use the fedora dist-git .spec, but the one from the glusterfs sources
14:46 ndevos I'm still not sure if we should advise users to build rpms with the spec from the sources, while we provide rpms from a different .spec
14:46 ndevos but well... there are more important things to worry about ;)
14:46 lalatenduM ndevos, but fedora dist-git had changes from the upstream sources when we saw the issue, not sure why
14:47 kkeithley_ If we keep the specs in sync (either manually or by some automatic method) then I don't have a problem.
14:47 lalatenduM ndevos, users can anyway get the spec we use as we provide source rpms too
14:47 lalatenduM kkeithley, yeah agree
14:48 kkeithley_ previously they were in sync, and the only diff is/was the _for_fedora_koji_builds
14:48 kkeithley_ which was used for installing the glusterfsd .init/.service files
14:49 Humble ndevos++ , Thanks !! That page looks better now !
14:49 glusterbot Humble: ndevos's karma is now 25
14:49 kkeithley_ and the version, for rpms built from git source or in koji
14:50 kkeithley_ let's get back to them being in sync with each other
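
The _for_fedora_koji_builds macro kkeithley_ refers to is the switch that lets one spec serve both builds; schematically it works like this (a sketch, the guarded content is illustrative):

    # glusterfs.spec.in leaves this at 0; Fedora dist-git flips it to 1
    %{!?_for_fedora_koji_builds:%global _for_fedora_koji_builds 0}

    %if ( 0%{?_for_fedora_koji_builds} )
    # Fedora/koji-only packaging, e.g. installing the glusterfsd
    # .init/.service files mentioned above
    %endif
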
14:51 Humble I think we have to make the fedora spec as close as possible to the upstream spec
14:51 kkeithley_ yes, we're all in agreement about that. ;-)
14:51 Humble :)
14:51 Humble I think the current version of fedora spec which we used for 3.6.0 looks close to upstream
14:51 Humble Isn't it?
14:52 Humble :)
14:52 kkeithley_ they should be identical, except the value of _for_fedora_koji_builds
14:52 Humble indeed
14:52 kkeithley_ and the changelog
14:52 Humble true
14:53 Humble -->hopefully nobody will feel compelled to completely restructure the glusterfs.spec{,.in} again. ;-) -->
14:53 kkeithley_ hopefully now it is. But I think we need to fix up glusterfs.spec.in with the fixes we made in dist-git's glusterfs.spec
14:53 Humble yeah :)
14:54 Humble yep.. the .in has to be in sync
14:54 Gaurav__ joined #gluster-dev
14:55 Humble also we have to make sure we won't ship 2 major versions of GlusterFS in the same fedora release :) :)
14:56 ndevos lalatenduM: yes, I know they can get to the fedora spec, but it is much easier to 'rpmbuild -ta glusterfs-3.6.0.tar.gz' :)
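
For anyone following along, that one-liner builds everything straight from the tarball, no dist-git checkout needed:

    # -ta: build source and binary rpms using the glusterfs.spec
    # shipped inside the tarball itself; results land under
    # ~/rpmbuild/RPMS and ~/rpmbuild/SRPMS by default
    rpmbuild -ta glusterfs-3.6.0.tar.gz
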
14:57 _Bryan_ joined #gluster-dev
15:00 JustinClift Oh yah, the engg server from gluster.com days is running very old openssl still
15:00 JustinClift At least it's not running https
15:01 ndevos I thought https was a good thing?
15:02 * kkeithley_ thinks he meant "good thing we weren't running https with the old openssl"
15:02 Humble kkeithley, jfyi the spec and the sources were pushed to fedora 'master' dist-git
15:02 Humble rest of the branches remains the same..
15:02 kkeithley_ yes, I saw the email about the commit to the master branch
15:02 Humble oh..ok
15:03 kkeithley_ I think we still need to decide whether we want to try to get 3.6.0 into f21 before GA or not.
15:03 kkeithley_ although mainly I think that depends on whether we GA 3.6.0 before f21 GA.
15:04 lalatenduM kkeithley, Humble ndevos yeah, maybe we should start a mail thread on it and publish the result to a wiki page
15:04 kkeithley_ or maybe it has already been decided
15:04 Humble I think if we release 3.6 GA and if it's stable, we could try to push
15:04 Humble kkeithley,
15:05 Humble kkeithley, looks like 3.6 GA will happen in the 2nd week of Oct
15:05 lalatenduM Humble, kkeithley samba, qemu, and nfs-ganesha have to be rebuilt if we push 3.6.0 to f21
15:05 Humble hagarth, can confirm though
15:05 ndevos kkeithley_: if we want 3.6 in f21, we need to get an exception/blocker
15:06 lalatenduM I am fine with 3.6 in f21, if 3.6 GA happens before f21 GA
15:06 ndevos well, we *can* push the update, but it is against the fedora policy, and we'll run into the rebuilds of dependent packages
15:07 shubhendu joined #gluster-dev
15:07 lalatenduM kkeithley, ndevos Humble , regarding "rebuild and new versions of qemu, samba, and ganesha" in rawhide, should I build for rawhide then send the mail or the reverse?
15:08 lalatenduM to fedora-devel
15:08 kkeithley_ so it's already decided then
15:08 kkeithley_ that's fine
15:09 kkeithley_ f21 = 3.5, f22/rawhide = 3.6
15:09 lalatenduM kkeithley, ?? did not get you
15:09 ndevos lalatenduM: 1st the email with a date on when you build, and include all the $PACKAGE-owner@fedoraproject.org on CC
15:09 lalatenduM kkeithley, nope it is not decided , was just taking rawhide as an example
15:10 ndevos lalatenduM: you can check for dependencies with the repoquery command, check the archives for emails with topics about "SONAME bump"
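
Something like this, for instance (a sketch using yum-utils' repoquery; the SONAME shown is the pre-bump one and is illustrative):

    # list rawhide packages that require the old libgfapi SONAME and
    # therefore need a rebuild after the bump
    repoquery --repoid=rawhide --whatrequires 'libgfapi.so.0()(64bit)'
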
15:10 kkeithley_ based on ndevos' comment that we need an exception/blocker to put 3.6 into f21, I'd say it's already decided that we should not put 3.6 into f21
15:10 kkeithley_ anyway, I'm fine with whatever decision we make
15:10 ndevos yes, from my understanding we can only put 3.6 in f22 (now still called rawhide)
15:11 Humble so, 3.5 in 21 and 3.6 in f22 ..
15:11 lalatenduM I am fine with it
15:11 lalatenduM ndevos, will do
15:12 kkeithley_ yeah. I don't think it really matters as we'll have RPMs for all versions of Gluster for all versions of Fedora on download.gluster.org
15:12 lalatenduM kkeithley, +1
15:12 Humble yep ..
15:13 ndevos indeed, if someone wants 3.6, they can get it from d.g.o
15:13 Humble with this , will it cause any issues on qemu, samba against libgfapi versions ?
15:16 lalatenduM Humble, qemu, samba, and ganesha need to be rebuilt against glusterfs 3.6.0
15:16 lalatenduM ndevos, it seems with the SO version bump we need to rebuild the dependent packages (from the old mails) :)
15:16 lalatenduM kkeithley, ^^
15:16 kkeithley_ Jose already builds samba-4.1 for d.g.o. I build ganesha for d.g.o. Mostly this has been to get gfapi support where it isn't available in EPEL.
15:17 ndevos lalatenduM: yes, but that means you need to be a proven packager, or be a maintainer for all those packages
15:17 Humble lalatenduM, yes, but who will do that for qemu ?
15:17 kkeithley_ Anyone can do scratch builds
15:17 kkeithley_ But the right dependencies need to be available to koji
15:18 ndevos scratch builds yes, but not push the rebuilt package to the rawhide repository
15:18 lalatenduM ndevos, kkeithley, are you guys proven packagers? can you guys do that
15:18 * ndevos can not do that
15:18 kkeithley_ No, I'm not
15:18 ndevos lalatenduM: that is the reason for the email, you ask the other maintainers to rebuild :)
15:18 lalatenduM hmm, then we need to ask the list
15:18 lalatenduM ndevos, yeah
15:19 Humble I think its better to ask the maintainers..
15:19 lalatenduM yup
15:19 kkeithley_ back up. In rawhide, we just announce the SO bump, and the owners of the dependent packages do their own rebuilds
15:19 lalatenduM kkeithley, cool
15:21 kkeithley_ But for our 3.6.0 RPMs on d.g.o for, e.g., f21, to be good community members we'll have to build our own RPMs of qemu, samba, and ganesha.
15:21 ndevos yes, indeed, and for all distributions...
15:21 Humble I don't think maintainers will do a 'rebuild' for every release (ex: beta :) )
15:21 kkeithley_ If we want, we can do _official_ builds of 3.6.0 for f21 and f20, and never push them to updates-testing or updates
15:21 kkeithley_ Humble: no they won't
15:22 ndevos Humble: maintainers do the rebuild for rawhide, so the rebuild is available when f22 gets branched
15:23 Humble yep
15:23 kkeithley_ If we have _official_ builds (that aren't pushed to updates{,-testing}) in koji, there are tricks we can do to run scratch builds of qemu, samba, and ganesha. But we're getting ahead of ourselves.
15:23 * ndevos needs to step out (and play squash), ttyl!
15:24 kkeithley_ ohh, another squash player
15:24 Humble ndevos, njoy :)
15:24 Humble kkeithley, too  much of process :)
15:25 kkeithley_ the process isn't that heavy, but it's way too early to be worried about it.
15:26 Humble ok.. then .. I am keeping quiet
15:26 Humble :)
15:28 kshlm joined #gluster-dev
15:29 Humble kkeithley, lalatenduM ndevos will continue discussion later :)
15:29 lalatenduM Humble, wait :)
15:29 lalatenduM 1 min
15:29 Humble :)
15:29 Humble ok.
15:29 kkeithley_ well, as I think about it some more, maybe it's easier to keep each branch in dist-git "clean", and only do the koji builds of the gluster version that matches the branch.
15:30 Humble makes sense as always :)
15:30 kkeithley_ things like 3.6.0 for f20 on d.g.o will mean that we need to also build qemu, samba, and ganesha at the same time.
15:30 lalatenduM kkeithley, Humble ndevos , what about http://www.fpaste.org/136123/15725631/
15:31 kkeithley_ this is making my head hurt. ;-(
15:31 kkeithley_ ;-)
15:31 lalatenduM kkeithley, I know
15:31 lalatenduM :)
15:34 kkeithley_ 3.6.0beta1 is in rawhide now? No? I thought you only did scratch builds?  Once you do a real build, i.e. not a scratch build, then you need to announce the SO_NAME bump
15:34 kkeithley_ for rawhide
15:35 lalatenduM kkeithley, right
15:35 Humble yeah
15:35 Humble we have to worry only when 3.6 GA is available
15:35 lalatenduM kkeithley, I have not done the real build yet
15:35 kkeithley_ official builds for rawhide/f22 are automatically in the spin (such as it is) of rawhide
15:35 Humble when 3.6 GA is available we have to request a rebuild
15:35 Humble till then we don't have to poke them
15:35 Humble thats my view
15:35 lalatenduM Humble, ok, I am fine with that too
15:35 kkeithley_ correct
15:36 lalatenduM cool
15:36 lalatenduM :)
15:36 Humble :)
15:36 lalatenduM thanks kkeithley Humble :)
15:36 lalatenduM Humble++
15:36 glusterbot lalatenduM: Humble's karma is now 6
15:36 Humble as long as we are not shipping 3.6 in f21 we can relax :)
15:36 lalatenduM kkeithley++
15:37 glusterbot lalatenduM: kkeithley's karma is now 22
15:37 Humble but we need to build it for download.g.org :)
15:38 kkeithley_ we _can_ build it in koji. That requires committing the 3.6.0 spec to the other branches. Or we can build it elsewhere and not "pollute" our branch .spec files.
15:39 kkeithley_ Maybe we can defer that decision until 3.6.0 GA?
15:39 Humble I think committing 3.6.0 spec to other branches will make life tough when we need to build 3.5
15:40 Humble yeah, lets avoid 'polluting' our branch spec files..
15:40 Humble anyway, master has that spec
15:40 Humble --> maybe we can defer that decision until 3.6.0 GA? -> yep :)
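
For reference, the koji route being weighed would look roughly like this per branch (a sketch; the merge step is the spec "pollution" being discussed):

    fedpkg switch-branch f20     # likewise for f21
    git merge master             # bring the 3.6.0 spec into the branch
    fedpkg build                 # official koji build
    # deliberately skip 'fedpkg update', so nothing reaches
    # updates-testing or updates
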
15:56 Humble kkeithley++ , lalatenduM++ planning to leave for the day :)
15:56 glusterbot Humble: kkeithley's karma is now 23
15:56 glusterbot Humble: lalatenduM's karma is now 27
15:57 Humble will continue the discussion later :)
15:57 lalatenduM Humble, kkeithley ttyl guys
15:59 jobewan joined #gluster-dev
16:13 kkeithley_ JustinClift: did something change on build.gluster.org in the last hour? four or five smoke.sh runs failed and I don't see any actual failures in the tests
16:30 JustinClift kkeithley_: Not that I know of.  I haven't updated anything on that server yet
16:30 JustinClift raghug asked a little while ago about a failure in his smoke.sh too
16:30 * JustinClift just said "rerun it"
16:30 JustinClift Wasn't aware there's a pattern
16:30 JustinClift :/
16:31 * JustinClift is updating other boxen atm
16:32 JustinClift On that note, I'm about to reboot www.gluster.org
16:32 JustinClift (kernel updates)
16:32 JustinClift Hmmm, might as well install yum-cron while at it...
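
On an EL6-era host that amounts to roughly the following (a sketch assuming a SysV init system, as on www.gluster.org at the time):

    yum install -y yum-cron      # nightly automatic package updates
    chkconfig yum-cron on
    service yum-cron start
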
16:38 misc too bad, I didn't finish the saltstack stuff, we would have been able to run yum upgrade -y on all servers
16:38 misc (except the one without yum )
16:40 JustinClift misc, yeah please finish that :)
16:55 JustinClift www.gluster.org is rebooting now
17:00 JustinClift Hmmmm, www.gluster.org is taking a long time to come back up
17:03 hagarth JustinClift: can I get access to a machine where the smoke tests are failing?
17:04 JustinClift hagarth: The smoke tests only run on build.gluster.org
17:04 misc JustinClift: fsck
17:04 JustinClift Aha, found out why www.gluster.org is taking so long to come back up
17:04 JustinClift misc: Not fsck
17:04 JustinClift The console is showing:
17:05 misc "press enter to continue"
17:05 JustinClift "Warning -- SELinux targetted policy relabel is required."
17:05 JustinClift "Relabeling could take a very long time, depending on file system size and speed of hard drives"
17:05 JustinClift With a bunch of stars progressing
17:05 misc yeah, so that's good
17:05 JustinClift It's now finished, and rebooting again, so should be all good soon
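
For reference, a full relabel like the one showing on the console is normally requested with a flag file and runs once during the next boot:

    # init sees /.autorelabel at boot, relabels all filesystems with
    # the targeted policy, removes the flag, and reboots again
    touch /.autorelabel
    reboot
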
17:05 hagarth JustinClift: ah ok, checking there
17:06 JustinClift hagarth: The important part of our smoke tests is more the "does it compile ok?" tests.
17:07 JustinClift If the smoke.sh stuff is having problems with the freebsd filesystem testing thingy, that part could probably be taken out
17:08 shyam joined #gluster-dev
17:08 JustinClift www.gluster.org seems to be back on line ok
17:10 JustinClift What's ironic is that our RHEL servers in Rackspace don't have the updated bash rpm available yet
17:10 JustinClift Whereas some of my CentOS servers in Linode already do (and have been updated)
17:11 misc curious, all my rackspace servers do have the update
17:11 JustinClift RHEL or CentOS?
17:12 misc rhel
17:12 JustinClift Do you have a few minutes to check the RHN config on www.gluster.org?
17:12 misc once I finish making sure all servers are selinuxiefied :)
17:12 JustinClift :)
17:12 kkeithley_ slow propagation to mirrors....  anecdotal comment on centos-devel list. Probably depends on which mirror gets picked
17:13 JustinClift kkeithley_: The _RHEL_ servers don't have it yet
17:13 * misc look
17:14 JustinClift misc: With the "making sure all servers are selinuxiefied", I guess that means the www.gluster.org relabelling was you
17:14 misc JustinClift: nope, I am looking at manageiq.org for now
17:14 JustinClift You haven't enabled it on review.gluster.org nor build.gluster.org have you?
17:14 JustinClift k
17:14 kkeithley_ yes, I don't know why any particular RHEL mirror would be fast or slow. Just making an observation that if CentOS mirrors are fast or slow then RHEL mirrors might have the same problem.   Anyway, just pontificating
17:14 JustinClift :)
17:15 hagarth JustinClift: looks like we are failing regular tests and not just freebsd smoke right now?
17:15 JustinClift hagarth: /me hasn't looked
17:16 JustinClift hagarth: I'm busy updating slave nodes and stuff
17:16 hagarth JustinClift: carry on, was trying to help raghug
17:16 JustinClift hagarth: np :)
17:18 kkeithley_ wrt failing tests: e.g. http://review.gluster.org/8685 is just a log-level change, but it's failing smoke.sh. I've already retriggered it once.
17:18 misc JustinClift: ok so it turns out that I am stupid, I didn't properly check and no rhel server is updated :)
17:22 JustinClift ;)
17:23 hagarth kkeithley_: right, looks like the tests are not going beyond /opt/qa/tools/posix-compliance/tests/chmod/11.t ..... ok
17:24 JustinClift hagarth: Ahhh, that's the freebsd smoke stuff I mean (posix-compliance), not the newer VM based version
17:24 hagarth there is a watchdog to kill the smoke test if the test run goes beyond 600s
17:24 hagarth JustinClift: smoke runs on Linux too .. this is on Linux i presume
17:24 JustinClift Does this posix-compliance actually perform anything useful for us any more?
17:25 JustinClift Meh, I'll butt out :)
17:25 hagarth JustinClift: basic sanity that a smoke test should provide
17:25 JustinClift Other things to do
17:25 JustinClift k, np ;)
17:26 kkeithley_ hagarth: yes, just noticing that.  build.gluster.org is up 110 days, java is using 100% cpu
17:27 JustinClift Yay java
17:28 JustinClift If anyone managed to get a java dependency into main GlusterFS, I will find them and do bad things to them
17:28 kkeithley_ 100% of one (of six) cpu
17:28 JustinClift ... then revert their commit ;)
17:30 kkeithley_ that seems to be just jenkins, but using 100% (sometimes more) seems like a problem, and if there's a watchdog that's killing smoke.sh after 10min
17:30 JustinClift You should be able to shut down the jenkins process, then start it up again
17:30 JustinClift Pretty sure there's a startup script for it
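
Assuming Jenkins was installed from its rpm (which ships an init script), the restart JustinClift suggests is a one-liner:

    # EL6-style init script shipped with the jenkins package
    service jenkins restart
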
17:30 kkeithley_ watchdog kill is a symptom ? of java/jenkins using 100%?
17:31 JustinClift ^ no idea
17:31 JustinClift hagarth: ^ ?
17:31 kkeithley_ I have bad luck with build.gluster.org
17:31 hagarth kkeithley_: not entirely sure. I suspect that one of the patches in the series might be causing a lockdown of the file system.
17:31 hagarth lockdown of the mount point to be more precise
17:32 JustinClift Well don't kill build.gluster.org.  It's not being backed up yet. ;)
17:32 hagarth a test passed smoke now!
17:32 JustinClift Uptime of 110 days isn't too bad though.  That indicates it'll survive a reboot
18:28 RaSTar joined #gluster-dev
18:29 RaSTar joined #gluster-dev
19:54 kkeithley_ lalatenduM++
19:54 glusterbot kkeithley_: lalatenduM's karma is now 28
20:02 shyam joined #gluster-dev
23:39 MacWinner joined #gluster-dev
