
IRC log for #gluster-dev, 2016-05-24


All times shown according to UTC.

Time Nick Message
00:34 shaunm joined #gluster-dev
00:52 luizcpg joined #gluster-dev
01:06 shyam joined #gluster-dev
01:24 EinstCrazy joined #gluster-dev
01:33 luizcpg joined #gluster-dev
01:36 penguinRaider_ joined #gluster-dev
01:36 penguinRaider_ left #gluster-dev
01:40 penguinRaider joined #gluster-dev
01:41 jobewan joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:00 EinstCrazy joined #gluster-dev
03:10 josferna joined #gluster-dev
03:17 ppai joined #gluster-dev
03:57 overclk joined #gluster-dev
03:59 itisravi joined #gluster-dev
04:04 vimal joined #gluster-dev
04:11 atinm joined #gluster-dev
04:14 nishanth joined #gluster-dev
04:23 kotreshhr joined #gluster-dev
04:31 Manikandan joined #gluster-dev
04:36 mchangir joined #gluster-dev
04:42 gem joined #gluster-dev
04:46 aspandey joined #gluster-dev
04:57 pkalever joined #gluster-dev
05:06 Bhaskarakiran joined #gluster-dev
05:15 pranithk1 joined #gluster-dev
05:15 kotreshhr joined #gluster-dev
05:15 kkeithley joined #gluster-dev
05:15 xavih joined #gluster-dev
05:16 pkalever left #gluster-dev
05:17 pkalever joined #gluster-dev
05:17 raghug joined #gluster-dev
05:17 raghug joined #gluster-dev
05:26 Apeksha joined #gluster-dev
05:27 prasanth joined #gluster-dev
05:30 karthik___ joined #gluster-dev
05:31 hgowtham joined #gluster-dev
05:33 jiffin joined #gluster-dev
05:34 mchangir joined #gluster-dev
05:39 spalai joined #gluster-dev
05:43 itisravi joined #gluster-dev
05:44 atalur joined #gluster-dev
05:46 ppai joined #gluster-dev
05:49 rafi joined #gluster-dev
05:55 aravindavk joined #gluster-dev
05:55 pkalever left #gluster-dev
05:56 pkalever joined #gluster-dev
06:02 skoduri joined #gluster-dev
06:03 ndarshan joined #gluster-dev
06:05 kdhananjay joined #gluster-dev
06:05 kdhananjay pranithk1: just came to ask if you could merge http://review.gluster.org/14450
06:06 * kdhananjay poof
06:11 jobewan joined #gluster-dev
06:12 Saravanakmr joined #gluster-dev
06:15 asengupt joined #gluster-dev
06:26 hchiramm joined #gluster-dev
06:26 mchangir joined #gluster-dev
06:28 overclk joined #gluster-dev
06:53 mchangir joined #gluster-dev
07:17 Saravanakmr joined #gluster-dev
07:24 kdhananjay joined #gluster-dev
07:25 Saravanakmr joined #gluster-dev
07:25 ashiq_ joined #gluster-dev
07:26 spalai joined #gluster-dev
07:32 rastar joined #gluster-dev
07:39 skoduri joined #gluster-dev
07:57 karthik___ joined #gluster-dev
07:59 prasanth joined #gluster-dev
08:23 itisravi pranithk1: could you merge http://review.gluster.org/#/c/14358/ and ack http://review.gluster.org/#/c/14461/?
08:24 pranithk1 itisravi: second one didn't pass regression...
08:24 pranithk1 itisravi: and it is 3.8, so I won't be merging it
08:24 itisravi pranithk1: yeah that's why I only asked for acks..
08:24 pranithk1 itisravi: ah! ack
08:25 pranithk1 itisravi: done
08:25 itisravi pranithk++
08:25 glusterbot itisravi: pranithk's karma is now 51
08:40 EinstCrazy joined #gluster-dev
08:42 rraja joined #gluster-dev
08:49 josferna joined #gluster-dev
08:56 spalai joined #gluster-dev
08:57 ppai joined #gluster-dev
08:57 EinstCrazy joined #gluster-dev
08:57 nigelb misc: do you have details about our replication setup on gerrit?
08:57 nigelb It may be the source of our troubles :)
08:59 misc nigelb: not much
09:00 nigelb Let me paste what I discovered.
09:00 misc nigelb: stuff that goes in etc/replication.config but that's all
09:00 nigelb First our queue has this stuff http://dpaste.com/2DWGCEH
09:01 nigelb See the bunch of replication failures to file:///git
09:01 nigelb there isn't a git repo at /git on that VM.
09:01 nigelb or folder with git repos
09:02 misc mhh, nothing in /etc/fstab ?
09:02 nigelb This is what the replication.config says
09:02 nigelb http://dpaste.com/26WS9AS
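For context, a stanza in gerrit's replication.config generally looks like the sketch below; this is a hypothetical example (remote names and paths are illustrative), not the contents of the paste above.
    [remote "local-mirror"]
        url = file:///git/${name}.git
        push = +refs/heads/*:refs/heads/*
        push = +refs/tags/*:refs/tags/*
    [remote "github"]
        url = git@github.com:gluster/${name}.git
        push = +refs/heads/*:refs/heads/*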
09:02 nigelb ahh
09:02 nigelb yes, there's stuff in /etc/fstab
09:02 nigelb and now I see /git
09:02 misc I think this was previously a nfs share, but it was removed since unused
09:03 misc nigelb: and there is a /git, no ?
09:03 nigelb Yes, there is a /git
09:03 nigelb and it does have files
09:03 nigelb now I'm further confused.
09:03 nigelb let's see what a jstack says.
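A thread dump of the kind mentioned here can be grabbed with jstack; a minimal sketch, assuming gerrit was started via its gerrit.sh wrapper (which tags the JVM with -DGerritCodeReview) and runs as the review user, and with an illustrative output path:
    # capture a thread dump of the gerrit JVM
    sudo -u review jstack $(pgrep -f GerritCodeReview) > /tmp/gerrit-threads.txt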
09:04 nigelb the relevant bit might be the folder ownership.
09:04 nigelb a few folders are owned by git
09:05 nigelb and most are owned by root.
09:06 ndevos kdhananjay, atalur: could you review these backports please? http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.8+message:cluster/afr
09:06 misc nigelb: they may not all be up to date or used
09:06 misc nigelb: a majority of them got moved to github, if I am not wrong
09:06 nigelb misc: fully developed on github and not gerrit?
09:07 nigelb our issues, I think stem from the queue being filled up
09:07 misc yeah, that would make sense
09:07 nigelb which explains why gerrit needs frequent restarts. It flushes out the queue.
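For reference, the queue in question can also be inspected over gerrit's ssh interface before resorting to a restart; a sketch, assuming an account with the needed capability (host and port are the usual gerrit defaults):
    ssh -p 29418 <user>@review.gluster.org gerrit show-queue -w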
09:07 misc well, gerrit required restart in the past before moving too :/
09:07 misc but maybe for different reasons
09:07 nigelb we may have multiple issues going on.
09:08 nigelb I'm also curious if we're DDoSing ourself with the build system :)
09:08 misc but I do not know what is current or not, we had a gitorious hosted instance as forge.gluster.org
09:08 misc this was moved to github
09:08 nigelb For now, I'll assume all of them are important
09:08 nigelb and work on clearing the queue issues.
09:08 misc yeah
09:09 misc nigelb: in fact, if you look at /review/review.gluster.org/etc/replication.config
09:09 misc there are only 2 repos that are actively used on gerrit, since replicated to github
09:09 misc now, there is maybe some kind of automated replication that would be triggered by the first replication rules
09:10 itisravi joined #gluster-dev
09:10 misc (in fact, when i did investigate that, it did end with me patching gerrit, then complaining to google about their stupid CLA)
09:11 nigelb the first set of replication is our current issue.
09:11 nigelb as far as I can see.
09:12 misc yeah, so fixing the perm would likely work
09:12 misc removing local replication for unused repos would also work, once we know where they should be
09:12 misc I was also working on gerrit salt states
09:13 nigelb I'm just getting a java stack trace
09:13 nigelb 1) so we know how to debug these
09:13 misc ( https://github.com/gluster/gluster.org_salt_states/blob/master/gerrit/init.sls ), but nothing on those permissions
09:13 nigelb 2) so we can confirm we're right on our theory
09:14 misc in fact, as i did copy the file from the nfs, maybe the permissions issue was here since before the migration
09:15 misc so on the old server, the permission are :
09:15 misc drwxr-xr-x  7 techiweb techiweb 4096 Apr 20  2015 glusterfs-hadoop.old.git
09:16 nigelb I'm going to get a few repos in the git group
09:16 nigelb and see what happens.
09:17 nigelb the jstack didn't help much.
09:18 misc I am not sure I can start the old VM
09:18 skoduri ndevos, can you please merge http://review.gluster.org/#/c/14426/ , http://review.gluster.org/14428 if you do not have any further comments.. thanks
09:18 misc and that's a RHEL 5, so there isn't guestfish
09:20 7GHAA6HYK joined #gluster-dev
09:25 nigelb misc: it should be fine.
09:25 nigelb Let me try restarting gerrit to flush that queue out.
09:28 misc anyway, even if that's not this, that's a good catch
09:29 mchangir joined #gluster-dev
09:29 nigelb boo, still failing.
09:30 kdhananjay ndevos: have ack'd the two patches I am familiar with.
09:30 ndevos kdhananjay++ thanks!
09:30 glusterbot ndevos: kdhananjay's karma is now 19
09:30 nigelb misc: I was trying to see if we were ddosing ourselves
09:30 nigelb when I ran into this particular problem :)
09:31 misc nigelb: I can ddos the server if that helps resolve the question
09:32 kdhananjay ndevos: when do you plan to do GA for  3.8?
09:32 ndevos kdhananjay: end of this month
09:32 * kdhananjay wants to know how much time she has to squeeze in a few test cases and two bug fixes
09:32 kdhananjay ndevos: oh next week I suppose then.
09:33 nigelb misc: haha
09:33 ndevos kdhananjay: I want to do RC2 today, and somewhere next week GA
09:34 kdhananjay ndevos: ok, thanks!
09:34 ndevos kdhananjay: get your patches in early, and get them reviewed as well, that helps a lot
09:34 kdhananjay ndevos: yep. working on it.
09:35 misc the sooner they get in, the more time there is for unplanned problems
09:39 ndevos skoduri: ah! I'm just writing a note, and kkeithley merged the master branch one
09:39 skoduri ndevos, oh
09:41 shubhendu joined #gluster-dev
09:48 nigelb misc: I've just looked at those /git repos
09:48 nigelb none of them have commits newer than march 2016
09:48 nigelb Is that when the migration happened?
09:51 mchangir joined #gluster-dev
09:52 misc nigelb: around this date, yes
09:53 nigelb misc: I changed permissions of some of those folders to be owned by review
09:53 nigelb the queue is much smaller now :)
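A rough sketch of the kind of fix described here; the paths and the 'review' user/group are assumptions based on the conversation, not the exact commands that were run.
    # list repos under /git that the gerrit (review) user does not own
    find /git -maxdepth 1 -type d ! -user review -exec ls -ld {} \;
    # hand a repo back to the user gerrit runs as
    chown -R review:review /git/some-repo.git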
09:53 misc nigelb: but if no one is using those repos (since no one complained they were out of date), why do we keep them ?
09:55 nigelb We shouldn't.
09:55 nigelb I'm going to turn off that replication line.
09:55 nigelb instead of messing around with permissions.
09:55 nigelb were these used for anything? Backups perhaps?
09:57 itisravi joined #gluster-dev
09:57 misc given that they were on a shared nfs server a long time ago, i would bet on synchronisation with other servers
09:57 misc engg.g.o was running 3 VM
09:57 misc jenkins, gerrit and a shell server
09:57 nigelb ahhh
09:58 nigelb If we still have heavy loads
09:58 nigelb I'm contemplating setting up a replicated git server on rackspace
09:59 misc I would first migrate to postgresql :)
09:59 nigelb would reduce load on gerrit server and reduce bandwidth costs at rackspace.
09:59 misc cause frankly, h2 is likely untuned
09:59 nigelb well, yeah, after the migration and upgrade
09:59 nigelb Like after all of this first set of things are done.
09:59 misc nigelb: we do not have bandwidth costs at rackspace, the problem is more the number of instances
09:59 nigelb aha
10:00 misc I do not want to say that coders do not work hard enough, but I think they are not yet at a terabyte of patches per month :)
10:01 nigelb heh
10:01 nigelb restarting gerrit again
10:01 nigelb without that replication
10:01 nigelb when we upgrade the OS, we can get rid of all these unwanted things to a clean config.
10:07 nigelb misc: do you want a pull req to the ansible state for gerrit for this change?
10:07 nigelb Or would you rather I worked towards an ansible role directly?
10:07 misc nigelb: directly to ansible
10:07 misc the salt state is just a prototype
10:07 nigelb okay
10:07 misc I bumped into "salt is too old on EL5" before deploying it
10:07 nigelb the ansible role will help us migrate to EL7
10:07 nigelb that should be good.
10:08 misc yeah, but for now, dev.gluster.org is completely unmanaged by ansible
10:08 misc and I suspect we shouldn't change that before the release
10:08 misc (I like to live dangerously, but not that much)
10:08 rraja_ joined #gluster-dev
10:09 nigelb I'll get everything in order so we can migrate to postgres immediately after, though.
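For reference, switching gerrit 2.x from the embedded h2 database to postgresql is done in the [database] section of etc/gerrit.config; a minimal sketch, assuming a local postgres instance and a database named reviewdb:
    [database]
        type = postgresql
        hostname = localhost
        database = reviewdb
        username = gerrit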
10:12 misc how hard would it be to have HA for gerrit ?
10:12 misc like, there is the sql db
10:12 misc and I guess something on the local fs
10:13 nigelb the git stuff can be replicated.
10:13 nigelb It's not too hard.
10:13 nigelb People do it all the time.
10:14 nigelb gerrit has docs for how to do it.
10:14 nigelb we may have to upgrade, though
10:15 misc yeah, but so, what happen if the server fail over at the wrong moment ?
10:15 misc (ie, before git is replicated, but after the db is, or the reverse)
10:15 misc mhh
10:15 atalur_ joined #gluster-dev
10:15 misc it might be ok if that's on different branches
10:15 * misc read too many papers on distributed computing
10:16 nigelb This is gerrit's stuff about scaling https://gerrit.googlesource.com/homepage/+/md-pages/docs/Scaling.md
10:19 kkeithley I merged the master branch one?
10:19 misc nigelb: that's a bit messy for a document :)
10:20 nigelb yeah. More of a wiki sort of thing than an actual useful guide.
10:20 misc the problem is that they are more looking on the scaling thing than replication for HA purpose
10:21 misc and I do not think we are yet at the stage where we need replication (or at least, I didn't feel that based on a quick reading of the resource usage, maybe I am wrong)
10:21 kkeithley ndevos, skoduri:  I merged the master branch one ?
10:22 kkeithley was I not supposed to?
10:23 nigelb misc: how much work would it be to get gerrit under munin?
10:23 misc nigelb: depends on the way
10:24 misc we need a port redirection by IT
10:24 kkeithley skoduri: https://bugzilla.redhat.com/show_bug.cgi?id=1339090 (grace post failback, downstream) says status is POST, but I don't see the patch in https://code.engineering.redhat.com/gerrit/
10:24 glusterbot Bug 1339090: urgent, unspecified, ---, kkeithle, POST , During failback, nodes other than failed back node do not enter grace period
10:25 misc we need to install munin, and likely put a manual config in /etc/munin/conf.d
10:25 misc another solution is to either make salt work on el5 (ie, push a backport, or see why it is not in el5)
10:25 misc or we need to open a port for direct ssh access on the server, so that's again IT magic
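The manual munin config mentioned above boils down to a host entry on the munin master; a minimal sketch, assuming the master can reach munin-node on its default port 4949 (the hostname and file name are illustrative):
    # e.g. /etc/munin/conf.d/gerrit.conf on the munin master
    [dev.gluster.org]
        address dev.gluster.org
        use_node_name yes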
10:26 nigelb Hrm.
10:26 kkeithley skoduri: You posted the patch for me? Or I still need to do that?
10:26 kkeithley wrong channel
10:28 misc nigelb: so I would say 1 day of work at worst case and around 1 week of wait
10:28 nigelb misc: we can't install munin with ansible, can we?
10:28 misc nigelb: dev.gluster.org is not managed by ansible yet
10:28 nigelb Before we do any major changes for gerrit, I'd like to get more numbers out of gerrit.
10:28 nigelb no of pushes, no of clones
10:28 misc one solution that would work is a ssh tunnel
10:29 nigelb memory consumption and cpu consumption
10:29 misc that's fragile
10:29 misc but I guess it could do the trick for 1 or 2 weeks worth of data
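A sketch of the ssh-tunnel workaround, assuming munin-node listens on its default port 4949 on the gerrit host (hostnames and the local port are illustrative):
    # on the munin master: forward the node's munin-node port over ssh
    ssh -f -N -L 14949:localhost:4949 root@dev.gluster.org
    # then point the host entry at the tunnel:
    #   [dev.gluster.org]
    #       address 127.0.0.1
    #       port 14949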
10:31 nigelb Let's wait and do it the right way.
10:31 nigelb It'll take a while before we get to do much with the data anyway.
10:31 ndevos kkeithley: no, thats fine, skoduri asked me to review and merge it, you merged while I was reviewing :)
10:32 nigelb In the meanwhile we need to migrate to postgres, upgrade to EL7, and upgrade gerrit.
10:32 EinstCrazy joined #gluster-dev
10:32 ndevos kkeithley: any objections to merge http://review.gluster.org/14503 - the config.guess/sub one?
10:33 kkeithley No objection from me. I was hoping to get another set of eyes on it.
10:33 kkeithley first
10:37 pranithk1 xavih: hey! I was going through the locking changes recently done. I have some doubts, could you let me know when you are free...
10:39 pranithk1 xavih: In ec_lock_assign_owner(), if the lock is added to wait_list() I think we lose the ref but there is one extra lock.. because refs_pending is decremented but there is no ref tracking the fact that it is waiting?
10:39 pranithk1 xavih: May be I should look at the whole code...
10:40 pranithk1 xavih: give me some more time
10:41 post-factum hard time for India https://www.youtube.com/watch?v=c1JEJpWhbU4
10:41 ndevos kkeithley: whose eyes would you like?
10:41 pranithk1 xavih: I think I found the bug I missed in the review about timer-cancel :-(
10:44 josferna joined #gluster-dev
10:48 xavih pranithk1: we do not track the number of refs being waiting. It's equal to the number of entries in the waiting list and we do not need the exact number in any place
10:49 xavih pranithk1: if waiting list is empty, there are 0 references, otherwise there are 1 or more, but we don't care how many
10:49 pranithk1 xavih: ah!, okay, must be done in the new patch... sorry for the ping before I completed the full review :-)
10:49 pranithk1 xavih: got it
10:50 xavih pranithk1: what bug are you talking about ?
10:50 misc nigelb: going out for a few hours if you do not have any more questions
10:50 nigelb misc: Nope. I'm just planning out documentation writing :)
10:51 pranithk1 xavih: The one you fixed about timer_cancel race. I didn't get a chance to review it. Jeff merged it. So I am reviewing it now...
10:51 pranithk1 xavih: sorry for the delay :-(
10:51 xavih pranithk1: ah, ok. No problem :)
10:52 pranithk1 xavih: I delayed reviewing your hardware acceleration patches also. :-(. They are not making it to 3.8, chiefly because of me.
10:52 pranithk1 xavih: Hope by end of this aspandey comes up to complete speed...
10:52 pranithk1 xavih: this year..
10:53 kkeithley ndevos: anyone who has made any changes to build and/or .spec file!
10:54 pranithk1 xavih: seems like there is going to be a new release model. Hope I get a chance to make it up to you
10:54 kkeithley but since it's blocking getting 3.8rc2 out, it's not worth waiting.  I was just hoping
10:54 pranithk1 xavih: which one did you vote by the way? I voted for release every 3 months
10:56 xavih pranithk1: I have no real preference between 2 or 3 months
10:56 pranithk1 xavih: :-). So which one did you vote?
10:57 xavih pranithk1: I haven't voted yet
10:57 pranithk1 xavih: then go for 3 months :-)
10:57 post-factum xavih: or 2
10:57 xavih hehe
10:58 * kkeithley thinks pranithk1 should volunteer to be the release manager for 3.9
10:58 pranithk1 kkeithley: No releases will happen looking at the way I operate :-(. So far all the releases I wanted to do were bad in one way or the other. I am not able to be as strict as ndevos about taking more patches.
10:59 pranithk1 kkeithley: So it never gets frozen
10:59 kkeithley there's a pill for that. ;-)
11:01 pranithk1 kkeithley: oh! what is this magic pill?
11:01 kkeithley StrictNine
11:01 misc ah ah
11:02 misc I heard that's a killer feature
11:02 pranithk1 kkeithley: you lost me
11:03 kkeithley strychnine is a poison
11:03 kkeithley rhymes with StrictNine
11:04 kkeithley not as funny when you have to explain it.
11:04 kkeithley Maybe Strict-3.9
11:04 kkeithley ?
11:05 pranithk1 kkeithley: Hmm... I need to become more strict....
11:05 kkeithley it's okay, misc got the joke
11:05 kkeithley no, please stay just the way you are.
11:05 kkeithley don't change a thing
11:06 pranithk1 kkeithley: I don't know man, the present way is not working out, every time it is either ndevos or kaushal who end up doing so much of the work
11:06 pranithk1 kkeithley: of course along with Vijay, I mean for 3.7 and all that...
11:13 atinm joined #gluster-dev
11:13 ira joined #gluster-dev
11:14 kkeithley yeah, don't worry about it. I was just joking
11:14 kkeithley s/joking/making a joke/
11:15 kkeithley but maybe being the release co-manager for 3.9 might not be such a bad idea after all.
11:20 pranithk1 kkeithley: yeah.. let's see. Let the voting complete.
11:22 pranithk1 xavih: Did one round of code walk through. Looked fine. Good that we got rid of refs, inserted counters
11:22 hgowtham joined #gluster-dev
11:22 pranithk1 xavih: Will try to understand it a bit more in some scenarios
11:23 pranithk1 xavih: how did you figure out the bug in timer_cancel. I saw it when I reviewed the old patch once more...
11:26 josferna joined #gluster-dev
11:27 xavih pranithk1: I did see fops being executed without the locks taken in some tests. I didn't see it either while reviewing the old patch :(
11:29 pranithk1 xavih: it was not handling the failure of timer_cancel right? that is the bug you found?
11:31 Saravanakmr #REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ( start ~in 30 minutes)
11:32 xavih pranithk1: it was, but it wasn't doing it well: if the cancellation failed, it marked the lock to be released, but it allowed the current fop to continue.
11:34 pranithk1 xavih: yeah!, that is what I saw.
11:34 karthik___ joined #gluster-dev
11:35 pranithk1 xavih: Is there an easy way to hit the bug. without the patch?
11:37 hchiramm if anyone wants to deploy glusterfs in kubernetes or OpenShift, the configuration files are available @ https://github.com/gluster/glusterfs-kubernetes-ose
11:38 xavih pranithk1: I've only seen it at random under high load. I don't think it's easy to reproduce
11:39 xavih pranithk1: you would need to send a fop just while the previous fop's timer on the same lock is expiring...
11:41 nigelb misc: munin lists a ci.gluster.org.
11:41 nigelb can't seem to hit that machine though
11:42 pranithk1 xavih: okay :-)
11:44 nishanth joined #gluster-dev
11:44 misc nigelb: mhh, I can ssh on it
11:45 misc nigelb: that's the 2nd server currently powered on in RH DC, waiting to be moved to the other part of the DC
11:45 misc nigelb: by "you can't hit", can you be more precise?
11:46 misc (the server is running 2 VM for fedora and ceots testing, but this was done using some vpn due to issue on iweb side)
11:47 misc (and need to be reinstalled, since there is also something funky going on the disks...)
11:47 kkeithley misc,nigelb: I know you guys are busy, but don't forget my request for a VM in the DC with a public IP.  (For a machine to run clang, cppcheck, maybe coverity, and publish the results.)
11:48 rafi joined #gluster-dev
11:48 misc kkeithley: I didn't, but I am also supposed to be in PTO :)
11:51 kkeithley I can't do much about that, other than suggest that if you're supposed to be on PTO, then you should go.  There's a spot on the beach with your name on it, and it's waiting for you. You should go
11:53 misc for now, i am more moving books from old place to new place
11:53 misc even when in PTO, I have to deal with migration of data :(
11:55 nigelb misc: ah, i expected a webserver in there.
11:56 nigelb kkeithley: You just want those things run, correct?
11:56 nigelb Do you actually care about where?
11:56 nigelb (I mean, yes, the cost matters, and DC is better)
11:56 nigelb but I'm just wondering about whether you want the results or access to run those tests
11:57 kkeithley nigelb: I guess you
11:58 kkeithley 're asking me those questions.
11:58 kkeithley Yes, I want a box with a web server where a) I can run the analyses, and b) post the outputs.
11:59 nigelb I'm going to be working on CI stuff once I understand what we have where.
11:59 kkeithley so I need to be able to ssh into it to set up the "tests"
11:59 nigelb a) why not have them run via jenkins?
11:59 nigelb b) Can they be automated so you don't need to ssh in?
12:00 kkeithley maybe they can run in jenkins
12:00 kkeithley automated as in cron job, or jenkins. both work
12:01 nigelb I mean jekins.
12:01 nigelb *jenkins
12:01 kkeithley sure, jenkins probably works to automate them
12:01 nigelb I'm also working with misc to make our CI infra more... scalable.
12:01 nigelb So less manual ssh
12:02 nigelb and more automatically built images.
12:02 kkeithley to me it's six of one, half a dozen of the other. I'm happy to go with jenkins
12:03 kkeithley right now they run on internal lab machines and the results get scp'd to download.gluster.org.
12:04 kkeithley A VM (in jenkins) where they run and the results are posted  without scp/ssh is what I want.  I would still need to ssh in to install software and tweak the builds.
12:06 nigelb Ideally, I want to create a base image for all our jobs.
12:06 nigelb with all the software in them.
12:06 nigelb and tweak them with ansible.
12:06 nigelb But let's see how we can reach a middle ground.
12:07 nigelb I'm willing to compromise for a setup where we build a machine with ansible and then do test runs on it.
12:08 kkeithley coverity, clang, cppcheck, maybe things like Intel's and AMD's compilers?  The output produced by some of them is quite large.  I have cron jobs running on download.g.o (and the lab machine) to prune the output posted in the web to keep from using all the disk space
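A minimal sketch of the kind of job being discussed, whether it runs from cron or as a jenkins shell step; the paths, retention period and web directory are assumptions:
    # run cppcheck over a checkout and publish the report to a web-served directory
    cppcheck --enable=all --xml --xml-version=2 . 2> /var/www/html/cppcheck/report-$(date +%F).xml
    # prune old reports so they do not eat all the disk space
    find /var/www/html/cppcheck -name 'report-*.xml' -mtime +30 -delete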
12:08 nigelb there has to be a way to do this better than cron jobs.
12:08 nigelb I really want to do this correctly so you and I are happy with it.
12:09 kkeithley I don't know that we need all those pieces in One True Ansible image.  Maybe two?
12:09 nigelb Once I'm done with gerrit stuff, let's talk.
12:09 nigelb I'm curious to solve this in a reproducible manner.
12:10 nigelb I'm sure there are more builds that can be pruned and cleaned up.
12:10 kkeithley I am certainly willing to use jenkins.  I don't know that Jenkins is necessarily better (or worse) than cron. That's really beside the point.
12:10 nigelb My idea is to have all of this documented in code in a reproducible manner.
12:10 nigelb If we have a hardware failure
12:11 nigelb you still need to be able to run these tests.
12:11 nigelb If we want to increase the number of machines that run these tests, we need to be able to.
12:11 kkeithley well, get gerrit and jenkins stable. That's the most important thing atm
12:12 nigelb Once that's sorted, I can focus more on this. This is pretty much what I'll be working on.
12:12 misc so far, jenkins is kinda stable
12:19 penguinRaider joined #gluster-dev
12:22 nigelb misc: how do I go about creating a user on jenkins?
12:22 misc nigelb: iirc, jenkins use local users
12:22 misc so useradd
12:22 nigelb ahh
12:23 misc or adduser, not sure which is the debianism and which is the posix one
12:23 misc for now, that's local users, wanted to move to ldap
12:24 misc but for that, we need more structure, like who has access and why, etc
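For reference, the "local users" mentioned here are ordinary system accounts on the jenkins host, so adding one is just (username is illustrative):
    useradd -m newperson
    passwd newperson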
12:37 kkeithley Saravanakmr++
12:37 glusterbot kkeithley: Saravanakmr's karma is now 6
12:46 nishanth joined #gluster-dev
12:53 rraja joined #gluster-dev
12:58 atinm joined #gluster-dev
13:15 vimal joined #gluster-dev
13:16 nishanth joined #gluster-dev
13:29 nthomas joined #gluster-dev
13:32 nigelb general question for devs - when you get a +1 from a build failure
13:32 nigelb how do you find out what failed?
13:33 nigelb ah, nvm.
13:33 nigelb I can scroll down.
14:05 vimal joined #gluster-dev
14:17 pkalever left #gluster-dev
14:24 atinmu joined #gluster-dev
14:24 atinmu joined #gluster-dev
14:31 aravindavk joined #gluster-dev
14:46 lpabon joined #gluster-dev
14:47 wushudoin joined #gluster-dev
14:47 wushudoin joined #gluster-dev
14:51 rafi1 joined #gluster-dev
14:54 kkeithley at the risk of stating the obvious, you never get a +1 from a failure.
15:13 aravindavk joined #gluster-dev
15:50 skoduri joined #gluster-dev
16:01 pranithk1 joined #gluster-dev
16:08 rraja joined #gluster-dev
16:22 dlambrig_ joined #gluster-dev
17:15 luizcpg joined #gluster-dev
17:19 jiffin joined #gluster-dev
17:41 shyam joined #gluster-dev
18:02 hagarth joined #gluster-dev
18:30 rafi joined #gluster-dev
18:59 jobewan joined #gluster-dev
19:14 shaunm joined #gluster-dev
20:11 luizcpg_ joined #gluster-dev
20:27 dlambrig left #gluster-dev
21:50 shaunm joined #gluster-dev
22:17 eKKiM_ joined #gluster-dev
