
IRC log for #gluster-dev, 2015-06-23


All times shown according to UTC.

Time Nick Message
00:15 kkeithley_ joined #gluster-dev
00:19 kkeithley_ joined #gluster-dev
00:19 kkeithley_ so nobody knows what's wrong with jenkins?
00:22 pranithk_afk kkeithley_: it works fine for me... whats happening?
00:27 kkeithley_ most just submitted in the last 12 hours haven't run the regression test
00:28 kkeithley_ most changes submitted
00:28 pranithk kkeithley_: That is true. Even mine didn't get triggered
00:29 kkeithley_ I've rebooted most of the slaves, but they're still all sitting idle.
00:29 pranithk kkeithley_: hmm... kaushal will come online in ~4 hours...
00:29 pranithk kkeithley_: may be send a mail to gluster-infra?
00:50 pranithk kkeithley_: Are you part of gluster-infra? In which case ignore the two messages above this :-D
00:50 kkeithley_ ???
00:51 kkeithley_ ERROR: Error fetching remote repo 'origin'
00:53 kkeithley_ http://build.gluster.org/job/rackspace-regression-2GB-triggered/11177/console
00:59 hagarth http://build.gluster.org/job/rackspace-regression-2GB-triggered/11178/
00:59 hagarth seems to have got picked
01:01 mribeirodantas joined #gluster-dev
01:04 kkeithley_ yeah, I'm kicking them
01:07 hagarth kkeithley_: worth a chat with kbsingh et al in the summit to see if we can completely migrate off iweb
01:09 kkeithley_ yup
01:09 kkeithley_ I've resubmitted ~10.
01:10 kkeithley_ let me kick a couple more back to life and I'll submit some more
03:01 overclk joined #gluster-dev
03:36 atinm joined #gluster-dev
03:58 itisravi joined #gluster-dev
04:00 gem joined #gluster-dev
04:03 shubhendu joined #gluster-dev
04:06 kdhananjay joined #gluster-dev
04:24 sakshi joined #gluster-dev
04:27 poornimag joined #gluster-dev
04:40 anrao joined #gluster-dev
04:41 nbalacha joined #gluster-dev
04:41 ashishpandey joined #gluster-dev
04:48 ndarshan joined #gluster-dev
04:50 arao joined #gluster-dev
05:03 jiffin joined #gluster-dev
05:09 pppp joined #gluster-dev
05:11 schandra joined #gluster-dev
05:11 spandit joined #gluster-dev
05:17 hgowtham joined #gluster-dev
05:20 vimal joined #gluster-dev
05:21 ashiq joined #gluster-dev
05:22 deepakcs joined #gluster-dev
05:29 anekkunt joined #gluster-dev
05:31 overclk joined #gluster-dev
05:32 Bhaskarakiran joined #gluster-dev
05:35 atalur joined #gluster-dev
05:36 soumya joined #gluster-dev
05:40 arao joined #gluster-dev
05:42 raghu joined #gluster-dev
05:46 overclk spandit, raghu mentioned that http://review.gluster.org/#/c/11311/ is probably the patch that's causing regression issues, correct?
05:47 overclk spandit, the patch itself passed regression though ;)
05:47 Gaurav__ joined #gluster-dev
05:49 kaushal_ joined #gluster-dev
05:50 raghu overclk, spandit: I have sent a patch which reverts that change. http://review.gluster.org/#/c/11354/
05:50 gem_ joined #gluster-dev
05:50 raghu spandit: Please take a look at it. If that does not seem to be the issue (or the fix for the issue), please feel free to comment in the patch or send another fix
05:52 overclk raghu, spandit, the thing is, that patch itself passed regression and tripped other patches :O
05:54 raghu overclk: yes.
05:59 josferna joined #gluster-dev
06:02 spandit overclk, I am wondering the same thing, I dont know how that passed the regression. But we are sure that is the one which is causing the problem
06:07 badone joined #gluster-dev
06:11 gem joined #gluster-dev
06:12 atinm overclk, spandit : is this patch in 3.7.2 as well?
06:12 spandit overclk, Nope its not in 3.7.2
06:12 spandit atinm, ^
06:13 atinm spandit, ahh!! that's a relief
06:16 arao joined #gluster-dev
06:24 raghu overclk: I think reverting that patch seems to work. Without reverting it, the tests failed exactly at the same points (13, 15, 17-18) on my local machine. After reverting the tests passed. :)
06:24 overclk spandit, OK, in that case let me try to revert it in my local tree and run the test again.
06:24 Bhaskarakiran joined #gluster-dev
06:24 overclk raghu, you are a jiffy faster than me ;)
06:25 overclk spandit, so, what functionality do we loose with the reverted patch?
06:25 spandit overclk, Sure. Vijaikumar and I did observe the same thing what raghu observed.
06:25 spandit overclk, that patch was supposed to fix a possible memory leak
06:25 overclk spandit, raghu already confirmed :)
06:26 spandit overclk, we need to find out a different approach to fix the memory leak
06:26 overclk spandit, so, now with this revert we introduce a mem leak? :O
06:26 spandit overclk, am afraid so, we are working to fix the mem leak
06:28 overclk spandit, raghu, so, every single patch under review should be failing regression, correct? It's then wise to merge the reverted patch.
06:29 raghu overclk: he he
06:29 spandit overclk, yes. But I do see some of the patch passing the regression :-| It is better if we merge the patch
06:30 gem joined #gluster-dev
06:30 spandit overclk, Do we need to wait for regression to pass on this patch :P That would lead to deadlock situation
06:31 raghu overclk: I dont think v have a bug for that. I have sent it as rfc patch (mainly to get the opnion of spandit and vijaykumar). Shall I log a bug and send the patch against it?
06:32 overclk spandit, deadlock? Is there a test that checks the mem leak?
06:32 spandit overclk, oh! my bad :P
06:32 overclk raghu, we don't have a bug. Could we possibly use the same bug the patch fixes provided that we reopen the bug?
06:33 spandit raghu, cant we send the patch with the bug ID that was associated to original patch.
06:37 atinm spandit, which test is failing?
06:38 spandit atinm, I am sorry, I did not get you
06:39 atinm spandit, I am asking which test is broken because of that patch?
06:39 atinm spandit, the one which you guys are planning to revert
06:39 saurabh_ joined #gluster-dev
06:39 atinm spandit, one of my patch also failed regression, so wanted to check it out quickly whether its same or not
06:39 raghu spandit: sure. Will do it.
06:40 raghu overclk: shall I send it right away or should I wait for the previous patchset to pass regressions?
06:40 spandit atinm, ah ok. give me a min
06:42 spandit atinm, tests/bugs/quota/bug-1153964.t  (Failed at 13, 15, 17-18)
06:42 atinm spandit, ok
06:42 atinm spandit, mine is different then
06:42 atinm spandit, thanks
06:42 spandit atinm, is that related to quota too ?
06:42 atinm spandit, nopes, posix
06:42 spandit atinm, oh alright.
06:45 overclk raghu, send the patch out
06:45 overclk atinm, milind too hit the same posix regr. failure.
06:45 josferna joined #gluster-dev
06:49 anrao joined #gluster-dev
06:51 raghu overclk: sure
06:52 arao joined #gluster-dev
06:52 pranithk joined #gluster-dev
06:56 Gaurav__ joined #gluster-dev
07:04 raghu overclk: sent the patch with bugid
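[Editor's note: the revert workflow raghu describes above (revert the offending change, then resubmit it against the original bug ID) looks roughly like the following in git. This is an illustrative sketch in a throwaway repo, not the actual commands run; the real flow targets review.gluster.org change 11354 and ends with Gluster's ./rfc.sh Gerrit submit helper rather than a local log inspection.]

```shell
# Demonstrate "git revert" of a bad commit in a scratch repository.
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name tester

echo good > f; git add f; git commit -qm "baseline"
echo leaky > f; git commit -qam "quota: fix possible memory leak"  # the bad change
bad=$(git rev-parse HEAD)

git revert --no-edit "$bad"   # creates a "Revert ..." commit undoing it
git log --oneline -1          # shows the revert commit on top
```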
07:04 ppai joined #gluster-dev
07:17 overclk thanks raghu
07:17 pranithk xavih: Let me know if the new version of patch is fine :-)
07:17 pranithk xavih: I'm off for lunch...
07:23 kotreshhr joined #gluster-dev
07:28 josferna joined #gluster-dev
07:30 itisravi anrao: ping
07:30 glusterbot itisravi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:32 itisravi anrao: For http://review.gluster.org/#/c/9897/, I think you did not convert afr-inode-write.c ?
07:34 poornimag joined #gluster-dev
07:34 hchiramm anrao, ^^^^
07:35 anrao joined #gluster-dev
07:36 hchiramm <itisravi> anrao: For http://review.gluster.org/#/c/9897/, I think you did not convert afr-inode-write.c ?
07:36 overclk ashiq, I see you use the same message id for start/stop scrub. Any reason why? IMO, they should be different.
07:36 hchiramm anrao, ^^
07:36 anrao hchiramm: yes checking it
07:37 ashiq overclk, checking it
07:38 Gaurav__ joined #gluster-dev
07:38 overclk ashiq, same with start/stop crawl in bitd {signer}.
07:38 overclk ashiq, check br_fsscanner_log_time() and br_fsscan_reschedule()
07:40 soumya joined #gluster-dev
08:01 anrao itisravi: updated the patch file with afr-inode-write.c. Sorry, somehow I missed it
08:03 itisravi anrao: cool.
08:11 pranithk xavih: pm
08:14 ashiq overclk, changed in br_fsscanner_log_time() and br_fsscan_reschedule()
08:15 ashiq overclk, couldn't find crawl start/stop
08:16 rjoseph joined #gluster-dev
08:23 atalur joined #gluster-dev
08:26 arao joined #gluster-dev
08:30 gem joined #gluster-dev
08:35 soumya ndevos, I get configuration error while trying to login to review.gluster.org "The HTTP server did not provide the username in the GITHUB_USER header when it forwarded the request to Gerrit Code Review. "..
08:35 soumya I know its already been asked many times..sorry couldn't recollect what was the workaround given..
08:35 soumya restarting the browser dint help
08:44 ndevos soumya: go to review.gluster.org/logout and login again after that
08:45 soumya ndevos, thanks..that worked
08:45 soumya ndevos++
08:45 glusterbot soumya: ndevos's karma is now 165
08:45 ndevos :)
08:48 overclk ashiq, br_oneshot_crawl()
08:49 anekkunt joined #gluster-dev
08:52 * ndevos *pow*
08:59 obnox hagarth: i missed you yesterday.. I came online again 1 minute after you dropped of
08:59 obnox f
08:59 ashiq overclk, done :)
08:59 gem joined #gluster-dev
09:00 obnox hagarth: I was testing on a system of ben.t. i don't currently have a representative system myself
09:01 obnox hagarth: but I am sure that the effects should be observable on smaller systems as well
09:06 atinm rjoseph, pm
09:10 arao joined #gluster-dev
09:11 hchiramm schandra, ping
09:11 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:24 ndevos atinm: bug triage meeting?
09:25 atinm ndevos, yes :)
09:25 ndevos atinm: okay, thanks :)
09:25 atinm ndevos, just remind me to send a reminder :D
09:25 atinm ndevos, if you remember :) :)
09:25 ndevos atinm: I just did, a few hours in advance is good :)
09:26 atinm ndevos, hmmm
09:26 atinm ndevos, will shoot the mail then
09:26 ndevos even a day before might be helpful, and include a .ics invite for the single meeting
09:41 badone joined #gluster-dev
09:42 arao joined #gluster-dev
09:43 raghu joined #gluster-dev
09:44 poornimag joined #gluster-dev
09:46 gem joined #gluster-dev
09:51 arao joined #gluster-dev
09:54 Gaurav__ joined #gluster-dev
10:00 kkeithley_ joined #gluster-dev
10:04 kkeithley_ ndevos: e.g. look at Du's patch http://review.gluster.org/#/c/11363/ submitted three hours ago.  No automatic regression test has been run
10:04 * ndevos looks
10:05 rjoseph atinm: ping, are you looking into https://bugzilla.redhat.com/show_bug.cgi?id=1234720?
10:05 glusterbot Bug 1234720: high, unspecified, ---, amukherj, ASSIGNED , glusterd: glusterd crashed
10:07 ndevos kkeithley_: aha, http://build.gluster.org/gerrit-trigger/ shows that the connection to gerrit died
10:07 * ndevos clicked the red ball, and it turned blue
10:08 hchiramm ----
10:08 glusterbot hchiramm: --'s karma is now -1
10:08 hchiramm Error unpacking rpm package glusterfs-server-3.7.2-2.fc22.x86_64
10:08 hchiramm error: unpacking of archive failed on file /var/lib/glusterd/hooks/1/delete/post: cpio: mkdir
10:08 hchiramm --------
10:08 glusterbot hchiramm: ------'s karma is now -2
10:09 ndevos kkeithley_: you can also use http://build.gluster.org/gerrit_manual_trigger/ and search for change:11363 to retrigger tests completely
10:10 kkeithley_ jenkins has too many knobs
10:10 poornimag joined #gluster-dev
10:10 kkeithley_ hchiramm: :-(
10:10 ndevos kkeithley_: want to add more tools, with more knobs?
10:10 hchiramm kkeithley, :(
10:10 kkeithley_ merde
10:13 hchiramm kkeithley, ah.. check #gluster
10:13 kkeithley_ I have no scrollback
10:14 hchiramm http://fpaste.org/235693/05443314/ kkeithley
10:15 kkeithley_ may as well put the -1 RPMs back. The work-around is to manually `mkdir /var/lib/glusterd/hooks/1/delete/post` before install/upgrade.  I don't know if there's a work-around for the -2 RPMs
10:17 fabiand joined #gluster-dev
10:17 fabiand Hey
10:17 ndevos kkeithley_: was the %ghost not removed in the -2 version?
10:17 fabiand I just wanted to note that I also run into the glusterfs-server rpm installation problem
10:17 hchiramm <kkeithley_> may as well put the -1 RPMs back. The work-around is to manually `mkdir /var/lib/glusterd/hooks/1/delete/post` before install/upgrade.  I don't know if there's a work-around for the -2 RPMs
10:18 hchiramm fabiand, ^^^
10:18 fabiand thanks
10:18 ndevos the workaround should still work, why would that not be the case?
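[Editor's note: the workaround kkeithley_ quotes above can be run as a one-liner before install/upgrade. The hook path is taken verbatim from the log; the sketch below parameterizes the root so it is safe to dry-run outside a real host, where ROOT would be empty and the command would need root privileges.]

```shell
# Pre-create the hook directory that the broken glusterfs-server RPM
# fails to unpack into. ROOT is only for dry-running this sketch; on an
# affected host, run `mkdir -p /var/lib/glusterd/hooks/1/delete/post`
# as root, then install/upgrade the package as usual.
ROOT="${ROOT:-$(mktemp -d)}"
hookdir="$ROOT/var/lib/glusterd/hooks/1/delete/post"
mkdir -p "$hookdir"
echo "created $hookdir"
```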
10:20 ndevos hchiramm: can you create that dir and install the -2 package on you f22?
10:20 atinm joined #gluster-dev
10:20 hchiramm ndevos, checking
10:21 hchiramm ndevos, it works.
10:21 ndevos kkeithley_: did you build -2 from fedora dist-git, or something else?
10:22 hchiramm fabiand, did  u face the issue with 3.7.2-2 rpms ?
10:22 fabiand hchiramm, it's still building, it takes ~1hr
10:23 hchiramm r u using this repo ? http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Fedora/
10:23 fabiand hchiramm, http://jenkins.ovirt.org/user/fabiand/my-views/view/Node/job/ovirt-appliance-node_master_create-squashfs-el7_merged/142/console if you want to tailf
10:23 fabiand hchiramm, latest centos7
10:24 hchiramm oh..ok
10:27 kdhananjay joined #gluster-dev
10:28 ndevos fabiand: dont bet your money in the success of that run
10:29 fabiand ndevos, :) nah, that#s fine for me if it fails
10:29 fabiand that's why we have CI, right ?
10:29 ndevos all those red balls in Jenkins make me sad
10:30 fabiand Yes .. sometimes ..
10:30 fabiand Then you can swap colors ..
10:30 hchiramm :)
10:31 ndevos fabiand: do you have multi-host tests? like setup a server process, start a client kida thing?
10:31 fabiand ndevos, not for gluster, no
10:31 fabiand ndevos, but we have it for our own client/server thing
10:32 fabiand ndevos, in the end tho, we hope to be able to cover gluster as well
10:32 ndevos fabiand: does that use standard jenkins plugins for multiple slaves?
10:32 fabiand ndevos, no - it's a home brewn testing framework
10:32 fabiand relying on nesting and creating vms itself
10:32 soumya_ joined #gluster-dev
10:33 ndevos fabiand: ah, okay... I hope we get some multi-slave setup for our Jenkins on build.gluster.org one time...
10:33 fabiand good luck!
10:33 fabiand the problem I see with it, is really the isolation and tainted env problem.
10:33 fabiand So, that you either mess with your slave
10:33 fabiand or that a slave is "idrty"
10:33 fabiand That brought us to creating VMs on the fly which ensure that we start with an isolated and clean environment
10:34 ndevos yeah, cleanup of a slave is sometimes an issue for us :-/
10:34 fabiand :)
10:34 fabiand It was for us as well
10:34 fabiand Well, it still is - we haven#t migrated everything yet
10:34 ndevos there are libvirt and openstack plugins to create slaves on demand, that is something I still want to look at
10:35 fabiand Yep
10:35 ndevos we dont need nested-virt, so things might be easier for us
10:35 fabiand The issue we saw there was the configuration of the Vms using the setupps
10:35 fabiand IIRC the libvirtd plugin had the problem that you could only launch pre-defined VMs ..
10:36 fabiand Yes - virt makes it a bit nasty ..
10:36 fabiand won't containers also help for isolated testing?
10:37 ndevos containers can only help partially, we also run tests on NetBSD and FreeBSD
10:37 fabiand ah - nice
10:37 fabiand but hey that can't be - containers solve everything! :)
10:38 ndevos yeah, thats what everyone says :)
10:38 fabiand … and nearly everybody believes.
10:39 ndevos no more oVirt? or are you going the container way too?
10:39 fabiand :-D
10:39 fabiand No!
10:39 * ndevos would like to see our testing problems addressed in a non-container and more OS portable way
10:39 ndevos :D
10:40 fabiand ndevos, VMs are good for that.
10:40 ndevos fabiand: exactly!
10:40 fabiand ndarshan, containers surely have their place. But they don't - as we know! - address everything.
10:40 ndevos fabiand: oh, is there a oVirt plugin for Jenkins? maybe we can spin slaves up that way?
10:40 fabiand mh ...
10:40 fabiand No, not that I know
10:41 fabiand ndarshan, if you only need "fresh" slaves, then I'd revisit the libvirtd plugin
10:41 ndevos oh, okay, I was hoping not to need to look into OpenStack too much
10:41 fabiand maybe it is capable of creating new VMs
10:41 fabiand :D
10:41 fabiand Not?
10:41 ndevos lol, no, even if many others would like that
10:42 ndevos libvirt or oVirt would be *so* much simpler
10:42 kotreshhr1 joined #gluster-dev
10:43 ndevos csim: do you think our new hardware could create new Jenkins-slaves as VMs for each regression run?
10:43 fabiand we actually need todkcer for VMs
10:43 * fabiand wondered that we do not have a tool which takes a Dockerfile and generates a VM ..
10:43 ndevos csim: that would make tests so much cleaner, and previous tests can not break future ones
10:43 nbalacha joined #gluster-dev
10:44 ndevos can you pronounce todkcer ?
10:46 fabiand me? No.
10:46 nbalacha joined #gluster-dev
10:51 kkeithley1 joined #gluster-dev
11:01 arao joined #gluster-dev
11:01 hagarth o/
11:02 hagarth ndevos: what is todkcer?
11:02 ndevos hagarth: I dont know, fabiand came with that
11:02 fabiand ndevos, really?
11:03 ndevos 12:43 < fabiand> we actually need todkcer for VMs
11:03 fabiand uh ..
11:03 * fabiand doesn't see what he meant ..
11:04 ndevos well, duckduckgo does not really seem to know about it either
11:04 fabiand seems to be a permuattion of docker, but that does not make sense
11:07 ndevos hagarth: btw, you dont have any objections if we add SEEK_DATA and SEEK_HOLE with a seek() FOP, right?
11:07 ndevos fabiand: what do you use for setting up VMs for your multi-node testing?
11:08 hagarth ndevos: these flags are Linux only right? as long as it doesn't break portability, I am good.
11:09 ndevos hagarth: I'm not sure if other OSs support it, but it surely depends on the underlaying fs too
11:09 ira joined #gluster-dev
11:09 ndevos 'man 2 lseek' contains some details
11:10 ndevos pranithk, xavih, kdhananjay: could you start thinking about implementing a 'man 2 lseek' like SEEK_DATA/HOLE for ec/sharding?
11:11 ndevos I would like to add a seek() like FOP so that gfapi and fuse can skip holes in data transfer
11:11 ndevos they could also skip data with that, but I doubt anyone would use it for that
11:18 xavih pranithk: I'll do it
11:23 kdhananjay ndevos: What;s the context?
11:23 ndevos kdhananjay: that would be for bug 1220173
11:23 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1220173 high, unspecified, ---, bugs, NEW , SEEK_HOLE support (optimization)
11:24 kdhananjay ndevos: Looking ...
11:24 ndevos kdhananjay: nfs and local filesystems start to support it, it would help with certain VM workloads (and their backups)
11:24 ababu joined #gluster-dev
11:24 xavih ndevos: btw, how lseek is translated into normal fops ? is it a new fop ?
11:24 kdhananjay ndevos: Yeah.
11:25 ndevos xavih: yeah, I'm looking into adding seek() as a new FOP
11:27 xavih ndevos: is it better to do it as a seek() instead of an ioctl-like request ?
11:29 ndevos xavih: well, ioctl() is not very well defined, and nfs implements SEEK now too, it is nice if the protocols look a little like eachort
11:30 anrao joined #gluster-dev
11:34 atinm REMINDER : Gluster Community Bug Triage meeting today at 12:00 UTC (in ~25 minutes)
11:36 spalai joined #gluster-dev
11:37 arao joined #gluster-dev
11:40 atinm bug triaging to happen @ #gluster-meeting
11:50 kotreshhr joined #gluster-dev
11:50 kanagaraj joined #gluster-dev
11:56 soumya_ joined #gluster-dev
11:57 anekkunt joined #gluster-dev
12:02 itisravi joined #gluster-dev
12:10 kotreshhr ndevos: ping
12:10 glusterbot kotreshhr: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:12 kotreshhr ndevos: Could you look into http://review.gluster.org/#/c/11358/? no test (smoke/regression) triggered automatically. I could manually trigger netBSD and linux regression. How about smoke and others?
12:23 ndevos kotreshhr: easiest would be to go to http://build.gluster.org/gerrit_manual_trigger/ and search for "changeid:11358"
12:24 kotreshhr ndevos: I did that, I could not find the bug number.
12:24 ndevos no, not changeid: but change:
12:24 ndevos kotreshhr: not the bug number, but the change number as in the url
12:26 kotreshhr ndevos: got it. Yeah I did the same. I didn't see an instance of it. Then I triggered it manually.
12:29 arao joined #gluster-dev
12:30 kotreshhr ndevos: My question was: should I trigger smoke, rpm and other tests manually one by one or there is a link where I could trigger all at once?
12:31 ndevos kotreshhr: I'm not sure why you can not trigger it through http://build.gluster.org/gerrit_manual_trigger/ , that would be the way to trigger all tests for a change
12:31 kotreshhr ndevos: Oh ok. I will try that. Thanks:)
12:33 kotreshhr ndevos: That worked! thanks
12:35 ndevos kotreshhr: cool :)
12:41 csim ndevos: technically, yes. not sure how we can do that, we would need to give jenkins some right to start some salt stuff, with sudo, etc
12:42 hchiramm kkeithley, moved the rpms to download.gluster.org
12:43 atinm kkeithley++
12:43 glusterbot atinm: kkeithley's karma is now 75
12:43 atinm ndevos++
12:43 glusterbot atinm: ndevos's karma is now 166
12:43 atinm soumya_++
12:43 glusterbot atinm: soumya_'s karma is now 1
12:43 atinm hagarth++
12:43 glusterbot atinm: hagarth's karma is now 68
12:43 ndevos atinm++ thanks for being our host again!
12:43 glusterbot ndevos: atinm's karma is now 11
12:45 atinm ndevos, my pleasure :)
12:45 ndevos csim: that would be cool!
12:49 hchiramm kkeithley++
12:49 glusterbot hchiramm: kkeithley's karma is now 76
12:49 ndevos lol, I'm the only one without shoulders: http://osbconf.org/speakers/
12:50 csim ndevos: it might make the build slower however
12:50 kkeithley_ hchiramm: ??
12:50 fabiand joined #gluster-dev
12:51 hchiramm kkeithley, for 3.7.2-3 rpms
12:51 hchiramm :)
12:51 hchiramm fabiand, we have pushed new rpms with the fix.
12:51 hchiramm please let me know if it fix the reported issues
12:51 fabiand hchiramm, thanks for the update, triggering my ci
12:51 fabiand it's running
12:52 hchiramm fabiand++ thanks!
12:52 glusterbot hchiramm: fabiand's karma is now 1
12:52 kkeithley_ hchiramm++  for deploying
12:52 glusterbot kkeithley_: hchiramm's karma is now 42
12:52 ndevos csim: yes, I understand that, depending on the time needed to get the VMs up that may or may not be acceptible
12:52 kkeithley_ I don't know if it's _the_ fix. It's a fix.
12:52 csim ndevos: but as I plan definitely to script that part, we will see
12:53 ndevos csim: but, I think we should definitely have a job in jenkins to rebuild an existing vm
12:55 csim ndevos: I didn't knew that people had access to jenkins to trigger command
12:58 ndevos csim: not everyone, but there are too many issues with it to have Jenkins manage itself
12:58 atinm rjoseph, hi, can you ack on 11364 ?
13:02 ababu joined #gluster-dev
13:05 ndevos hagarth: what is your plan for tomorrows community meeting?
13:06 kkeithley_ Live, from Summit?
13:06 ndevos maybe? or "it automagically resolves itself" like last week?
13:06 * ndevos prefers to be a little prepared
13:07 hagarth ndevos: go ahead with the meeting.. i will in all probability be busy getting to the summit
13:07 arao joined #gluster-dev
13:08 ndevos hagarth: I was not offering to host the meeting, you understood that part wrong ;-)
13:08 hagarth ndevos: I said go ahead without me in the picture ;)
13:09 ndevos hagarth: aha, maybe atin or someone else wants to host it?
13:09 ndevos OH, IDEA!! we have a maintainers list with people that should rotate hosting the meeting :D
13:10 hagarth ndevos: +1 !
13:10 hagarth +100 in fact :D
13:10 Saravana joined #gluster-dev
13:10 ndevos hagarth: you want to send that as an email to the list?
13:10 * hagarth applauds ndevos for the IDEA
13:10 kkeithley_ you only get one vote, unless you're a company in the US, then you just buy all the votes you need
13:10 hagarth kkeithley_: lol
13:11 hagarth ndevos: I am still awaiting some responses for the previous email I sent on maintainers :-/
13:11 ndevos hagarth: and I do not want to be the 1st/only one replying to it
13:11 hagarth ndevos: I get that
13:12 ndevos maintainers tend to be such a bunch of lazy ....
13:13 ndevos ... or maybe they are just busy
13:13 hagarth ndevos: I hope it is one of the two ;)
13:21 rjoseph atinm, sorry I was in a meeting
13:21 rjoseph atinm: I will look into the patch and give ack
13:22 pppp joined #gluster-dev
13:23 kanagaraj joined #gluster-dev
13:29 arao joined #gluster-dev
13:33 firemanxbr joined #gluster-dev
13:39 shyam joined #gluster-dev
13:48 _Bryan_ joined #gluster-dev
14:02 jiffin joined #gluster-dev
14:09 csim ndevos: I am a bit unconfortable with giving jenkins too much right, I would prefer ssh access :)
14:19 pousley joined #gluster-dev
14:19 ndevos csim: yes, I understand... maybe setup vms to re-install after a job in the vm did a "dd if=/dev/zero of=/dev/vda ... ; reboot"?
14:20 * ndevos does that with his test VMs like that too
14:22 csim ndevos: that's a bit extreme :)
14:23 ndevos csim: well, I prefer really clean installations for my test environment, I never remember what I did to whatever was installed before :)
14:24 ndevos and that "dd" is only wiping the mbr, nothing more :)
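[Editor's note: the quoted dd has no count=, so it would in fact keep zeroing past the MBR until interrupted; wiping only the first 512 bytes is done explicitly with bs=512 count=1. The sketch below demonstrates this against a scratch file; on a real VM the target would be the boot disk (e.g. /dev/vda) and the command is destructive.]

```shell
# Zero only the first 512 bytes (the classic MBR) of a target,
# leaving the rest of the device untouched. Demonstrated on a
# 2 KiB scratch file standing in for a disk.
target=$(mktemp)
dd if=/dev/urandom of="$target" bs=512 count=4 status=none        # fake disk
dd if=/dev/zero   of="$target" bs=512 count=1 conv=notrunc status=none
# conv=notrunc keeps the remaining contents/size intact.
```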
14:26 arao joined #gluster-dev
14:56 spalai left #gluster-dev
15:11 gem joined #gluster-dev
15:12 arao joined #gluster-dev
15:24 arao joined #gluster-dev
15:43 kotreshhr left #gluster-dev
16:24 soumya_ joined #gluster-dev
16:29 arao joined #gluster-dev
16:39 Gaurav__ joined #gluster-dev
16:48 arao joined #gluster-dev
17:55 jiffin joined #gluster-dev
18:00 firemanxbr_ joined #gluster-dev
18:24 fabiand hchiramm, the CI went well ..
18:24 fabiand all good now
18:43 ira joined #gluster-dev
19:14 hagarth joined #gluster-dev
20:32 Gaurav__ joined #gluster-dev
21:01 badone joined #gluster-dev
23:45 badone_ joined #gluster-dev
23:45 akay1 joined #gluster-dev