
IRC log for #gluster-dev, 2016-03-31


All times shown according to UTC.

Time Nick Message
01:37 vmallika joined #gluster-dev
01:46 baojg joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:50 vmallika joined #gluster-dev
02:05 pranithk joined #gluster-dev
02:25 kshlm joined #gluster-dev
02:57 mchangir joined #gluster-dev
03:01 baojg joined #gluster-dev
03:05 camg joined #gluster-dev
03:28 overclk joined #gluster-dev
03:29 kaushal_ joined #gluster-dev
03:36 atinm joined #gluster-dev
03:45 atinm hagarth, can you please merge http://review.gluster.org/#/c/13847/ ?
03:45 hagarth atinm: centos regression has a -1 ?
03:45 atinm hagarth, this has passed all the regressions; the centos regression passed for the previous patch set and this one is just a rebase done by ndevos
03:46 hagarth atinm: I don't think I can merge that as the web interface does not let me do so
03:47 atinm hagarth, ahh yes
03:47 atinm hagarth, let me retrigger it
03:47 atinm hagarth, I want to get this in 3.7.10 as it's a regression
03:48 atinm thanks hagarth!
03:48 hagarth atinm: ok.. good luck!
03:50 aspandey joined #gluster-dev
03:55 nbalacha joined #gluster-dev
03:56 shubhendu joined #gluster-dev
04:01 itisravi joined #gluster-dev
04:06 ashiq_ joined #gluster-dev
04:11 Bhaskarakiran joined #gluster-dev
04:12 kshlm joined #gluster-dev
04:27 jiffin joined #gluster-dev
04:27 sakshi joined #gluster-dev
04:33 EinstCrazy joined #gluster-dev
04:35 Bhaskarakiran joined #gluster-dev
04:35 gem joined #gluster-dev
04:38 aravindavk joined #gluster-dev
04:39 skoduri joined #gluster-dev
04:40 skoduri kshlm, kkeithley_afk , ... http://review.gluster.org/#/c/13824/ has passed regressions.. request to merge it so that it shall be included in 3.7.10
04:42 skoduri kshlm++
04:42 glusterbot skoduri: kshlm's karma is now 70
04:42 skoduri thanks
04:43 hagarth kshlm: any plans to compare performance with 3.7.9, since there has been considerable activity for 3.7.10?
04:44 pranithk joined #gluster-dev
04:47 karthik___ joined #gluster-dev
04:48 Manikandan joined #gluster-dev
04:49 prasanth joined #gluster-dev
04:50 ndarshan joined #gluster-dev
04:52 vmallika joined #gluster-dev
04:54 kshlm hagarth, I didn't have any plans as such, but I can do it.
04:54 kshlm Any particular tests to run?
04:55 hagarth kshlm: the ones I described in my 3.7.9 testing update (perf-test.sh from my github or avati's github)
04:55 kshlm hagarth, Okay. I'll check it out.
04:56 hagarth kshlm: it would be better to install 3.7.9, gather numbers and then perform a rolling upgrade to 3.7.10 with I/O going on. After that we can collect numbers for 3.7.10 in the same setup.
04:56 kshlm hagarth, You have an existing setup I can use? If not I'll need to spin up one.
04:57 hagarth kshlm: while doing so please keep an eye on the resource consumption of all gluster processes
04:57 hagarth kshlm: all my setups are running tests right now.. better to spin up a few physical nodes for the perf test.
04:57 EinstCra_ joined #gluster-dev
04:58 kshlm Physical? I don't have any free. I can spin up VMs though.
04:59 hagarth kshlm: VMs can be flaky for perf tests but give that a shot.
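
(For reference: the rolling-upgrade comparison hagarth outlines above would look roughly like the sketch below on each server node in turn, with perf-test.sh runs before and after. This is only a sketch under assumptions, not the project's documented procedure; the volume name "testvol", the RPM package name, and the exact heal check are illustrative.)

    #!/bin/bash
    # Rough per-node rolling upgrade from 3.7.9 to 3.7.10 while client I/O
    # continues; run on one server at a time. "testvol" is a placeholder.
    set -e

    # Stop gluster services and brick/client processes on this node only.
    systemctl stop glusterd
    pkill glusterfs || true
    pkill glusterfsd || true

    # Upgrade the packages (RPM-based install assumed).
    yum -y update glusterfs-server

    # Restart and wait until self-heal reports no pending entries on any
    # brick before moving on to the next node.
    systemctl start glusterd
    while gluster volume heal testvol info | grep 'Number of entries:' | grep -vq ': 0$'; do
        sleep 10
    done
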
05:05 mchangir joined #gluster-dev
05:05 aravindavk joined #gluster-dev
05:07 kotreshhr joined #gluster-dev
05:07 pkalever joined #gluster-dev
05:09 ppai joined #gluster-dev
05:10 atinm kshlm, this is regarding http://review.gluster.org/#/c/13847
05:10 atinm kshlm, actually the regression had passed for the previous patch set, but now the web interface doesn't allow me to submit
05:10 camg joined #gluster-dev
05:11 atinm kshlm, any way to get this merged considering patch set 2 was just a rebase?
05:12 kshlm atinm, There's no easy way.
05:12 kshlm gerrit is setup to copy flags for trivial rebases and commit-message changes.
05:13 kshlm So it carried over the -1 from the 1st regression run. But the 2nd run completed later.
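
(For reference: the "copy flags for trivial rebases" behaviour kshlm describes is normally configured per label in a Gerrit project's project.config; a minimal sketch follows. The label name "Verified" is an assumption here, not necessarily what review.gluster.org uses for its regression votes.)

    [label "Verified"]
        # Carry existing votes over to a new patch set when it is only a
        # trivial rebase or a commit-message-only change.
        copyAllScoresOnTrivialRebase = true
        copyAllScoresIfNoCodeChange = true
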
05:16 Bhaskarakiran joined #gluster-dev
05:22 hgowtham joined #gluster-dev
05:22 Bhaskarakiran joined #gluster-dev
05:30 poornimag joined #gluster-dev
05:31 Apeksha joined #gluster-dev
05:37 nishanth joined #gluster-dev
05:41 josferna joined #gluster-dev
05:43 atinm kshlm, I think the job is running for change 13847, https://build.gluster.org/job/rackspace-regression-2GB-triggered/19414/console
05:52 vimal joined #gluster-dev
06:00 baojg joined #gluster-dev
06:05 Gaurav_ joined #gluster-dev
06:07 rafi1 joined #gluster-dev
06:13 atalur joined #gluster-dev
06:20 pranithk kshlm: Hey, For doing client-side background heals in afr itisravi introduced an option for which Vijay asked us to change the option name after the patch is merged. We want to send a patch to address it before the release. But we don't have much time. What do you suggest we do now?
06:21 itisravi kshlm: comment in question: http://review.gluster.org/#/c/13564/3/xlators/mgmt/glusterd/src/glusterd-volume-set.c
06:23 kdhananjay joined #gluster-dev
06:24 kshlm That comment was given quite a few days earlier.
06:25 kshlm It could have been fixed earlier, rather than at the last moment
06:26 skoduri joined #gluster-dev
06:26 kshlm I'm waiting for another change which fixes a log rotate regression to complete a regression run.
06:26 kshlm If you have to submit a new change now, I don't think it can pass everything fast enough.
06:27 kshlm Also, jenkins is being migrated today (though I've not seen misc around yet).
06:27 kshlm itisravi, pranithk ^
06:28 itisravi hmm
06:28 pranithk kshlm: Then we are stuck with the bad option name?
06:28 itisravi pranithk: IMHO, it is not a bad option name :)
06:28 kshlm I believe so.
06:28 rraja joined #gluster-dev
06:28 * itisravi says no to SMS lingo.
06:28 kshlm You'd also need to first send an update to master.
06:33 spalai joined #gluster-dev
06:35 spalai joined #gluster-dev
06:35 penguinRaider joined #gluster-dev
06:38 Gaurav_ joined #gluster-dev
06:38 josferna joined #gluster-dev
06:43 Bhaskarakiran joined #gluster-dev
06:44 asengupt joined #gluster-dev
06:55 vmallika joined #gluster-dev
06:56 misc kshlm: so, we can migrate or not ?
06:56 kshlm misc, I'm waiting for 1 more regression run to finish. Should be done in under an hour.
06:56 Saravanakmr joined #gluster-dev
06:56 kshlm There might be others running though.
06:57 kshlm I'll check.
06:57 misc kshlm: ok, i was more on the release side
06:57 misc as there wasn't much obvious consensus on the maintainers list
06:57 kshlm I can do the tagging without jenkins.
06:57 kshlm I'll take the offtime to build changelogs, release-notes etc.
06:58 kshlm I need jenkins to do the tarball, but that can happen later.
07:10 itisravi joined #gluster-dev
07:31 kshlm joined #gluster-dev
07:46 atinm kshlm, that patch has passed regression
07:46 atinm kshlm, could you merge it please?
07:47 kdhananjay joined #gluster-dev
07:48 atinm ndevos, I didn't notice you are logged in, could you merge http://review.gluster.org/#/c/13847 ?
07:52 Bhaskarakiran joined #gluster-dev
07:59 sakshi joined #gluster-dev
08:00 kshlm atinm, I've merged the change.
08:10 aravindavk joined #gluster-dev
08:23 Saravanakmr_ joined #gluster-dev
08:36 atinm kshlm, thank you!
08:47 kotreshhr joined #gluster-dev
08:50 josferna joined #gluster-dev
08:54 Saravanakmr joined #gluster-dev
09:21 misc kshlm: so, still a lot of regressions running ?
09:22 kshlm I saw 4 that were running about 30 minutes back.
09:22 kshlm All were at least 2 hours into their runs.
09:23 misc yeah
09:23 misc the build queue is quite big :/
09:27 kshlm I enabled safe shutdown. The queue will not be processed.
09:27 kshlm There are 3 more jobs running.
09:28 kshlm 1 of them should be finishing soon (I expect in ~5 minutes), it's been running for nearly 4 hours now.
09:28 kshlm The other 2 will need at least an hour more.
09:28 kshlm If needed, we can stop those 2 and I'll retrigger them later.
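
(For reference: the "safe shutdown" kshlm enabled is Jenkins' quiet-down mode, which lets running builds finish while starting nothing new. A sketch of toggling it from the CLI; the URL is the one under discussion and authentication options are omitted.)

    # Stop scheduling new builds; running ones are allowed to finish.
    java -jar jenkins-cli.jar -s https://build.gluster.org/ quiet-down

    # After the migration, let the instance schedule builds again.
    java -jar jenkins-cli.jar -s https://build.gluster.org/ cancel-quiet-down
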
09:28 misc no, let's finish
09:29 misc I will need food soon anyway :)
09:29 misc kshlm: so jenkins lose the queue on shutdown ?
09:29 misc WTF ?
09:29 kshlm yeah.
09:30 kshlm But if we had an upgraded gerrit, jenkins could recover lost jobs.
09:30 misc we will have
09:30 misc but that's quite annoying that the duty of recovering is on gerrit side and not jenkins
09:30 kshlm I'll keep track of the pending jobs, and trigger them all after the migration.
09:31 misc I know that distributed computing is hard, but we are speaking of a worker queue, that's a solved problem :/
09:31 kshlm jenkins uses a new gerrit api to fetch stream of events.
09:31 kshlm That came in a later version of gerrit than what we use.
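
(For reference: the basic Gerrit event stream that CI triggers consume can be watched by hand over SSH, as sketched below; the username is a placeholder. Whether the newer event-recovery API kshlm alludes to is available depends on the Gerrit version.)

    # Follow Gerrit's live event stream (patchset-created, comment-added,
    # change-merged, ...); needs an account with a registered SSH key.
    ssh -p 29418 <username>@review.gluster.org gerrit stream-events
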
09:35 kshlm Googling around, it seems that jenkins should be saving the queue
09:36 kshlm https://issues.jenkins-ci.org/browse/JENKINS-6804 says it's been fixed since 2011 at least
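
(For reference: when that fix applies, Jenkins persists the pending build queue to queue.xml under its home directory on a clean shutdown, so carrying that file over should restore the queue on the new host. The paths and hostname below are assumptions for a default RPM-style install, not details from this migration.)

    # Queue persisted on clean shutdown; carry it over during the migration.
    ls -l /var/lib/jenkins/queue.xml
    rsync -a /var/lib/jenkins/queue.xml newhost:/var/lib/jenkins/
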
10:01 aspandey joined #gluster-dev
10:02 kshlm I've listed down all the changes which need jobs retriggered at https://public.pad.fsfe.org/p/gluster-jenkins-migration
10:19 Bhaskarakiran joined #gluster-dev
10:22 misc kshlm: so going to fetch food, back in 30 to 40 minutes
10:22 misc (since the build are still building)
10:46 sakshi joined #gluster-dev
10:53 kasturi joined #gluster-dev
11:00 overclk joined #gluster-dev
11:02 kshlm misc, No more jobs are running now.
11:04 misc good so jenkins is shutdown ?
11:04 kshlm I just stopped jenkins.
11:04 misc excellent
11:04 misc so let's sync
11:05 rafi1 hgowtham: ping
11:05 glusterbot rafi1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
11:06 rafi1 joined #gluster-dev
11:06 misc kshlm: also, while on it, why is there stuff running out of ndevos' home ?
11:06 misc ndevos: ^
11:07 kshlm I don't know
11:08 misc ok so switching name
11:10 spalai joined #gluster-dev
11:14 hgowtham rafi1, pong
11:15 gem joined #gluster-dev
11:16 prasanth joined #gluster-dev
11:23 lkoranda joined #gluster-dev
11:23 foster joined #gluster-dev
11:24 [o__o] joined #gluster-dev
11:25 misc kshlm: so, what ip does build.gluster.org return ?
11:25 kshlm 66.187.224.203
11:26 misc and old-build.gluster.org ?
11:26 misc (it doesn't work here, but I would blame the crappy ISP router)
11:26 kkeithley joined #gluster-dev
11:27 kshlm 184.107.76.12
11:27 misc ok so "blame ISP"
11:28 kshlm I use google dns, that helps.
11:30 misc kshlm: i still dream that ISPs can get dns right one day
11:31 misc so jenkins is restarted
11:31 misc the job queue is empty :(
11:31 misc (and I did transfer it)
11:34 kshlm misc, I'll restart everything
11:34 kshlm I got a list
11:34 misc kshlm: well, i would like for now to do a test, since the first job I started did fail in a strange way :/
11:35 misc i tried to click on the clock on the right, with the green arrow
11:35 misc for freebsd smoke => invalid answer from upstream server
11:35 misc Caused by: java.net.UnknownHostException: review.gluster.org
11:35 misc W T F
11:42 misc kshlm: ok so there are already a few jobs coming, i guess you did reschedule them ?
11:42 kshlm Nope.
11:42 ira_ joined #gluster-dev
11:42 kshlm Gerrit pushed them I think.
11:48 Apeksha joined #gluster-dev
11:55 misc kshlm: ok so stuff seems to be running fine for now
11:55 kshlm Yup.
11:55 kshlm I just manually triggered some jobs.
11:55 gem_ joined #gluster-dev
11:55 kshlm I'm gonna keep watching.
11:57 misc I did reboot a few builder who were down
11:57 misc slave46 seems to be in a strange shape
11:57 Debloper joined #gluster-dev
12:02 vimal joined #gluster-dev
12:08 kdhananjay joined #gluster-dev
12:12 mchangir joined #gluster-dev
12:14 shaunm joined #gluster-dev
12:15 atalur joined #gluster-dev
12:15 spalai1 joined #gluster-dev
12:19 EinstCrazy joined #gluster-dev
12:30 pranithk joined #gluster-dev
12:32 ashiq_ joined #gluster-dev
12:34 penguinRaider joined #gluster-dev
12:36 nishanth joined #gluster-dev
12:44 ira_ joined #gluster-dev
12:46 aravindavk hi kshlm
12:46 kshlm hey
12:46 shaunm joined #gluster-dev
12:47 aravindavk kshlm: is 3.7.10 already tagged? have one glusterfind patch and a changelog patch waiting for merge
12:47 kshlm Nearly.
12:48 kshlm I will be tagging once I've prepared the release-notes.
12:48 aravindavk kshlm: can I merge the glusterfind patch? regressions are complete for it
12:51 kotreshhr kshlm: This patch review.gluster.org/#/c/13861/ is waiting on centos regression! Can we wait for this?
12:51 kshlm Only if it is fixing some regression.
12:51 kshlm Otherwise, they will be in the next release.
12:52 aravindavk kshlm: that patch is the missing link for bareos integration
12:53 kotreshhr kshlm: The changelog patch is critical for a setup where glusterfind is being used.
12:54 kkeithley misc++
12:54 glusterbot kkeithley: misc's karma is now 19
12:54 kshlm How critical?
12:54 kshlm I'd like to do the release on time for once.
12:55 kshlm I would have liked the changes to be highlighted when the release plan announcement was sent out.
12:55 kshlm I'd sent out the announcement last week that we'll be doing 3.7.10 on the 30th.
12:56 kotreshhr kshlm: Yeah, but the issue was reported just two days ago.
12:56 kshlm But no one spoke about the changes they wanted.
12:56 kshlm kotreshhr, Again how critical?
12:57 kshlm If it is fixing some data corruption, I'm okay with waiting. If not, then it can wait till the next release.
12:57 aravindavk kshlm: without the changelog patch, data loss can happen. kotreshhr
12:58 kshlm You could have also replied to the mail I sent out after the community meeting yesterday.
12:58 kotreshhr kshlm: Ok, if glusterfind is set up on a volume, then on unlink of an entry, its 'loc->pargfid' would be changed to '/'
12:58 kshlm And how would that cause data loss?
12:59 kotreshhr kshlm: Not in the generic case, but yes if 'loc->path' is not filled, as in clients such as self-heal
13:00 kkeithley releases on a Friday aren't good.  (just saying)  If the issue is critical, we _could_ release 3.7.11 ahead of schedule too, as far as that goes.
13:00 aravindavk kshlm: when you unlink a file in a subdir (while a replica was down), if a file with the same name exists in the root dir then that will get unlinked by the self-heal daemon kotreshhr
13:00 shyam joined #gluster-dev
13:00 kkeithley apart from that it would be good to release 3.7.10 on schedule IMO.
13:01 kshlm This happens for all replica volumes?
13:02 kotreshhr kshlm: aravindavk is right. No, the other replica will have the copy and it heals back when lookup happens. But an extra file which was supposed to be deleted will remain.
13:02 aravindavk kshlm: no, only if changelog.capture-del-path and changelog.changelog is enabled
13:02 kshlm So this isn't a data loss issue then?
13:03 kshlm You have some extra data lying around.
13:03 kotreshhr kshlm: In a very rare corner case, if the self-heal direction is from the other end.
13:04 kshlm Okay. So from your explanations, you've given a picture of a quite hard-to-hit bug, the result of which is that data that was supposed to be deleted might not be.
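
(For reference: the scenario above only arises when both changelog options aravindavk names are enabled on a replicated volume; a minimal sketch of that configuration, with "testvol" as a placeholder volume name.)

    # Enable the changelog plus deleted-path capture -- the combination
    # under which the mis-directed unlink was reported.
    gluster volume set testvol changelog.changelog on
    gluster volume set testvol changelog.capture-del-path on
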
13:05 hchiramm joined #gluster-dev
13:05 hchiramm_ joined #gluster-dev
13:05 kshlm IMHO, this doesn't sound like a blocker.
13:05 kshlm But please reply to the 3.7.10 update I sent today morning.
13:05 kotreshhr kshlm: ok!
13:06 aravindavk thanks kshlm. kshlm++
13:06 glusterbot aravindavk: kshlm's karma is now 71
13:06 kotreshhr kshlm++
13:06 glusterbot kotreshhr: kshlm's karma is now 72
13:06 atinm joined #gluster-dev
13:06 kshlm I'd like to hear from others about what they think needs to be done.
13:07 kshlm It'll take some more time for me to build the release-notes (~3hrs); if we have an agreement by that time, I'll wait for and merge the change before tagging.
13:08 kshlm If not, I'll tag.
13:08 kshlm As kkeithley said, we can do an emergency release if the issue is really critical.
13:08 kotreshhr kshlm: ok, sounds good.
13:11 spalai1 left #gluster-dev
13:12 dlambrig_ joined #gluster-dev
13:12 shubhendu joined #gluster-dev
13:14 nishanth joined #gluster-dev
13:16 ndarshan joined #gluster-dev
13:26 penguinRaider joined #gluster-dev
13:30 hagarth joined #gluster-dev
13:30 dlambrig_ joined #gluster-dev
13:45 ppai joined #gluster-dev
13:47 ashiq_ joined #gluster-dev
13:59 nbalacha joined #gluster-dev
14:01 skoduri joined #gluster-dev
14:05 kotreshhr joined #gluster-dev
14:08 ndarshan joined #gluster-dev
14:09 nishanth joined #gluster-dev
14:10 camg joined #gluster-dev
14:17 shyam joined #gluster-dev
14:17 vimal joined #gluster-dev
14:19 vimal joined #gluster-dev
14:21 Manikandan joined #gluster-dev
14:31 lkoranda joined #gluster-dev
14:58 shyam1 joined #gluster-dev
15:06 pranithk joined #gluster-dev
15:06 aravindavk overclk: please merge this patch http://review.gluster.org/#/c/13861/  cc: kshlm, kotreshhr
15:10 kotreshhr joined #gluster-dev
15:17 baojg joined #gluster-dev
15:21 nishanth joined #gluster-dev
15:23 penguinRaider joined #gluster-dev
15:25 ira_ joined #gluster-dev
15:31 Manikandan joined #gluster-dev
15:31 kshlm joined #gluster-dev
15:38 dlambrig_ joined #gluster-dev
15:50 baojg joined #gluster-dev
15:58 overclk kshlm, kotreshhr: merged #13861
16:09 shubhendu joined #gluster-dev
16:13 shyam joined #gluster-dev
16:16 atalur joined #gluster-dev
16:27 shaunm joined #gluster-dev
16:28 camg joined #gluster-dev
16:29 lkoranda joined #gluster-dev
16:41 nishanth joined #gluster-dev
16:51 shubhendu joined #gluster-dev
16:57 kotreshhr left #gluster-dev
17:14 lkoranda joined #gluster-dev
17:21 dlambrig_ joined #gluster-dev
17:29 penguinRaider joined #gluster-dev
17:51 penguinRaider joined #gluster-dev
17:56 atalur joined #gluster-dev
17:58 dlambrig_ joined #gluster-dev
18:08 vmallika joined #gluster-dev
18:09 lpabon joined #gluster-dev
18:11 penguinRaider joined #gluster-dev
19:35 nishanth joined #gluster-dev
21:47 shyam joined #gluster-dev
