
IRC log for #gluster-dev, 2016-08-19


All times shown according to UTC.

Time Nick Message
00:40 shyam joined #gluster-dev
01:49 ilbot3 joined #gluster-dev
01:49 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:13 rafi joined #gluster-dev
02:24 dlambrig joined #gluster-dev
02:57 ramky joined #gluster-dev
03:05 hchiramm lpabon, ping
03:05 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
03:12 lpabon_ joined #gluster-dev
03:22 ramky joined #gluster-dev
03:22 pranithk1 joined #gluster-dev
03:26 magrawal joined #gluster-dev
03:30 spalai joined #gluster-dev
03:39 sanoj joined #gluster-dev
03:42 atinm joined #gluster-dev
03:49 ashiq joined #gluster-dev
03:53 hagarth joined #gluster-dev
03:55 poornimag joined #gluster-dev
03:55 shubhendu joined #gluster-dev
03:56 skoduri joined #gluster-dev
04:02 spalai left #gluster-dev
04:03 itisravi joined #gluster-dev
04:13 nigelb ndevos: I've updated the release job in JJB. If you could test and let me know if it does anything unexpected, that'd be great.
04:22 aspandey joined #gluster-dev
04:35 lpabon_ joined #gluster-dev
05:04 nbalacha joined #gluster-dev
05:05 aravindavk joined #gluster-dev
05:06 karthik_ joined #gluster-dev
05:16 aspandey joined #gluster-dev
05:18 ndarshan joined #gluster-dev
05:21 Manikandan joined #gluster-dev
05:35 hgowtham joined #gluster-dev
05:38 jiffin joined #gluster-dev
05:43 asengupt joined #gluster-dev
05:45 mchangir joined #gluster-dev
05:48 rraja joined #gluster-dev
05:52 ramky joined #gluster-dev
05:54 nishanth joined #gluster-dev
05:56 Muthu joined #gluster-dev
06:00 devyani7 joined #gluster-dev
06:03 devyani7 joined #gluster-dev
06:03 kshlm joined #gluster-dev
06:09 Manikandan joined #gluster-dev
06:12 spalai joined #gluster-dev
06:14 kshlm nigelb, Can I get temporary ssh access to slave{24,27, and any one other centos-slave}.cloud.gluster.org?
06:14 Saravanakmr joined #gluster-dev
06:14 kshlm I won't be modifying or changing anything. Just looking around.
06:14 nigelb kshlm: file a bug?
06:14 nigelb and attach your ssh key
06:15 kshlm Okay.
06:19 msvbhat joined #gluster-dev
06:20 kshlm nigelb, https://bugzilla.redhat.com/show_bug.cgi?id=1368339
06:20 glusterbot Bug 1368339: unspecified, unspecified, ---, bugs, NEW , Access to slave{24,27 and one other centos-slave}.cloud.gluster.org
06:27 Muthu joined #gluster-dev
06:31 nigelb kshlm: you have machines :)
06:32 kshlm nigelb, Thank you.
06:32 kshlm I shouldn't be too long.
06:34 kshlm nigelb, Can't get into slave32
06:34 kshlm I'm getting a password prompt
06:35 nigelb fixing
06:36 mchangir joined #gluster-dev
06:37 kdhananjay joined #gluster-dev
06:39 nigelb kshlm: try logging in as root into that machine.
06:39 nigelb Looks like it needs some debugging.
06:43 kshlm nigelb, Were slave24 and 27 offline recently or reimaged?
06:44 kshlm Build history shows slave27 came back up 17 days ago.
06:44 kshlm After being offline for nearly 2 months.
06:44 nigelb kshlm: I did recently bring up all the offline slaves.
06:44 nigelb across platforms.
06:45 kshlm What about slave24? It's been online for almost always.
06:45 kshlm But has it been re-imaged?
06:45 nigelb Not that I know of.
06:46 kshlm slave24 doesn't have /etc/ssl/glusterfs*
06:46 kshlm The others have it.
06:46 kshlm slave27 has really old /etc/ssl/glusterfs* certs.
06:47 kshlm slave24 and 27 were failing jobs because of this.
06:47 nigelb a) how are they supposed to get /etc/ssl/glusterfs* certs
06:47 nigelb b) why are we event writing into /etc/ssl/glusterfs* rather than /build/install
06:47 kshlm They are supposed to be generated by the tests.
06:47 nigelb s/event/even
06:47 kshlm But one test wasn't doing it. And was depending on them being present.
06:48 nigelb I really want to do what we do for netbsd on centos.
06:48 nigelb Don't write into paths you're not allowed to write into.
06:48 kshlm And this particular test was run before the 2 other tests which generated the certs.
06:48 nigelb There's several layers of problems here then.
06:49 kshlm I'll ask the test writer to have the test generate certs of its own.
06:49 kshlm But I don't think we'll be able to ask them not to use /etc/ssl
06:50 kshlm That path is right now hardcoded as the default path where gluster looks for ssl certs, keys and CAs.
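By default GlusterFS looks for /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key and /etc/ssl/glusterfs.ca. A minimal sketch of what an SSL test could run to generate its own certificates instead of relying on ones left behind by earlier tests; the openssl options and CN are illustrative, not copied from any particular test:

    # create a private key, a self-signed certificate, and use that cert as the CA list
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
            -subj "/CN=$(hostname)" -days 365 -out /etc/ssl/glusterfs.pem
    cp /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca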
06:51 nigelb kshlm: those tests are marked as bad in netbsd then?
06:53 rafi joined #gluster-dev
06:59 kshlm nigelb, I guess they are.
06:59 kshlm Let me check.
07:00 nigelb That would mean we're purposefully doing something evil and working around it by disabling tests on the one platform that would catch the bad thing.
07:01 atalur joined #gluster-dev
07:01 kshlm nigelb, They haven't been marked as bad tests for the run-tests framework.
07:02 kshlm I don't know if manu had marked them on netbsd separately.
07:02 nigelb If it's a bug number
07:02 nigelb then yeah, netbsd skips all of them.
07:02 kshlm Two of them are in tests/features
07:03 kshlm One is a bug.
07:04 rastar joined #gluster-dev
07:18 ppai joined #gluster-dev
07:19 kshlm aspandey, https://bugzilla.redhat.com/show_bug.cgi?id=1368349 has been filed for you.
07:19 glusterbot Bug 1368349: unspecified, unspecified, ---, aspandey, ASSIGNED , tests/bugs/cli/bug-1320388.t: Infrequent failures
07:19 aspandey kshlm: ok.
07:37 jlrgraham joined #gluster-dev
07:37 jlrgraham joined #gluster-dev
07:38 jlrgraham joined #gluster-dev
07:40 ankitraj joined #gluster-dev
07:41 Chr1st1an joined #gluster-dev
07:51 loadtheacc joined #gluster-dev
07:54 atalur joined #gluster-dev
08:09 ndevos nigelb: I think kshlm will do the next 3.7.x release at the end of the month, doing test-releases probably confuses people
08:18 pur joined #gluster-dev
08:28 mchangir itisravi, I took a look at iatt_from_stat() ... it calculates from from the actual file size making a reference to XFS in the comment ... should the same thing be handled for files with holes as well ?
08:29 mchangir s/calculates from from/calculates blocks from/
08:30 mchangir itisravi, or are blocks returned correctly for files with holes?
08:35 itisravi mchangir: For files with holes (sparse files), it returns the correct blocks. The rounding down is done only if the on-disk blocks are more than what is indicated by iasize.
08:36 itisravi mchangir: For example, if you fallocate a file with keep-size option, the du -h <file> will be incorrect on a gluster mount.
08:38 mchangir itisravi, thanks
08:38 itisravi mchangir: np
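A quick way to see the behaviour itisravi describes; the mount point and sizes are placeholders:

    # allocate 10M of on-disk blocks without changing st_size
    fallocate --keep-size --length 10M /mnt/gluster/testfile
    stat -c '%s bytes, %b blocks' /mnt/gluster/testfile
    # on a gluster mount the block count is rounded down to match the unchanged
    # file size, so du under-reports the space actually allocated
    du -h /mnt/gluster/testfile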
08:42 Manikandan joined #gluster-dev
08:46 rastar joined #gluster-dev
08:51 devyani7 joined #gluster-dev
08:53 Bhaskarakiran joined #gluster-dev
08:57 atinm ndevos, could you help us in figuring out why the smoke is failing for http://review.gluster.org/#/c/15198 ?
08:59 ndevos atinm: looks pretty clear: can't open file '../../events/eventskeygen.py': [Errno 2] No such file or directory
08:59 atinm ndevos, yes but the catch is locally it passes
09:00 atinm aravindavk, ^^
09:00 ndevos atinm: if that is the python file that gets generated, it should be generated under $(top_builddir)/.... and then things should be fine?
09:00 aravindavk ndevos: ../../events/eventskeygen.py' is the file which generates other files
09:01 ndevos atinm: I doubt it'll pass locally, assume sources are under /tmp/glusterfs, then do "mkdir /tmp/clean-build && cd /tmp/clean-build && /tmp/glusterfs/configure"
09:01 ndevos aravindavk: in that case, it should be called like $(top_srcdir)/...
09:03 ndevos aravindavk: I see "$(PYTHON) $(top_builddir)/events/eventskeygen.py PY_HEADER" in http://review.gluster.org/#/c/15198/5/events/src/Makefile.am
09:03 ndevos aravindavk: you should run it with $(top_srcdir) there
09:04 aravindavk ndevos: before this patch top_srcdir was used, changed after your comment. Will revert that change and check
09:05 ndevos aravindavk: scripts that are part of the sources (not generated) use $(top_srcdir), generated files should land under $(top_builddir)
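In Makefile.am terms the fix amounts to swapping the variable on the command quoted above; a sketch, with a placeholder target name:

    # eventskeygen.py is part of the sources, so call it via $(top_srcdir);
    # only the files it generates belong under $(top_builddir)
    generated-events-file:
            $(PYTHON) $(top_srcdir)/events/eventskeygen.py PY_HEADER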
09:10 nbalacha xavih, ping
09:10 glusterbot nbalacha: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:11 ndevos poornimag: I'm still waiting for a reply from you in http://review.gluster.org/15191 - wondering if you had a bug for that already...
09:15 atinm ndevos++
09:15 glusterbot atinm: ndevos's karma is now 303
09:15 atinm aravindavk++
09:15 glusterbot atinm: aravindavk's karma is now 10
09:24 misc so what was the issue with the 3 systems taken out of rotation and tls ?
09:27 nigelb ndevos: That works too.
09:30 atalur joined #gluster-dev
09:32 devyani7 joined #gluster-dev
09:32 kshlm ndevos, We have a slightly severe regression in 3.8.2.
09:33 kshlm All bricks aren't being started on a glusterd restart.
09:38 ndevos kshlm: uh, that sounds bad :-/
09:41 nigelb kshlm: Can you fix the certs, btw?
09:41 nigelb And can I put s34 back into rota?
09:41 kshlm ndevos, Just found out that it's been fixed already!
09:42 ndevos kshlm: uh, you mean the patch is already merged for 3.8.3?
09:42 kshlm Yup.
09:42 kshlm https://review.gluster.org/15186
09:43 ndevos kshlm: so, we need a 3.8.3 already, or can we wait
09:45 kshlm This is quite serious IMO. Anyone upgrading to 3.8.2 and restarting will not have all their bricks starting automatically.
09:45 kshlm We do have a workaround of doing a volume start force.
09:47 ndevos is this an issue when rebooting? or do brick processes start fine then?
09:47 aspandey joined #gluster-dev
09:47 kshlm It is an issue when rebooting mainly.
09:48 kshlm Just a glusterd restart should have bricks running at least.
09:48 kshlm If not connected to glusterd.
09:49 rastar joined #gluster-dev
09:51 ndevos kshlm: I'm trying to understand the commit message, but the problem described is not very clear to me
09:53 ndevos kshlm: ah, the bug really mentions it happens on reboot too
09:57 kshlm ndevos, It's easier to understand by just looking at the code
09:57 kshlm https://github.com/gluster/glusterfs/blob/807b9a135d697f175fc9933f1d23fb67b0cc6c7d/xlators/mgmt/glusterd/src/glusterd-utils.c#L4946
09:58 kshlm This is the line at the commit that caused the regression.
09:59 kshlm conf->restart_done is set after just the bricks of the 1st volume are started. Then for the next volumes iterated over, the bricks aren't started as restart_done is true.
09:59 ndevos kshlm: I would rather have a user-facing description to judge how important the problem is :)
10:00 kshlm So we have lots of bricks not being started on a reboot.
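Until a 3.8.3 with the fix is out, the workaround kshlm mentions boils down to the following (the volume name is a placeholder):

    # after the reboot / glusterd restart, check which bricks actually came up
    gluster volume status
    # force-start any volume whose bricks are still offline
    gluster volume start <VOLNAME> force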
10:01 penguinRaider joined #gluster-dev
10:02 atinm joined #gluster-dev
10:03 ndevos kshlm: hmm, that is quite an issue, yes...
10:03 ndevos nigelb: I think we'll be testing your release job update later today, or over the weekend :-/
10:03 msvbhat joined #gluster-dev
10:04 nigelb oh dear.
10:04 nigelb ndevos: Hopefully, it does nothing differently.
10:05 nigelb At least, that's the goal.
10:07 nigelb ndevos: as long as you don't plan to release on Monday.
10:07 nigelb Because I have a short gerrit downtime planned for Monday.
10:07 nigelb I'm closing one of the oldest AIs from the community meeting.
10:08 nigelb We have a bot account for bugzilla.
10:12 misc nigelb: at what time ? Can I take a snapshot of gerrit vm to move it to the other server ?
10:13 nigelb misc: I just need to switch the bugzilla auth to the bot account (I need 5 mins), so I'll do it around 8 am my time.
10:14 misc nigelb: ok so likely too soon for me :/
10:14 nigelb heh, yeah.
10:14 nigelb I thought of doing it today.
10:14 nigelb But Friday.
10:14 nigelb Is that the final migration or are you doing a test migration?
10:14 misc test
10:15 misc I am still at the testing phase
10:15 nigelb If you want downtime to do a test migration, I'm happy to sync with yours.
10:15 misc it got slow because I spent too much of my time sneezing this week
10:20 nigelb misc: I'm happy to defer to you announcing a downtime, then.
10:20 nigelb But please do announce a time today.
10:27 kkeithley nigelb: any thoughts on why all the netbsd7-regressions are hanging?
10:28 * nigelb looks
10:29 nigelb sigh, did our whole pool go bad?
10:30 misc nigelb: I think I will rather do an online snapshot during the weekend, shouldn't require downtime or anything
10:32 nigelb misc: still send out an outage email. considering ndevos or kshlm might be doing a release.
10:34 misc nigelb: yeah, will do this afternoon
10:34 misc but isn't the release planned for the 30 ?
10:34 nigelb see the conversation above. There is a critical bug.
10:35 misc mhhhh
10:36 misc ok, I will ponder the benefit of working during weekend vs risks
10:36 misc nigelb: so while you do the downtime, can you also upgrade the packages (like system packages) and reboot ?
10:38 nigelb sure
10:38 nigelb wait, I thought that was automatic?
10:38 misc not for gerrit
10:39 misc well
10:39 misc at least not the reboot part :)
10:39 nigelb ah
10:41 nigelb misc: it will come back online on its own, right?
10:41 misc nigelb: yup
10:42 misc if not, that's a bug, you are authorized to call me at 4 in the morning to fix
10:42 misc (not that I will be fully operational at that time)
10:43 nigelb Heh
10:48 nigelb okay, so I'm going to create a new channel for notifications from Jenkins.
10:48 nigelb Mostly so I can know when things get aborted or fail very often.
11:42 misc nigelb: do we want to extend that to more types of notification ?
11:42 misc like I was pondering on ansible run
11:42 nigelb misc: let's see how well this works.
11:42 nigelb If it ends up being too noisy
11:42 nigelb I'll turn it off.
11:49 aravindavk ndevos: please review the changes http://review.gluster.org/15198
12:08 nigelb atinm: I suspect we're leaving a lot of netbsd nodes in an inconsistent state.
12:08 nigelb Specifically, we're hanging when trying to umount at the start of a job.
12:09 nigelb see bug 1368441
12:09 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1368441 unspecified, unspecified, ---, bugs, NEW , NetBSD machines hang on umount
12:09 atinm nigelb, this used to happen earlier as well I believe but someone fixed it
12:10 nigelb atinm: who's likely to remember the history? Emmanuel? Vijay?
12:11 atinm nigelb, we need to check Manu if required
12:11 atinm nigelb, Yup, Emmanuel aka Manu :)
12:12 nigelb Okay, email sent.
12:13 nigelb atinm: I have a very hacky fix.
12:13 nigelb But I really don't want to do it without getting okay from him.
12:14 nishanth joined #gluster-dev
12:17 poornimag joined #gluster-dev
12:22 ppai joined #gluster-dev
12:30 msvbhat joined #gluster-dev
12:33 hagarth joined #gluster-dev
12:39 lpabon nixpanic, ping
12:39 glusterbot lpabon: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:39 ndevos lpabon: are you dressed?
12:40 lpabon ndevos, ha, i hope so :)
12:40 lpabon ndevos https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/57
12:40 * ndevos checks
12:41 lpabon that patch not only pulls the change, which is based on some older master checkout, but then rebases it on top of the latest master
12:42 lpabon that way, the change can always be tested against the latest master, and not on top of the version of master where the PR branch was created from
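Roughly, the rebase step lpabon describes amounts to something like this; the remote name, branch name and PR number are placeholders:

    # fetch the pull request head and the latest master
    git fetch origin master "pull/$PR/head:pr-$PR"
    # rebase the PR branch onto latest master before running the tests, so the
    # change is tested against current master rather than the commit the PR
    # branch was created from
    git checkout "pr-$PR"
    git rebase origin/master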
12:42 ndevos lpabon: yeah, looks good to me
12:42 lpabon awesome
12:43 ndevos lpabon: is you start a test now, it should use the updated script
12:44 ndevos s/is/if/
12:44 lpabon ndevos, sweet
12:44 lpabon ndevos, i will start one now
12:47 ndevos lpabon: seems to be doing its thing at https://ci.centos.org/view/Gluster/job/gluster_heketi-functional/62/console
12:48 lpabon ndevos, nice it worked
12:49 ndevos lpabon: cool :)
12:51 ramky joined #gluster-dev
12:57 dlambrig joined #gluster-dev
13:31 julim joined #gluster-dev
13:32 mchangir joined #gluster-dev
14:01 rafi1 joined #gluster-dev
14:03 nbalacha joined #gluster-dev
14:12 hagarth joined #gluster-dev
14:21 nbalacha joined #gluster-dev
14:24 shyam joined #gluster-dev
14:32 kshlm joined #gluster-dev
14:35 dlambrig joined #gluster-dev
14:37 hagarth joined #gluster-dev
14:51 mrten left #gluster-dev
15:06 wushudoin joined #gluster-dev
15:32 msvbhat joined #gluster-dev
15:36 ira joined #gluster-dev
15:37 ira joined #gluster-dev
15:53 spalai joined #gluster-dev
15:57 rraja joined #gluster-dev
16:00 rafi joined #gluster-dev
16:02 rafi1 joined #gluster-dev
16:29 jiffin joined #gluster-dev
16:37 shyam joined #gluster-dev
17:04 nbalacha joined #gluster-dev
17:16 spalai joined #gluster-dev
17:40 rafi joined #gluster-dev
17:49 julim joined #gluster-dev
17:58 rafi joined #gluster-dev
18:20 rafi joined #gluster-dev
18:23 rafi joined #gluster-dev
18:54 pranithk1 joined #gluster-dev
19:28 jiffin joined #gluster-dev
19:58 shyam joined #gluster-dev
20:18 dlambrig joined #gluster-dev
20:29 jlrgraham joined #gluster-dev
20:31 nishanth joined #gluster-dev
20:32 nishanth joined #gluster-dev
20:32 pranithk1 joined #gluster-dev
20:35 hagarth joined #gluster-dev
21:34 shyam joined #gluster-dev
22:42 hagarth joined #gluster-dev
