
IRC log for #gluster-dev, 2016-08-02


All times shown according to UTC.

Time Nick Message
01:04 hagarth joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:11 spalai joined #gluster-dev
02:18 nbalacha joined #gluster-dev
03:23 aravindavk joined #gluster-dev
03:38 poornimag joined #gluster-dev
03:47 atinm joined #gluster-dev
03:47 magrawal joined #gluster-dev
03:54 nishanth joined #gluster-dev
04:07 ira joined #gluster-dev
04:09 itisravi joined #gluster-dev
04:12 hgowtham joined #gluster-dev
04:21 nbalacha joined #gluster-dev
04:24 mchangir joined #gluster-dev
04:24 jiffin joined #gluster-dev
04:24 rafi joined #gluster-dev
04:27 kotreshhr joined #gluster-dev
04:32 shubhendu__ joined #gluster-dev
04:32 spalai joined #gluster-dev
04:49 karthik_ joined #gluster-dev
04:51 spalai joined #gluster-dev
05:01 ankitraj joined #gluster-dev
05:05 atalur joined #gluster-dev
05:06 aravindavk joined #gluster-dev
05:07 aspandey joined #gluster-dev
05:16 prasanth joined #gluster-dev
05:16 ashiq joined #gluster-dev
05:16 ndarshan joined #gluster-dev
05:19 nigelb aspandey: there's a very good chance https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18456/consoleFull will timeout.
05:19 nigelb Do you mind if I abort it in advance and take the node offlnie?
05:19 ppai joined #gluster-dev
05:20 nigelb *offline
05:20 Manikandan joined #gluster-dev
05:20 aspandey nigelb: ok. no prob..
05:20 Bhaskarakiran joined #gluster-dev
05:24 spalai left #gluster-dev
05:26 nigelb aspandey: thanks, aborted and triggered a new job.
05:30 aspandey joined #gluster-dev
05:30 hchiramm joined #gluster-dev
05:33 rafi joined #gluster-dev
05:34 ashiq joined #gluster-dev
05:34 Apeksha joined #gluster-dev
05:35 ankitraj joined #gluster-dev
05:38 skoduri joined #gluster-dev
05:45 ramky joined #gluster-dev
05:46 Manikandan joined #gluster-dev
05:48 kotreshhr joined #gluster-dev
05:48 ppai joined #gluster-dev
05:49 ashiq joined #gluster-dev
05:54 asengupt joined #gluster-dev
05:58 mchangir joined #gluster-dev
06:01 kshlm joined #gluster-dev
06:02 karthik_ joined #gluster-dev
06:03 Muthu_ joined #gluster-dev
06:05 aravindavk joined #gluster-dev
06:07 kshlm joined #gluster-dev
06:09 nigelb kshlm: Do you know why we don't update /opt/qa netbsd?
06:09 nigelb If it's a "didn't get around to it", I'd like to do a git pull and make it up to date master.
06:10 msvbhat joined #gluster-dev
06:20 atinm joined #gluster-dev
06:25 Manikandan joined #gluster-dev
06:28 atalur joined #gluster-dev
06:34 msvbhat joined #gluster-dev
06:44 devyani7_ joined #gluster-dev
06:45 devyani7_ joined #gluster-dev
06:51 rafi ndevos: ping
06:51 glusterbot rafi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:52 rafi ndevos : I'm  not able to find a component named meta in our bugzilla
06:52 rafi ndevos: any idea
06:53 rafi nigelb: ^
06:53 * nigelb looks
06:53 nigelb did we have it at any point?
06:53 rafi nigelb: I'm not sure
06:54 rafi nigelb: we have an xlator called meta in xlator/meta
06:56 nigelb we seem to have filed bugs for that xlator in different components
06:57 nigelb we only have components for 3 xlators
06:57 nigelb meta, trash and compression.
06:58 nigelb Worth checking with Niels if we need a new component
06:59 kshlm joined #gluster-dev
07:00 aravindavk joined #gluster-dev
07:03 pur joined #gluster-dev
07:03 hgowtham joined #gluster-dev
07:03 rafi nigelb: thanks
07:04 rafi nigelb: I will check with ndevos or kshlm
07:04 rafi kshlm: ^
07:04 kshlm rafi, I just have the last 2 lines of history.
07:04 kshlm I don't have any context.
07:05 nigelb There's seemingly no appropriate component for bugs about the meta xlator
07:05 rafi kshlm: we don't have component for meta in bugzilla
07:06 kshlm If you need one, you need to ask the bugzilla admins.
07:07 rafi kshlm: ohh okey
07:07 nigelb bugzilla-requests@redhat.com
07:07 nigelb I think it needs a CC to your manager as well.
07:07 rafi kshlm: Do you think we need a bugzilla for meta
07:07 rafi *component
07:07 kshlm nigelb, For community projects as well? I don't think so.
07:07 rafi nigelb:thanks
07:08 nigelb kshlm: The bugzilla workflow is strange, yeah.
07:08 ndevos rafi: yes, please request it, state clear in your email what Product (GlusterFS) needs that component - put Kaleb on cc if need be ;-)
07:08 kshlm rafi, IMO it's a pretty small component and probably doesn't reqire a component all to itself.
07:09 ppai joined #gluster-dev
07:09 rafi kshlm: okey
07:09 kshlm We should get a "glusterfs other" component to add all bugs that don't fall into any of the current components.
07:09 ndevos kshlm: I hope we can extend it with more things, and maybe prevent using weir xattr APIs ;-)
07:09 ndevos weir+d
07:09 rafi kshlm: currently I opened the bug in glusterd :D
07:10 rafi kshlm: I have put a comment there ;)
07:10 ndevos glusterd catches them all!
07:10 rafi ndevos: :)
07:10 nigelb GlusterD is written in Go too
07:10 nigelb this all adds up
07:10 nigelb :D
07:10 ndevos lol
07:12 rafi kshlm: I think it also makes sense to add "glusterfs other"
07:12 rafi kshlm: +1
07:13 kshlm I was checking out the components we have currently.
07:13 kshlm We have a unclassified component already.
07:13 kshlm But we also have a lot of components for things that are probably less used than meta.
07:14 kshlm meta should get it's own component in that case.
07:16 atinm joined #gluster-dev
07:17 rafi kshlm: okey then I will send the mail
07:17 rafi kshlm: which is that common component
07:17 rafi kshlm: core ?
07:17 kshlm "unclassified"
07:21 itisravi joined #gluster-dev
07:21 rafi1 joined #gluster-dev
07:21 rafi1 i have moved the bug to unclassified
07:31 ndevos kshlm, rafi: "unclassified" always looks to me as if we do not know yet what the bug is about, I'd prefer something like "other" too
07:32 ndevos kshlm: oh, and dont forget to close th 3.7.14 bugs :)
07:32 kshlm ndevos, That is done already.
07:35 ppai joined #gluster-dev
07:35 Manikandan joined #gluster-dev
07:35 rafi ndevos: ya that make sense
07:35 ashiq joined #gluster-dev
07:36 rafi kshlm, ndevos: you can see the fix here in http://review.gluster.org/#/c/15068
07:36 kotreshhr joined #gluster-dev
07:36 karthik_ joined #gluster-dev
07:36 rafi kshlm, ndevos: reviews are most welcome
07:36 ndevos kshlm: oh, thanks :)
07:36 aspandey joined #gluster-dev
07:37 rafi kshlm: ndevos: this is for feature request http://review.gluster.org/#/c/15066/
07:38 rafi ndevos, kshlm: I'm testing meta for nfs ganesha and samba
07:38 rafi ndevos: I haven't added gNFS into the feature page as it is deprecated
07:39 rafi ndevos: is that okey ?
07:39 rafi ndevos: and we need additional changes in glusterd if we want to support for gNFS
07:41 ndevos rafi: yeah, skipping gNFS is ok
07:42 rafi ndevos: cool
07:44 ndevos rafi: maybe mention it in the feature page "Gluster/NFS is being deprecated, so ...."
07:44 rafi ndevos: I will do it
07:45 rafi ndevos: do you have any other comments,
07:45 ndevos rafi: I did not look at the feature page yet
07:46 rafi ndevos: if you have time, http://review.gluster.org/#/c/15066/ ;)
07:47 ndevos rafi: note that Samba and Ganesha can export sub directories, would the .meta dir be available for those?
07:47 rafi ndevos: good point
07:48 rafi ndevos: I need to check that part
07:48 rafi ndevos: I think we can implement it
07:49 ndevos rafi: I am sure we can, but di we want+need it?
07:50 ndevos *do
07:51 rafi ndevos: To answer that we may need more eyes ;)
07:52 * rafi is going for lunch
07:54 ndevos rafi_lunch: sure, but you should share your suggestion :)
08:05 Saravanakmr joined #gluster-dev
08:12 msvbhat joined #gluster-dev
08:14 ndevos nigelb, poornimag: have you thought about doing the regression-test failure statistics on the regression-test-burn-in job? that one should always succeed
08:16 nigelb It can take any job as input.
08:19 nishanth joined #gluster-dev
08:21 ndarshan joined #gluster-dev
08:22 magrawal joined #gluster-dev
08:35 itisravi joined #gluster-dev
08:42 itisravi joined #gluster-dev
08:49 atinm joined #gluster-dev
08:54 kshlm nigelb, Are you here? I wanted to clear the long standing AI on the bot accounts.
08:55 kshlm IIRC, we will not be doing anything about this for Github .
08:55 karthik_ joined #gluster-dev
08:56 kshlm We will be creating a bugzilla bot account.
08:56 kshlm Gerrit already has bot accounts.
09:05 nigelb kshlm: yeah, it's on my todo list.
09:05 nigelb I suspect we may have to wait until the big bugzilla permission migration is done.
09:05 kshlm nigelb, That's fine.
09:06 kshlm I think the AI can finally be marked done.
09:06 nigelb It's not done. Just assigned :)
09:06 kshlm It had morphed to 'kshlm/csim to talk to nigelb about getting it done'
09:06 nigelb lol
09:06 nigelb kshlm: file a bug though
09:06 nigelb and you can mark it as done.
09:06 kshlm I will.
09:07 rafi ndevos: in my opinion we should support for sub directory mount as well
09:08 misc nigelb: so, before I forgot, I think I had to bring one node offline on friday evening, not sure if you did see it
09:09 nigelb misc: Oh, the netbsd one?
09:09 nigelb Turns out it was fine. Just an intermittent test. It had a good run right before it was brought offline.
09:13 ndevos rafi: sure, add a suggetion to how samba/ganesha can enable/disable it per share/export :)
09:15 ndevos nigelb, kshlm: we already have bugs@gluster.org, I guess someone needs to get credentials for it so that we can use it :)
09:16 ndevos note that it is a public mailinglist, so requesting to re(set) a password should not be possible... no idea how that is done by the bugzilla admins
09:19 misc nigelb: possible, it was friday evening and I had to leave, so forgot the details, just left a helpful "future me, fix it" or something
09:24 ndevos jiffin, skoduri: I'm looking into the glfs_free() thing got upcall etc, any early feedback on http://termbin.com/ltu3 ?
09:27 nigelb misc: yeah, that's the one. It's fixed now.
09:27 jiffin ndevos: it's look like a big change, I will apply ur changes and look into it
09:28 misc nigelb: is it worth doing a post mortem ?
09:28 ndevos jiffin: its not ready yet... I've renamed some structures again, because it was broken anyway, and this makes it clearer
09:29 ndevos jiffin: the change is against the master branch, you should be able to apply it cleanly there - check glfs-handles.h mostly
09:29 jiffin ndevos: okey
09:30 sanoj joined #gluster-dev
09:32 nigelb misc: it was a wrong call. THe machine was fine. Intermittent test failure :(
09:33 misc nigelb: the better type of test failure :)
09:33 nigelb The one thing I did notice afterward is that we have machines stuck in bad state after the jobs are aborted.
09:33 misc (remind me some story of a friend about some python script failling only in august and september)
09:33 nigelb I've filed a bug. I have a few ideas how to fix.
09:33 misc reboot after abort is the easiest
09:33 nigelb Basically, I can trigger a reboot as a post build action when aborted.
09:34 nigelb Yeah. That's precisely the plan. I just want to do it in a way that JJB supports.
09:34 nigelb ndevos: good call on checking if there are local modifications.
09:34 nigelb There's plenty.
09:35 nigelb That'll need a review before I do a git pull.
09:36 misc nigelb: at the same time, shouldn't we try to make test clean themself, so it also work outside of jenkins ?
09:36 misc (even if I recognize that's a daunting tasks, but that's "job security" :p )
09:37 msvbhat joined #gluster-dev
09:40 skoduri joined #gluster-dev
09:42 ndevos skoduri: http://termbin.com/ltu3
09:42 pranithk1 joined #gluster-dev
09:45 nigelb misc: The tests do clean up after themselves pretty well if it runs successfully.
09:45 asengupt joined #gluster-dev
09:45 nigelb The cases where Jenkins aborts a job because it's stuck and ran for too long is when the machine needs a restart.
09:45 nigelb Or when we manually abort it.
09:45 ppai joined #gluster-dev
09:45 aravindavk joined #gluster-dev
09:48 nbalacha joined #gluster-dev
09:49 kotreshhr joined #gluster-dev
09:49 Manikandan joined #gluster-dev
10:21 skoduri joined #gluster-dev
10:29 aravindavk joined #gluster-dev
10:34 nigelb https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/47/files
10:34 nigelb oh fun.
10:34 bfoster joined #gluster-dev
10:41 atinm joined #gluster-dev
10:43 kotreshhr joined #gluster-dev
10:46 ndevos nigelb: oh, seems I am good at guessing :)
10:55 nigelb ndevos: I think it's best to make those changes in a way that it's standard.
10:55 nigelb rather than making it netbsd-only.
10:55 Manikandan joined #gluster-dev
11:07 rraja joined #gluster-dev
11:11 ppai joined #gluster-dev
11:16 Muthu_ joined #gluster-dev
11:31 ndevos nigelb: oh, definitely! one repository that works everywhere
11:33 misc nigelb: yeah, but tests failure is the hard stuff to fix :/
11:36 nigelb misc: okay, I didn't get the context.
11:48 ppai joined #gluster-dev
11:49 Manikandan joined #gluster-dev
11:58 prasanth joined #gluster-dev
12:04 Muthu_ joined #gluster-dev
12:14 shyam joined #gluster-dev
12:27 dlambrig_ joined #gluster-dev
12:30 nigelb kkeithley: wait, who should the bug be assigned to if not bugs@gluster.org?
12:31 kkeithley the person who submitted the patch in gerrit
12:31 nigelb ahhh
12:31 nigelb That way.
12:32 nigelb ndevos: Hey, I noticed it before the bug was filed. It was sitting in my inbox.
12:32 nigelb Now it's on a bugzilla list that I review every day :P
12:33 ndevos nigelb: we actually have a whole description on https://public.pad.fsfe.org/p/gluster-automated-bug-workflow - Manikandan should have scripts for that
12:33 nigelb I have the scripts :)
12:33 nigelb and I have that pad.
12:33 ira joined #gluster-dev
12:33 nigelb I'll add more details to the bug so I have these details once I have the account.
12:33 kkeithley otherwise we end up with lots of BZs that are POST/MODIFIED/CLOSED _and_ assigned to bugs@gluster.org. And show up in Niels' bugs with incorrect state email
12:34 kkeithley Muthu_++
12:34 glusterbot kkeithley: Muthu_'s karma is now 6
12:34 nigelb Yeah, now I understand the problem I'm solving.
12:34 ndevos actually, that status email does not complain about the bug owner, that is something jiffin is going to add :)
12:35 Manikandan Muthu_++, thanks for hosting and picking it up next week as well ;-)
12:35 glusterbot Manikandan: Muthu_'s karma is now 7
12:36 kkeithley yes, I understand that the status email isn't complaining about the bug owner, but that's where I see lots of BZs owned by bugs@ and in POST/MODIFIED/CLOSED
12:36 kkeithley and there are one or two frequent offenders
12:37 kkeithley Muthu_++
12:37 glusterbot kkeithley: Muthu_'s karma is now 8
12:37 kkeithley one for taking next week's meeting, one for having hosted this week's
12:38 Muthu_ kkeithley, thanks for being so generous ;-)
12:38 kkeithley lol
12:38 Manikandan kkeithley, Muthu_ :P
12:38 ndevos @karma kkeithley
12:38 glusterbot ndevos: Karma for "kkeithley" has been increased 137 times and decreased 1 time for a total karma of 136.
12:39 ndevos HAH SOMEONE DID A -- ON YOU!!!
12:39 kkeithley Manikandan++  for starting the meeting before Muthu_ arrived
12:39 glusterbot kkeithley: Manikandan's karma is now 60
12:39 kkeithley kkeithley__
12:39 kkeithley kkeithley__--
12:39 glusterbot kkeithley: kkeithley__'s karma is now -1
12:39 kkeithley kkeithley--
12:39 glusterbot kkeithley: Error: You're not allowed to adjust your own karma.
12:39 nigelb lol
12:39 ndevos hehe
12:39 nigelb so change your nick and try again
12:40 ndevos ndevos++
12:40 glusterbot ndevos: Error: You're not allowed to adjust your own karma.
12:40 jiffin ndevos: Yup , but not yet started
12:40 Manikandan kkeithley, so you want me to the say the same ;-) that you are so generous :P
12:40 ndevos I knew a bot that when you did a ++ on yourself, that it actually --'d you :)
12:40 kkeithley you guys crack me up
12:41 Manikandan kkeithley, ;)
12:41 kkeithley It's always good to start a day with a laugh
12:41 Manikandan kkeithley++ too for always making it to the triage meeting
12:41 glusterbot Manikandan: kkeithley's karma is now 137
12:42 kotreshhr left #gluster-dev
12:44 kkeithley_ kkeithley--
12:44 glusterbot kkeithley_: kkeithley's karma is now 136
12:44 ndevos kkeithley__++
12:44 glusterbot ndevos: kkeithley__'s karma is now 0
12:45 ndevos @karma kkeithley
12:45 glusterbot ndevos: Karma for "kkeithley" has been increased 138 times and decreased 2 times for a total karma of 136.
12:45 ndevos @karma kkeithley__
12:45 glusterbot ndevos: Karma for "kkeithley__" has been increased 1 time and decreased 1 time for a total karma of 0.
12:45 ndevos @karma
12:45 glusterbot ndevos: Highest karma: "ndevos" (294), "kkeithley" (136), and "kshlm" (97).  Lowest karma: "(" (-15), "<" (-12), and "-" (-11).  You (ndevos) are ranked 1 out of 188.
12:45 Manikandan @karma
12:45 glusterbot Manikandan: Highest karma: "ndevos" (294), "kkeithley" (136), and "kshlm" (97).  Lowest karma: "(" (-15), "<" (-12), and "-" (-11).  You (Manikandan) are ranked 8 out of 188.
12:45 ndevos Manikandan: hey, still top 10 ;-)
12:46 Manikandan ndevos, yep ;-)
12:46 Muthu_ @karma
12:46 Muthu_ @karma
12:46 glusterbot Muthu_: Highest karma: "ndevos" (294), "kkeithley" (136), and "kshlm" (97).  Lowest karma: "(" (-15), "<" (-12), and "-" (-11).  You (Muthu_) are ranked 43 out of 188.
12:46 Manikandan Muthu_, without a space ;-)
12:46 Manikandan Muthu_, top 50 :P
12:47 Muthu_ Ya :P
12:47 ndevos I guess you can also say ,,(karma) somewhere on the line
12:47 ndevos or maybe that only works in #gluster ?
12:47 ndevos well, not then.
12:48 Manikandan ndevos, hmm
12:48 kshlm Woo! Top 3!
12:48 ndevos I wonder who is #4
12:48 kshlm ndevos, You probably have more karma than all the others combined!
12:49 ndevos no idea how to check that, but 2x as much as #2 is a good headstart
12:49 post-factum @karma
12:49 glusterbot post-factum: Highest karma: "ndevos" (294), "kkeithley" (136), and "kshlm" (97).  Lowest karma: "(" (-15), "<" (-12), and "-" (-11).  You (post-factum) are ranked 25 out of 188.
12:49 post-factum top-25 :)
12:49 ndevos *just*
12:50 kshlm glusterbot help karma
12:50 glusterbot kshlm: (karma [<channel>] [<thing> ...]) -- Returns the karma of <thing>. If <thing> is not given, returns the top N karmas, where N is determined by the config variable supybot.plugins.Karma.rankingDisplay. If one <thing> is given, returns the details of its karma; if more than one <thing> is given, returns the total karma of each of the things. <channel> is only necessary if the message isn't sent on the channel
12:50 glusterbot kshlm: itself.
12:50 ndevos @karma *dev*
12:50 glusterbot ndevos: *dev* has neutral karma.
12:51 ndevos oh, no wildcards?
12:51 ndevos @karma .devos
12:51 glusterbot ndevos: .devos has neutral karma.
12:51 post-factum kkeithley: wanna merge http://review.gluster.org/#/c/14592/ ?
12:51 post-factum @karma ^.*devos$
12:51 glusterbot post-factum: ^.*devos$ has neutral karma.
13:04 shubhendu__ joined #gluster-dev
13:07 kkeithley ndevos: ^^^ didn't you trip over this recently on builds you did in CBS?
13:07 kkeithley what was your resolution?
13:08 * misc refrain from saying 1024 by 768 and giggle to himself for the joke
13:09 ndevos kkeithley: not sure what you mean, trip over what?
13:09 kkeithley [08:51:17] <post-factum> kkeithley: wanna merge http://review.gluster.org/#/c/14592/ ?
13:10 kkeithley glusterfs.spec, packaging S57glusterfind-delete-post.py[co]
13:10 ndevos hmm, no, I do not think so, the packages I built yesterday did not complain...
13:11 post-factum ndevos: without this fix OBS does not build gluster packages for us complaining about unpackaged files
13:11 ndevos post-factum: https://github.com/CentOS-Storage-SIG/glusterfs/blob/sig-storage7-gluster-37/glusterfs.spec is the spec I used
13:12 post-factum ndevos: i use spec produced by autogen && configure
13:12 ndevos post-factum: well, that one gets used for the nightly builds... and they seem to succeed
13:13 kkeithley yeah, but I thought you'd mentioned it failing once, a few weeks ago
13:13 post-factum ndevos: http://review.gluster.org/#/c/14590/ merged
13:13 post-factum ndevos: http://review.gluster.org/#/c/14591/ merged
13:13 rraja joined #gluster-dev
13:14 ndevos post-factum: but http://artifacts.ci.centos.org/gluster/nightly/release-3.7/7/x86_64/ really contains packages from this morning
13:15 kkeithley yes, jdarcy merged those. I haven't decided yet whether to reverse those.  I get the errors building in Koji. Or did at one point
13:17 atinm joined #gluster-dev
13:17 dlambrig_ left #gluster-dev
13:18 ndevos kkeithley: oh, I gave http://review.gluster.org/#/c/14591/ a -1 and YOU still merged it
13:18 ndevos no wonder you had someone do a -- karma on you :P
13:19 ppai joined #gluster-dev
13:19 kkeithley i merged it by accident, and then reverted it.
13:20 ndevos ah, that deserves a -- and ++ then
13:20 kkeithley fat fingered it
13:22 ndevos maybe revert http://review.gluster.org/14590 too then, if that was not reverted with a different change?
13:22 shubhendu__ joined #gluster-dev
13:22 post-factum ndevos: if that breaks our builds, I'll complain
13:25 kkeithley pretty sure 14590 is still in. that's what I was referring to when I wrote that I haven't decided what to do about it, and also why I haven't done anything with 14592 and 14643
13:26 julim joined #gluster-dev
13:27 skoduri joined #gluster-dev
13:28 kkeithley I need to go back and see if Fedora/Koji builds work again without it.
13:29 post-factum ndevos: OBS uses bare 7, not 7.1 or 7.2. maybe, that is the problem
13:30 ndevos post-factum: I dont know, but the fix is ugly, we should backport the proper fix
13:30 post-factum ndevos: is there proper one?
13:30 ndevos post-factum: http://review.gluster.org/14928
13:31 post-factum meh
13:31 post-factum return -ETOOMUCHFIXES;
13:31 ndevos its more like some fixes get merged before getting reviewed, and follow-up fixes are needed
13:33 julim joined #gluster-dev
13:33 post-factum ndevos: phew, usual thing
13:38 jiffin1 joined #gluster-dev
13:39 dlambrig_ joined #gluster-dev
13:56 post-factum ndevos: kkeithley: will recheck both fixes now
13:57 kkeithley ndevos, post-factum: I was seeing the issue on my rhel7 box, which is 7.2.
13:57 kkeithley I don't see the issue when I build in mock/epel-7
13:57 ndevos kkeithley: backport http://review.gluster.org/14928 then :)
13:57 post-factum kkeithley: i'd like to revert 14592 and apply 14928, and see, what happens
13:58 post-factum kkeithley: give me 10 mins :)
14:00 ndevos post-factum: its like, apply 14592, apply 14928, make it one patch and call it a backport of 14928
14:00 hagarth joined #gluster-dev
14:00 post-factum ndevos: ok, but checking with no patches at all first
14:05 post-factum hm
14:05 post-factum builds ok with no patches
14:05 post-factum but that is not 7.2
14:06 post-factum kkeithley: could you recheck 7.2?
14:06 kkeithley I'm in the middle of that right now
14:06 post-factum ah ok
14:06 kkeithley just updated and saw a new rhpkg
14:07 kkeithley although that's not where the borked /usr/lib/rpm/brp-python-bytecompile comes from, so
14:07 kkeithley probably not the fix, if there is one.
14:10 kkeithley still borked in real rhel7.2 without any changes to the .spec file.  The BZ I filed against rpmbuild in RHEL has been updated to indicate that there won't be a fix until 7.4
14:11 shyam joined #gluster-dev
14:12 post-factum shame
14:13 kkeithley /usr/lib/rpm/brp-python-bytecompile comes from rpm-build-4.11.3-17.el7.x86_64
14:14 kkeithley I wonder if I can reconstruct what the el7 build of 3.7.14 used
14:18 msvbhat joined #gluster-dev
14:19 post-factum well, 14592+14928 builds ok for me as well on 7
14:20 kkeithley well, the koji build's root log of the el7 build says it also used rpmbuild-4.11.3-17.
14:22 kkeithley trying mock build on my rhel7 box
14:23 dlambrig_ left #gluster-dev
14:31 wushudoin joined #gluster-dev
14:40 gem joined #gluster-dev
14:42 rraja joined #gluster-dev
14:53 nbalacha joined #gluster-dev
15:01 poornimag joined #gluster-dev
15:03 lpabon joined #gluster-dev
15:08 poornimag nigelb, one of the netbsd regression is hung https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18474/console can you kill the same please?
15:13 nigelb poornimag: on it
15:17 hagarth poornimag: would it be possible to run your regression test failure report on https://build.gluster.org/job/regression-test-burn-in/ as well?
15:17 poornimag hagarth, oh sure, let me try it out
15:18 poornimag nigelb, thanku
15:18 hagarth poornimag: thank you!
15:18 poornimag nigelb++
15:18 glusterbot poornimag: nigelb's karma is now 18
15:18 hagarth poornimag++ - appreciate your effort in sending out that report on gluster-devel
15:18 glusterbot hagarth: poornimag's karma is now 9
15:20 nigelb poornimag: i just rebooted that machine.
15:21 kkeithley oh, wow.  I haven't looked at the longevity cluster in ages
15:21 nigelb we aborted a job on that machine yesterday, I think it got stuck in an inconsistent state.
15:21 ndevos kkeithley: isnt that the point?
15:22 kkeithley I suppose
15:23 poornimag hagarth, wc:) https://paste.fedoraproject.org/399921/47015135/ has the result
15:24 nigelb I think it might be worth doing a 30-day report for this one.
15:24 poornimag hagarth, can run ./extras/failed-tests.py get-summary 4 /job/regression-test-burn-in/  in gluster master to get the report for any number of days
15:24 nigelb Because we only run it once a day?
15:24 poornimag nigelb, ohh is it ok
15:25 poornimag nigelb, thanks, i think the netbsd hang is a bug, i had seen this test case hang before as well, let me check with the dev
15:25 kkeithley It's been running 3.7.8 since 16 Feb.  No OOM kills, no crashes.  I should update it to 3.8.1
15:25 hagarth poornimag: cool, I think we should make these reports available on the fly somewhere
15:26 poornimag hagarth, nigel will be able to better ans this
15:27 nigelb hagarth: yeah, I'm working on that.
15:27 hagarth nigelb++ poornimag++ awesome, thank you!
15:27 glusterbot hagarth: nigelb's karma is now 19
15:27 glusterbot hagarth: poornimag's karma is now 10
15:28 poornimag hagarth, tried to get the dashbord on jenkins, couldn't. So i guess will have a jenkins job to run this tool every fortnight and report the same
15:28 hagarth poornimag: ah ok
15:29 nigelb hagarth: something we want to think of when we explore more with distaf testing is how to output it so that software can read failures.
15:37 pranithk1 joined #gluster-dev
15:38 pranithk1 nigelb: I see smoke is failing because "/tmp/hudson5762272422260944406.sh: line 2: /opt/qa/jenkins/scripts/glusterfs-rpms.sh: No such file or directory" in https://build.gluster.org/job/devrpm-el7/571/console
15:38 pranithk1 nigelb: came across this problem before?
15:46 penguinRaider joined #gluster-dev
15:49 nigelb pranithk1: that looks like a problem on that machine. Give me a sec.
15:54 nigelb pranithk1: the /opt/qa folder wasn't updated. I'm fixing that.
15:54 pranithk1 nigelb: yep
15:55 nigelb I had to stop your regression running on that box and take it offline though.
15:55 nigelb I've retriggered it again.
15:55 nigelb it was weird.
15:55 nigelb git said success
15:56 nigelb but didn't actually do anything.
16:00 nigelb wow.
16:01 nigelb this is the weirdest thing I've seen.
16:07 pranithk1 nigelb: No worries, we have till morning for the regressions to pass :-)
16:07 ndevos well, I have a very weird issue too now... mounting my /tftpboot volume over NFS gets me EPERM when trying to read files as non-root, but permissions are 0644
16:07 pranithk1 nigelb: I triggered smoke as well
16:08 nigelb cheers
16:08 nigelb I'm trying to clone
16:08 nigelb git thinks it's successful
16:08 nigelb but no data written
16:09 nigelb as filesystem devs, where do you think I should start looking? :)
16:10 ndevos well, what reason could there be to prevent non-root from accessing files on a mountpoint?
16:11 ndevos nope, not selinux - and permissions of the files are *really* correct
16:11 * ndevos blames Gluster/NFS, but can not imagine why this suddenly pops up
16:11 nigelb are any flags that could do that?
16:11 nigelb *are there
16:11 ndevos checked ACLs, all should be fine
16:12 nigelb like chattr flags
16:12 ndevos and, the same volume over fuse just works... its just that the system with dnsmasq/tftp mounts over nfs
16:12 ndevos no, we dont support chatter over fuse/nfs :)
16:13 ndevos I was only trying to re-install some VMs for testing... sigh
16:13 pranithk1 nigelb: I guess I am too sleepy to understand anything :-). I will hit the bed.
16:13 pranithk1 ndevos: cya dude
16:13 ndevos bye pranithk1!
16:14 ndevos nigelb: for your git checkout, filesystem full?
16:16 ndevos http://termbin.com/ct31 for the people reading along
16:16 kkeithley ndevos, post-factum: wrt 14592 and friends. I only have an issue on non-mock builds on RHEL7.  It works as is in mock on rhel7 and fedora. It works as is in koji. it works as is in CBS.
16:16 kkeithley I'm inclined to abandon that
16:17 ndevos kkeithley: you should do mock builds anyway
16:18 post-factum kkeithley: no backports then?
16:24 kkeithley ndevos: sure, but for a "quick" sanity test mock is overkill
16:25 kkeithley takes 3-4x longer
16:25 kkeithley post-factum: backports being http://review.gluster.org/14592 and http://review.gluster.org/14643. Yes, I'm proposing to abandon those
16:25 ndevos kkeithley: you need a faster machine then, or backport 14928
16:26 post-factum kkeithley: i mean, will 14928 be backported to 3.7?
16:26 kkeithley yes, someone should do a backport of http://review.gluster.org/#/c/14928 to 3.8 and 3.7
16:30 shubhendu__ joined #gluster-dev
16:32 nigelb ndevos: I checked, no. I'm also calling it a night.
16:32 nigelb Perhaps I'll get some ideas when I wake up.
16:34 cholcombe joined #gluster-dev
16:44 msvbhat joined #gluster-dev
16:55 jiffin joined #gluster-dev
17:05 overclk joined #gluster-dev
17:06 julim joined #gluster-dev
17:25 amye kkeithley: Should we put up a post on -devel about the repo signing keys moving and why? Seeing some :( on twitter about it
17:42 kkeithley repo signing keys moving?
17:43 kkeithley I proposed that we should have a new signing key for 3.9, and a new key for each release there after. Is that what you're referring to?
17:44 kkeithley nobody reacted, unfavorably or otherwise.
17:45 kkeithley I didn't see anything on twitter, but maybe I don't follow the right people
17:45 kkeithley I would eventually post to -devel about it. Yesterday was just a feeler
17:46 mchangir joined #gluster-dev
17:46 amye https://twitter.com/nathwhal/status/760263918618943488
17:47 amye I think it's people we don't normally see. :)
17:49 kkeithley keys themselves haven't moved? Not sure wtf they're whining about. Maybe they'd like to take over the packaging!
17:50 * amye laughs
17:50 amye Ok, I wasn't sure
17:52 kkeithley I guess it's more satisfying to flame on twitter than to post a nice question, e.g. here, if they think something is broken or missing
17:54 amye They may not always know where we live
17:54 amye But ok, if the keys aren't moved, I may ask for more info
17:56 kkeithley well, TBH, I neglected to create all the symlinks in the EPEL repo yesterday.  fixed it this am. maybe that's their bitch.
17:57 kkeithley not safe to assume that everyone follows @gluster.  I'll have to check whether I do or not
17:59 amye Oh, that may be the issue
18:00 amye Again, not specific in twitter, but I can respond with 'sorry, symlinks' :D
18:00 kkeithley EPEL has always been a pain. That's why we've moved to CentOS Storage SIG.  Much cleaner.
18:01 kkeithley Anyone using EL really should switch.
18:03 jiffin joined #gluster-dev
18:04 kkeithley Just use the V word. That usually shuts them up.
18:07 kkeithley We were trying to be nice and not force anyone to switch until 3.8, but maybe I should take down our 3.7 repo now that the Storage SIG is running smoothly.
18:08 amye As long as we have that documented, and redirected effectively, that should be fine.
18:10 kkeithley heh. The "redirect" for 3.8 is a README on d.g.o. I propose to do that same thing for 3.7. Don't make me use the V word.
18:12 kkeithley for starting the day off with a laugh, I sure got grumpy in a hurry
18:12 amye I mean, sounds like a personal problem
18:12 amye But a readme, as long as it's consistent - is fine.
18:18 kkeithley we can discuss it in the community meeting tomorrow
18:20 amye That's a 5am meeting for me, so no.
18:21 amye But happy to see an outcome on this.
18:30 kkeithley by "we" I meant "the community"
18:30 amye :D
18:31 amye Since it was you and I chattering about it, figured I'd be clear that it wasn't going to be a long conversation
18:31 julim joined #gluster-dev
18:31 kkeithley that's why I clarified
18:32 amye All good
18:33 kkeithley when I realized I had been ambiguous
18:34 kkeithley about who would be having the discussion
18:46 penguinRaider joined #gluster-dev
19:06 julim joined #gluster-dev
19:18 Acinonyx joined #gluster-dev
19:24 msvbhat joined #gluster-dev
21:44 hagarth joined #gluster-dev
23:03 julim joined #gluster-dev
