
IRC log for #gluster-dev, 2015-05-29


All times shown according to UTC.

Time Nick Message
01:28 dlw joined #gluster-dev
01:36 dlw left #gluster-dev
01:45 hagarth joined #gluster-dev
02:15 ira joined #gluster-dev
02:29 kdhananjay joined #gluster-dev
03:05 pranithk joined #gluster-dev
03:15 overclk joined #gluster-dev
03:56 sakshi joined #gluster-dev
03:57 itisravi joined #gluster-dev
04:03 spandit joined #gluster-dev
04:17 spalai joined #gluster-dev
04:51 rjoseph joined #gluster-dev
05:00 ndarshan joined #gluster-dev
05:01 ashishpandey joined #gluster-dev
05:09 hgowtham joined #gluster-dev
05:10 ndarshan joined #gluster-dev
05:13 ashiq joined #gluster-dev
05:14 Apeksha joined #gluster-dev
05:14 nkhare joined #gluster-dev
05:14 shyam1 joined #gluster-dev
05:16 gem joined #gluster-dev
05:25 Gaurav_ joined #gluster-dev
05:29 Manikandan joined #gluster-dev
05:31 rafi joined #gluster-dev
05:34 kdhananjay joined #gluster-dev
05:36 anekkunt joined #gluster-dev
05:52 gem joined #gluster-dev
05:53 spalai joined #gluster-dev
06:18 ashiq joined #gluster-dev
06:22 vimal joined #gluster-dev
06:23 pranithk joined #gluster-dev
06:29 atalur joined #gluster-dev
06:36 spalai joined #gluster-dev
07:07 kshlm joined #gluster-dev
07:09 kshlm ndevos: http://review.gluster.org/10815 is ready for merge.
07:53 anrao joined #gluster-dev
08:37 ndevos hchiramm_: it seems that some of the new docs are not in sync with the wiki pages... is there a plan to correct that?
08:38 ndevos I just noticed that with the backporting guidelines
08:38 ndevos http://www.gluster.org/community/documentation/index.php/Backport_Guidelines
08:38 ndevos http://gluster.readthedocs.org/en/latest/Developer-guide/Backport%20Guidelines/
08:39 ndevos the wiki page "was last modified on 6 May 2015, at 11:03"...
08:40 ndevos hchiramm_: I think it should be doable to look into the mediawiki database and fetch last modification dates of pages
08:59 aravindavk joined #gluster-dev
09:20 pranithk msvbhat: https://bugzilla.redhat.com/show_bug.cgi?id=1212110
09:20 glusterbot Bug 1212110: high, unspecified, ---, kdhananj, ASSIGNED , bricks process crash
09:27 hchiramm_ ndevos, checking
09:28 hchiramm_ http://gluster.readthedocs.org/en/latest/Developer-guide/Backport%20Guidelines/
09:28 hchiramm_ it looks like all the contents are there
09:28 hchiramm_ or Am I missing something there
09:30 Gaurav_ joined #gluster-dev
09:33 ndevos hchiramm_: the contents is there, but it is old?
09:34 hchiramm_ I see the same content which is in the media wiki
09:35 ndevos hchiramm_: what about step 4 from the wiki?
09:35 * hchiramm_ checking
09:35 * ndevos really does not see it in the readthedocs version...
09:36 * hchiramm_ neither me :)
09:36 ndevos also 5 step vs 9 seems different
09:36 hchiramm_ I will check with schandra
09:36 hchiramm_ any way patches are welcome :)
09:37 ndevos hchiramm_: yes please, and make sure to find a way to check the last update time/date from the wiki and the github repo
09:37 hchiramm_ yep
09:37 ndevos hchiramm_: anything that was copied to the github repo, but was not marked as "moved to .." in the wiki might be old :-/
09:38 ndevos I'm also not sure if anything was marked as "moved to .."?
09:38 ndevos and, the mediawiki should really be made read-only at one point?
09:39 hchiramm_ it should be at this point ..
09:39 hchiramm_ I dont know who can put a banner saying
09:39 hchiramm_ the documentation is maintained in another location
09:39 hchiramm_ and this is READ ONLY
09:44 ndevos hchiramm_: it is not, see the 1st line on http://www.gluster.org/community/documentation/index.php/Backport_Guidelines
09:44 ndevos hchiramm_: I also have received several wiki page updates the last week, so others are still editing things too...
09:45 hchiramm_ true.. we need to put a banner asap
09:46 ndevos hchiramm_: not only a banner, make it read-only!
09:46 hchiramm_ yeah
10:03 anrao gem++
10:03 glusterbot anrao: gem's karma is now 11
10:23 hchiramm_ ndevos, r u able to edit any pages in mediawiki ?
10:28 poornimag joined #gluster-dev
10:29 ppai joined #gluster-dev
10:37 rafi1 joined #gluster-dev
10:37 ndevos hchiramm_: got a wiki page I should try? I could edit the backport-guidelines earlier
10:38 hchiramm_ I changed some settings in wiki
10:43 pranithk xavih: when shall we talk about open-fd-self-heal?
10:45 ndevos hchiramm_: I just removed "TEST: is this read-only?" from the backport guidelines and saved the page, no problem at all
10:49 Gaurav_ joined #gluster-dev
10:59 ndevos overclk: oh, just a reminder, we're only supposed to merge regression-test-fixes for 3.7
11:03 poornimag joined #gluster-dev
11:07 kanagaraj joined #gluster-dev
11:08 hagarth joined #gluster-dev
11:23 rafi1 joined #gluster-dev
11:25 ira joined #gluster-dev
11:25 hchiramm_ ndevos, ahhhhh.. thanks :(
11:48 pranithk left #gluster-dev
11:56 anekkunt joined #gluster-dev
12:10 overclk ndevos, oops. thanks for reminding.
12:14 rjoseph joined #gluster-dev
12:27 hchiramm_ ndevos++ thanks!
12:27 glusterbot hchiramm_: ndevos's karma is now 139
12:30 ndevos hchiramm_++ thanks :)
12:30 glusterbot ndevos: hchiramm_'s karma is now 10
12:30 pranithk joined #gluster-dev
12:33 pranithk ndevos: wassup?
12:33 ndevos hey pranithk!
12:33 pranithk ndevos: hey!
12:33 ndevos pranithk: having weekend yet?
12:34 pranithk ndevos: There is rain. In office. Can't go home :-(
12:34 ndevos hagarth: care to merge http://review.gluster.org/10808 ? Or shall I?
12:34 pranithk ndevos: I will merge?
12:34 ndevos pranithk: oh, its not raining here at the moment, but very windy
12:35 pranithk ndevos: Won't this patch conflict all the other patches that are out there?
12:35 ndevos pranithk: you can merge it too, if you +2 it :)
12:35 ndevos I would +2 it too, but its always weird to do that on your own patches
12:36 pranithk ndevos: I think people will hate me if I merge it because their patches may conflict?
12:36 ndevos pranithk: I do not think there would be any conflicts? it only affects the first #include lines of files
12:37 pranithk ndevos: And you think people do not do any #include in their new patches? :-P. Is any patch blocking on this?
12:37 ndevos one line added, 1950 deleted, I think that is a nice cleanup :)
12:38 pranithk ndevos: No questions there! it is amazing
12:38 ndevos pranithk: very little #include's are added near "#include config.h", they normally get added much further below
12:39 pranithk ndevos: for once I understood build related changes :-)
12:39 pranithk ndevos: okay I will merge
12:39 ndevos pranithk: lol
12:39 ndevos pranithk++ oh, thanks!
12:39 glusterbot ndevos: pranithk's karma is now 18
12:39 ndevos pranithk: now you know how I feel when I understand something about AFR :D
12:40 pranithk ndevos: I wonder how many people will kill me if I merge it though :-)
12:40 pranithk ndevos: hehe
12:40 ndevos pranithk: you may send them to me
12:40 pranithk ndevos: Top most guy would be my manager :-P
12:40 pranithk ndevos: I will have to resubmit all the patches which will conflict :-D
12:41 kkeithley Do we need to send in some body guards?
12:41 pranithk ndevos: hmm... tough decision I must say
12:42 ndevos pranithk: I do not think there should be any patches that conflict, and if so, I'm happy to update those too
12:43 kkeithley you really want to touch 1900+ files?
12:43 pranithk ndevos: alright, we have a busy weekend :-P. I am merging
12:43 ndevos no, not that many files, 1950 lines, about 5 lines per file
12:43 pranithk ndevos: done!
12:43 pranithk ndevos: we are in for a ride :-)
12:43 kkeithley oh well, it's done
12:44 ndevos pranithk++ should I order you a pizza now? or do you think you can leave the office soon?
12:44 glusterbot ndevos: pranithk's karma is now 19
12:44 pranithk ndevos: NetBSD tests are taking a while dude, why so less number of slaves for NetBSD?
12:44 ndevos kkeithley: you took the safe route, only +1'd it
12:44 kkeithley I'm a coward
12:44 lpabon joined #gluster-dev
12:45 kkeithley I only +1ed it and suggested an alternative
12:45 ndevos pranithk: I dont know... I sometimes check/restart the NetBSD slaves, maybe something is stuck somewhere again?
12:45 kkeithley going off-line for a bit while I fedup my dev box
12:45 pranithk ndevos: kkeithley: Nothing to worry I guess. Because gerrit clearly said it conflicts with only one change i.e. new logging framework for glusterd.
12:45 ndevos cya kkeithley!
12:45 kkeithley biab
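[editor's note: the cleanup being merged above (http://review.gluster.org/10808) drops a repeated config.h include block from each source file. A change of that shape can be scripted roughly as below — a sketch only, assuming the guard is exactly the three lines shown; the blocks in the real tree may differ slightly.]

```shell
# Scratch setup so the sketch is self-contained (hypothetical file content):
src=$(mktemp -d)
printf '#ifdef HAVE_CONFIG_H\n#include "config.h"\n#endif\n\nint main(void) { return 0; }\n' > "$src/a.c"

# The cleanup itself: strip the assumed three-line guard from every .c file
# that still carries it.
for f in $(grep -rl '#include "config.h"' --include='*.c' "$src"); do
    sed -i '/#ifdef HAVE_CONFIG_H/{N;N;d}' "$f"
done

cat "$src/a.c"
```

The compensating change would be a single global `-include config.h` compiler flag so every file still sees the autoconf results (an assumption consistent with kkeithley's later `--include config.h` remark).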
12:46 rafi joined #gluster-dev
12:46 ira joined #gluster-dev
12:47 pranithk ndevos: what are you working on shall we do that Automatic moving of bug to POST now?
12:48 ndevos pranithk: hmm, there was something I wanted to do, but I got distracted from that - and now I do not remember anymore
12:48 ndevos pranithk: sure, lets discuss the bug thingy
12:49 pranithk ndevos: So in rfc.sh in editor_mode() function it does the enter bug-id etc
12:49 pranithk ndevos: we need that bug-id
12:50 ndevos pranithk: getting the BUG is easy, that should be in the commit message
12:50 pranithk ndevos: it is already there
12:50 pranithk ndevos: in main()
12:51 pranithk ndevos: line 151
12:51 ndevos pranithk: well, I do not want to change the bug status from checkpatch.pl
12:51 pranithk ndevos: there we want to check if bugzilla-cli is present and if it is we will ask the user if he wants to move the bug to POST
12:51 ndevos pranithk: my preference would be to do it when Gerrit receives the patch
12:52 pranithk ndevos: rfc.sh line 151
12:52 pranithk ndevos: that is when the patch is posted to gerrit
12:53 ndevos pranithk: yes, but Gerrit already posts the url to the patch in bugzilla, at that time, it can also change the status/assignee
12:53 pranithk ndevos: You want gerrit to do it :-)?
12:53 pranithk ndevos: that would be better
12:53 ndevos pranithk: yes, I think that is much nicer
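[editor's note: whichever side ends up triggering it (rfc.sh or Gerrit), the status change itself can be done with the python-bugzilla CLI discussed here. A dry-run sketch — the real call is commented out, and the flag spelling should be checked against `bugzilla modify --help` on the installed version:]

```shell
# Move a bug to POST once its patch reaches Gerrit (dry-run sketch).
move_bug_to_post()
{
    bugid="$1"
    # Real invocation, left commented so the sketch is side-effect free:
    #   bugzilla modify --status=POST "$bugid"
    echo "bugzilla modify --status=POST $bugid"
}
move_bug_to_post 1212110
```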
12:54 pranithk ndevos: what if that is not the last patch?
12:54 pranithk ndevos: brb 1 minute
12:54 ndevos sure, I'll get a coffee
12:58 pranithk ndevos: back
13:00 ndevos pranithk: me too!
13:01 pranithk ndevos: Yes, ./rfc.sh should ask if the patch we are submitting is last one or not
13:01 * ndevos also unwrapped his new sports shoes, and is now wearing them on his fluidstance
13:01 pranithk ndevos: oho!!!
13:01 pranithk ndevos: I bought new head phones :-)
13:01 pranithk ndevos: if it is the last patch it should remember that in the commit message
13:02 ndevos pranithk: what do you listen?
13:02 pranithk ndevos: It is mostly for noise cancelling :-D
13:02 spalai joined #gluster-dev
13:02 ndevos pranithk: hmm, figuring out the last patch is always difficult
13:02 pranithk ndevos: That is why the onus is on the user hahah!
13:03 ndevos pranithk: right, I remember, you wanted to add a "Last-patch-in-series: Yes" tag, or something
13:03 pranithk ndevos: yes!
13:05 ndevos pranithk: how about the inverse? only tag something of which we know is not the last patch?
13:05 pranithk ndevos: Either way is fine :-)
13:05 ndevos pranithk: do you have an idea what kind of patches are sent most? 1 per bug, or more per bug?
13:05 pranithk ndevos: one per bug
13:06 pranithk ndevos: Either way it should not matter, should it?
13:07 ndevos pranithk: okay, adding a tag for the exception case would be nicesy
13:07 ndevos *nicest
13:07 pranithk ndevos: we are going to remove it at the time of merging dude
13:07 ndevos pranithk: oh, are we?
13:08 pranithk ndevos: Let us remove it.
13:08 ndevos pranithk: yes :)
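[editor's note: a submit-time or merge-time check for the tag could be as simple as the sketch below; "Last-patch-in-series:" is the tag name proposed in this chat, not an existing convention. rfc.sh would feed it `git log -1 --format=%B`.]

```shell
# Return success if the commit message on stdin carries the proposed tag.
is_last_patch()
{
    grep -qi '^Last-patch-in-series: *Yes'
}

printf 'some fix\n\nBUG: 1212110\nLast-patch-in-series: Yes\n' | is_last_patch \
    && echo "last patch in series"
```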
13:08 * ndevos just did not think that far yet
13:08 pranithk ndevos: I have a BIG solution
13:08 pranithk ndevos: shall I tell you the whole thing?
13:08 ndevos pranithk: you think of EVERYTHING
13:08 ndevos pranithk: yes please!
13:09 ndevos pranithk: if you have written it down, paste it in an etherpad
13:09 ndevos pranithk: maybe we should put it in an etherpad anyway
13:10 ndevos pranithk: lets use https://public.pad.fsfe.org/p/gluster-automated-bug-workflow
13:11 pranithk ndevos: yes
13:13 pranithk ndevos: I don't like the colour :-( It affects my eyes :-/
13:13 ndevos pranithk: oh, ok
13:13 pranithk ndevos: remove that pink :-) It is flashing on eyes :-P
13:14 pranithk ndevos: perfect
13:14 pranithk ndevos: blue was good
13:14 ndevos pranithk: this?
13:14 shyam joined #gluster-dev
13:15 shubhendu joined #gluster-dev
13:16 anrao_ joined #gluster-dev
13:18 pranithk ndevos: when do we move the bug to ON_QA? nightly build or release?
13:19 ndevos pranithk: either works for me
13:20 pranithk ndevos: Yes that is what we want. The only thing dev needs to worry about is moving from NEW to assigned when he starts working on it
13:21 pranithk ndevos: why checkpatch.pl is needed? should it be rfc.sh?
13:21 pranithk ndevos: in tools needed section
13:21 pranithk ndevos: I am not able to edit it any more
13:21 Joe_f joined #gluster-dev
13:21 pranithk ndevos: Please make it as both of our solution.
13:22 ndevos pranithk: oh, yes rfc.sh
13:22 ndevos pranithk: refresh the page? maybe your connection got dropped
13:22 pranithk ndevos: oh
13:23 pranithk ndevos: Done
13:23 pranithk ndevos: what do you think?
13:24 pranithk ndevos: Do we want to clone the bugs?
13:24 pranithk ndevos: yeah we want to clone the bugs
13:24 pranithk ndevos: Is this do-able? I mean easy?
13:25 ndevos pranithk: how do you mean, clone the bugs, when?
13:25 pranithk ndevos: ignore ignore
13:25 atinmu joined #gluster-dev
13:25 * ndevos ignores
13:25 kkeithley joined #gluster-dev
13:25 pranithk ndevos: so?
13:25 ndevos kkeithley: welcome F22!
13:26 pranithk ndevos: what do you think?
13:26 ndevos pranithk: think about the BIG solution? yes, that looks good
13:26 ndevos pranithk: we're missing at least 2 things
13:26 pranithk ndevos: oh, what?
13:26 kkeithley f22 hypmotized
13:27 ndevos pranithk: 1. assigning bugs, 2. bugs with the tracking keyword
13:27 pranithk ndevos: assigning is manual :-/
13:27 ndevos pranithk: why?
13:28 pranithk ndevos: Assigned generally means you are working on it.
13:28 ndevos pranithk: I would suggest that the developer sending the 1st patch gets assigned
13:28 pranithk ndevos: ah! thats not bad :-)
13:32 pranithk ndevos: I am telling you, if we automate this, it will be amazing!
13:32 kkeithley heh, terminal is back in System Tools. Will they ever make up their minds
13:33 pousley joined #gluster-dev
13:34 pranithk ndevos: dude! whats happening?
13:34 kkeithley is hagarth still out sick? Is that why 3.7.1 didn't happen? Or some other reason?
13:35 ndevos pranithk: why, what happened?!
13:35 pranithk ndevos: So how and by when shall we do it?
13:35 ndevos pranithk: I am wondering how to do the MODIFIED -> ON_QA part
13:35 pranithk ndevos: ah! what script do we use to make build?
13:36 ndevos pranithk: https://forge.gluster.org/bugzappers/nightly-builds/trees/master
13:37 pranithk ndevos: what do we mention in the .spec? changelog?
13:37 ndevos pranithk: but maybe we should move that to a different workflow too?
13:37 pranithk ndevos: no no, that also I thought about :-)
13:38 ndevos pranithk: like -> tag in git, push to gerrit -> gerrit triggers jenkins nightly build -> jenkins moves bugs to ON_QA
13:38 pranithk ndevos: In changelog we can add the bugs we fixed. For all the bugs which are in MODIFIED state we can move to ON_QA
13:39 pranithk ndevos: Are you saying we can automate that as well? hmm... thinking...
13:39 ndevos pranithk: the nightly builds are generated from the git repository, we have full access to the git-log - patching the %changelog for the RPM is not done yet
13:39 kkeithley hang on, move to ON_QA just for a nightly build?
13:39 pranithk kkeithley: you don't like it
13:39 pranithk kkeithley: ?
13:40 kkeithley I thought we only did that after a beta release?
13:40 ndevos kkeithley: yeah, that would be possible too
13:40 kkeithley Or do we have enough QA now that they're testing nightlies?
13:40 pranithk kkeithley: Not a big deal. We can do either.
13:40 pranithk kkeithley: at least EC qe in redhat did test nightlies
13:41 ndevos kkeithley: I do not know how much QA is doing, but I have had some RH managers moving bugs to ON_QA because a nightly build was available
13:41 kkeithley they did that in the run up to 3.7.0?
13:41 pranithk kkeithley: yes
13:42 kkeithley I'm just raising it as a question. What does ON_QA really mean. If we decide it means that a nightly build with it exists, then okay
13:42 ndevos sometimes QA even moved them to VERIFIED 8-)
13:42 pranithk kkeithley: yes, that is what we mean :-)
13:42 ndevos kkeithley: ON_QA for me means that it is ready for testing by non-developers
13:42 kkeithley to me it means there's either an {alpha,beta,etc} or someone in QA is actively testing that change.
13:43 ndevos community QA is rather difficult to quantify
13:44 kkeithley Okay, I just want to be clear, for me, what it means.
13:45 pranithk ndevos: What is the story for moving to ON_QA? if we take input as previous tag + new-tag as inputs, then we run git log figure out all the bugs that are in MODIFIED state then move them to ON_QA
13:45 ndevos kkeithley: I did share your opinion on that, and I am happy to only MODIFIED -> ON_QA when there is a tagged release
13:46 ndevos pranithk: yes, that is more or less doable
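[editor's note: the previous-tag/new-tag step pranithk outlines could look like the sketch below — collect the unique bug IDs named between two release tags, then check each for MODIFIED before moving it to ON_QA. Tag names are examples, and it assumes the usual "BUG: NNNNNNN" commit-message line.]

```shell
# Extract unique bug IDs from commit messages on stdin, e.g.
#   git log v3.7.0..v3.7.1 --format=%B | bug_ids
bug_ids()
{
    sed -n 's/^BUG: *\([0-9][0-9]*\).*/\1/p' | sort -u
}

printf 'BUG: 1226307\nfix text\nBUG: 1212110\nBUG: 1226307\n' | bug_ids
```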
13:46 pranithk ndevos: kkeithley: These are just inputs to the tool we will be generating :-) We can decide this in one of the mail chains on gluster-dev or something.
13:46 pranithk ndevos: Then we are done! 8-)
13:46 pranithk ndevos: more like O:-)
13:46 ndevos kkeithley: oh, I guess you missed the etherpad link? https://public.pad.fsfe.org/p/gluster-automated-bug-workflow
13:47 ndevos pranithk: \o/
13:48 ndevos pranithk: just added a not for the Tracking keyword, thats important to keep an eye on too
13:48 ndevos *note
13:49 pranithk ndevos: got it sir!
13:49 ndevos pranithk: ON_QA only for releases, not nightly builds?
13:50 pranithk ndevos: I would like nightly builds personally. Why hold it off till alpha/beta when you are already done with it. But will people know the sequence of nightly with alpha beta?
13:51 rafi ndevos: one doubt about this "if the bug is not assigned to anyone (or bugs@gluster.org) yet, Gerrit will assign the bug to the developer" > what happens if the email id is not registered with bugzilla ?
13:51 ndevos pranithk: I dont have a strong opinion about it
13:51 ndevos rafi: yeah, that will be a problem to solve
13:51 pranithk ndevos: bugzilla-cli doesn't know it?
13:51 pranithk ndevos: I mean if the user is registered or not...
11:52 ndevos rafi: we probably need to use the .mailcap file or the extras/...something file for checking other email addresses
13:52 pranithk ndevos: Is that manual?
13:52 ndevos pranithk: I guess we can just try to assign to the email of the author of the patch
13:52 pranithk ndevos: If there is automated way of figuring out let the tool do the job?
13:52 ndevos pranithk: everything should be scriptable, we just need to update that .mailcap file in the repo
13:53 pranithk ndevos: Hmm...
13:53 ndevos pranithk: check your glusterfs sources, that file exists :)
13:53 pranithk ndevos: cool
13:53 pranithk ndevos: :-) I don't know much about this part of code :-D
13:53 ndevos actually, I do have an update for that file already
13:53 rafi ndevos: ok
13:53 ndevos pranithk: it is not code, it is more release-management
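[editor's note: the file in the glusterfs sources is most likely git's `.mailmap` (presumably what ".mailcap" refers to above), which maps alternate commit addresses to a canonical one. A hypothetical entry, with made-up names and addresses:]

```
# Canonical name and (bugzilla-registered) address, then the alternate
# address that appears on commits:
Jane Developer <jane@example.com> <jdev@example.org>
```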
13:53 pranithk ndevos: Then we solved the problem in theory? just implementation left?
13:54 ndevos pranithk: yes, I think the design looks good
13:54 lpabon joined #gluster-dev
13:55 ndevos rafi: do you see any other difficulties?
13:55 pranithk ndevos: alright! this takes less effort to get done than making people like me disciplined :-P
13:56 gem joined #gluster-dev
13:56 rafi ndevos: nop, i'm reading it and trying to understand the complete picture
13:56 ndevos pranithk: hmm, yes, but just do not merge patches in the wrong order?
13:56 * ndevos tries to think of things that could go wrong
13:57 pranithk ndevos: It is only momentary glitch.
13:57 pranithk ndevos: eventually it will be consistent
13:57 pranithk ndevos: We are going for eventual consistency...
13:57 ndevos rafi: ask when something is not completely clear, we will send this out to the list when we agree it is fine
13:57 ndevos pranithk: AFR!
13:57 pranithk ndevos: well more like geo-rep
13:58 rafi ndevos: ya sure
13:58 shyam joined #gluster-dev
13:58 ndevos pranithk: yes, I was joking :P
13:58 pranithk ndevos: :-)
14:00 ndevos pranithk: there is also a difficulty with some bugs
14:01 ndevos pranithk: there are bugs that have multiple patches, but some of them are not merged yet when a release is made
14:01 ndevos pranithk: I think we need a guard check, MODIFIED -> POST for bugs that get additional patches?
14:01 pranithk ndevos: they won't be in MODIFIED state
14:02 pranithk ndevos: why will they be in MODIFIED?
14:02 ndevos pranithk: sometimes they might be: patch -> POST -> merged -> MODIFIED -> ooops, needs additional patch -> *bang*
14:02 Joe_f joined #gluster-dev
14:03 ndevos pranithk: I am *sure* you have seen those before too?
14:03 ndevos well, maybe not, if you did not pay attention to the status of bugs
14:03 pranithk ndevos: I have done those before :-D
14:03 ndevos pranithk: hehe
14:04 ndevos pranithk: that can happen for MODIFIED and ON_QA bugs, right?
14:05 rafi ndevos: pranithk : may be we can update our developer guide and encourage people to use one e-mail id
14:05 pranithk ndevos: yeah, it did :-(
14:05 rjoseph Anybody facing issues with gluster in F22 (gcc5) ?
14:05 rafi pranithk: ndevos : it could help for new developers
14:06 ndevos rafi: oh, that would be good, suggest to post patches with the email address that they have in bugzilla
14:06 pranithk ndevos: yes
14:06 pranithk rafi: yes
14:06 ndevos rafi: and, if they really want to post with an other address, we need the mapping in .mailcap
14:06 rafi ndevos: yes,
14:07 pranithk ndevos: rafi: All of this is necessary if the bugzilla-cli tool can't detect if the email-id has bugzilla account or not
14:08 ndevos pranithk: I think it will just fail if you try to assign a bug to an email address that bugzilla does not know
14:10 pranithk ndevos: thats it then :-)
14:10 pranithk ndevos: lets talk about implementation already :-D
14:10 ndevos rjoseph: I think kkeithley fixed some gcc5 issues, or at least knows more about them
14:10 soumya joined #gluster-dev
14:11 ndevos pranithk: sure, lets see what we need to get done
14:11 pranithk ndevos: 1) change in rfc.sh about if it is the last change or not and adding it to the commit-description
14:11 * ndevos copies that into the etherpad
14:11 pranithk ndevos: oh let me write that in etherpad then
14:12 rjoseph ndevos: I fixed few cases in changelog locally, then anoopcs told me he already has a patch...
14:13 rjoseph Now I am seeing more issues in other components as well
14:13 rjoseph e.g. dht
14:13 rjoseph kkeithley: Are you sending any patch for F22 problems?
14:14 ndevos rjoseph: I do not know, I did not follow it yet, and I am not on F22 yet either - ask me next week again ;-)
14:14 pranithk ndevos: thats it right?
14:15 rjoseph ndevos: Sure ;-)
14:15 kkeithley I'm not aware that we have any problems on f22 or that are specific to f22.
14:15 pranithk ndevos: brb
14:15 ndevos pranithk: I would like to have all the bug changing done by gerrit, that is not clear from the current notes
14:16 pousley_ joined #gluster-dev
14:16 rjoseph kkeithley: external linkage for many inline functions are not getting generated
14:16 kkeithley we've had builds of all the releases on f22 for d.g.o, and 3.6.x is in f22 dnf repos
14:16 rjoseph which is leading to undefined symbol error
14:16 kkeithley rjoseph: oh?
14:17 ndevos pranithk: okay, I'll get an other coffee too then
14:18 rjoseph kkeithley: yes, because of which I am unable to start a volume or mount it
14:18 kkeithley aha.
14:18 kkeithley okay
14:18 ndevos kkeithley, rjoseph: we really need some automatic installation and minimal functionality testing...
14:18 rjoseph kkeithley: I am not sure if its only my setup or other people are also facing the same issue. anoopcs also encountered the same
14:19 kkeithley 3.6.x or 3.7.0?
14:20 kkeithley I guess I'll try both.
14:20 rjoseph I am checking latest master build from source
14:20 pranithk ndevos: Moving to ON_QA can't be done by gerrit? or can it be?
14:21 rjoseph ndevos: yes, we really need that one. At least we could test that before releasing a build
14:22 ndevos pranithk: no, Gerrit does not know when a nightly build gets done
14:23 pranithk ndevos: yes, so except MODIFIED->ON_QA rest can be automated by gerrit.. right?
14:23 ndevos pranithk: Gerrit does know when a release is made (tag added), so we could use that
14:23 pranithk ndevos: You know more about that part! feel free to change it :-)
14:24 ndevos pranithk: MODIFIED -> ON_QA would then be done in the nightly-build script
14:25 pranithk ndevos: I better get home. I will come online in say 2 hours?
14:25 wushudoin joined #gluster-dev
14:26 ndevos pranithk: I have a squash tournament tonight, so I will not be online more than 2:30 hours from now
14:26 vimal joined #gluster-dev
14:26 ndevos pranithk: if at all, more like 2 hours and I'll be afk
14:26 pranithk ndevos: Alright!! all the best then. I think we are done with this discussion
14:27 ndevos pranithk: yes, I think we're as good as done, shall I send a summary to the list?
14:27 pranithk ndevos: You send a beautiful mail to gluster-devel and we see what needs to be changed and then implement
14:27 pranithk ndevos: yes :-)
14:27 pranithk ndevos: Alright then. Cya.
14:27 ndevos pranithk: ok, will do! have a good trip, dont get too wet and enjoy your weekend!
14:28 atinmu guys, tests/bugs/distribute/bug-973073.t fails for me everytime
14:28 atinmu in master
14:28 ndevos atinmu: hmm, I think I have seen that too, did it segfault on the way?
14:29 * ndevos did not look into the core
14:29 atinmu ndevos, no
14:29 ndevos atinmu: oh, bummer, than I have hit something else
14:30 atinmu ndevos, shyam got this failure
14:30 atinmu ndevos, lets see what it turns out to be :)
14:30 ndevos atinmu: oh, that problem is in good hands then :)
14:31 atinmu ndevos, :)
14:31 pousley joined #gluster-dev
14:32 pousley_ joined #gluster-dev
14:34 vimal joined #gluster-dev
14:36 atalur joined #gluster-dev
14:37 pousley_ joined #gluster-dev
14:38 pousley_ joined #gluster-dev
14:39 shyam ndevos: Ummm... can you point me to the core? (i.e the regression run that caused it)
14:39 gem joined #gluster-dev
14:39 ndevos shyam: sure, I'll try to find it
14:39 shyam k
14:41 kkeithley hmm, maybe we should just outlaw use of "inline"
14:42 ndevos kkeithley: oh, awesome, one more of those?
14:43 ndevos shyam: core in nfs-testing, unrelated (?) patch: http://build.gluster.org/job/rackspace-regression-2GB-triggered/9737/consoleFull
14:43 spalai joined #gluster-dev
14:43 * ndevos is still looking for the dht one
14:43 kkeithley filea.c has    inline void funca() {...}, only used in fileb.c
14:43 anoopcs kkeithley, Yes.. I saw the issue yesterday immediately after I upgraded to f22
14:44 kkeithley but did you open a BZ?
14:44 shyam ndevos: no core on the link posted... or I cannot see one... (checking again)
14:44 anoopcs kkeithley, yes
14:44 * anoopcs checks for the bz
14:44 kkeithley excellent.
14:44 ndevos shyam: access-control: http://build.gluster.org/job/rackspace-regression-2GB-triggered/9595/consoleFull
14:44 anoopcs kkeithley, https://bugzilla.redhat.com/show_bug.cgi?id=1226307
14:44 glusterbot Bug 1226307: high, high, ---, bugs, POST , Volume start fails when glusterfs is source compiled with GCC v5.1.1
14:45 anoopcs kkeithley, And here is the initial patch http://review.gluster.org/11004
14:45 * anoopcs expects more . .
14:45 shyam ndevos: pm
14:46 atinmu shyam++
14:46 glusterbot atinmu: shyam's karma is now 5
14:47 atinmu ndevos, FYI..we got the issue
14:47 atinmu ndevos, sending the fix, this is again a spurious fix, needs to go in asap
14:49 dlambrig1 joined #gluster-dev
14:50 shyam ndevos: ^^^ agree to Atin ^^^ (problem reproduced in 2 setups (atinmu and mine) and fix verified in both)
14:51 ndevos atinmu, shyam: sure, just send the patch :)
14:52 shyam atinmu ++
14:53 shyam atinmu++
14:53 glusterbot shyam: atinmu's karma is now 19
14:53 lpabon hey guys, for OpenStack Manila project we manage GlusterFS but require human intervention creating volumes.  Is there an automatic way to create volume? For example, from a pool of bricks?
14:53 ndevos firefox-- hangs when I click the "more ..." link on the bottom of the table on the left in http://build.gluster.org/job/rackspace-regression-2GB-triggered/buildTimeTrend
14:53 glusterbot ndevos: firefox's karma is now -1
14:54 kkeithley aieeeeee.   I just did a git pull and then fetched anoopcs's patch. Build failed with glusterd-locks.c:19:28: fatal error: glusterd-errno.h: No such file or directory
14:54 kkeithley compilation terminated.
14:54 hgowtham joined #gluster-dev
14:54 ndevos kkeithley: hey, I have seen those failures this morning too
14:55 ndevos lpabon: no, not that I am aware of, but maybe it is possible with the oVirt ReST API
14:55 ndevos lpabon: sahina bose would be one of the devs that should know more about that
14:55 lpabon ovirt may be to hard to deploy with openstack... :( .. Maybe we need something like gluster-manager.. thing
14:56 lpabon is sahina part of ovirt?
14:56 ndevos lpabon: ah, well, oVirt as "storage management interface" only, but yes, it sounds a little over the top
14:56 ndevos lpabon: yes, she is
14:57 lpabon ndevos: cool thanks
14:57 atinmu lpabon, another alternative could be to use gluster-puppet?
14:57 atinmu lpabon, just a thought though
14:58 ndevos lpabon: maybe there is a feature for this in gluster-4.0, I think others asked about it before too
14:58 lpabon atinmu: i like that, but would need something above it to interface to.... gluster-puppet could be how it is executed
14:58 ndevos lpabon: just summon purpleidea and see if he has any ideas?
14:59 atinmu ndevos, better brick management talks about it
14:59 ndevos atinmu: ah, right, that could be the feature I was thinking of
14:59 lpabon ah i see, that is in gluster-4 doc?
15:00 wushudoin| joined #gluster-dev
15:00 ndevos maybe in http://www.gluster.org/community/documentation/index.php/Planning40 ?
15:00 lpabon ndevos: atinmu: thanks, i will take a look
15:00 atinmu lpabon, no problem
15:01 atinmu ndevos, shyam : spurious fix which we identified is addressed by http://review.gluster.org/11006
15:01 ndevos lpabon: you're welcome, and please inform the mailinglist if you have ideas/questions/requirements, others would like to know :)
15:03 lpabon ndevos: i think i will.  I think we need to do something about this sooner than later.. specially would like to have something .. or at least a plan before next openstack summit in Oct
15:03 lpabon I think if we do not, we will start losing mindshare in openstack
15:04 ndevos lpabon: definitely bring this up on the mailinglist(s) then
15:05 lpabon ndevos: will do, thanks!
15:06 ndevos atinmu, shyam: http://review.gluster.org/11006 makes sense to me, if you both mark it as verified+1 I'm happy to merge it
15:07 atinmu ndevos, I will leave it upto shyam, as a patch owner its not good if I mark verified as +1
15:07 ndevos lpabon: I think something based on an inventory of bricks/servers and the inteligence of puppet-gluster might work
15:08 ndevos atinmu: yeah, thats ok, I also do not like to +1/+2 my own patches :)
15:09 ndevos kkeithley: http://review.gluster.org/10846 (python-gluster noarch) failed the same way on el6: http://build.gluster.org/job/glusterfs-rpms-el6/1782/artifact/RPMS/el6/x86_64/build.log
15:10 lpabon ndevos: i think that would work... I'll leave it to purpleidea and see what he says
15:11 atalur joined #gluster-dev
15:12 kkeithley head of master branch builds. I rebased anoopcs's patch, trying to build w/ anoopcs's patch applied now
15:12 ndevos lpabon: that would not be a completely glusterd managed approach, but I think it could be ready/usable before gluster-4.0
15:13 shyam lpabon: ndevos: +1 on what/how puppet does the same can help
15:13 lpabon yeah, we just need something to manage volume management.. we can always change it in the future
15:14 kkeithley I think the --include config.h is breaking things. rebasing anoopcs's patch seems to have fixed the build error w/ his patch.
15:14 gem joined #gluster-dev
15:15 purpleidea lpabon: o hai
15:15 ndevos kkeithley: hmm, but did that failure not happen before the patch got merged?
15:15 lpabon purpleidea: hey!
15:16 purpleidea lpabon: i think i'm missing some context, but if you have a question, i'm here now :)
15:16 ndevos purpleidea: what, you're unable to follow the one out of 4 conversations here?
15:16 lpabon purpleidea: from above: for OpenStack Manila project we manage GlusterFS but require human intervention creating volumes.  Is there an automatic way to create volume? For example, from a pool of bricks?
15:16 purpleidea ndevos: yeah, seriously! :)
15:16 ndevos purpleidea: :)
15:17 kkeithley seems like it did not.  gerrit won't let me rebase http://review.gluster.org/10846 (because it's already +2 and +1 Verified)
15:17 kkeithley ?
15:17 purpleidea lpabon: of course... you could use puppet-gluster if you want
15:18 ndevos kkeithley: yeah, it has been merged already - but the after-merge-rpmbuild failed
15:18 wushudoin| joined #gluster-dev
15:18 kkeithley oh, it's just building rpms that's broken. glusterd-errno.h probably wasn't added to the dist files in configure.ac
15:18 lpabon purpleidea: cool! i am going to send an email to the email list just to get things started.  After that, I think the Manila guys (Csaba and Ramana) will get in contact with you to determine how to set it up and use it
15:18 ndevos kkeithley: hey, you're amazing! I did not think of that yet
15:18 pousley joined #gluster-dev
15:19 kkeithley I mean the xlators/mgmt/glusterd/src/Makefile.am
15:19 purpleidea lpabon: sounds good... (cc me)
15:19 lpabon purpleidea: will do
15:19 ndevos yeah, and, actually, I posted something like that to someone earlier today... just did not get a response and I closed the window, thought it was his/her patch only ;-(
15:19 purpleidea lpabon: there are lots of existing docs and screencasts available too, that should answer most questions, if not, i can add to them
15:21 lpabon purpleidea:  great, i'll cc them also and you can reply with the right pointers
15:21 lpabon purpleidea: if you don't mind :-)
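[Editorial aside: the puppet-gluster route that ndevos and purpleidea suggest above could look roughly like the sketch below. This is a hypothetical fragment, not from the log: the class name `gluster::simple` is from purpleidea's puppet-gluster module, but the parameter values, hostname pattern, and volume name are invented for illustration.]

```puppet
# Hypothetical puppet-gluster manifest sketch; values are made up.
# gluster::simple discovers peers and builds a volume from the
# bricks it finds on the matching hosts.
class { '::gluster::simple':
    replica => 2,              # assumed: two-way replication across bricks
    volume  => 'manila_pool',  # invented volume name for the Manila use case
}
```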
15:21 ndevos purpleidea, lpabon: when you need a gluster-bricks-volume-manager, you can call it AntFarm, just like the facebook people called their tool ;-)
15:21 * ndevos likes that name
15:21 ndevos or, maybe AntStack?
15:21 pousley_ joined #gluster-dev
15:22 lpabon ndevos: purpleidea, facebook has a tool?  is it open sourced?
15:22 lpabon if not, can we send them beer in exchange?
15:22 ndevos lpabon: no, it is not,  it is too dependent on their infra - we hope to get bits of it, but if that is possible, it will take time
15:22 kkeithley Not sure how this can be.  xlators/mgmt/glusterd/src/glusterd-errno.h was added on May 5.  It's not in the Makefile.am noinst_HEADERS. Yet rpms have been building all along??  srsly?
15:23 ndevos kkeithley: I have no idea
15:23 purpleidea ndevos: it would be interesting to know if they're managing a significant amount of gluster and how they do it
15:23 ndevos purpleidea: yes, they do
15:24 purpleidea ndevos: got any more info?
15:24 ndevos purpleidea: little... http://www.gluster.org/community/documentation/index.php/GlusterSummit2015 does not contain their slides :-/
15:25 ndevos purpleidea: the notes from the summit contain a little more: https://public.pad.fsfe.org/p/gluster-summit-2015
15:25 lpabon purpleidea: ndevos: maybe we can just have a talk with them
15:25 lpabon just to get the idea, and maybe we can implement it however we need to
15:25 kkeithley actually I know why Koji rpm builds would have worked. It's the jenkins rpmbuild tests that are a puzzle
15:26 purpleidea lpabon: they probably just wrote a custom management layer
15:27 lpabon purpleidea: yeah, that's what i am thinking .. if we can at least learn from their exp we can see how we can do this before Aug
15:27 ndevos lpabon: sure, Richard works from Santa-Clara (?) and is approachable
15:28 lpabon cool.
15:28 purpleidea lpabon: august... ?
15:28 ndevos kkeithley: the jenkins rpm-build generates the tarball from outside the git-repository
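[Editorial aside: the fix kkeithley is circling around is listing the new header in `noinst_HEADERS` in `xlators/mgmt/glusterd/src/Makefile.am`, so that `make dist` packs it into the release tarball that the rpm build consumes. The sketch below is illustrative; the surrounding entries in the real list are elided.]

```makefile
# xlators/mgmt/glusterd/src/Makefile.am (sketch; other entries elided)
# Headers not installed still must be listed here, or 'make dist'
# omits them from the tarball and out-of-tree rpm builds fail.
noinst_HEADERS = glusterd.h \
        glusterd-errno.h
```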
15:28 lpabon ndevos: purpleidea: I think I will do the following:  Send an email to the email list and try to contact Richard
15:29 lpabon purpleidea: yes.  GlusterFS and Manila need a better integration, especially as containers are starting to be used by OpenStack.  I think if we do not have a better integration with OpenStack by Oct we will start losing a lot of mindshare.
15:29 ndevos lpabon: yes, sounds good, just keep the list in the loop with all the details - and I think the facebook devs are on the list too, just lurking
15:30 ndevos lpabon: that would be containers on top of NFS on top of Gluster ?
15:30 lpabon since containers require volumes (at the host -- not in the container) it gives GlusterFS a great integration opportunity
15:32 lpabon ndevos: whatever protocol the host uses to attach to GlusterFS... In Manila, we can use the glusterfs protocol to attach the VMs to GlusterFS.  I could see these same VMs used as hosts for containers, allowing containers to have persistent state saved on GlusterFS
15:32 ndevos lpabon: when you email the list, just include a (ascii art) diagram so that it is understandable for non OpenStack/container people :)
15:32 lpabon ndevos: good idea
15:32 ndevos lpabon: manila takes in GlusterFS, but exposes NFS, I think?
15:33 lpabon ndevos: it depends on the driver used.
15:34 ndevos lpabon: yes, okay, I dont want to know all the details :D
15:34 lpabon NFS is exposed if using the Ganesha driver.  gluster protocol is used if using the native driver.  By driver i mean the Manila python driver
15:34 lpabon ndevos: :-)
15:34 ndevos yeah, thats how much I care to know :)
15:34 lpabon lol
15:35 ndevos lpabon: but, if you have non-NFS "exporting" by Manila, that would be interesting to understand, just not today for me
15:36 ndevos oh, probably CIFS/Samba is possible, I'm just interested in your container setup
15:36 lpabon ndevos: np, i'll send you a link with a 5 min video if you want
15:36 ndevos lpabon: include it in the email to the list :)
15:36 SnoWolfe2nd joined #gluster-dev
15:37 lpabon will do
15:38 pousley joined #gluster-dev
15:46 SnoWolfe2nd hey - anyone in here that can maybe answer a simple question? (been googling - and am in #gluster chan with 239 people and NO one talking)
15:48 SnoWolfe2nd when doing "volume top gv0 read-perf" i get a list with a lot of 0's - does this mean anything?
15:51 aravindavk joined #gluster-dev
15:53 ndevos SnoWolfe2nd: infinite speed, or none at all? I have no idea
15:54 ndevos SnoWolfe2nd: and none of the people that I would point you to seem to be online atm...
15:55 SnoWolfe2nd example of output (and they DO seem to "roll off" after an hour or so - looks like)
15:55 SnoWolfe2nd Brick: iadfrankenproxy:/var/gluster
15:56 SnoWolfe2nd MBps Filename                                        Time
15:56 ndevos SnoWolfe2nd: provide an fpaste link or something in #gluster ?
15:57 SnoWolfe2nd gimme a couple - lemme re-find one of my servers i can throw it on - im locked down to the extreme here - and have to go through a web based irc client
15:58 ndevos SnoWolfe2nd: sure, maybe it is easier to send an email to the list about it?
15:59 SnoWolfe2nd https://www.filepicker.io/api/file/NrVcGBaWSUKlIN45YXNA
16:00 SnoWolfe2nd nice - kiwi had a little upload button
16:00 ndevos SnoWolfe2nd: try that in #gluster, bturner might know about it, but he is not in this channel
16:02 SnoWolfe2nd yeah - been in there a while and seen NO chatter whatsoever in there :P
16:02 [o__o] joined #gluster-dev
16:04 ndevos SnoWolfe2nd: its friday afternoon/evening/night for many of us :)
16:05 SnoWolfe2nd aw c'mon - who stops working on friday in the afternoon or night? hehe
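[Editorial aside on SnoWolfe2nd's question above: `volume top ... read-perf` without extra arguments only reports previously sampled throughput, so rows of 0 MBps usually just mean no measurement has been taken for those files yet. Passing `bs` and `count` makes glusterd run an actual read test per brick. A sketch of the CLI, reusing the volume and brick names from the pasted output; the block-size/count values are arbitrary examples.]

```
# Force a read-performance measurement instead of showing cached samples.
# bs = block size in bytes, count = number of blocks read per test.
gluster volume top gv0 read-perf bs 256 count 1 brick iadfrankenproxy:/var/gluster list-cnt 10
```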
16:06 ndevos kkeithley: btw, I also am not sure why hagarth did not do the 3.7.1 release yet, I did not talk to him since wednesday when he said "tomorrow"
16:08 kshlm joined #gluster-dev
16:09 overclk joined #gluster-dev
16:09 ndevos shyam: if you tested http://review.gluster.org/11006 (the dht.rc change), just mark it verified and merge it?
16:15 ira joined #gluster-dev
16:28 rafi joined #gluster-dev
16:32 rafi joined #gluster-dev
16:35 rafi joined #gluster-dev
16:44 rafi joined #gluster-dev
16:46 rafi joined #gluster-dev
16:50 lpabon joined #gluster-dev
16:50 rafi joined #gluster-dev
16:54 wushudoin| joined #gluster-dev
16:58 shyam ndevos: Done
16:59 wushudoin| joined #gluster-dev
17:02 hagarth joined #gluster-dev
17:11 jbautista- joined #gluster-dev
17:16 jbautista- joined #gluster-dev
17:26 anoopcs rafi, ping around
17:26 spalai joined #gluster-dev
17:57 rafi anoopcs: pong
18:12 Joe_f joined #gluster-dev
18:13 gem joined #gluster-dev
18:16 lpabon joined #gluster-dev
18:43 Gaurav_ joined #gluster-dev
18:48 lpabon joined #gluster-dev
20:41 dlambrig1 left #gluster-dev
21:07 Joe_f joined #gluster-dev
22:28 lpabon joined #gluster-dev
