
IRC log for #gluster-dev, 2015-06-19


All times shown according to UTC.

Time Nick Message
01:13 hagarth joined #gluster-dev
02:08 pranithk joined #gluster-dev
02:12 kdhananjay joined #gluster-dev
03:18 dlambrig left #gluster-dev
03:23 krishnan_p joined #gluster-dev
03:24 overclk joined #gluster-dev
03:27 spalai joined #gluster-dev
03:43 itisravi joined #gluster-dev
03:48 gem joined #gluster-dev
03:58 sakshi joined #gluster-dev
04:01 overclk joined #gluster-dev
04:05 poornimag joined #gluster-dev
04:16 nbalacha joined #gluster-dev
04:17 vimal joined #gluster-dev
04:17 atinm joined #gluster-dev
04:23 nbalacha joined #gluster-dev
04:23 ndarshan joined #gluster-dev
04:24 poornimag joined #gluster-dev
04:24 overclk joined #gluster-dev
04:26 shubhendu joined #gluster-dev
04:40 soumya joined #gluster-dev
04:51 hgowtham joined #gluster-dev
04:51 Humble_ joined #gluster-dev
04:58 nkhare joined #gluster-dev
05:02 gem anoopcs++
05:02 glusterbot gem: anoopcs's karma is now 9
05:08 pppp joined #gluster-dev
05:09 poornimag joined #gluster-dev
05:18 nkhare joined #gluster-dev
05:19 soumya joined #gluster-dev
05:25 jiffin joined #gluster-dev
05:29 ashiq joined #gluster-dev
05:29 Bhaskarakiran joined #gluster-dev
05:32 Manikandan joined #gluster-dev
05:33 ashiq overclk, http://review.gluster.org/10297 can you merge it
05:38 kdhananjay joined #gluster-dev
05:39 josferna joined #gluster-dev
05:39 spandit joined #gluster-dev
05:45 atinm Humble_, pm
05:48 Manikandan anoopcs++
05:48 glusterbot Manikandan: anoopcs's karma is now 10
05:49 raghu joined #gluster-dev
05:51 spalai joined #gluster-dev
05:52 atalur joined #gluster-dev
05:54 kotreshhr joined #gluster-dev
05:55 Bhaskarakiran joined #gluster-dev
05:56 asengupt joined #gluster-dev
05:56 ashiq atalur++
05:56 glusterbot ashiq: atalur's karma is now 3
05:57 overclk ashiq, sure. looking at it now.
06:00 nbalacha joined #gluster-dev
06:00 kshlm joined #gluster-dev
06:03 shubhendu joined #gluster-dev
06:04 Humble_ ashiq, is bitrot merged?
06:04 Humble_ atinm, reviewed
06:04 ashiq not yet, overclk is looking at it
06:04 overclk Humble_, I'm reviewing now..
06:04 Humble_ thanks overclk++
06:05 glusterbot Humble_: overclk's karma is now 7
06:07 Humble_ ashiq, http://review.gluster.org/#/c/10823/
06:07 Humble_ I will retrigger netbsd for the above.
06:07 ashiq poornimag, re-triggered it Humble_
06:07 Humble_ oh...ok..
06:07 Humble_ poornimag++ thanks
06:08 glusterbot Humble_: poornimag's karma is now 2
06:09 Humble_ http://review.gluster.org/#/c/10822/ ashiq this just need netbsd vote
06:09 ashiq Humble_, got all the patches with netbsd pending re-triggered, waiting for the votes
06:09 Humble_ have u retriggered this as well
06:09 Humble_ awesome!
06:10 Humble_ http://review.gluster.org/#/c/10826/ ashiq this does not have anything
06:10 pppp joined #gluster-dev
06:12 ashiq kshlm has to review it; then with the changes I will re-trigger it
06:13 nbalacha joined #gluster-dev
06:14 josferna joined #gluster-dev
06:15 spalai joined #gluster-dev
06:15 Saravana joined #gluster-dev
06:21 gem atinm, This patch has passed NetBSD regression http://review.gluster.org/#/c/11212/ . Can I get it merged?
06:26 krishnan_p joined #gluster-dev
06:26 krishnan_p Are NetBSD regressions running to completion? This one seems like it's hung - http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7049/console
06:30 rjoseph joined #gluster-dev
06:31 anekkunt joined #gluster-dev
06:36 Gaurav__ joined #gluster-dev
06:39 spalai joined #gluster-dev
06:40 ndevos krishnan_p: that one looks hung, others run to completion though...
06:40 ndevos krishnan_p: reboot the VM and retrigger?
06:41 * ndevos reboots the VM
06:41 krishnan_p ndevos, this was my 5th retrigger. I am willing it wait it out. Let NetBSD regression VMs stabilise
06:42 krishnan_p s/willing it/willing to
06:42 krishnan_p ndevos, I will try again next week.
06:42 ndevos krishnan_p: this kind of hang is normally because of a stale NFS mount :-/
06:43 ndevos krishnan_p: reviewing the refcnt patch helps ;-)
06:43 krishnan_p ndevos, Yep. I see that being discussed on the ML.
06:43 krishnan_p ndevos, I will.
06:43 krishnan_p ndevos, I am too tired to retrigger the build, this week.
06:44 ndevos krishnan_p: no problem, and once the nfs fixes are in, those hangs should not happen that regularly anymore
06:46 krishnan_p ndevos, thanks for rebooting the VM. I will review the refcnt patch.
06:48 anrao joined #gluster-dev
06:50 atinm ndevos, could you review http://review.gluster.org/#/c/11320 ?
06:51 atinm ndevos, its for 3.7.2 release notes, I want to get that merged asap :)
06:52 ndevos atinm: set BUG to the 3.7.2 blocker?
06:53 anrao joined #gluster-dev
06:55 atinm ndevos, I was referring to the previous release notes, there we had this as rfc
06:55 atinm ndevos, I can set the bug id though
06:55 atinm ndevos, shall I ?
06:55 ndevos atinm: yeah, the previous release notes were added after the tag :-/
06:55 ndevos atinm: but there is more
06:57 ndevos atinm: the 1st paragraph should probably have a pointer to the 3.7.2 release notes too, and the ones for 3.7.1 (if we have those?)
06:57 ndevos atinm: for 3.5 I do it like https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.4.md
06:58 atinm ndevos, let me check the format
07:01 Manikandan_ joined #gluster-dev
07:02 pranithk joined #gluster-dev
07:04 krishnan_p ndevos, i have a naive question on the ref count patch
07:05 krishnan_p ndevos, I see that you have used sync_fetch_and_add/sub builtins when available. these report ref count as observed before the corresponding gf_ref_get/gf_ref_put.
07:06 krishnan_p ndevos, but when the builtins are unavailable, e.g., gf_ref_get returns the updated ref count. Is this expected, if so why?
07:09 ndevos krishnan_p: yeah, the returned counter will be different, off by one, but that does not matter, the failure case is important to check, not the nr of refs
07:10 atinm ndevos, done
07:10 atinm ndevos, could you check ?
07:11 krishnan_p ndevos, hmm.
07:11 krishnan_p ndevos, GF_REF_GET (p) will return 0 when builtins are used and refcnt = 0, and return 1 with explicit locking, right?
07:12 ndevos krishnan_p: the counter is initialized to 1
07:13 anoopcs ldconfig
07:13 krishnan_p ndevos, what is the invariance here? Is it the return value or the refcnt value?
07:13 krishnan_p anoopcs, ldconfig done.
07:13 anoopcs krishnan_p, :D
07:13 krishnan_p ndevos, invariance between the two schemes
07:14 overclk raghu, ping. can you take a look at http://review.gluster.org/#/c/11300/ ?
07:15 krishnan_p ndevos, I got it now.
07:15 ndevos krishnan_p: there is no need to make them equal? "GF_REF_GET() == 0" should be treated as a fatal issue, the return has no other value
07:15 krishnan_p ndevos, the return values when the refcnt is zero are invariant across the two schemes
07:16 * ndevos has never heard the use of "invariant" in sentences like that before :)
07:17 krishnan_p ndevos, everywhere else the codomains of the function are disjoint. i.e, REF_BUILTIN (x) V REF_LOCKING(x) = {0}, where 'V' is used to indicate intersection
07:17 ndevos krishnan_p: yeah, return == 0 is the only case we care about
07:17 krishnan_p ndevos, but the documentation says the return value is the no. of references
07:17 krishnan_p ndevos, this can't be true for both schemes, right?
07:17 ndevos krishnan_p: well, it seems the documentation is wrong then :-/
07:18 atinm ndevos, Humble_ : I am counting on you guys to take in http://review.gluster.org/#/c/11320, once it's merged I will push out the 3.7.2 tag
07:18 ndevos krishnan_p: the docs should mention to only check for "== 0", or "!= 0"
07:18 krishnan_p ndevos, yeah.
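(A minimal C sketch of the behaviour discussed above, assuming a hypothetical my_ref/my_ref_get and a HAVE_SYNC_BUILTINS macro rather than the actual refcount patch under review: the __sync builtin path returns the value before the increment, the locked fallback the value after it, so the two schemes differ by one and callers should only test for 0.)

    #include <pthread.h>

    struct my_ref {                     /* hypothetical stand-in, not gf_ref_t */
            pthread_mutex_t lock;
            unsigned int    cnt;        /* set to 1 when the object is created */
    };

    unsigned int
    my_ref_get (struct my_ref *ref)
    {
    #if defined(HAVE_SYNC_BUILTINS)
            /* builtin path: returns the value *before* the increment,
             * so 0 means the object was already released */
            return __sync_fetch_and_add (&ref->cnt, 1);
    #else
            unsigned int cnt = 0;

            pthread_mutex_lock (&ref->lock);
            if (ref->cnt != 0)
                    cnt = ++ref->cnt;   /* value *after* the increment */
            pthread_mutex_unlock (&ref->lock);

            /* off by one compared to the builtin path, but still 0 on
             * failure, so "== 0" is the only portable check */
            return cnt;
    #endif
    }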
07:18 Humble_ atinm,
07:19 ndevos atinm: I'm happy with it now
07:19 Humble_ anything in v4?
07:19 krishnan_p ndevos, I feel bad that the regression just passed :(
07:19 Humble_ ok.. I am merging it
07:19 krishnan_p ndevos, I shall add a comment anyway
07:20 ndevos krishnan_p|lunch: we can make the behaviour equal later on, if we care enough?
07:20 Humble_ atinm, done
07:20 * ndevos would like to merge it if it is sufficiently functional, so that we can fix the hangs
07:21 raghu overclk: sure
07:30 deepakcs joined #gluster-dev
07:33 Manikandan joined #gluster-dev
07:33 Manikandan_ joined #gluster-dev
07:36 raghu overclk: is it ok to merge the patch now? It seems to be for 3.7 branch
07:41 overclk raghu, after atinm has done pushing the tag I guess.
07:41 Humble_ ndevos++
07:41 glusterbot Humble_: ndevos's karma is now 160
07:45 raghu overclk: I have given a +2 for the patch. If it's ok to merge the patch now, I will merge right away. Otherwise I am going to wait.
07:46 overclk raghu, sure. thanks!
08:21 kotreshhr left #gluster-dev
08:27 krishnan_p|lunch left #gluster-dev
08:28 krishnan_p|lunch joined #gluster-dev
08:28 Humble_ atinm, are u done with the release tagging?
08:28 atinm Humble_, not yet, doing it now
08:28 krishnan_p ndevos, I am comfortable with the unequal behaviour
08:28 atinm Humble_, was out for lunch
08:28 krishnan_p ndevos, Should we leave the documentation string as it stands? I would think we should change that.
08:28 krishnan_p ndevos, thoughts?
08:29 ndevos krishnan_p: yeah, it needs changing, just not sure if we should delay merging the change for that
08:29 Humble_ atinm, oh.. ok
08:30 krishnan_p ndevos, could you just send a patch for that? we can take that later though. Having a patch would reduce the chances of it being missed
08:32 atinm it seems like overclk merged a patch in 3.7
08:32 atinm I would need to update the release note then :(
08:32 ndevos krishnan_p: sure, I can do that
08:32 krishnan_p ndevos, thanks for the patch. Much needed.
08:33 raghu atinm: have you pushed the tags??
08:33 atinm raghu, not yet
08:33 raghu overclk: do you think the patch http://review.gluster.org/#/c/11320 should go into 3.7.2? Is that the one you have pushed?
08:34 atinm raghu, overclk pushed http://review.gluster.org/#/c/11308/
08:35 ndevos krishnan_p: I'll just leave it like "_gf_ref_get -- increase the refcount" and for _gf_ref_put similar, ok?
08:35 raghu atinm: ok. http://review.gluster.org/#/c/11300 (not 11320) has passed regressions and also has received +2. If you are ok I am going to merge that patch (and we can include it for 3.7.2)
08:35 raghu overclk: ???
08:37 krishnan_p ndevos, Yep. That reflects what the functions do uniformly between the schemes. thanks
08:37 atinm raghu, merge it
08:37 raghu atinm: ok. atinm++
08:37 glusterbot raghu: atinm's karma is now 9
08:38 raghu overclk: http://review.gluster.org/#/c/11300 got merged.
08:38 atinm raghu, merged?
08:39 raghu atinm: yeah. Just merged.
08:40 overclk raghu, sorry. got disconnected.
08:40 overclk thanks raghu
08:41 raghu overclk: no probs. I have just merged the patch.
08:41 atinm ndevos, Humble_ : http://review.gluster.org/11325
08:41 * Humble_ checking
08:41 atinm ndevos, Humble_ : can you guys quickly take that in?
08:42 Humble_ atinm, sure..
08:42 Humble_ any more on the way wrt adding in release notes
08:42 atinm Humble_, no please, no more merges
08:42 atinm otherwise this will go on and on
08:45 Humble_ yep
08:45 Humble_ atinm, I merged it
08:45 Humble_ without buildsystem votes :)
08:45 atinm Humble_, thanks
08:46 atinm Humble_++
08:46 glusterbot atinm: Humble_'s karma is now 2
08:46 atinm this is a release note, so it hardly matters :)
08:48 Humble_ true
08:50 atinm I pushed the tag now.
08:51 * ndevos checks
08:52 ndevos atinm: looks good
08:52 atinm ndevos, cool :)
08:52 ndevos atinm: next step is to run a release job in jenkins, don't have it build the rpms, and send the email to packaging@gluster.org
08:53 atinm ndevos, should I uncheck build rpms in http://build.gluster.org/job/release/build?delay=0sec ?
08:53 anoopcs ndevos, Why do we still have 3.7dev tag?
08:54 nkhare joined #gluster-dev
08:54 ndevos atinm: after that, send the all-clear to the maintainers list so that others can start merging patches again
08:54 ndevos anoopcs: we never delete tags
08:54 ndevos atinm: yes, uncheck that box
08:54 anoopcs ndevos, But I don't see other dev tags
08:55 ndevos anoopcs: there should be a v3.8dev tag too?
08:55 atinm ndevos, so I have to provide the source rpm links to the packagers, right? or is giving the build link good enough?
08:55 anoopcs ndevos, yes 3.8dev is present. but why not for previous?
08:56 anoopcs ndevos, For versions before 3.7?
08:56 ndevos atinm: only the tarball, that is sufficient
08:56 atinm ndevos, ok
08:56 ndevos atinm: this is what Jenkins sends out: http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/2
08:57 ndevos anoopcs: I'm not sure about versions before 3.7, it was a little more messy
08:57 anoopcs ndevos, Ok.
08:57 atinm ndevos, http://build.gluster.org/job/release/128/
08:58 ndevos anoopcs: ah, before 3.7 there were v3.6*qa* tags and the like
08:59 ndevos atinm++ should be good!
08:59 glusterbot ndevos: atinm's karma is now 10
08:59 ndevos atinm: you're called amarts too?
08:59 anoopcs ndevos, yes. So we decided to change from 3.7 onwards?
08:59 pranithk xavih: I am still not sure about the solution we decided on yesterday for access/readdir. I feel even that is not as clean. Why add it to cbk_list and clear it again? Why not create cbk and call ec_combine when ec_dispatch_one_retry returns false?
09:00 ndevos this amarts guy on Jenkins is so confusing, I thought about disabling him
09:00 atinm ndevos, unfortunately yes, I need to have an account
09:00 atinm ndevos, where should I put up a request for it?
09:00 ndevos atinm: gluster-infra@ ?
09:00 atinm ndevos, sure
09:00 pranithk xavih: I had bad experience with afr self-heal code where we had to clear some information and re-use same variables for healing...
09:01 pranithk xavih: merged your first patch? :-)
09:02 ndevos anoopcs: yeah, because the initial "git describe" for the (at that time) new release-3.7 branch was not qa-ready, giving the 1st commit a "dev" tag gets a reasonable versioning
09:03 xavih pranithk: it's only for consistency with the other fops. If we want to use the same management architecture, we should use the same structures and operations. This way it's easier to do changes and detect errors
09:03 anoopcs ndevos, And we already tagged v3.8dev.
09:03 ndevos anoopcs: yes, fortunately :)
09:03 xavih pranithk: yes, my first merge :D
09:03 pranithk xavih: congrats!
09:03 ndevos anoopcs: otherwise the release-3.8 branch would have "v3.7..." in the "git describe" output
09:04 ndevos wait, not the release-3.8 branch, the master branch
09:04 ndevos atinm, krishnan_p: any improvements to http://gluster.readthedocs.org/en/latest/Developer-guide/GlusterFS%20Release%20process/ are welcome ;-)
09:05 pranithk xavih: Let me think a bit and see if there is a way to prevent re-using of variables....
09:05 xavih pranithk: what's the problem on reusing variables ?
09:06 xavih pranithk: they are local to the fop. I don't see any conflict
09:06 atinm ndevos, I am a bit confused on the heading "create release announcements"
09:06 krishnan_p ndevos, will check it out.
09:07 atinm ndevos, does it mean to only keep the content ready and then post it in "send release announcement" ?
09:07 ndevos atinm: I normally write a blog post about it, 1-2 lines of intro and then a copy/paste of the release notes
09:07 ndevos atinm: users have requested to get that blog post emailed to the lists, so I do that too
09:08 atinm ndevos, but if you post the blog, the rpms are still not available for testing, isn't it?
09:08 pranithk xavih: Whenever you have to reuse variables we need to think about their life time. So different variables in structure will have different life times...
09:08 atinm ndevos, once the packages are available we should do that, no?
09:08 ndevos atinm: yes, it often makes sense to wait a day or two before sending the announcements
09:08 pranithk xavih: And we need to make sure we are using all these variables correctly based on different lifetimes.
09:09 pranithk xavih: If we don't re-use, all variables will have the same life time. So we don't need to think too much while changing/reading code.
09:09 ndevos atinm: or, mention in the announcement that packages will follow soon, either works
09:09 atinm ndevos, cool, so looks like my job is done for today  :D
09:09 atinm ndevos, yeah, that can be done
09:10 ndevos atinm: I do not see the Jenkins email on http://news.gmane.org/gmane.comp.file-systems.gluster.packaging yet, if it does not arrive there soon, maybe send an email yourself
09:10 anoopcs ndevos, understood. thanks.
09:10 krishnan_p ndevos, he didn't CC packaging@gluster.org
09:10 ndevos atinm: and, update the maintainers list that they may merge patches again
09:11 atinm ndevos, I have sent a mail to packaging@gluster.org
09:11 atinm ndevos, and maintainers as well
09:11 atinm ndevos, can you cross check?
09:11 ndevos krishnan_p: ah, ok!
09:12 pranithk xavih: To give you an example, in our lock structure lock->fd has different life time compared to the other variables in the structure. Now we need to be careful about when to ref/unref it. I am not sure if I am able to express it clearly :-)
09:12 xavih pranithk: I know, but I think the reuse is perfectly correct here. The operation is basically restarted, so its control vars are reinitialized and used...
09:12 krishnan_p atinm, the mail I see doesn't have packaging@gluster.org CC'd
09:12 atinm krishnan_p, the one which says gluster 3.7.2 released?
09:12 xavih pranithk: yes, I understand
09:13 krishnan_p atinm, yes.
09:13 krishnan_p atinm, that one Gluster Build System emails on your behalf.
09:13 ndevos atinm, krishnan_p: oh, I think we need to update the release job defaults in Jenkins
09:13 atinm krishnan_p, ahh!! i forwarded that mail to packaging
09:13 atinm ndevos, default is gluster-devel and users, we should add the packaging mailing list there as well
09:13 xavih pranithk: another thing: remember that with the changes in 'readdir[p]' we do not need any more the locks in 'opendir', we can implement 'opendir' the same way 'open' is currently implemented
09:14 krishnan_p ndevos, that is a better way of eliminating errors.
09:14 pranithk xavih: Well I am understanding it more like: we didn't like the answer we saw, so we are trying on remaining subvols. So I am finding more similarities with ec_dispatch_next() in ec_combine() which does more winds for readv for example when we don't have enough responses to compute the answer.
09:14 ndevos http://www.gluster.org/pipermail/packaging/2015-June/000006.html is there now :)
09:15 * ndevos updates the release job
09:15 spandit joined #gluster-dev
09:16 xavih pranithk: this case is special. We really want only one answer. If one answer is considered not valid, it must not be combined with anything else. We start over on a new brick
09:16 asengupt joined #gluster-dev
09:16 xavih pranithk: readv is different because if we don't have enough combinable answers, we request another one to add it to one of the current groups of answers
09:17 pranithk xavih: ah! now it makes more sense :-)
09:18 pranithk xavih: let me ask one more question. Who decides which answers are valid/not valid?
09:18 xavih pranithk: ec_dispatch_min() requires a group of at least k answers (k being the number of data bricks), while ec_dispatch_one() requires a group of one and only one answer
09:19 xavih pranithk: the idea I have is to decide this in EC_STATE_PREPARE_ANSWER
09:19 xavih pranithk: I think we should only ask another brick in case of ENOTCONN, ESTALE and (but not absolutely sure) ENOENT
09:20 pranithk xavih: Enoent won't come
09:20 pranithk xavih: rather shouldn't
09:20 pranithk xavih: ENOENT comes when 'name' in directory doesn't exist
09:20 anrao joined #gluster-dev
09:20 xavih pranithk: it could come if an rm is being executed concurrently, isn't it ?
09:21 xavih pranithk: oh, you are talking about access, right ?
09:21 pranithk xavih: Access is operation on inode. if inode doesn't exist the correct errno is ESTALE.
09:21 pranithk xavih: yeah :-)
09:23 atalur joined #gluster-dev
09:23 xavih pranithk: if access returns EACCES, for example, no other brick should be queried
09:24 xavih pranithk: the same happens with many other possible errors, so I think that considering ENOTCONN and ESTALE would be enough
09:25 nbalacha joined #gluster-dev
09:26 pranithk xavih: Yes
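(A hedged C sketch of the retry rule just agreed on, using a hypothetical helper name rather than the real ec state machine: a single-brick answer is only retried on another brick for errors that mean "this brick cannot answer", while something like EACCES is treated as the final answer.)

    #include <errno.h>
    #include <stdbool.h>

    /* hypothetical helper, not part of the actual ec code */
    static bool
    should_retry_on_another_brick (int op_ret, int op_errno)
    {
            if (op_ret >= 0)
                    return false;       /* one good answer is all we need */

            switch (op_errno) {
            case ENOTCONN:              /* brick is down */
            case ESTALE:                /* brick does not know this inode */
                    return true;        /* ask one of the remaining bricks */
            default:
                    return false;       /* e.g. EACCES is a valid final answer */
            }
    }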
09:29 nbalacha joined #gluster-dev
09:29 poornimag joined #gluster-dev
09:32 soumya joined #gluster-dev
09:33 ashiq joined #gluster-dev
09:34 ashiq joined #gluster-dev
09:35 nbalacha joined #gluster-dev
09:37 pranithk xavih: What is the difference between expected and minimum?
09:38 pranithk xavih: I will change opendir similar to open in separate patch...
09:39 pranithk xavih: there seems to be a subtle difference.
09:40 ndevos I think Gerrit is preparing for the weekend, he's rather slow today
09:43 pranithk xavih: I am just trying to see if we can arrive at a solution which agrees with both of us. If we don't, I will implement what we have decided on yesterday.
09:53 nbalacha joined #gluster-dev
09:54 sabansal_ joined #gluster-dev
09:58 * atinm feels so too
09:58 pranithk xavih: Hey! You are right, it is a restart :-). But instead of what we discussed yesterday. The only change we will need to do is to remove the id bit we tried on from fop->mask and go to EC_STATE_DISPATCH. So it will mean something like, what we thought as mask based on lock and xattrop is not completely accurate, now we made it more accurate and re-started the fop. So there won't be ec_dispatch_one_retry(). We will keep calling ec_dispat
09:58 pranithk xavih: Do you see any issues with this approach?
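(A hedged C sketch of the restart idea pranithk describes, with a hypothetical fake_fop instead of the real ec fop structure: the bit of the brick that was just tried is cleared from the mask and the fop is dispatched again, so only the remaining subvolumes can answer.)

    #include <stdint.h>

    struct fake_fop {                       /* hypothetical, not ec_fop_data_t */
            uint64_t mask;                  /* bricks still allowed to answer */
    };

    static void
    retry_on_remaining_bricks (struct fake_fop *fop, int failed_idx,
                               void (*dispatch) (struct fake_fop *))
    {
            fop->mask &= ~(1ULL << failed_idx);  /* drop the brick we just tried */
            if (fop->mask != 0)
                    dispatch (fop);              /* equivalent of going back to
                                                    EC_STATE_DISPATCH */
    }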
10:01 atalur joined #gluster-dev
10:07 hagarth and here too
10:07 hagarth atinm: kudos on getting 3.7.2 out!
10:07 hagarth :)
10:08 ndevos hagarth: I thought Avra did not need the machine anymore? http://build.gluster.org/computer/slave34.cloud.gluster.org/
10:08 * ndevos looks for Avra, but does not know the nick
10:09 Manikandan joined #gluster-dev
10:09 dlambrig joined #gluster-dev
10:09 atinm hagarth, thanks
10:09 hagarth ndevos: asengupt is here
10:10 ndevos ah!
10:10 ndevos asengupt: are you still using slave34.cloud.gluster.org?
10:12 ndevos asengupt: if not, please click the [bring this node back online] on http://build.gluster.org/computer/slave34.cloud.gluster.org/ - or ask someone else to do that for you
10:20 ndevos hagarth: seems the netbsd regression test for the refcnt change was cancelled (jenkins restart?), I just retriggered it, but it'll take a while to get in front  of the queue
10:21 ndevos hagarth: maybe merge it without netbsd so that we get less hangs? http://review.gluster.org/11022
10:21 ndevos also needs http://review.gluster.org/11023, of course
10:21 ndevos oh, and that one has a -1..., I'll check+update that now
10:23 soumya joined #gluster-dev
10:30 Manikandan joined #gluster-dev
10:31 poornimag joined #gluster-dev
10:31 hagarth ndevos: yes, I am inclined to get both 11022 and 11023 in as soon as we are ready
10:37 overclk joined #gluster-dev
10:37 Manikandan raghu, could you look into this patch? http://review.gluster.org/#/c/11280/
10:43 soumya joined #gluster-dev
10:44 ashiq overclk, could you look into http://review.gluster.org/10297
10:48 overclk ashiq, looking into it since morning. lots of context switches made me lose it.
10:50 xavih ndevos: I don't see any further problem in your patch 11023. I've +1 it :)
10:50 ndevos xavih++ thank you :)
10:50 glusterbot ndevos: xavih's karma is now 17
10:51 ndevos oh, great, I fixed the freebsd slave, but now it's dropping the Verified+ bit that got set for regression tests :-/
10:52 hchiramm joined #gluster-dev
10:53 csim ndevos: so what was the issue ?
10:54 ndevos csim: no java... I sent an email about it too
10:54 csim grmbl
10:55 csim I guess pkg upgrade did remove it, I should have not trusted it too much to do the right thing :/
10:55 ndevos could be, no idea what "pkg upgrade" exactly does
10:57 csim it upgrades packages :)
10:58 csim but i guess me and the freebsd folks have a different view on what that means in terms of stability
10:58 overclk ashiq, going through it once. most probably I'll merge it soon...
10:59 ashiq overclk++
10:59 glusterbot ashiq: overclk's karma is now 8
11:01 overclk ashiq, minor question. why is bit-rot-messages.h under libglusterfs/src ?
11:02 overclk ashiq, this should be under xlators/features/bitrot/src
11:04 hagarth joined #gluster-dev
11:05 overclk ashiq, I think rest of the patch is pretty much OK. But this needs to be addressed.
11:05 ashiq overclk, for components which had sub folders, the header has been kept in libglusterfs/src
11:06 overclk ashiq, why? cannot each sub-dir (sub component) have their own *-messages.h?
11:07 nkhare joined #gluster-dev
11:07 ashiq overclk, for one component we have allocated a segment of 1000 messages so it's better to have one header file for bit-rot
11:07 atalur joined #gluster-dev
11:07 hchiramm ndevos++
11:07 glusterbot hchiramm: ndevos's karma is now 161
11:08 hchiramm poornimag, ping
11:08 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
11:09 overclk ashiq, have we reached 1k for bitrot?
11:09 poornimag hchiram, pong
11:10 hchiramm poornimag, this back port http://review.gluster.org/#/c/10971/ can go in release 3.7 right ?
11:10 ashiq overclk, no just 67
11:11 hchiramm poornimag, its bit flags in libgfapi
11:11 overclk ashiq, hmm. I guess the right thing to do is to have one -messages.h for each sub-dir if there's no _rule_ against that.
11:12 overclk I'll mention this in the patch.
11:13 poornimag hchiram, yeah sure, it can be backported
11:13 hchiramm ndevos, I am creating  a bug :)
11:17 hchiramm poornimag++ thanks
11:17 glusterbot hchiramm: poornimag's karma is now 3
11:22 rjoseph joined #gluster-dev
11:24 raghu Manikandan: sure. Will take a look
11:26 Manikandan raghu, thanks:)
11:26 ashiq overclk, I have to allocate another segment for bit-rot
11:27 overclk ashiq, one each for stub and bitd
11:27 ashiq will use the same segment for bitd and a new segment for stub
11:28 overclk ashiq, IMO that would look cleaner rather than placing that in libglusterfs.
11:29 ashiq overclk, ok, working on it. :)
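(A hedged sketch of the layout being agreed on above: one -messages.h per sub-component, each claiming its own segment of 1000 message IDs. The file name, base value and macro names below are made up for illustration; the real bases come from the message-id framework in libglusterfs.)

    /* bit-rot-stub-messages.h -- illustrative only; real names and base
     * values come from the message-id framework in libglusterfs */
    #ifndef _BITROT_STUB_MESSAGES_H_
    #define _BITROT_STUB_MESSAGES_H_

    #define BRS_MSGID_SEGMENT_SIZE  1000            /* one segment per component */
    #define BRS_COMP_BASE           5000            /* hypothetical base for the
                                                       bit-rot stub segment */

    #define BRS_MSG_MEM_ACCT_FAILED (BRS_COMP_BASE + 1)
    #define BRS_MSG_SIGN_FAILED     (BRS_COMP_BASE + 2)
    /* ... new messages take the next id, up to
     * BRS_COMP_BASE + BRS_MSGID_SEGMENT_SIZE - 1 ... */

    #endif /* _BITROT_STUB_MESSAGES_H_ */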
11:33 pranithk left #gluster-dev
11:34 spalai left #gluster-dev
11:35 atalur joined #gluster-dev
11:36 asengupt ndevos, have brought it back online
11:44 pppp joined #gluster-dev
12:00 sakshi joined #gluster-dev
12:04 gem_ joined #gluster-dev
12:06 itisravi joined #gluster-dev
12:06 rjoseph joined #gluster-dev
12:15 poornimag joined #gluster-dev
12:22 gem joined #gluster-dev
12:23 kkeithley RPM packaging question: which package should /usr/{sbin,libexec/glusterfs}/gfind_missing_files be in? geo-rep?
12:25 kkeithley Fedora/EPEL RPM packaging question
12:25 overclk kkeithley, I guess this tool is generic and not something used with geo-rep.
12:26 kkeithley indeed. hence my question. The way the spec file is written in 3.7.1 it was in -geo-rep always, and if -geo-rep was disabled, then they weren't included.
12:27 kkeithley Now in 3.7.2 they're in geo-rep if it's enabled, and in -ganesha if -geo-rep is disabled. !!!
12:27 overclk kkeithley, woah!
12:27 kkeithley yeah
12:28 ashiq overclk, http://review.gluster.org/10297 look into it, finished the work as discussed :)
12:28 lalatenduM joined #gluster-dev
12:29 kkeithley are they server-side tools? Should they be in -server? Or in the base RPM?
12:30 gsaadi joined #gluster-dev
12:30 overclk kkeithley, might not be server specific.
12:40 shyam joined #gluster-dev
12:42 poornimag joined #gluster-dev
12:45 hagarth joined #gluster-dev
12:48 ashiq overclk, ping
12:48 glusterbot ashiq: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:48 josferna joined #gluster-dev
12:54 overclk ashiq, pong.
12:54 ashiq overclk, http://review.gluster.org/10297 are all the changes met?
12:55 overclk ashiq, ok. thanks! I'll take a look.
12:55 ashiq overclk, no problem :)
12:58 kanagaraj joined #gluster-dev
12:58 jyoung joined #gluster-dev
13:01 jyoung joined #gluster-dev
13:01 jrm16020 joined #gluster-dev
13:02 jyoung joined #gluster-dev
13:03 jyoung joined #gluster-dev
13:04 jyoung joined #gluster-dev
13:05 jyoung joined #gluster-dev
13:06 jyoung joined #gluster-dev
13:09 jrm16020 joined #gluster-dev
13:10 jrm16020 joined #gluster-dev
13:20 ashiq joined #gluster-dev
13:29 Manikandan joined #gluster-dev
13:29 jrm16020 joined #gluster-dev
13:31 firemanxbr joined #gluster-dev
13:35 jrm16020 joined #gluster-dev
13:36 pousley_ joined #gluster-dev
13:38 jrm16020 joined #gluster-dev
13:44 shyam Any idea on how to download NetBSD logs from nbslaves in build.gluster.org? I keep getting permission denied errors (I am logged into the build.gluster.org system)
13:45 shyam Ex: http://nbslave7c.cloud.gluster.org//archives/logs/glusterfs-logs-20150618205617.tgz
13:53 kanagaraj joined #gluster-dev
13:59 shaunm joined #gluster-dev
13:59 soumya joined #gluster-dev
14:03 shyam Used ssh access on the nbslaves to get the logs, so ignore ^^^
14:20 atinm joined #gluster-dev
14:23 anekkunt joined #gluster-dev
14:44 shaunm joined #gluster-dev
14:51 krink joined #gluster-dev
15:01 hagarth joined #gluster-dev
15:11 hchiramm joined #gluster-dev
15:18 ira joined #gluster-dev
15:22 shyam joined #gluster-dev
15:23 asengupt joined #gluster-dev
15:45 gsaadi joined #gluster-dev
15:56 ws2k3 left #gluster-dev
16:21 shyam joined #gluster-dev
16:38 lkoranda joined #gluster-dev
17:04 lkoranda joined #gluster-dev
17:10 lkoranda joined #gluster-dev
17:21 shyam joined #gluster-dev
17:30 pppp joined #gluster-dev
17:55 Gaurav__ joined #gluster-dev
17:59 shyam joined #gluster-dev
18:04 shyam joined #gluster-dev
18:08 ashiq joined #gluster-dev
18:16 dlambrig joined #gluster-dev
18:17 ashiq joined #gluster-dev
18:19 jiffin joined #gluster-dev
18:23 ashiq could anyone look into http://review.gluster.org/#/c/11223/ and merge it
18:31 ndk joined #gluster-dev
18:38 dlambrig joined #gluster-dev
19:51 jrm16020 joined #gluster-dev
