IRC log for #gluster-dev, 2015-08-31

All times shown according to UTC.

Time Nick Message
01:03 jrm16020 joined #gluster-dev
01:10 jrm16020 joined #gluster-dev
01:19 EinstCrazy joined #gluster-dev
01:20 zhangjn joined #gluster-dev
01:33 ashiq joined #gluster-dev
01:37 badone_ joined #gluster-dev
01:41 zhangjn joined #gluster-dev
02:06 overclk joined #gluster-dev
02:09 sakshi joined #gluster-dev
02:20 overclk_ joined #gluster-dev
02:28 overclk joined #gluster-dev
03:00 kotreshhr joined #gluster-dev
03:15 vmallika joined #gluster-dev
03:29 gem joined #gluster-dev
03:36 ppai joined #gluster-dev
03:50 anekkunt joined #gluster-dev
03:57 kanagaraj joined #gluster-dev
03:57 shubhendu joined #gluster-dev
03:59 Byreddy joined #gluster-dev
04:02 sakshi joined #gluster-dev
04:04 itisravi joined #gluster-dev
04:17 pranithk joined #gluster-dev
04:18 kshlm joined #gluster-dev
04:19 pranithk kshlm: Did you get a chance to see the comments on http://review.gluster.com/11872
04:21 kdhananjay joined #gluster-dev
04:22 pranithk kdhananjay: I completed my reviews and merged the patches, I did one resubmit of small change for set_root_fsid patch. Did you get a chance to review?
04:22 kdhananjay pranithk: Yeah I saw that.
04:23 kdhananjay pranithk: Looked OK to me.
04:23 pranithk kdhananjay: cool
04:23 pranithk kdhananjay: That is the only patch left for 3.7 right?
04:23 pranithk kshlm: when is 3.7.4 release?
04:23 kdhananjay pranithk: Yep. I will backport it right now.
04:24 pranithk kdhananjay: Cool.
04:24 pranithk kdhananjay: You are gonna work on the perf bug after this?
04:24 kdhananjay pranithk: Thanks a lot for the reviews.
04:24 kdhananjay pranithk++
04:24 glusterbot kdhananjay: pranithk's karma is now 28
04:24 kshlm pranithk, later today.
04:24 kdhananjay pranithk: Just in time for 3.7.4 I guess. :)
04:24 kshlm Sometime in the evening.
04:24 pranithk kshlm: cool, thanks!
04:25 pranithk kshlm: Please do let us know what you think about http://review.gluster.com/#/c/11872
04:25 pranithk kdhananjay: yes :-). Hope regressions pass :-)
04:26 kshlm pranithk, Sure.
04:26 atinm joined #gluster-dev
04:27 pranithk atinm: You also give your comments about the change in solution for http://review.gluster.com/11872...
04:27 anekkunt kshlm, I have resent the patch (http://review.gluster.org/#/c/11989/ ) as we discussed, could you please review it?
04:28 atinm pranithk, I need to go through your comments first :)
04:28 kshlm pranithk, Please use review.gluster.org instead of review.gluster.com . The .com addresses were supposed to be redirects to help transition to .org addresses.
04:28 kshlm They should have been removed a long time back, but haven't been.
04:28 pranithk kshlm: ah!
04:29 pranithk atinm: cool!
04:37 poornimag joined #gluster-dev
04:37 zhangjn joined #gluster-dev
04:37 vmallika joined #gluster-dev
04:42 kdhananjay pranithk_afk: http://review.gluster.org/#/c/12052/
04:44 ndarshan joined #gluster-dev
04:47 kotreshhr joined #gluster-dev
04:47 atinm pranithk_afk, there?
04:49 deepakcs joined #gluster-dev
04:49 pranithk atinm: back
04:49 pranithk atinm: tell me
04:50 atinm pranithk, I've a follow up question to your comment
04:50 mchangir joined #gluster-dev
04:50 overclk joined #gluster-dev
04:50 atinm pranithk, Let me reply to that and then we can discuss
04:50 pranithk atinm: cool
04:51 pranithk kdhananjay: gave +2, let's hope the regression passes in time
04:51 kdhananjay pranithk: Yeah. Thanks.
04:53 skoduri joined #gluster-dev
04:55 hgowtham joined #gluster-dev
04:56 pranithk atinm: Why should the value be shown in volume info output?
04:56 pranithk atinm: because it is a default and we don't show defaults...?
04:56 atinm pranithk, that's what I understood from the requirement and the patch
04:57 atinm pranithk, the current form of the patch should show it, isn't it?
04:57 atinm pranithk, if its a default option then volume info should show it
04:57 pranithk atinm: It shows, but it should not
04:57 pranithk atinm: Okay let me tell you the problem
04:57 atinm pranithk, eg : readdir-ahead
04:58 pranithk atinm: For 3 way replication afr needs to enable auto quorum and for 2 way replication we need 'none' quorum
04:58 atinm pranithk, we enable it by default
04:58 pranithk atinm: thats the only requirement.
04:58 pranithk atinm: We did it directly in afr code. But the problem is glusterd doesn't know about the default change. So volume get shows 'none'. This is the reason we had to rethink the solution.
04:59 atinm pranithk, as per glusterd, if we change any value in vme table then volume info should capture it
05:00 pranithk atinm: you mean .value?
05:00 atinm pranithk, .value is the default one
05:01 pranithk atinm: yes, that is the problem. default is different based on context of the afr volume. The present mechanism of vme table doesn't allow for context based defaults. So there is no single .value for the option. It is dependent completely on the replica count
05:02 pranithk atinm: hence the proposal to implement context based defaults feature
05:03 atinm pranithk, actually we do have a significance of '!' symbol, if we use it then we can handle those options in an exceptional way
05:03 pranithk atinm: Am I making sense?
05:03 atinm pranithk, that might help here?
05:03 pranithk atinm: thinking
05:04 atinm pranithk, we just need to handle this option in a special way and set it dynamically during volfile generation depending on replica count value
05:04 atinm pranithk, IIRC, afr does have some special handlers for heal related options?
05:04 pranithk atinm: How will volume get know about this?
05:04 pranithk atinm: yes, afr does
05:05 atinm pranithk, how does volume get behave for those options?
05:05 pranithk atinm: no idea
05:05 pranithk atinm: let me check
05:05 nishanth joined #gluster-dev
05:06 atinm pranithk, even I have forgotten that piece of code
05:06 kotreshhr left #gluster-dev
05:06 atinm pranithk, but yes if we are not supposed to show this option in volume info then the idea makes sense to me
05:07 kshlm joined #gluster-dev
05:08 pranithk atinm: nope, there is no need to show it in volume info.
05:11 Bhaskarakiran joined #gluster-dev
05:13 aravindavk joined #gluster-dev
05:14 nbalacha joined #gluster-dev
05:17 pranithk atinm: it assigns vme->value if the value is not in dictionary otherwise it loads the .so and gets the value
05:18 pranithk atinm: as per glusterd_get_default_val_for_volopt
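The lookup order pranithk describes can be sketched roughly as follows. This is an illustrative Python model, not the actual glusterd code: the real glusterd_get_default_val_for_volopt is C, and for special options it can also load the xlator .so to query the default.

```python
# Hypothetical vme-table entry with a static .value default
VME_DEFAULTS = {"cluster.quorum-type": "none"}

def get_default_val_for_volopt(configured, key):
    # A value the user set in the volume's dict wins;
    # otherwise fall back to the static per-option default.
    if key in configured:
        return configured[key]
    return VME_DEFAULTS[key]
```

This static fallback is exactly the limitation under discussion: the table holds one default per option, with no way to vary it by volume context.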
05:18 skoduri joined #gluster-dev
05:20 atinm pranithk, in that case, even without any change it should display the correct value if afr sets it, shouldn't it?
05:20 atinm pranithk, I mean volume get api
05:21 pranithk atinm: Yes, it will, if afr sets it. But we don't want to do that....
05:22 atinm pranithk, now I am getting confused :(
05:22 pranithk atinm: :-)
05:22 atinm <pranithk> atinm: We did it directly in afr code. But the problem is glusterd doesn't know about the default change. So volume get shows 'none'. This is the reason we had to rethink the solution.
05:23 atinm pranithk, my point is how volume get is showing none here if afr sets it
05:24 pranithk atinm: afr changes the priv->quorum, when the volfile is loaded, but not the value of the options table. I don't think we should change it, because glusterd won't call init on the xlator, so afr can't really change that value
05:26 Manikandan joined #gluster-dev
05:27 asengupt joined #gluster-dev
05:29 itisravi pranithk: Trying to understand your solution. Are you saying that we don't write the value to a store, but just like client quorum type is calculated at run time by AFR, glusterd also needs a function (that gets volinfo as argument) which gets called during a 'volume get' command and spews out this value at run time?
05:31 pranithk itisravi: yes. Basically at the moment, the default value is static per option. There is no provision for choosing context based default till now. I think we need such a thing.
05:32 itisravi pranithk: Hmm.
05:33 pranithk itisravi: There is a way to do whatever we want to do for the volgen by giving "!" at the beginning of the .option value. But the problem with that approach is it is not granular enough. For example, cluster.self-heal-daemon option adds data/metadata/entry self-heal to "on" as part of handling that option.
05:33 hchiramm joined #gluster-dev
05:34 itisravi pranithk: right
05:34 kshlm joined #gluster-dev
05:37 jiffin joined #gluster-dev
05:40 pranithk itisravi: atinm: The main problem with the present solution is that 1) it remembers the option in the volinfo as if the user configured it and it will be remembered in the store. 2) solution is not generic enough that new options which want to have defaults based on the volume context can pick and choose.
05:41 krishnan_p joined #gluster-dev
05:41 pranithk krishnan_p: good you also joined in. we need your inputs also for http://review.gluster.org/11872
05:42 krishnan_p pranithk, looking at the patch just now ...
05:43 pranithk krishnan_p: So the problem is quorum needs default as none for 2 way or even-number replication where as it needs auto for 3-way or odd-number replication
05:43 krishnan_p pranithk, OK
05:44 kshlm joined #gluster-dev
05:45 pranithk krishnan_p: the present solution is to add this option in volinfo->dict with none/auto based on the replica-count. It will also be stored in the glusterd-store. This solution works. But traditionally we are not storing defaults in volinfo->dict. I am proposing that we enhance the existing vme defaults handling to add context based defaults
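The context-based default being proposed boils down to computing the default from the replica count instead of reading a single static .value. A minimal sketch of the rule stated above, in Python for brevity (the helper name is hypothetical, not real glusterd API):

```python
def context_default_quorum(replica_count):
    # 'auto' for 3-way/odd-number replication,
    # 'none' for 2-way/even-number replication,
    # per the rule pranithk describes above.
    return "auto" if replica_count % 2 == 1 else "none"
```

Because the value is derived on demand, nothing needs to be written to volinfo->dict or the glusterd store, which is the point of the proposal.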
05:45 itisravi pranithk: If the semantics is that whatever user did not set (and hence the 'default' value) should not be present in /var/lib/glusterd/vols/volname/info, I guess it makes sense.
05:45 krishnan_p pranithk, and what makes a context?
05:45 Gaurav__ joined #gluster-dev
05:45 pranithk krishnan_p: At least for now quorum option for afr, it is replica-count
05:46 krishnan_p pranithk, could you propose a way to do this? I will help in reviewing the proposal.
05:46 pranithk krishnan_p: I already did. You can find my comments at the end in the comments for that patch.
05:49 hagarth joined #gluster-dev
05:51 pppp joined #gluster-dev
05:58 raghu joined #gluster-dev
06:00 anekkunt joined #gluster-dev
06:01 hgowtham itisravi++
06:01 glusterbot hgowtham: itisravi's karma is now 12
06:02 poornimag joined #gluster-dev
06:04 Saravana_ joined #gluster-dev
06:07 vmallika joined #gluster-dev
06:10 hagarth joined #gluster-dev
06:10 kshlm pranithk, We have been storing defaults in volinfo->dict for a little while now. This hasn't been implemented for all options yet.
06:11 kshlm pranithk, I see no problem in storing defaults in volinfo->dict.
06:11 kshlm We started storing it in volinfo->dict to improve visibilty of default values.
06:12 Gaurav__ hagarth: http://review.gluster.org/#/c/12050/
06:13 pranithk kshlm: Won't they show up in the volume info?
06:13 kshlm Not really a problem.
06:13 kshlm IMO.
06:13 pranithk kshlm: it isn't consistent is it? some defaults are shown, some aren't
06:15 pranithk kshlm: Volume get is a nice implementation which shows you the options considering the defaults. But why show them in volume info?
06:15 pranithk kshlm: which is traditionally only showing options that are configured?
06:15 pranithk kshlm: by user I mean.
06:16 kshlm Okay.
06:16 kshlm Volume get is more recent compared to the earlier default options change I'd done.
06:17 pranithk kshlm: I am not saying the existing options also need to change to not show in volume info if they are default. I would prefer if they don't appear in volume info if it is possible.
06:17 pranithk kshlm: I know and it is a good direction IMO
06:17 kshlm When that change was done, we needed a way to show the default options, so we decided to add it to volinfo it self.
06:17 kshlm I agree with your argument now.
06:17 pranithk kshlm: makes sense. Now that we have volume get, I am against showing default options in "volume info"
06:18 pranithk kshlm: if possible that is...
06:19 kshlm I've not read your complete solution yet. I'll go through the review and the channel scrollback, and I'll get back to you if I have anything to add.
06:22 pranithk kshlm: krishnan_p|afk: atinm: cool, context based defaults seems like a natural progression to the existing infra already in glusterd volume options implementation. That is why I need everyone of your inputs to make sure we are going in the right direction.
06:24 atalur joined #gluster-dev
06:26 krishnan_p pranithk, if you have a proposal on how to do context based defaults, please send in gluster-devel.
06:27 krishnan_p pranithk, at the moment, only afr needs such a behavior. We should take this requirement into 4.0 for sure. For 3.x, we should spend effort where it's absolutely needed. Does that make sense?
06:29 a2__ joined #gluster-dev
06:29 pranithk krishnan_p: lets first decide on the solution and see if it makes sense. Then we can decide which release and all that. It is a simple solution, may be 1 to 2 days work and atalur will be implementing it.
06:30 krishnan_p pranithk, Like I already said, if you have a proposal send it to gluster-devel _before_ implementing it.
06:31 atinm hagarth, http://review.gluster.org/#/c/12050/ has passed both the regression
06:32 pranithk krishnan_p: yes I am already composing it.
06:32 krishnan_p pranithk, great
06:33 hagarth atinm: merged
06:33 atinm hagarth, thanks
06:33 * hagarth now rebases release-3.7 patches
06:36 a2 joined #gluster-dev
06:36 atinm hagarth, I am wondering whether this fix is sufficient, as I see other spurious tests have not been made part of bad_tests() in 3.7
06:37 hagarth atinm: I think we should just rebase run-tests.sh from mainline to release-3.7
06:37 atinm hagarth, yes
06:37 atinm Gaurav__, would you be able to send a quick patch for this?
06:38 Gaurav__ atinm, ya
06:38 Gaurav__ atinm, sure
06:39 hagarth atinm, Gaurav__: once done, let us merge without awaiting regression runs. a smoke run should be sufficient for this.
06:39 Gaurav__ hagarth, fine
06:39 hagarth i want to pick up as many relevant backports before we push out 3.7.4 today
06:39 atinm hagarth, yes, that makes sense
06:45 ndevos hagarth, kshlm, atinm, krishnan_p, overclk, pranithk: could you review http://review.gluster.org/11769 please?
06:45 ndevos thats the "remove inline" patch from jdarcy :)
06:45 overclk ndevos: already looking into it
06:45 zhangjn joined #gluster-dev
06:45 ndevos overclk: thanks!
06:46 * atinm will take some time to get to it as he needs to clear his back logs :)
06:47 pranithk krishnan_p: done.
06:47 ndevos atinm: you can also only review the glusterd changes ;-)
06:47 sakshi joined #gluster-dev
06:48 atinm ndevos, sure, I will do that :)
06:49 kshlm hagarth, Shouldn't we (re)announce the shutdown of forge.gluster.org on the mailing lists? We had just mentioned it in the weekly meeting last time.
06:50 krishnan_p pranithk, In the middle of things. I will look at it later. Thanks for sending the proposal
06:50 ndevos kshlm: oh, yes, and we need to add it to the "Gluster Weekly News" too (and find someone to do the blog post)
06:51 kshlm ndevos, I can volunteer to do the blog post this week.
06:52 kshlm It's enough I do it on my blog, isn't it?
06:52 pranithk krishnan_p: cool, thanks.
06:52 ndevos kshlm++ great, yeah, your blog is fine as long as it lands on planet.gluster.org
06:52 glusterbot ndevos: kshlm's karma is now 34
06:52 kshlm pranithk, Thanks for sending the proposal. I'll also be looking at it later.
06:53 kshlm ndevos, when is it done generally? Any specific day of the week?
06:53 pranithk nbalacha: what is '-k' in badh?
06:53 pranithk bash
06:53 ndevos kshlm: early this week, the notes from last week should be on https://public.pad.fsfe.org/p/gluster-weekly-news
06:53 nbalacha sticky
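For reference, `-k` here is bash's test operator: `[ -k FILE ]` is true when FILE has the sticky bit set (as on /tmp). The same check expressed in Python:

```python
import os
import stat
import tempfile

def has_sticky_bit(path):
    # Equivalent of bash's `test -k path`
    return bool(os.stat(path).st_mode & stat.S_ISVTX)

# Example: a /tmp-style directory, world-writable with the sticky bit
with tempfile.TemporaryDirectory() as d:
    os.chmod(d, 0o1777)
    sticky = has_sticky_bit(d)
```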
06:54 ndevos hmm, I dont see any notes from last week...
06:55 kshlm I was just about to ask about the same.
06:55 Gaurav__ hagarth, atinm, http://review.gluster.org/12056 will rebase all bad test from mainline to release-3.7
06:56 ndevos Gaurav__: I think each bad_test should have a bug filed, maybe those should get cloned to 3.7 too?
06:57 atinm itisravi, IIRC, arbiter-statfs.t was failing in NetBSD due to G_LOG issue, considering its been fixed should we remove this entry from bad_tests () ?
06:57 itisravi atinm: We could, and if it still fails, I'll look into it.
06:57 Gaurav__ ndevos, currently I have created a new bug which describes rebasing all bad tests from mainline to release-3.7
06:58 Gaurav__ instead of cloning all separately
06:58 ndevos Gaurav__: right, and it would be good to have the bugs for each bad_test depend on that one
06:59 itisravi atinm: should I send  a patch to revert it?
06:59 Gaurav__ ndevos, yes
07:00 atinm itisravi, yes in mainline, and Gaurav__ can remove this entry in 12056?
07:01 Gaurav__ atinm, i am not getting you.
07:02 josferna joined #gluster-dev
07:03 Gaurav__ itisravi, atinm, i am removing arbiter-statfs.t from 3.7
07:03 itisravi Gaurav__: cool.
07:03 Gaurav__ itisravi, you can send the patch to revert it from mainline
07:04 itisravi Gaurav__: On it.
07:09 Manikandan joined #gluster-dev
07:10 krishnan_p ndevos, I am faced with a problem that might interest you. I have a ubiquitously used object (say volinfo) that wasn't initially designed with refcounting. How does one change existing consumers to responsibly ref/unref the object? Do you have any ideas?
07:12 krishnan_p ndevos, Currently, I only have the brute-force approach, i.e, insert ref and unref at all possible codepaths that use volinfo more liberally now :(
07:13 ndevos krishnan_p: yeah, thats more or less the approach I took for the structures I converted, fortunately there were only few functions that needed modifications
07:14 ndevos krishnan_p: I *think* you can add a watch to the object and dump a stacktrace when the object is accessed, but that may be more difficult than auditing the code
07:15 krishnan_p ndevos, git-grep is a more accessible solution to identifying all consumers.
07:15 krishnan_p ndevos, I am inserting ref/unref in well-known util functions. This one is easy and seems tractable
07:16 ndevos krishnan_p: yes, that sounds like the easiest approach
07:16 krishnan_p ndevos, I would like to have a code structuring pattern that shakes existing consumers into realising that the object is now ref-counted.
07:17 krishnan_p ndevos, something like how pthread_mutex_{lock, unlock} enclosing the critical-region using '{' and '}' seems to instill little more discipline than otherwise.
07:18 ndevos krishnan_p: hmm, that would be nice, but that could make it difficult for *_init() and *_free() functions, single function GF_REF/PUT would work though
07:19 atalur joined #gluster-dev
07:20 krishnan_p ndevos, wouldn't they be analogous to pthread_mutex_init and destroy. What I am thinking is more synctactic presentation, not compile-time or run-time safety.
07:20 krishnan_p s/synctactic/syntactic
07:20 ndevos krishnan_p: yes, and I think GF_REF(...); { ... } GF_PUT(); would be nice
07:21 krishnan_p ndevos, OK. Let me see how it looks when I complete that patch. Meanwhile, http://review.gluster.org/12058 makes volinfo use GF_REF_* instead of hand-made refcounting.
07:22 ndevos krishnan_p: great, when you have something to show, add me to the reviewers :)
07:23 krishnan_p ndevos, of course. I was worried the size and spread of that patch may be off-putting, so I decided to have this discussion rather than throw a huge patch at you (and others) :)
07:25 ndevos krishnan_p: I dont like big patches, but will (try to) review them if they are for a good cause!
07:26 krishnan_p ndevos, you could complain about the size with your preference on how they could be chunked, while I will try to split them consumer-wise say
07:28 ndevos krishnan_p: I surely do that, but not sure if it would make sense for refcounting changes
07:28 krishnan_p ndevos, if you don't like the patch for good reasons, size or otherwise, I will address them :)
07:28 krishnan_p ndevos, Hmm. why do you think so?
07:28 krishnan_p ndevos, today none of the consumers are ref-count compliant. With incremental patches addressing one 'kind' of consumer at a time, we fix the issue incrementally.
07:29 krishnan_p ndevos, would that be OK?
07:29 ndevos krishnan_p: oh, in that case it would be possible to split, I thought the refcounting was going to be replaced
07:30 krishnan_p ndevos, OK.
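The ref/unref discipline discussed above (take a ref, do work, unref; destroy on the last unref) can be modeled as follows. This is an illustrative Python sketch, not the GF_REF_* C macros themselves:

```python
import threading

class RefCounted:
    """Toy model of a ref-counted object such as volinfo."""

    def __init__(self):
        self._refs = 1                 # the creator holds the first ref
        self._lock = threading.Lock()
        self.destroyed = False

    def ref(self):
        # Take a reference before using the object (cf. GF_REF_GET)
        with self._lock:
            self._refs += 1
        return self

    def unref(self):
        # Release a reference; the last one triggers destruction
        # (cf. GF_REF_PUT)
        with self._lock:
            self._refs -= 1
            if self._refs == 0:
                self.destroyed = True  # stand-in for the destructor
```

Every code path that stores or passes the object must pair a ref with an unref, which is why converting an existing, non-ref-counted consumer base requires auditing all of them.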
07:31 Manikandan joined #gluster-dev
07:40 baojg joined #gluster-dev
07:41 pranithk raghu: tests/bugs/snapshot/bug-1109889.t is failing regression, you know anything about it?
07:43 raghu pranithk: I am looking into it. I have not completely root caused it. But it seems, requests are coming before the brick process is properly initialized.
07:43 pranithk raghu: Cool, I retriggered the regression run
07:43 raghu pranithk: I think it has been added to the bad tests
07:44 pranithk raghu: not in 3.7 I guess
07:44 raghu pranithk: ahh. ok. probably it has to be added there as well.
07:45 raghu pranithk: I mean 3.7 branch
07:46 pranithk raghu: got it :-)
07:52 skoduri joined #gluster-dev
08:02 skoduri joined #gluster-dev
08:05 josferna joined #gluster-dev
08:08 hchiramm skoduri++ thanks !
08:08 glusterbot hchiramm: skoduri's karma is now 3
08:10 zhangjn joined #gluster-dev
08:13 overclk joined #gluster-dev
08:20 sakshi joined #gluster-dev
08:22 josferna joined #gluster-dev
08:24 ndevos rastar, hchiramm: what do you think of letting Jenkins generate html contents of the glusterfs-specs repository when we post a patch?
08:28 rastar ndevos: that would be good
08:29 rastar ndevos: I was actually checking if any gerrit plugin is available to render markdown files for diff
08:29 rastar ndevos: could not find any
08:29 ndevos rastar: oh, yeah, that would be nice too
08:31 hchiramm ndevos, yep, that would be good
08:34 ndevos hchiramm: what tools are used to generate the readthedocs html?
08:35 hchiramm its their hosted platform
08:35 hchiramm and we use mkdocs theme
08:36 hchiramm we can render glusterfs-specs the same way though
08:37 atalur joined #gluster-dev
08:41 jiffin ndevos: nbslave7i.cloud.gluster.org was assigned to me for debugging mount-nfs-auth.t, can you please change that if it is not?
08:41 _Bryan_ joined #gluster-dev
08:42 ndevos jiffin: do you still need that, or should I unreserve it?
08:43 jiffin ndevos: u can unreserve it
08:44 jiffin ndevos: i am not using that machine anymore
08:44 ndevos jiffin: okay, doing so now, thanks
08:45 jiffin ndevos: thanks
08:46 jiffin ndevos++
08:46 glusterbot jiffin: ndevos's karma is now 195
08:57 rastar hchiramm: pm
08:57 hchiramm sure
09:02 skoduri joined #gluster-dev
09:05 kaushal_ joined #gluster-dev
09:09 rastar ndevos: kaushal_ possible candidate plugin for our gerrit?
09:09 rastar https://gerrit-review.googlesource.com/#/admin/projects/plugins/reviewers
09:10 ndevos rastar: yeah, if it could read the MAINTAINERS file that would be awesome :D
09:11 ndevos oh, and that requires us to have a correct and updated MAINTAINERS file <- hagarth
09:11 hagarth ndevos: my last update is still lying orphaned on gerrit.. needs some review and regression attention
09:12 ndevos hchiramm: you know the commands to generate html from markdown? can you create a Jenkins job for that (or find someone)?
09:13 ndevos hagarth: ah, that would be http://review.gluster.org/11330 ?
09:15 hagarth ndevos: damn, rebase fails now
09:15 hagarth will resubmit
09:15 * ndevos tried that too
09:15 ndevos hagarth: if you dont add reviewers, you may need to wait pretty long for a +1 ;-)
09:16 hagarth ndevos: in my experience adding reviewers hasn't helped too
09:16 rastar hchiramm: merge request for http://review.gluster.org/#/c/11969/9
09:16 hchiramm rastar, checking
09:16 * hagarth wonders about nuking all changes > 1m old without any review attention
09:16 ndevos hagarth: it helps me if you add me, I'm not browsing too much random changes
09:16 ndevos s/much/many/
09:17 hagarth ndevos: ok, noted
09:18 pppp joined #gluster-dev
09:19 hchiramm rastar, wil merge it soon
09:19 atinm joined #gluster-dev
09:19 krishnan_p joined #gluster-dev
09:19 rastar hchiramm: thanks!
09:19 ndevos hagarth: I try to keep an eye my review queue, but no guarantees if you get in there - http://review.gluster.org/#/q/is:open+reviewer:ndevos+AND+NOT+owner:ndevos
09:20 Gaurav__ joined #gluster-dev
09:20 hagarth ndevos: ok, best effort is what it looks like :)
09:21 ndevos hagarth: yes, or just ping me :)
09:21 byreddy_ joined #gluster-dev
09:21 hagarth ndevos: of course :)
09:22 ndevos krishnan_p: did you not want to have GF_REF() to return the pointer to the object, or NULL?
09:23 zhangjn joined #gluster-dev
09:27 itisravi atinm: we have a core for tests/bugs/glusterd/bug-948686.t  @ https://build.gluster.org/job/rackspace-regression-2GB-triggered/13859/consoleFull
09:28 hchiramm rastar, its merged.. Isnt it
09:29 rastar hchiramm: yes, thanks
09:29 rastar hchiramm++
09:29 glusterbot rastar: hchiramm's karma is now 63
09:32 hagarth we have a core for ./tests/bugs/distribute/bug-1063230.t too - https://build.gluster.org/job/rackspace-regression-2GB-triggered/13853/consoleFull
09:32 hagarth nbalacha, shyam ^^
09:33 nbalacha hagarth, will take a look
09:33 atinm itisravi, that's known
09:33 atinm itisravi, we are working on it to solve this :)
09:33 hagarth atinm: what triggers the core?
09:34 itisravi atinm: ah, should we add it to bad tests then? :)
09:34 atinm hagarth, I will get back to you in few minutes, in a meeting now
09:34 hagarth atinm: ok
09:38 baojg joined #gluster-dev
09:39 zhangjn joined #gluster-dev
09:41 overclk joined #gluster-dev
09:41 hagarth ndevos: rebased
09:43 spalai joined #gluster-dev
09:47 Manikandan raghu++, thanks for merging the patches :)
09:47 glusterbot Manikandan: raghu's karma is now 2
09:48 kdhananjay Not sure who maintains debug/trace. hagarth, could you maybe review http://review.gluster.org/#/c/12053/ ?
09:52 Bhaskarakiran joined #gluster-dev
10:02 poornimag joined #gluster-dev
10:02 hchiramm ndevos, pandoc -t html -o output.html input1.md input2.md
10:02 hchiramm it will do
10:04 pppp joined #gluster-dev
10:09 krishnan_p ndevos, thanks for reviewing the patch. I have posted my comments.
10:10 ndevos krishnan_p: ah, cool, I'll see them on a next pass :)
10:12 nbalacha joined #gluster-dev
10:13 krishnan_p joined #gluster-dev
10:13 hagarth kdhananjay: merged before seeing this message. thanks for that!
10:14 nbalacha atin, the crash occurs in glusterfs_rebalance_event_notify_cbk for  https://build.gluster.org/job/rackspace-regression-2GB-triggered/13853/consoleFull
10:14 nbalacha atinm, known issue?
10:15 kdhananjay hagarth: Oops! OK. Thanks. :)
10:18 hagarth nbalacha: would be helpful to have a root cause soon. It can be a potential blocker for 3.7.4 if it happens easily.
10:25 nbalacha joined #gluster-dev
10:27 kaushal_ joined #gluster-dev
10:27 atinm joined #gluster-dev
10:28 Gaurav__ joined #gluster-dev
10:30 vipulnayyar joined #gluster-dev
10:30 dlambrig joined #gluster-dev
10:32 Bhaskarakiran_ joined #gluster-dev
10:45 shubhendu joined #gluster-dev
10:47 nishanth joined #gluster-dev
10:55 ndarshan joined #gluster-dev
10:58 firemanxbr joined #gluster-dev
10:59 atinm hagarth, there?
10:59 atinm hagarth, this is related to the core what we see in one of the glusterd test file
11:00 atinm hagarth, since our volume list is still not URCU protected, we might very well end up in a case where one thread is operating on a stale volume while another thread receives a start on the same volume, which could result in a crash
11:01 atinm hagarth, the similar issue we saw in volume-snapshot.t where we simplified the tests to remove unwanted glusterd restarts
11:02 hagarth atinm: yes
11:02 hagarth atinm: how about doing that for this test unit too?
11:02 atinm hagarth, krishnan_p was trying to come up with a solution for this, till then let me see whether we can simplify it or not
11:03 hagarth atinm: sure, as per our new policy we might block all commits on glusterd till this is fixed ;)
11:03 atinm hagarth, agreed, no one should be exception here :)
11:04 jrm16020 joined #gluster-dev
11:05 dlambrig joined #gluster-dev
11:13 krishnan_p hagarth, that should motivate more contributors to glusterd :)
11:14 hagarth krishnan_p: absolutely!
11:14 jrm16020_ joined #gluster-dev
11:23 shubhendu joined #gluster-dev
11:23 nishanth joined #gluster-dev
11:24 atinm hagarth, it seems like we can't simplify the test as the basic intention of the bug was to check whether sync works when the glusterd instance on another node comes back
11:24 ndarshan joined #gluster-dev
11:26 krishnan_p hagarth, the fix is to make the volinfo object ref-counted. Since it wasn't designed that way, there are a lot of places where we need to safely access it (i.e, take a ref; do work; unref).
11:26 hagarth atinm: ok
11:28 hagarth krishnan_p: right, that will be involved. till we do it should we mark this as a bad test and use it before failing a run due to a core?
11:30 krishnan_p hagarth, makes sense to me.
11:40 jrm16020 joined #gluster-dev
11:59 itisravi kaushal_: would you take in a 3.7 backport that has passed linux but not netbsd regressions? Patch in question: http://review.gluster.org/#/c/11985/
12:00 byreddy_ joined #gluster-dev
12:00 itisravi FWIW, there has been no change in the various revisions of the patch- just rebased it multiple times with the hope that regression would pass.
12:01 kaushal_ itisravi, Sorry. We need both regressions to pass.
12:01 itisravi kaushal_: hmm alright.
12:01 kaushal_ Unless, there has been a discussion around this, and we've decided to accept changes, that I wasn't aware of.
12:02 ppai_ joined #gluster-dev
12:04 kkeithley1 joined #gluster-dev
12:11 nbalacha joined #gluster-dev
12:14 rgustafs joined #gluster-dev
12:16 dlambrig joined #gluster-dev
12:18 poornimag joined #gluster-dev
12:18 rgustafs joined #gluster-dev
12:26 ira joined #gluster-dev
12:37 kanagaraj joined #gluster-dev
12:44 kdhananjay joined #gluster-dev
12:46 shyam joined #gluster-dev
12:47 shubhendu joined #gluster-dev
12:51 nishanth joined #gluster-dev
12:56 jrm16020 joined #gluster-dev
13:12 overclk joined #gluster-dev
13:21 overclk_ joined #gluster-dev
13:28 shaunm joined #gluster-dev
13:32 kbyrne joined #gluster-dev
13:59 zhangjn joined #gluster-dev
14:00 zhangjn joined #gluster-dev
14:01 hagarth joined #gluster-dev
14:03 shubhendu joined #gluster-dev
14:06 rgustafs joined #gluster-dev
14:08 overclk joined #gluster-dev
14:08 aravindavk joined #gluster-dev
14:09 nbalacha joined #gluster-dev
14:12 asengupt joined #gluster-dev
14:17 zhangjn joined #gluster-dev
14:21 ira__ joined #gluster-dev
14:34 asengupt joined #gluster-dev
14:49 overclk joined #gluster-dev
14:49 zhangjn joined #gluster-dev
14:50 spalai joined #gluster-dev
14:57 nishanth joined #gluster-dev
15:01 wushudoin joined #gluster-dev
15:05 kshlm joined #gluster-dev
15:06 atinm joined #gluster-dev
15:07 kshlm hagarth, Could you merge this https://review.gluster.org/12059 ?
15:08 kshlm Kruthika asked for it. Pranith has given a +2 already.
15:08 hagarth kshlm: done, how about the cli fix from vijaikumar?
15:08 kshlm I'll merge its 3.7 backport once it's merged
15:08 kshlm Just merged onto 3.7
15:08 kshlm hagarth, thanks
15:09 spalai left #gluster-dev
15:09 hagarth kshlm: I am also waiting for regressions to pass on one of my fuse logging patches. it already has a +2 from raghu.
15:10 kshlm hagarth, Still 20 minutes to deadline.
15:14 asengupt joined #gluster-dev
15:15 atinmu joined #gluster-dev
15:23 atinmu joined #gluster-dev
15:25 hagarth kshlm: it will most probably miss, got affected by the spurious regression failures on release-3.7 and I just merged Gaurav's patch a few minutes back.
15:26 hagarth foster: ping, do you per chance happen to know what became of Eric's patch in this thread - https://lkml.org/lkml/2013/12/10/765 ?
15:30 cholcombe joined #gluster-dev
15:31 kshlm hagarth, Okay. Thanks for the update.
15:37 cristov joined #gluster-dev
16:02 nbalacha joined #gluster-dev
16:04 kanagaraj joined #gluster-dev
16:08 jrm16020 joined #gluster-dev
16:17 overclk joined #gluster-dev
16:26 a2__ joined #gluster-dev
16:32 a2 joined #gluster-dev
16:36 atalur joined #gluster-dev
16:49 a2__ joined #gluster-dev
16:54 a2 joined #gluster-dev
17:01 cholcombe joined #gluster-dev
17:13 Gaurav__ joined #gluster-dev
17:14 overclk joined #gluster-dev
17:59 nishanth joined #gluster-dev
17:59 atalur joined #gluster-dev
18:06 shaunm joined #gluster-dev
18:16 kanagaraj joined #gluster-dev
18:27 jrm16020 joined #gluster-dev
18:46 Gaurav__ joined #gluster-dev
19:33 jdarcy joined #gluster-dev
19:39 Gaurav__ joined #gluster-dev
20:06 dlambrig joined #gluster-dev
20:09 RedW joined #gluster-dev
20:45 badone_ joined #gluster-dev
20:58 foster hagarth: lost in time, iirc that patch had a regression that caused it to be reverted and there was never another solution identified
21:04 badone_ joined #gluster-dev
21:56 badone__ joined #gluster-dev
22:53 jobewan joined #gluster-dev
23:17 mjrosenb hrmm, this channel seems to be very active from 0200 to 0500 EST :-/