
IRC log for #gluster-dev, 2015-06-25


All times shown according to UTC.

Time Nick Message
00:20 hagarth joined #gluster-dev
01:07 shyam joined #gluster-dev
02:28 kdhananjay joined #gluster-dev
02:40 hagarth joined #gluster-dev
03:18 atinm joined #gluster-dev
04:06 shubhendu joined #gluster-dev
04:07 pranithk joined #gluster-dev
04:13 sakshi joined #gluster-dev
04:20 gem joined #gluster-dev
04:27 rjoseph joined #gluster-dev
04:43 vimal joined #gluster-dev
04:44 nbalacha joined #gluster-dev
04:45 raghu joined #gluster-dev
04:56 pppp joined #gluster-dev
05:08 anrao joined #gluster-dev
05:11 ndarshan joined #gluster-dev
05:11 hgowtham joined #gluster-dev
05:16 ashiq joined #gluster-dev
05:21 rafi joined #gluster-dev
05:29 jiffin joined #gluster-dev
05:29 gem joined #gluster-dev
05:30 Manikandan joined #gluster-dev
05:30 spandit joined #gluster-dev
05:31 Manikandan ashiq, thanks!
05:31 Manikandan ashiq++
05:31 glusterbot Manikandan: ashiq's karma is now 5
05:32 Bhaskarakiran joined #gluster-dev
05:35 atinm gem, hey
05:35 gem atinm, hey
05:35 atinm gem, 11388 is the one which ports all the remaining gf_log() instances, right?
05:36 gem atinm, the bug id?
05:36 gem atinm, ah.. 11388 patch yes
05:36 spandit_ joined #gluster-dev
05:36 atinm gem, ok
05:36 anekkunt joined #gluster-dev
05:44 kdhananjay joined #gluster-dev
06:01 ashiq anoopcs++
06:01 atinm gem, I've a comment on your patch
06:01 glusterbot ashiq: anoopcs's karma is now 12
06:02 gem atinm, okay. checking.
06:03 gem atinm, Those gf_log messages are commented out :)
06:03 gem atinm, and one in glusterd-sm as well
06:06 Gaurav__ joined #gluster-dev
06:06 soumya_ joined #gluster-dev
06:06 atinm gem, commented out??
06:06 atinm gem, I did a git grep though
06:06 atinm gem, let me check
06:07 atinm gem, got it now
06:07 gem atinm, okay.
06:07 overclk joined #gluster-dev
06:10 spandit_ joined #gluster-dev
06:13 atalur joined #gluster-dev
06:21 deepakcs joined #gluster-dev
06:23 pranithk joined #gluster-dev
06:35 atinm gem, thr?
06:35 gem atinm, here
06:35 atinm gem, the bug id you have used for your patch is already closed
06:35 atinm gem, can you create a new bug for this?
06:36 gem atinm, oh.. sure
06:37 anrao joined #gluster-dev
06:38 vimal joined #gluster-dev
06:40 gem atinm, https://bugzilla.redhat.com/show_bug.cgi?id=1235538
06:40 glusterbot Bug 1235538: medium, unspecified, ---, bugs, NEW , Porting the left out gf_log messages to the new logging API
06:41 atinm cool
06:41 atinm change the bug id in your patch
06:42 gem atinm, Done. Too intimidated by bug ids. :)
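
For context: the patch discussed above (11388) ports leftover gf_log() calls to gluster's newer structured-logging API, gf_msg(), which attaches an errno and a unique message ID to every log statement. A minimal sketch of the porting pattern; the GD_MSG_NO_MEMORY constant here is illustrative, real IDs are defined in each component's *-messages.h header:

    /* before: plain gf_log(), free-form string only */
    gf_log (this->name, GF_LOG_ERROR,
            "Unable to allocate memory: %s", strerror (errno));

    /* after: gf_msg() carries the errno and a unique message ID,
     * so log scrapers can match on the ID instead of the text */
    gf_msg (this->name, GF_LOG_ERROR, errno, GD_MSG_NO_MEMORY,
            "Unable to allocate memory");
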
06:56 gem joined #gluster-dev
07:04 ashishpandey joined #gluster-dev
07:05 saurabh_ joined #gluster-dev
07:06 shubhendu joined #gluster-dev
07:18 kdhananjay joined #gluster-dev
07:21 rjoseph joined #gluster-dev
07:22 kshlm joined #gluster-dev
07:24 Saravana joined #gluster-dev
07:27 vimal joined #gluster-dev
07:34 Guest10815 joined #gluster-dev
07:47 arao joined #gluster-dev
08:19 krishnan_p joined #gluster-dev
08:20 krishnan_p pranithk, ndevos could you guys review http://review.gluster.org/11399 ?
08:37 spalai joined #gluster-dev
08:38 spandit joined #gluster-dev
08:44 gem joined #gluster-dev
08:45 ndevos krishnan_p: yeah, I'll have a look at it in the next few hours
08:45 krishnan_p ndevos, thanks for the quick response.
08:45 ndevos krishnan_p: I'm travelling to the DevOps days again, will have booth duty this afternoon
08:45 krishnan_p ndevos, I just put it out there so that you could add it to the list of patches you would be reviewing.
08:46 krishnan_p ndevos, then I guess pranithk may take a look if he has the time.
08:46 * ndevos opened it and will come across it in his browser at one point
08:46 atalur joined #gluster-dev
08:46 ppai joined #gluster-dev
08:46 ndevos krishnan_p: that's fine, I do have a table and plan to work from there, I just get interruptions from visitors asking questions
08:46 krishnan_p ndevos, it fixes a stack overflow - just putting it out there :)
08:46 ndevos krishnan_p: is there a bug it addresses?
08:47 krishnan_p ndevos, it does. I will update the commit with the bug id.
08:47 ndevos krishnan_p: ah, thanks, that helps understanding the problem :)
08:47 * ndevos switches trains, will be back online later
08:53 arao joined #gluster-dev
08:54 ashish joined #gluster-dev
08:57 gem joined #gluster-dev
08:57 mator joined #gluster-dev
08:58 mator left #gluster-dev
09:00 hchiramm arao, ping
09:00 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:03 ababu joined #gluster-dev
09:05 hchiramm arao, ping
09:05 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:05 atalur joined #gluster-dev
09:26 pranithk xavih: hey! hi, shall I rebase my patch?
09:29 ababu joined #gluster-dev
09:29 ashiq atalur++ thanks :)
09:29 glusterbot ashiq: atalur's karma is now 4
09:39 kdhananjay joined #gluster-dev
09:45 jiffin1 joined #gluster-dev
09:53 xavih pranithk: yes, please. We'll make the other changes we talked about in another patch
10:17 vmallika joined #gluster-dev
10:27 jiffin joined #gluster-dev
10:31 spalai1 joined #gluster-dev
10:36 pranithk joined #gluster-dev
10:40 rafi1 joined #gluster-dev
10:43 rafi joined #gluster-dev
10:47 anrao joined #gluster-dev
10:48 arao joined #gluster-dev
11:04 hagarth joined #gluster-dev
11:17 krishnan_p joined #gluster-dev
11:19 Manikandan joined #gluster-dev
11:22 vmallika joined #gluster-dev
11:30 Manikandan joined #gluster-dev
11:32 krishnan_p ndevos, there? I would like to discuss review.gluster.org/#/c/11399
11:33 ndevos krishnan_p: partially :)
11:33 spalai1 left #gluster-dev
11:34 krishnan_p ndevos, you had asked why we call gf_mem_set_acct_info when mem-accounting is not enabled.
11:34 kdhananjay joined #gluster-dev
11:34 ndevos krishnan_p: yeah, I just opened the change
11:34 krishnan_p ndevos, I don't know why. But I see that GF_MALLOC for instance calls the function without checking if mem-accounting is enabled
11:35 ndevos krishnan_p: I do not think we need that?
11:35 * ndevos opens the code
11:35 krishnan_p ndevos, sorry. I got it wrong.
11:38 krishnan_p ndevos, snapd has more than one ctx :( I got tricked.
11:38 krishnan_p ndevos, I am used to the invariant of one ctx per glusterfs process.
11:39 ndevos krishnan_p: right, I do not see any calls to gf_mem_set_acct_info() where mem_acct_enable is not checked
11:39 krishnan_p ndevos, thanks for checking
11:39 krishnan_p ndevos, sorry for wasting your time
11:39 krishnan_p ndevos, I will abandon this patch
11:39 ndevos krishnan_p: hmm, you mean that THIS changes while gf_mem_set_acct_info() got called?
11:40 ndevos krishnan_p: okay, I guess you have an idea on what to (not) do :)
11:41 krishnan_p ndevos, nope. global_xlator->ctx may be different from a glfs->master->ctx (or something to that effect)
11:42 ndevos krishnan_p: ah, yes, right
11:42 krishnan_p ndevos, I haven't debugged any gfapi cores, so I don't know what invariants they hold.
11:42 krishnan_p ndevos, what is suspicious is that global_xlator.ctx.mem_acct_enable is 841823282.
11:44 ndevos krishnan_p: that is mostly up to the application, the API should set/restore THIS when called; maybe that is not done everywhere and global_xlator got free'd while THIS was not restored?
11:44 raghu joined #gluster-dev
11:44 * ndevos is guessing a little, no idea if this matches what you were thinking
11:45 krishnan_p ndevos, i am not entirely sure of what's going on
11:45 krishnan_p ndevos, I am discovering things as I look closer ;)
11:45 ndevos krishnan_p: have a good journey then!
11:46 krishnan_p ndevos, thanks ;)
11:47 soumya_ krishnan_p, with multiple glfs_inits happening in parallel, global_xlator->ctx would point to the last glfs->master->ctx
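
For readers following the thread: gluster's allocation wrappers consult the xlator context's mem_acct_enable flag before doing any per-type accounting, which is why gf_mem_set_acct_info() should never run with accounting disabled. A simplified sketch of the guard in the __gf_malloc() path (paraphrased from libglusterfs' mem-pool code, not a verbatim copy); the crash being debugged above stems from THIS resolving to a global_xlator whose ctx held garbage (mem_acct_enable == 841823282):

    void *
    __gf_malloc (size_t size, uint32_t type, const char *typestr)
    {
            char     *ptr = NULL;
            xlator_t *xl  = THIS;

            /* accounting off: plain malloc, gf_mem_set_acct_info()
             * is never reached */
            if (!xl->ctx->mem_acct_enable)
                    return MALLOC (size);

            ptr = malloc (size + GF_MEM_HEADER_SIZE + GF_MEM_TRAILER_SIZE);
            if (!ptr)
                    return NULL;

            gf_mem_set_acct_info (xl, &ptr, size, type, typestr);
            return (void *) ptr;
    }
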
11:55 ira joined #gluster-dev
11:59 spalai joined #gluster-dev
12:03 krishnan_p joined #gluster-dev
12:15 rjoseph joined #gluster-dev
12:15 vmallika joined #gluster-dev
12:24 kkeithley_ joined #gluster-dev
12:27 rafi joined #gluster-dev
12:30 pranithk joined #gluster-dev
12:39 ndevos soumya_: I've left a response in http://review.gluster.org/11387, looks okay, but should that change not be two patches?
12:40 soumya_ ndevos, was just replying to your patches.. I had hit a second issue during the regression run.. since it's a one-liner fix, I put it in the same patch
12:40 soumya_ will help in backporting too
12:40 ashiq hchiramm++
12:40 glusterbot ashiq: hchiramm's karma is now 45
12:40 soumya_ but if you think it should be in a separate patch, I shall submit a new one
12:56 firemanxbr joined #gluster-dev
12:57 soumya_ ndevos, would you like a new patch submitted for the fix?
13:00 hagarth joined #gluster-dev
13:08 ndevos soumya_: yes, please don't smuggle in any unrelated changes :)
13:09 kkeithley1 joined #gluster-dev
13:09 soumya_ ndevos, okay.. I will submit a new patch :)
13:10 ndevos soumya_: thanks!
13:11 ndevos soumya_: also, would the xprt_list/client entries not use refcounting to be more correct?
13:11 ndevos soumya_: the != NULL check would be fine for now, with a comment on why it is needed (and explain more in the commit message)
13:12 ndevos soumya_: a TODO added to that section of the code would be good too, something like "TODO: use refcounting for xprt_list->client"
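
The refcounting ndevos suggests would pin each client entry on the xprt_list while it is being used, instead of relying on a != NULL check racing against disconnect. A hypothetical sketch of the idea (names and layout invented for illustration; the gluster tree has its own gf_ref helpers in refcount.h):

    #include <pthread.h>
    #include <stdlib.h>

    struct client_entry {
            int             refcnt;   /* protected by lock */
            pthread_mutex_t lock;
            /* ... rest of the client state ... */
    };

    static void
    client_entry_ref (struct client_entry *c)
    {
            pthread_mutex_lock (&c->lock);
            c->refcnt++;
            pthread_mutex_unlock (&c->lock);
    }

    static void
    client_entry_unref (struct client_entry *c)
    {
            int last = 0;

            pthread_mutex_lock (&c->lock);
            last = (--c->refcnt == 0);
            pthread_mutex_unlock (&c->lock);

            /* the last holder frees the entry, so a list walker
             * holding a ref never sees it disappear underneath it */
            if (last)
                    free (c);
    }
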
13:15 husanux9 joined #gluster-dev
13:15 kkeithley1 joined #gluster-dev
13:20 husanux3 joined #gluster-dev
13:20 kkeithley1 joined #gluster-dev
13:21 csim ndevos: has gluster-patch-acceptance-tests been migrated to github already or not?
13:21 ndevos csim: I thought so, I was pushing changes to the github repo, but JustinCl1ft is in charge of that repo
13:22 ndevos csim: btw, I'm setting up elk.cloud.gluster.org to have a go at logstash'ing jenkins regression test logs
13:23 csim ndevos: using rpm ?
13:23 ndevos csim: yes, is there a better way?
13:23 csim ndevos: well no, but I tend to avoid rpm from vendors :)
13:24 ndevos csim: it's mainly for testing things out, I have some elastic engineers here who can help with setting it up
13:24 csim ndevos: I am not sure if I should be happy to test, or if I should just feel sad that test will become prod as always :)
13:25 husanux4 joined #gluster-dev
13:25 csim ( but I think elk would be a good solution, just not my priority vs other stuff )
13:26 csim JustinCl1ft: so, gluster-patch-acceptance-tests ?
13:29 ndevos csim: sure, not your priority and not mine either, but some graphs about regression tests/logs would be cool
13:29 csim ndevos: I did have munin for that :)
13:29 shyam joined #gluster-dev
13:29 hagarth shyam: ping, will you be responding on the MDS thread or do you want me to?
13:30 ndevos csim: munin is for the jenkins slaves, but I want stats/graphs per regression run and to see improvements in the logging infra that we're changing
13:30 csim ( if only I remembered the password :/ )
13:30 csim ndevos: oh that
13:30 csim ndevos: like real and useful graphs ?
13:30 shyam hagarth: Checking which thread, is this the one on devel about MDS for EC/AFR?
13:30 hagarth shyam: yes
13:30 csim can't you just take a random one and invent the rest, as per industry best practice :p
13:30 hagarth shyam: and dht too
13:31 ndevos csim: hehe, yeah, and in future tie glusterfs into elastic, something like a storage-provider plugin
13:32 shyam hagarth: responding...  (sometime by my noon)
13:33 hagarth shyam: no hurry, I was just trying to see if I had to chip in with a response there.
13:34 shyam hagarth: no issues :) wanted to respond with some detail
13:36 hagarth ok cool :)
13:39 ashiq joined #gluster-dev
13:45 spalai left #gluster-dev
13:51 kdhananjay joined #gluster-dev
14:03 hgowtham joined #gluster-dev
14:13 nbalacha joined #gluster-dev
14:13 hagarth joined #gluster-dev
14:22 kshlm joined #gluster-dev
14:33 soumya_ joined #gluster-dev
14:38 pranithk joined #gluster-dev
14:43 atalur joined #gluster-dev
14:45 pranithk atalur: Reviewed all your patches. losing sparseness patch will cause temporary data corruption for appending writes...
14:45 pranithk atalur: Wait, wrong sentence
14:45 pranithk atalur: without the patch that fixed losing sparseness, appending writes may cause data corruption
14:45 atalur pranithk, http://review.gluster.org/#/c/10448/13/tests/basic/afr/replace-brick-self-heal.t. Why do we need EXPECT_WITHIN there? (your comment)
14:46 pranithk atalur: checking...
14:46 pranithk atalur: Hmmm... how do we know metadata self-heal completed by then?
14:47 atalur pranithk, Actually that check is expecting correct metadata on the source brick itself.. Are you saying we need EXPECT_WITHIN to ensure that reverse heals don't happen? :-/
14:48 pranithk atalur: Oh damn! sorry :-)
14:48 pranithk atalur: wait
14:48 atalur pranithk, that check is just redundant I think..
14:48 atalur pranithk, okay
14:49 pranithk atalur: We still need to wait for heal to complete :-). What if, because of a bug, the xattr is removed after heal and we don't catch it because we aren't waiting long enough?
14:49 pranithk atalur: But it is still theoretical
14:49 atalur pranithk, resending the patch :)
14:49 pranithk atalur: Why are we not waiting for heal to complete at the beginning?
14:49 pranithk atalur: wait
14:50 pranithk atalur: And then we can just call EXPECT always?
14:50 atalur pranithk, okay.. what is beginning here?
14:50 pranithk atalur: Just after volume heal command
14:50 pranithk atalur: line 63
14:50 atalur pranithk, got it! I'll add a check to get heal-count as 0
14:50 pranithk atalur: brilliant!
14:50 atalur pranithk, will change the rest to expect
14:51 pranithk atalur: Let's get these patches into 3.7 and 3.6 as soon as possible. Good job overall.
14:51 pranithk atalur: :-)
14:51 atalur pranithk, about the data corruption w/o sparse-fix-patch.. I'm not sure I understand how.. explain? :)
14:52 rjoseph joined #gluster-dev
14:53 pranithk atalur: 2 bricks. One brick is down. New file is created, file is opened in append mode. And lets assume writes are always happening
14:54 atalur pranithk, go on..
14:54 atalur pranithk, checking the patch again!
14:54 pranithk atalur: Now when entry self-heal completes and afr starts writing to the file, writes also succeed on the just-created file
14:54 pranithk atalur: Let's say the file sizes are 1MB (source) and 50kb (sink)
14:55 pranithk atalur: When data self-heal is triggered it wouldn't truncate the file to 1MB, but just starts healing
14:55 pranithk atalur: While heal is happening, let's say 1 more MB is written
14:56 pranithk atalur: the files will end up at 2MB and 1MB+50kb
14:56 pranithk atalur: got it?
14:56 atalur pranithk, going through the messages.. 1 min
14:57 pranithk atalur: files opened with O_APPEND will keep writing at the end of the file... without truncate on the sink it will never write to the expected offset of the file...
14:57 atalur pranithk, yes!! correct..
14:58 pranithk atalur: Such an innocent-looking bug with disastrous consequences. Port it fast!
14:58 atalur pranithk, hmm! thanks :)
14:58 atalur pranithk, haha! Will do.. Waiting on Kruthika's review to merge it?
14:59 pranithk atalur: I will go for dinner now.. Will do one more round of reviews. Yes, lets wait for her review as well. But do you want to add a test case for the fix?
15:00 atalur pranithk, I'll modify the testcase sparse-file-heal.t in a different patch. Will that be fine?
15:01 pranithk atalur: Should be fine.
15:01 atalur pranithk, no wait.. I see that regressions are not done for this patch yet. Will add to this one itself.. I'll resend in another half an hour or so.
15:01 pranithk atalur: your wish
15:02 atalur pranithk, will let you know once done :)
15:02 pranithk atalur: I'm off
15:02 pranithk atalur: sure
15:02 atalur pranithk, okay. Bye
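
pranithk's scenario above hinges on a property of O_APPEND itself: the kernel ignores the file offset and writes at the current end of file, so if heal leaves the sink shorter than the source, the same appending write lands at different offsets on the two bricks and the copies diverge. A small standalone illustration (hypothetical filenames, not gluster code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main (void)
    {
            /* stand-ins for one file on two bricks: the source holds
             * 1MB, the sink only 50kb because heal skipped the truncate */
            int src = open ("brick0_file", O_CREAT | O_WRONLY | O_APPEND, 0644);
            int snk = open ("brick1_file", O_CREAT | O_WRONLY | O_APPEND, 0644);

            (void) ftruncate (src, 1024 * 1024);  /* source: 1MB  */
            (void) ftruncate (snk, 50 * 1024);    /* sink:   50kb */

            /* the "same" appending write from a client... */
            (void) write (src, "x", 1);
            (void) write (snk, "x", 1);

            /* ...lands at offset 1048576 on one brick and 51200 on the
             * other, because O_APPEND always writes at EOF */
            struct stat s0, s1;
            fstat (src, &s0);
            fstat (snk, &s1);
            printf ("source: %lld bytes, sink: %lld bytes\n",
                    (long long) s0.st_size, (long long) s1.st_size);

            close (src);
            close (snk);
            return 0;
    }
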
15:02 kkeithley1 joined #gluster-dev
15:02 kkeithley1 joined #gluster-dev
15:04 pranithk joined #gluster-dev
15:05 lpabon joined #gluster-dev
15:39 rafi1 joined #gluster-dev
15:43 ppai joined #gluster-dev
15:52 RedW joined #gluster-dev
15:56 kshlm joined #gluster-dev
16:02 kkeithley_ left #gluster-dev
16:10 josferna joined #gluster-dev
16:12 pranithk joined #gluster-dev
16:23 rafi joined #gluster-dev
16:27 jrm16020 joined #gluster-dev
16:31 mribeirodantas joined #gluster-dev
16:46 anrao joined #gluster-dev
16:51 krishnan_p joined #gluster-dev
17:01 arao joined #gluster-dev
17:15 rafi joined #gluster-dev
17:31 mribeirodantas joined #gluster-dev
17:40 Gaurav__ joined #gluster-dev
18:33 kkeithley_ joined #gluster-dev
18:36 kkeithley_ ndevos, nixpanic: ping. are you still around? Did you have a chance to look at posix-helpers.c in http://review.gluster.org/11130 ?
18:55 vimal joined #gluster-dev
19:49 arao joined #gluster-dev
21:26 badone joined #gluster-dev
21:38 mribeirodantas_ joined #gluster-dev
22:20 anrao joined #gluster-dev
22:22 kaushal_ joined #gluster-dev
23:54 Debloper joined #gluster-dev
