
IRC log for #gluster-dev, 2015-07-07


All times shown according to UTC.

Time Nick Message
00:09 hagarth joined #gluster-dev
00:12 pranithk hagarth: dude, when are you coming back to Bangalore?
00:16 hagarth pranithk: I am in Bangalore now
00:16 pranithk hagarth: hey!
00:16 pranithk hagarth: when did you arrive?
00:17 hagarth pranithk: got here last morning, fighting a bit of jet lag
00:19 pranithk hagarth: cool
00:19 pranithk hagarth: what did you bring from US apart from chocolates?
00:19 hagarth pranithk: not even that ;)
00:20 pranithk hagarth: devuda!
00:21 hagarth pranithk: what keeps you up early in the morning?
00:22 pranithk hagarth: https://bugzilla.redhat.com/show_bug.cgi?id=1240284
00:22 glusterbot Bug 1240284: unspecified, unspecified, ---, bugs, NEW , Disperse volume: NFS crashed
00:22 pranithk xavih: 34TB allocation failure...
00:23 hagarth pranithk: is this a leak or a corruption?
00:23 pranithk hagarth: corruption
00:25 hagarth pranithk: why are there so many pending op(0) frames, any idea?
00:25 pranithk hagarth: nope :-(
00:25 hagarth pranithk: does it happen consistently?
00:27 pranithk hagarth: not sure... It happened once
00:28 hagarth pranithk: ok.. might be useful to monitor nfsstat/nfsiostat on the clients to see if there were re-transmissions
00:29 pranithk hagarth: Hmm... interesting...
00:33 pranithk hagarth: so you stayed there for more than a month and got no chocolates :-(
00:37 hagarth pranithk: I am saving myself and others from the excesses of sugar :-/
00:37 hagarth pranithk: I was away for 3 weeks
00:41 pranithk hagarth: :-)
01:13 pranithk xavih: you were right, it was 4+2, I am sorry, I must have seen the wrong volfile
01:13 pranithk xavih: I just saw in the code that it is 4+2 and most probably the failure is what you suspected...
02:29 shubhendu joined #gluster-dev
02:52 shaunm_ joined #gluster-dev
02:56 overclk joined #gluster-dev
03:34 atinm joined #gluster-dev
03:36 kdhananjay joined #gluster-dev
03:48 nishanth joined #gluster-dev
04:05 spalai left #gluster-dev
04:08 kanagaraj joined #gluster-dev
04:09 itisravi joined #gluster-dev
04:10 shubhendu joined #gluster-dev
04:19 sakshi joined #gluster-dev
04:25 krishnan_p joined #gluster-dev
04:27 anmol joined #gluster-dev
04:36 nbalacha joined #gluster-dev
04:45 rafi joined #gluster-dev
04:46 hagarth joined #gluster-dev
04:46 vimal joined #gluster-dev
04:59 soumya_ joined #gluster-dev
05:00 nkhare joined #gluster-dev
05:00 gem joined #gluster-dev
05:04 deepakcs joined #gluster-dev
05:06 ndarshan joined #gluster-dev
05:08 jiffin joined #gluster-dev
05:15 hgowtham joined #gluster-dev
05:16 Manikandan joined #gluster-dev
05:20 ashish joined #gluster-dev
05:23 ashiq joined #gluster-dev
05:25 nishanth joined #gluster-dev
05:25 deepakcs joined #gluster-dev
05:26 pppp joined #gluster-dev
05:26 raghu joined #gluster-dev
05:28 G_Garg joined #gluster-dev
05:30 kshlm joined #gluster-dev
05:39 vmallika joined #gluster-dev
05:41 rafi joined #gluster-dev
05:43 asengupt joined #gluster-dev
05:45 spandit joined #gluster-dev
05:46 spandit_ joined #gluster-dev
05:52 deepakcs joined #gluster-dev
05:58 spalai joined #gluster-dev
05:58 anekkunt joined #gluster-dev
05:59 rafi raghu: can you merge http://review.gluster.org/#/c/11543/
06:01 pppp joined #gluster-dev
06:05 pranithk joined #gluster-dev
06:07 hagarth left #gluster-dev
06:12 kdhananjay joined #gluster-dev
06:17 spalai1 joined #gluster-dev
06:21 Saravana_ joined #gluster-dev
06:23 josferna joined #gluster-dev
06:26 Bhaskarakiran joined #gluster-dev
06:43 anrao joined #gluster-dev
07:03 pranithk xavih: ping me xavi ping me :-)
07:03 pranithk xavih: I just want you to confirm, if I missed anything with my analysis, then we can revert part of the fix...
07:04 anrao ndevos++
07:04 glusterbot anrao: ndevos's karma is now 173
07:04 pranithk xavih: Unfortunately I am not able to recreate it easily :-( But I am very sure this could be the bug.
07:05 rjoseph joined #gluster-dev
07:11 xavih pranithk: what are you talking about ?
07:17 kotreshhr joined #gluster-dev
07:23 xavih pranithk: I've seen your email now :P. I think you are right. We'll need to revert part of the patch
07:27 itisravi joined #gluster-dev
07:27 pranithk xavih: I think that must be it, it fits the symptoms
07:28 xavih pranithk: I've seen that there's another ec_sleep() called inside the inode lock. I'm not sure if this is right, but if it is, the solution would be to move the ec_sleep() call in ec_lock() inside the locked region
07:29 pranithk xavih: I am sending the patch
07:29 pranithk xavih: Sorry?
07:29 pranithk xavih: Ah!
07:30 xavih pranithk: Instead of calling ec_sleep() after UNLOCK(inode->lock), we can call it before the UNLOCK
07:30 xavih pranithk: the only question is the lock order, but there's another place where this is done
07:30 pranithk xavih: lock inside lock is generally risky, I thought that is why you did it that way. That is the reason I fixed it this way before...
07:30 pranithk xavih: exactly! :-)
07:31 pranithk xavih: Which is risky as well :-)
07:31 pranithk xavih: For now let's leave it this way?
07:31 xavih pranithk: ok
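For reference, a minimal sketch of the two ec_sleep() placements being weighed above, assuming ec_lock() takes the fop reference around the inode lock; the body shown here is illustrative only, not the actual ec_lock() implementation, and LOCK/UNLOCK and the types come from libglusterfs.

    /* Illustrative only: the real ec_lock() does more than this. */
    void
    ec_lock_sketch(ec_fop_data_t *fop, inode_t *inode)
    {
        LOCK(&inode->lock);
        {
            /* ... queue the lock request on the inode context ... */

            /* xavih's option: take the fop reference while still holding
             * inode->lock, as another call site in ec already does. */
            /* ec_sleep(fop); */
        }
        UNLOCK(&inode->lock);

        /* Option kept for now: take the reference only after dropping
         * inode->lock, avoiding a lock-inside-lock pattern. */
        ec_sleep(fop);
    }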
07:32 pranithk xavih: thanks for the tip that the size looked like pointer :-)
07:32 pranithk xavih: In the morning when I was going through code I found it is 4+2
07:34 xavih pranithk: yes, I've seen it :)
07:34 pranithk xavih: I need to leave for lunch now, I am running local regression of ec tests. Will post the patch once I am back from lunch
07:34 xavih pranithk: ok. Thanks :)
07:35 atalur joined #gluster-dev
07:37 atalur joined #gluster-dev
07:39 atalur joined #gluster-dev
08:06 xavih pranithk: to be able to detect these problems in the future, it would be useful to put a pair of GF_ASSERT(fop->refs > 0) in ec_sleep() and ec_fop_data_release()
08:06 xavih pranithk: you can add this to the patch
08:27 pranithk xavih: that makes sense, will do.
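A sketch of the assertions xavih suggests here; only the GF_ASSERT() lines are the point, and the surrounding bodies are placeholders rather than the real ec_sleep() and ec_fop_data_release() code.

    /* Placeholder bodies; only the asserts are the suggested change. */
    void
    ec_sleep(ec_fop_data_t *fop)
    {
        GF_ASSERT(fop->refs > 0);   /* catch use of a fop that was already released */
        /* ... take an extra reference / bump the pending-job count ... */
    }

    void
    ec_fop_data_release(ec_fop_data_t *fop)
    {
        GF_ASSERT(fop->refs > 0);   /* catch a double release */
        /* ... drop a reference, destroying the fop when it reaches zero ... */
    }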
08:33 hagarth joined #gluster-dev
08:36 itisravi joined #gluster-dev
08:51 atinm itisravi, ping
08:51 glusterbot atinm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:56 itisravi atinm: http://review.gluster.org/#/c/11556/
08:57 atinm itisravi, thanks
08:57 itisravi atinm: np
09:26 Manikandan joined #gluster-dev
09:32 sakshi joined #gluster-dev
09:34 sakshi joined #gluster-dev
09:39 pranithk xavih: Now that some scary bugs are out of the way, shall we resume our discussion about the size in readdirp being wrong? remember that bug? :-)
09:39 soumya_ joined #gluster-dev
09:42 xavih pranithk: yes
09:44 pranithk xavih: I am not sure how to fix it properly :-(
09:44 pranithk xavih: We won't know if the iatts we get in readdirp are of the latest version or not :-(
09:45 pranithk xavih: And we don't seem to maintain it in any place at the moment...
09:46 xavih pranithk: in previous self-heal implementation, I tried to not mark a directory as healed until all entries were healed
09:46 xavih pranithk: a similar approach should cover most of the cases
09:48 pranithk xavih: That is the case even now.
09:48 pranithk xavih: ah!
09:48 pranithk xavih: But that is not correct right? because of hard links?
09:49 xavih pranithk: right
09:49 xavih pranithk: I think I already asked that, but how afr solves this ?
09:53 pranithk xavih: yes yes, afr maintains information about good/bad bricks in the inode. ec has similar fields in its inode-ctx, but they don't mean the same thing...
09:54 pranithk xavih: there were 2 things that were problems:
09:54 pranithk xavih: 1) if the inode-ctx doesn't exist, it was giving all 1s
09:55 pranithk xavih: 2) when it does, bad will be 0 even when some bricks are not up to date.
09:58 xavih pranithk: not sure how afr handles good/bad differently, but there will always be cases in which that information is not up to date
09:58 ppai joined #gluster-dev
09:58 xavih pranithk: there's no way to be absolutely sure without a lock and a lookup
09:58 pranithk xavih: Afr is conservative: if it is not sure it will unwind with a NULL inode so a stat will come explicitly
09:59 xavih pranithk: stat will lock and do the lookup. So this is the way to go I think...
10:00 itisravi_ joined #gluster-dev
10:00 pranithk xavih: But in that case we will always unwind with NULL inode :-(
10:00 pranithk xavih: is that okay for next release?
10:01 pranithk xavih: and we can improve upon it?
10:01 xavih pranithk: why ?
10:01 xavih pranithk: there will be cases in which we have information about bad bricks
10:02 pranithk xavih: Because bad is zero if write succeeds on all the bricks irrespective of versions...
10:02 pranithk xavih: And we are allowing writes on all bricks.
10:02 xavih pranithk: no. bad is only updated when a brick fails
10:02 xavih pranithk: success doesn't clear a bad bit
10:03 xavih pranithk: only heal clears it
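A self-contained sketch of the good/bad mask semantics as xavih describes them here; the struct and function names below are hypothetical and do not match the actual ec inode-ctx layout or its update paths.

    #include <stdint.h>

    /* Hypothetical per-inode brick masks, one bit per brick. */
    typedef struct {
        uintptr_t good;   /* bricks believed to hold up-to-date data */
        uintptr_t bad;    /* bricks that failed and still need heal */
    } inode_masks_t;

    /* A failed answer marks a brick bad; success does not clear the bit. */
    static void
    mask_update_on_fop(inode_masks_t *ctx, uintptr_t failed_bricks)
    {
        ctx->bad |= failed_bricks;
        ctx->good &= ~failed_bricks;
    }

    /* Only a completed self-heal clears the bad bit for a brick. */
    static void
    mask_update_on_heal(inode_masks_t *ctx, uintptr_t healed_bricks)
    {
        ctx->bad &= ~healed_bricks;
        ctx->good |= healed_bricks;
    }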
10:03 pranithk xavih: Hmm.. that is not what I see :-(.
10:03 pranithk xavih: This is the test case I execute
10:08 asengupt joined #gluster-dev
10:08 pranithk xavih: Sorry, it took a while to remember. While a 1GB file is under heal we do a truncate to see what the size is; it sometimes gives a non-zero value. And I remember seeing bad as 0 in the inode-ctx
10:09 pranithk xavih: Let me ask the question differently
10:09 pranithk xavih: If we have 2+1
10:09 anekkunt joined #gluster-dev
10:10 xavih pranithk: if a file is being healed, parent directory should not be marked as healthy
10:10 pranithk xavih: how many parent directories?
10:10 xavih pranithk: and readdirp should not use that brick to get directory contents
10:10 pranithk xavih: It could have 1000s of hardlinks
10:10 xavih pranithk: oh, I see, sorry
10:11 atinm joined #gluster-dev
10:11 pranithk xavih: yeah, that is the problem :-(. For simple case of no extra hardlinks that solution would be perfect :-)
10:12 xavih pranithk: ok. I see two problems here: 1) self-heal shouldn't touch trusted.ec.size, and 2) bad shouldn't be set to 0 until self-heal has finished
10:12 nbalachandran_ joined #gluster-dev
10:12 xavih pranithk: if those conditions are true, readdirp shouldn't have any problem
10:12 pranithk xavih: let me think...
10:13 pranithk xavih: 2) I understand.
10:13 G_Garg joined #gluster-dev
10:13 xavih pranithk: in the previous implementation, reads and writes in self-heal were issued as subfops, and this caused trusted.ec.size to not be updated
10:13 pranithk xavih: 1) I am not understanding :-(. If self-heal doesn't set trusted.ec.size how will that brick know the up-to-date size?
10:14 pranithk xavih: Even now it is not updated...
10:14 xavih pranithk: no, no, I mean that writes shouldn't touch trusted.ec.size and self-heal should repair it at the beginning, before starting data self-heal
10:14 krishnan_p joined #gluster-dev
10:15 kkeithley1 joined #gluster-dev
10:15 xavih pranithk: that is how it was done before
10:17 pranithk xavih: You mean only the size, is it? not the version
10:18 xavih pranithk: yes
10:18 pranithk xavih: we were setting the version the same as well, which had problems... that is why we set the self-heal bit now.
10:18 pranithk xavih: Hmm... let me think about this.
10:19 xavih pranithk: well, all metadata (excluding version) should be healed before data heal, and after data heal some metadata needs to be restored again, like modification time
10:20 xavih pranithk: this way, all readdirp information (except modification time) will be valid
10:20 xavih pranithk: I need to go. I'll be back in ~3 hours
10:23 pranithk xavih: Okay, I will need some time to think through these things. I will update you once I finalize on a solution...
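An outline of the heal ordering xavih proposes above (metadata before data, then restore what data heal disturbs); the helper names are hypothetical, not the actual ec self-heal entry points.

    /* Hypothetical helpers; not real ec functions. */
    static int heal_metadata_except_version(void *healer, void *loc);
    static int heal_data(void *healer, void *loc);
    static int restore_metadata_and_finalize(void *healer, void *loc);

    static int
    heal_file_outline(void *healer, void *loc)
    {
        int ret;

        /* 1. Repair metadata, including trusted.ec.size, before data heal,
         *    but leave the version xattr alone, so readdirp iatts served
         *    from any brick are already valid. */
        ret = heal_metadata_except_version(healer, loc);
        if (ret != 0)
            return ret;

        /* 2. Rebuild the data fragments on the bad bricks; these heal
         *    writes should not touch trusted.ec.size. */
        ret = heal_data(healer, loc);
        if (ret != 0)
            return ret;

        /* 3. Restore metadata disturbed by the data heal, e.g. the
         *    modification time, then bump the version and clear bad bits. */
        return restore_metadata_and_finalize(healer, loc);
    }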
10:28 Manikandan joined #gluster-dev
10:36 itisravi joined #gluster-dev
10:41 Manikandan joined #gluster-dev
10:44 ira joined #gluster-dev
10:47 kotreshhr joined #gluster-dev
10:48 rjoseph joined #gluster-dev
10:53 kshlm joined #gluster-dev
10:53 pppp joined #gluster-dev
10:53 krishnan_p joined #gluster-dev
10:54 nbalacha joined #gluster-dev
10:55 rjoseph joined #gluster-dev
10:55 nkhare_ joined #gluster-dev
11:00 kotreshhr1 joined #gluster-dev
11:02 krishnan_p kotreshhr1, are milind's comments blocking the merge for http://review.gluster.com/#/c/11549/ ?
11:02 krishnan_p kotreshhr1, if not, I will merge it
11:03 kotreshhr1 krishnan_p: No, it was more of a need-info, technical debt.
11:03 kotreshhr1 krishnan_p: You can go ahead and merge it.
11:03 krishnan_p kotreshhr1, thanks
11:04 atinm krishnan_p, kotreshhr1 : has the patch passed linux regression?
11:04 krishnan_p atinm, it has.
11:04 atinm krishnan_p, it has passed
11:05 atinm krishnan_p, smoke has overridden it
11:05 kotreshhr1 atinm: It's passed, flag is not updated
11:05 krishnan_p atinm, this is the problem with using Gluster Build System for both smoke and regression :(
11:05 atinm krishnan_p, in that case the query that you use might not hold true
11:05 krishnan_p atinm,  this is affecting my gerrit queries ;(
11:05 krishnan_p atinm, yeah. I would have merged this a long time back!
11:06 atinm krishnan_p, better to just have netbsd +1 in the query and then go through individually
11:06 krishnan_p atinm, then I would be looking at all patches that failed Linux regression too.
11:07 krishnan_p atinm, instead we should fix the smoke test passing successfully yet voting -1. It looks wrong to me.
11:15 hgowtham joined #gluster-dev
11:17 atinm rafi, I believe you will run the bug triage meeting today as well
11:17 rafi atinm: :)
11:18 rafi atinm: If you are busy, I can do that ;)
11:18 atinm rafi, held up in a meeting :(
11:19 atinm rafi, once I am free I will join u
11:19 rafi atinm: ok, I will host it,
11:19 rafi atinm: np
11:37 kotreshhr joined #gluster-dev
11:40 dlambrig_ joined #gluster-dev
11:42 nkhare joined #gluster-dev
11:46 rafi1 joined #gluster-dev
11:46 hagarth joined #gluster-dev
11:47 kshlm joined #gluster-dev
11:48 rjoseph joined #gluster-dev
11:51 pranithk xavih: Had to rebase the patch http://review.gluster.com/11506, because http://review.gluster.com/11556 was not merged and the test was failing. Could you give +2 again?
11:51 pranithk xavih: same for http://review.gluster.com/11558
11:52 atinm joined #gluster-dev
11:56 rafi1 REMINDER: Gluster Community Bug Triage meeting starting in another 5 minutes in #gluster-meeting
12:01 rafi joined #gluster-dev
12:05 soumya_ joined #gluster-dev
12:05 kdhananjay joined #gluster-dev
12:19 krishnan_p joined #gluster-dev
12:19 rjoseph joined #gluster-dev
12:36 anrao joined #gluster-dev
12:37 pppp joined #gluster-dev
12:42 Manikandan joined #gluster-dev
12:50 rafi atinm++ ndevos++ kkeithley_++ pranithk++ soumya_++ thanks guys
12:50 glusterbot rafi: atinm's karma is now 15
12:50 glusterbot rafi: ndevos's karma is now 174
12:50 glusterbot rafi: kkeithley_'s karma is now 4
12:50 glusterbot rafi: pranithk's karma is now 25
12:50 glusterbot rafi: soumya_'s karma is now 2
12:50 soumya_ rafi++
12:50 glusterbot soumya_: rafi's karma is now 17
12:51 rafi soumya_: :)
12:51 soumya_ :)
13:10 Manikandan joined #gluster-dev
13:21 pppp joined #gluster-dev
13:21 pranithk xavih: thanks xavi :-)
13:21 pranithk xavih: for giving +2 again.
13:22 kshlm joined #gluster-dev
13:25 xavih pranithk: np :)
13:27 Saravana_ joined #gluster-dev
13:27 shyam joined #gluster-dev
13:27 pranithk left #gluster-dev
13:28 pousley joined #gluster-dev
13:34 G_Garg joined #gluster-dev
13:41 jrm16020 joined #gluster-dev
14:17 kotreshhr left #gluster-dev
14:22 spalai left #gluster-dev
14:43 soumya_ joined #gluster-dev
14:45 shyam joined #gluster-dev
15:08 kdhananjay joined #gluster-dev
15:08 topshare joined #gluster-dev
15:09 topshare joined #gluster-dev
15:09 topshare joined #gluster-dev
15:11 shubhendu joined #gluster-dev
15:40 pranithk joined #gluster-dev
15:46 RedW joined #gluster-dev
16:02 shyam joined #gluster-dev
16:05 nishanth joined #gluster-dev
16:07 spalai1 joined #gluster-dev
16:08 nbalacha joined #gluster-dev
16:08 anrao joined #gluster-dev
16:34 rafi joined #gluster-dev
16:38 firemanxbr joined #gluster-dev
16:46 pousley_ joined #gluster-dev
16:53 gem joined #gluster-dev
16:55 gem joined #gluster-dev
16:56 pranithk joined #gluster-dev
16:56 gem joined #gluster-dev
16:58 gem joined #gluster-dev
16:59 ndevos anyone online that could quickly review http://review.gluster.org/11568 for stupid mistakes?
17:00 ndevos it is basically https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.4.md for 3.5.5
17:00 gem joined #gluster-dev
17:02 gem joined #gluster-dev
17:03 ndevos kkeithley_, obnox, *: pimp your review stats? ^
17:04 gem joined #gluster-dev
17:07 gem joined #gluster-dev
17:09 gem joined #gluster-dev
17:28 kkeithley_ haha, yeah, chrome plate my review stats please
17:36 G_Garg joined #gluster-dev
17:39 kkeithley_ ndevos: reviewed
17:39 ndevos kkeithley++ thank you
17:39 glusterbot ndevos: kkeithley's karma is now 79
18:07 jiffin joined #gluster-dev
18:33 spalai1 left #gluster-dev
18:34 vimal joined #gluster-dev
19:00 ira joined #gluster-dev
19:21 topshare joined #gluster-dev
19:24 pousley joined #gluster-dev
19:32 anrao joined #gluster-dev
19:39 dlambrig_ joined #gluster-dev
19:53 jrm16020 joined #gluster-dev
20:28 dlambrig_ joined #gluster-dev
22:24 badone joined #gluster-dev
22:24 hagarth joined #gluster-dev
22:39 wushudoin| joined #gluster-dev
22:43 hagarth joined #gluster-dev
22:44 wushudoin| joined #gluster-dev
