IRC log for #gluster-dev, 2015-08-05


All times shown according to UTC.

Time Nick Message
00:33 pranithk joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:57 pppp joined #gluster-dev
02:01 krishnan_p joined #gluster-dev
02:29 gem joined #gluster-dev
03:05 kdhananjay joined #gluster-dev
03:12 krishnan_p joined #gluster-dev
03:16 pranithk joined #gluster-dev
03:25 Byreddy joined #gluster-dev
03:32 atinm joined #gluster-dev
03:39 kotreshhr joined #gluster-dev
03:46 shubhendu joined #gluster-dev
03:52 sakshi joined #gluster-dev
04:00 itisravi joined #gluster-dev
04:02 kotreshhr left #gluster-dev
04:13 deepakcs joined #gluster-dev
04:17 overclk joined #gluster-dev
04:19 gem joined #gluster-dev
04:25 rafi joined #gluster-dev
04:35 ppai joined #gluster-dev
04:36 jiffin joined #gluster-dev
04:36 kotreshhr joined #gluster-dev
04:44 ndarshan joined #gluster-dev
04:45 overclk joined #gluster-dev
04:53 kanagaraj joined #gluster-dev
04:55 Gaurav__ joined #gluster-dev
04:56 nbalacha joined #gluster-dev
05:00 aravindavk joined #gluster-dev
05:00 hagarth joined #gluster-dev
05:01 vimal joined #gluster-dev
05:04 vmallika joined #gluster-dev
05:06 hgowtham joined #gluster-dev
05:12 kshlm joined #gluster-dev
05:13 Manikandan joined #gluster-dev
05:14 aspandey joined #gluster-dev
05:27 Bhaskarakiran joined #gluster-dev
05:27 pppp joined #gluster-dev
05:36 anekkunt joined #gluster-dev
05:38 ashiq joined #gluster-dev
05:49 atinm joined #gluster-dev
05:53 Gaurav__ joined #gluster-dev
06:16 kdhananjay joined #gluster-dev
06:21 vimal joined #gluster-dev
06:25 hchiramm rafi++ thanks !
06:25 glusterbot hchiramm: rafi's karma is now 25
06:35 raghu joined #gluster-dev
06:36 pcaruana joined #gluster-dev
06:39 gem joined #gluster-dev
06:44 atinm joined #gluster-dev
06:48 Saravana_ joined #gluster-dev
07:03 itisravi joined #gluster-dev
07:09 josferna joined #gluster-dev
07:13 raghu joined #gluster-dev
07:14 atinm joined #gluster-dev
07:16 gem joined #gluster-dev
07:16 kotreshhr joined #gluster-dev
07:18 arao joined #gluster-dev
07:18 overclk joined #gluster-dev
07:19 Manikandan joined #gluster-dev
07:20 pranithk joined #gluster-dev
07:28 ppai joined #gluster-dev
07:32 skoduri joined #gluster-dev
07:32 overclk joined #gluster-dev
07:35 krishnan_p joined #gluster-dev
07:38 raghu overclk: I have addressed the review comments and sent the patch for review (http://review.gluster.org/#/c/11449/)
07:38 overclk raghu, ah nice :) I'll take a look.
07:49 gem joined #gluster-dev
08:32 shaunm joined #gluster-dev
08:41 aravindavk joined #gluster-dev
08:46 Saravana_ joined #gluster-dev
08:56 ashiq joined #gluster-dev
08:59 aravindavk joined #gluster-dev
09:20 Saravana_ joined #gluster-dev
09:26 pranithk xavih: ping me once you are online, I need to talk to you about https://bugzilla.redhat.com/show_bug.cgi?id=1235964#c6
09:26 glusterbot Bug 1235964: high, unspecified, ---, xhernandez, ASSIGNED , Disperse volume: FUSE I/O error after self healing the failed disk files
09:26 Manikandan joined #gluster-dev
09:26 xavih pranithk: Hi
09:27 pranithk xavih: hey xavih :-)
09:27 xavih pranithk: I'm checking it now
09:31 xavih pranithk: I'm unable to reproduce the last problem Backer reports...
09:31 pranithk xavih: This is a regression from the first self-heal implementation to this one. Because the mount was always doing the heal in the first implementation, it was clearing 'bad' during heal, but now that the shd process can also clear it, there is a need to clear 'bad', maybe in 'xattrop', is my thinking...
09:32 pranithk xavih: oh
09:32 Saravana_ joined #gluster-dev
09:33 aravindavk joined #gluster-dev
09:33 pranithk xavih: you mean https://bugzilla.redhat.com/show_bug.cgi?id=1235964#c7 ?
09:33 glusterbot Bug 1235964: high, unspecified, ---, xhernandez, ASSIGNED , Disperse volume: FUSE I/O error after self healing the failed disk files
09:33 xavih pranithk: yes
09:33 pranithk xavih: yeah even I was not able to re-create it, I just posted a response @ https://bugzilla.redhat.com/show_bug.cgi?id=1236050#c3
09:34 glusterbot Bug 1236050: high, high, ---, pkarampu, ASSIGNED , Disperse volume: fuse mount hung after self healing
09:34 pranithk xavih: I think he updated both bugs with same info :-)
09:34 pranithk xavih: I thought I would wait for him to respond before we post our BIG production-ready mail :-)
09:35 pranithk xavih: I pinged him on #gluster as well... but no reply yet...
09:36 xavih pranithk: yes, it will be better to wait. If the problem is real, it's quite important...
09:37 xavih pranithk: regarding the 'bad' flags, I think it's a bit problematic...
09:38 xavih pranithk: currently bad bricks are filtered out right at the beginning. We would need to do it later after the first xattrop
09:39 xavih pranithk: let me check...
09:44 pranithk xavih: But we changed the code to not do that for ec_internal_fop?
09:44 pranithk xavih: so I guess we should be good to go?
09:45 pranithk xavih: brb
09:48 xavih pranithk: bad bricks are removed from mask before any subfop is started. Later internal fops take the same mask as the parent.
09:48 xavih pranithk: If we don't do this on ec_child_select(), we should do it after xattrop()
09:50 xavih pranithk: however I see two problems: 1) fops that don't do xattrop() should filter bad bricks on ec_child_select(). This can be done, but won't solve the problem of clearing 'bad' after the self-heal daemon has finished healing the file
09:51 kotreshhr joined #gluster-dev
09:51 pranithk xavih: I am sorry I lost you :-(
09:52 pranithk xavih: before internal fops we don't even call ec_child_select for the fop
09:52 pranithk xavih: so I am not understanding where the mask is filtered?
09:53 xavih pranithk: oops, sorry. You are right...
09:53 xavih pranithk: ok, so I need to rethink the problem... I was wrong...
09:53 pranithk xavih: so I think in prepare_update_cbk we can clear it?
09:53 pranithk xavih: cool.
09:55 xavih pranithk: yes, in ec_prepare_update_cbk() we have enough information to determine which bricks are good and which ones are bad
09:56 xavih pranithk: in this case, 'bad' field won't be very useful. It will only be required for fops that do not call xattrop(), but even in this case the contents of 'bad' could be wrong if an external self-heal has healed the file
10:00 pranithk xavih: for that duration only...
10:00 pranithk xavih: Which fops don't do xattrop now?
10:01 pranithk xavih: for almost all of them we added xattrop right?
10:01 xavih pranithk: I'm checking it now...
10:02 ndarshan joined #gluster-dev
10:03 aravindavk joined #gluster-dev
10:08 pranithk xavih: except open/opendir... I think we changed it for all of them...
10:08 pranithk xavih: brb
10:20 kotreshhr joined #gluster-dev
10:26 ndarshan joined #gluster-dev
10:33 xavih pranithk: sorry...
10:33 pranithk xavih: Have a meeting now. I can add this code in prepare_update_cbk() to clear 'bad'...
10:33 xavih pranithk: we also have readdir with offset != 0
10:33 kshlm joined #gluster-dev
10:33 kshlm joined #gluster-dev
10:33 pranithk xavih: that is not a big deal I think. In the middle of readdirs, it has to go bad, so it's fine IMO
10:33 aravindavk joined #gluster-dev
10:33 pranithk xavih: okay gotta run
10:33 xavih pranithk: yes, we'll talk later
10:35 akay1 joined #gluster-dev
10:38 kshlm joined #gluster-dev
10:53 hgowtham rafi++
10:53 glusterbot hgowtham: rafi's karma is now 26
11:05 firemanxbr joined #gluster-dev
11:18 Bhaskarakiran joined #gluster-dev
11:20 dlambrig_ joined #gluster-dev
11:21 pppp joined #gluster-dev
11:27 spalai joined #gluster-dev
11:34 pranithk xavih: meeting is over... there?
11:34 ira joined #gluster-dev
11:34 xavih pranithk: I've 30 minutes before I leave
11:35 pranithk xavih: :-) let's complete
11:35 xavih pranithk: as you said, readdir is not a problem. For open, maybe we should ignore bad in this case. We try to open on all bricks, even if bad is set. It won't harm
11:35 pranithk xavih: yes, for both open and opendir
11:36 xavih pranithk: yes
11:36 pranithk xavih: I think we are good to go then :-)
11:36 pranithk xavih: So in prepare_update_cbk() we need to clear 'bad' if the versions are all good
11:36 xavih pranithk: I thing this means that bad won't be used at all, because all other fops will compute it from the result of xattrop
11:36 xavih pranithk: s/thing/think/
11:37 Manikandan joined #gluster-dev
11:37 xavih pranithk: do we need to keep 'bad' for something ?
11:37 pranithk xavih: oh man, I totally forgot. Remember I sent you a mail where we needed to mark the mask properly when only the fop fails? We are not marking it anywhere?
11:38 pranithk xavih: You suggested I use fop->good for doing ec_xattrop after the fop. But it didn't work and I didn't follow up on that.
11:38 pranithk xavih: if we also solve this problem without using 'bad' then I don't think we need 'bad' at all.
11:41 xavih pranithk: I think we can do that without using 'bad'
11:41 pranithk xavih: I think we also need to solve that part for complete fix.
11:41 xavih pranithk: we can remove from fop->mask those bricks considered bad (and not being healed if appropriate) once we have the xattrop result
11:42 pranithk xavih: You mean even the ones where the actual fop failed?
11:43 xavih pranithk: then, when an answer is accepted, we can use its fop->good for the following xattrop
11:44 pranithk xavih: I think this is what I did, but our upstream tests failed... Let me check it once again. I totally forgot what I did. But we do have general consensus about the idea. I will send something on upstream.
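A rough, hypothetical C sketch of the idea the two agree on above (not the actual ec xlator code): recompute the good bricks from the versions returned by the preliminary xattrop and drop the rest from the fop's mask, rather than trusting a cached 'bad' flag that an external self-heal may have made stale. All type and field names here (ec_fop_sketch_t, mask, good, version, brick_count) are simplified placeholders for whatever would live in something like ec_prepare_update_cbk().

    /* Hypothetical sketch; types and field names are placeholders, not
     * the real ec xlator structures. Once the preliminary xattrop has
     * returned the per-brick version counters, keep only the bricks
     * whose version matches the highest one seen and drop the rest
     * from the fop's mask, instead of trusting a cached 'bad' field. */
    #include <stdint.h>

    #define EC_MAX_BRICKS 64

    typedef struct {
        uint64_t mask;                   /* bricks the fop may still use  */
        uint64_t good;                   /* bricks whose versions matched */
        int      brick_count;
        uint64_t version[EC_MAX_BRICKS]; /* versions seen in the xattrop  */
    } ec_fop_sketch_t;

    /* Would run from something like ec_prepare_update_cbk(). */
    void ec_filter_bad_bricks(ec_fop_sketch_t *fop)
    {
        uint64_t best = 0, good = 0;
        int i;

        for (i = 0; i < fop->brick_count; i++)
            if ((fop->mask & (1ULL << i)) && fop->version[i] > best)
                best = fop->version[i];

        for (i = 0; i < fop->brick_count; i++)
            if ((fop->mask & (1ULL << i)) && fop->version[i] == best)
                good |= 1ULL << i;

        fop->mask &= good; /* bad bricks stop participating in the fop  */
        fop->good = good;  /* reused for the trailing xattrop, as above */
    }

Fops that never issue an xattrop (open/opendir, and readdir with a non-zero offset, as noted earlier in the discussion) would simply skip this step.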
11:44 pranithk xavih: Are there any other bugs you don't have time to look into that I can take?
11:45 xavih pranithk: not at the moment. Only what Backer has reported
11:45 pranithk xavih: he didn't report on either Bugzilla or #gluster. Shall we give it one more day?
11:46 xavih pranithk: maybe until tomorrow. Meantime I'll try to reproduce the latest problem
11:46 pranithk xavih: it is not happening. Don't spend time on the same steps; I tried it 3 times
11:47 kshlm joined #gluster-dev
11:48 xavih pranithk: ok, we could wait until tomorrow to see if he comes back (maybe he is on another time zone)...
11:49 pranithk xavih: great!
11:49 shubhendu joined #gluster-dev
11:50 ndarshan joined #gluster-dev
11:52 pranithk xavih: thanks xavi. I will get this done. Ashish is working on auto healing on replace-brick
11:52 xavih pranithk: if you are too busy, I can take that :)
11:53 pranithk xavih: I need to get this done by end of week. If you have time to do that by end of week, feel free to :-). Otherwise I will take it up
11:54 overclk joined #gluster-dev
11:54 rafi joined #gluster-dev
11:55 xavih pranithk: it seems easy. I'll try to make a patch this afternoon and upload it. If I can't do it today, you can take it tomorrow. Is this ok ?¿
11:55 kdhananjay1 joined #gluster-dev
11:55 rafi joined #gluster-dev
11:55 pranithk xavih: totally sir. If I don't see the patch in the morning, you will see it in review board from me...
11:55 pranithk xavih: I mean my morning
11:55 xavih pranithk: good :)
11:55 pranithk xavih: alright!
11:56 xavih pranithk: I think I'll surely have something by (your) tomorrow morning :)
11:57 shyam joined #gluster-dev
11:57 pranithk xavih: great! one thing I forgot to tell you. In his test case, he doesn't wait for self-heal to complete. You should add EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0 at some places.
11:57 poornimag joined #gluster-dev
11:57 pranithk xavih: apart from that, we can use the same test case.
11:58 xavih pranithk: ok :)
11:58 overclk hchiramm: mind having a look at https://github.com/gluster/glusterfs-specs/pull/1 ?
11:58 kdhananjay joined #gluster-dev
11:58 hchiramm overclk, done. :)
11:59 rafi REMINDER: Gluster Community meeting starting in another 1 minutes in #gluster-meeting
11:59 jrm16020 joined #gluster-dev
11:59 overclk hchiramm: thanks, that was fast :)
12:00 hchiramm overclk++ np
12:00 glusterbot hchiramm: overclk's karma is now 15
12:01 rjoseph joined #gluster-dev
12:02 kshlm joined #gluster-dev
12:04 kanagaraj joined #gluster-dev
12:06 ira ira--
12:06 glusterbot ira: Error: You're not allowed to adjust your own karma.
12:06 ira :)
12:06 nbalacha joined #gluster-dev
12:08 surabhi joined #gluster-dev
12:11 kkeithley ira++
12:11 glusterbot kkeithley: ira's karma is now 2
12:12 hchiramm kkeithley++ for reviewing those patches
12:12 glusterbot hchiramm: kkeithley's karma is now 89
12:13 kkeithley yw
12:25 Manikandan joined #gluster-dev
12:25 itisravi_ joined #gluster-dev
12:28 spalai left #gluster-dev
12:28 spalai joined #gluster-dev
12:33 ndarshan joined #gluster-dev
12:34 surabhi joined #gluster-dev
12:35 shubhendu joined #gluster-dev
12:42 Bhaskarakiran joined #gluster-dev
12:55 spalai left #gluster-dev
12:59 pousley joined #gluster-dev
13:10 kotreshhr left #gluster-dev
13:13 aravindavk joined #gluster-dev
13:17 atinm joined #gluster-dev
13:21 skoduri kkeithley, so by using 'export-symbols', the dynamic loader shall first look for a symbol in that list before looking at the global procedure table.. right?
13:24 kkeithley no, it works the other way around. Only the symbols in the list are exported from the .so.  So, e.g. in snapview-client.so the svc_lookup() is completely invisible to everything else
13:28 kkeithley inside snapview-client.so the svc_lookup reference in the fops table should resolve to its version of svc_lookup().  And the rpc/xdr reference to svc_lookup should resolve to the one in libntirpc.
13:30 kkeithley does that make sense?
13:31 skoduri yes .. :)
13:32 kkeithley Make that: And the ganesha reference to svc_lookup() should resolve to the one in libntirpc.so because it is global (.globl)
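As a rough illustration of what kkeithley describes, here is a hypothetical C fragment rather than the real snapview-client source: GCC's visibility attribute produces at compile time the same effect that libtool's -export-symbols list produces at link time, namely keeping the xlator's own svc_lookup() out of the shared object's dynamic symbol table.

    /* Hypothetical sketch, not the actual snapview-client code. Built
     * with -fPIC -shared, the "hidden" visibility keeps this svc_lookup
     * out of the .so's exported symbols, so the fops-table entry below
     * binds to the local definition, while a consumer such as ganesha
     * still resolves svc_lookup to the global one from libntirpc. */
    #include <stdio.h>

    __attribute__((visibility("hidden")))
    int svc_lookup(const char *path)
    {
        /* xlator-internal lookup; invisible outside this .so */
        printf("snapview-client lookup: %s\n", path);
        return 0;
    }

    /* fops-style table; this reference resolves inside the .so itself. */
    struct fop_table {
        int (*lookup)(const char *path);
    };

    struct fop_table fops = { .lookup = svc_lookup };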
13:32 rafi joined #gluster-dev
13:47 hgowtham joined #gluster-dev
13:47 kdhananjay joined #gluster-dev
13:50 vmallika joined #gluster-dev
13:51 pppp joined #gluster-dev
13:53 _Bryan_ joined #gluster-dev
14:08 overclk joined #gluster-dev
14:22 nbalacha joined #gluster-dev
14:32 lkoranda joined #gluster-dev
14:46 dlambrig_ joined #gluster-dev
14:57 csim hchiramm: https://github.com/gluster/glusterweb/pull/4
14:57 csim hchiramm: sorry, tasks started to appear, and i am peeling layers of windows one by one to finish the interrupted tasks
15:00 jiffin joined #gluster-dev
15:01 kanagaraj joined #gluster-dev
15:02 jiffin joined #gluster-dev
15:08 sankarshan_ joined #gluster-dev
15:14 jobewan joined #gluster-dev
15:16 kshlm joined #gluster-dev
15:18 Gaurav__ joined #gluster-dev
15:25 kdhananjay joined #gluster-dev
15:26 wushudoin joined #gluster-dev
15:43 dlambrig_ joined #gluster-dev
15:51 vimal joined #gluster-dev
15:53 Manikandan joined #gluster-dev
16:02 Gaurav__ joined #gluster-dev
16:10 kshlm joined #gluster-dev
16:18 pppp joined #gluster-dev
16:19 hagarth joined #gluster-dev
16:20 cholcombe joined #gluster-dev
16:29 poornimag joined #gluster-dev
16:35 overclk joined #gluster-dev
16:42 pranithk joined #gluster-dev
16:54 aravindavk joined #gluster-dev
16:54 vimal joined #gluster-dev
17:01 shaunm joined #gluster-dev
17:13 atinm joined #gluster-dev
17:25 wushudoin| joined #gluster-dev
17:30 wushudoin| joined #gluster-dev
17:36 shyam joined #gluster-dev
17:40 krishnan_p joined #gluster-dev
17:44 dlambrig_ joined #gluster-dev
17:47 kotreshhr joined #gluster-dev
17:50 gem joined #gluster-dev
18:04 lpabon joined #gluster-dev
18:52 kotreshhr left #gluster-dev
19:07 lkoranda joined #gluster-dev
19:08 shyam joined #gluster-dev
21:27 dlambrig_ joined #gluster-dev
21:30 badone_ joined #gluster-dev
21:36 pousley__ joined #gluster-dev
21:36 pousley_ joined #gluster-dev
23:03 shyam joined #gluster-dev
23:59 jbautista- joined #gluster-dev
