
IRC log for #gluster-dev, 2016-12-12


All times shown according to UTC.

Time Nick Message
01:58 ashiq joined #gluster-dev
01:59 vbellur joined #gluster-dev
03:22 magrawal joined #gluster-dev
03:24 shubhendu joined #gluster-dev
03:50 itisravi joined #gluster-dev
03:58 atinm joined #gluster-dev
04:04 riyas joined #gluster-dev
04:05 kdhananjay joined #gluster-dev
04:40 jiffin joined #gluster-dev
04:43 aravindavk joined #gluster-dev
04:43 suliba joined #gluster-dev
04:51 skoduri joined #gluster-dev
04:53 nbalacha joined #gluster-dev
04:55 karthik_us joined #gluster-dev
04:58 rafi joined #gluster-dev
04:58 nishanth joined #gluster-dev
05:07 poornima_ joined #gluster-dev
05:12 ndarshan joined #gluster-dev
05:14 sanoj joined #gluster-dev
05:14 prasanth joined #gluster-dev
05:24 Anjana joined #gluster-dev
05:30 susant joined #gluster-dev
05:31 skoduri_ joined #gluster-dev
05:31 kotreshhr joined #gluster-dev
05:33 ppai joined #gluster-dev
05:41 sankarshan joined #gluster-dev
05:42 ankitraj joined #gluster-dev
05:45 atinm joined #gluster-dev
05:46 ankitraj joined #gluster-dev
05:49 hchiramm joined #gluster-dev
05:51 apandey joined #gluster-dev
06:02 karthik_us joined #gluster-dev
06:03 ashiq joined #gluster-dev
06:15 Saravanakmr joined #gluster-dev
06:17 hgowtham joined #gluster-dev
06:19 asengupt joined #gluster-dev
06:27 atinm joined #gluster-dev
06:48 mchangir joined #gluster-dev
06:58 k4n0 joined #gluster-dev
07:00 riyas_ joined #gluster-dev
07:03 pranithk1 joined #gluster-dev
07:12 Saravanakmr joined #gluster-dev
07:26 skoduri_ joined #gluster-dev
07:39 riyas joined #gluster-dev
07:40 devyani7 joined #gluster-dev
07:40 devyani7 joined #gluster-dev
07:50 nbalacha rafi, ping
07:53 rafi nbalacha: pong
08:06 pranithk1 aravindavk: Hey do you want to do 3.9.1 release?
08:07 kdhananjay joined #gluster-dev
08:11 xavih joined #gluster-dev
08:19 xavih pranithk1: I would like to talk about http://review.gluster.org/16074
08:20 pranithk1 xavih: okay :-). Tell me. Was the commit message good? Did you understand the problem?
08:21 xavih pranithk1: I do not completely understand the problem. It seems like the problem is that the same frame is used to unlock two inode locks in parallel, with two distinct owners. However I don't see how the patch solves this problem, so I must be missing something...
08:22 pranithk1 xavih: Ah! the owner gets modified in ec_inodelk because flock->l_owner will be set to the actual owner, which won't be changed
08:23 xavih pranithk1: who will change the owner ?
08:23 pranithk1 xavih: Sorry didn't get the question. Who will change the owner where?
08:25 xavih pranithk1: you say that we need to use flock->l_owner because it won't change. But who will change the owner just assigned to the frame before inodelk returns ?
08:26 pranithk1 xavih: The patch is changing it in "http://review.gluster.org/#/c/16074/1/xlators/cluster/ec/src/ec-locks.c"
08:27 pranithk1 xavih: so here is the problem with earlier code...
08:27 pranithk1 xavih: when we passed a frame in ec_unlock() that frame is copy_frame() in ec_fop_data_allocate() so that new fop->frame gets the same lk-owner as the parent frame
08:27 pranithk1 xavih: which could have the wrong owner
08:28 pranithk1 xavih: but that will be rectified at line: 748....
08:28 pranithk1 xavih: so the unlock and lock are always wound with correct lk-owner
08:28 xavih pranithk1: sorry. I don't get it... let me check the code a little more time...
08:29 xavih pranithk1: probably I'm not understanding the root cause of the problem...
08:30 pranithk1 xavih: may I explain the RC?
08:30 pranithk1 xavih: Could you open ec_unlock_lock()
08:31 xavih pranithk1: in ec_unlock_lock() we set the owner just before calling ec_inodelk()...
08:31 xavih pranithk1: yes, please :)
08:31 xavih pranithk1: I'm there
08:32 pranithk1 xavih: for rename we have two locks, so it is possible for two different threads to execute that code. Based on the order it can call ec_unlock with same lk-owner for both the inode locks which will be wrong
08:33 xavih pranithk1: a single rename can execute unlocks in parallel ?
08:34 xavih pranithk1: it shouldn't happen. Code is sequential for unlocks on a single rename
08:34 pranithk1 xavih: Consider this case
08:34 pranithk1 xavih: 1) rename /a -> /d/a. Locks are acquired on '/' and '/d'
08:34 xavih pranithk1: ok
08:35 pranithk1 xavih: after the rename we will wait for one second on both locks on '/' and '/d'
08:35 pranithk1 xavih: rather should wait
08:36 pranithk1 xavih: After this some other client is consistently doing operations on '/'
08:37 pranithk1 xavih: wait, I think I myself am getting confused. Let me give a very simple case. I will start over and think it through and give the case.
08:37 xavih pranithk1: ok
08:40 pranithk1 xavih: so here is the think. The original 'ec_fop_data_t* fop' which acquired the lock will be alive until unlock happens right?
08:41 pranithk1 xavih: s/think/thing
08:41 xavih pranithk1: I'm thinking...
08:42 rastar joined #gluster-dev
08:44 pranithk1 xavih: That is not true....
08:44 xavih pranithk1: I think this is not always true... the last owner of the lock that is used to create the unlock timer is the one used to later unlock the inode
08:50 xavih pranithk1: I think I'm seeing something, but using your first example... what was wrong with it ?
08:50 xavih pranithk1: the problem happens because of parallel unlock of delayed unlocks, right ?
08:50 pranithk1 xavih: I got confused in the middle. Basically I found a way for parallel unlocks to happen
08:50 xavih pranithk1: in theory it could happen just with a single rename, right ?
08:50 pranithk1 xavih: no, but timer thread executes one timer object at a time
08:51 xavih pranithk1: oh, that's true. Then I still don't see the cause...
08:52 pranithk1 xavih: wait, I will get it. I found a case where that will happen. I saw that in normal case it won't happen.
08:52 pranithk1 xavih: I just forgot the exact case :-D
08:52 pranithk1 xavih: but this is the code path and the fix actually prevented the hang.
08:52 xavih pranithk1: oh, the problem could happen if the timer thread unlocks one of the locks and another thread issues an immediate unlock of the other lock due to a contention detected, for example...
08:52 xavih pranithk1: would that case be a valid example ?
08:53 xavih pranithk1: or even two independent threads, without using the timer thread if the inodes are used for other operations
08:54 pranithk1 xavih: If other operations are there, then they will be transferred to other frame right?
08:55 xavih pranithk1: I'm not so sure... let's try a case... mv /f /a/f; touch /g; touch /a/g
08:55 xavih pranithk1: rename fop takes both locks
08:56 pranithk1 xavih: yes
08:56 xavih pranithk1: then the two create fops take ownership of the locks and proceed
08:57 xavih pranithk1: they finish but the rename is still ongoing (blocked for any cause)
08:57 xavih pranithk1: rename still owns the locks
08:57 pranithk1 xavih: thinking
08:57 xavih pranithk1: no, it's not a valid case because the unlocks would have happened sequentially
08:58 pranithk1 xavih: yeah not this...
08:58 xavih pranithk1: the only possibility is that the timer thread intervenes...
08:59 xavih pranithk1: one of the locks taken by rename is processed by the timer thread. The other one needs to be processed by the rename itself
08:59 k4n0 joined #gluster-dev
08:59 pranithk1 xavih: yeah
09:00 pranithk1 xavih: so the first one has to go to timer thread which needs to be cancelled by new fop and immediately scheduled for unlock
09:00 pranithk1 xavih: second lock is happening in the rename itself
09:00 loadtheacc joined #gluster-dev
09:00 pranithk1 xavih: I am trying to think of the exact steps
09:01 xavih pranithk1: no, if the timer is cancelled, the unlock won't be executed using the rename frame... it needs to timeout and do the unlock from the timer thread
09:01 pranithk1 xavih: I saw some case like this only... :-(
09:02 xavih pranithk1: the only case I see is if the rename thread is put to sleep because there's a lot of work in other threads, and when it wakes, it's at the same time that the timer thread issues the unlock...
09:03 xavih pranithk1: mv /f /a/f -> rename has locks for / and /a
09:03 pranithk1 xavih: well that can happen too, but for that the thread has to sleep for 1 second...
09:04 xavih pranithk1: yes, but I don't see another possibility...
09:04 xavih pranithk1: how many cores does the test machine have?
09:04 pranithk1 xavih: I could recreate on my machine which has 4 cores
09:04 pranithk1 xavih: I mean my laptop
09:05 xavih pranithk1: and how many threads are using gluster ?
09:05 pranithk1 xavih: quite a few because io-threads is also enabled
09:05 xavih pranithk1: if there's a lot of work (and I think there's because you are doing a lot of operations), maybe a thread could sleep up to 1 second. Not sure though...
09:09 pranithk1 xavih: brb... I will ping you
09:21 xavih pranithk1: I think it's a lot simpler... when we decide that an inode lock can be unlocked, we call ec_unlock_now(). Here we can further delay the real unlock of the inode if we need to update the version/size data.
09:22 xavih pranithk1: the calls to ec_update_info() are sequentialized, but the calls to ec_unlock_lock() can be parallelized because they are called from the callback of xattrop in ec_update_size_version_done()
09:32 pranithk1 xavih: hey, I think you are right....
09:33 pranithk1 xavih: yay! you are right. Damn it totally forgot the code path
09:33 pranithk1 xavih: so yes, two threads executing the code path
09:34 pranithk1 xavih: sorry xavi, will remember to document the exact case I saw :-/
09:34 pranithk1 xavih: will go for lunch now. ttyl
09:34 xavih pranithk1: yes, it must be this
09:34 xavih pranithk|lunch: I'll review the patch
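
For reference, a minimal standalone sketch of the race xavih describes above (plain C with pthreads; fake_frame, fake_unlock and the owner values are invented for illustration, this is not GlusterFS code). Two unlock paths share one frame, each stamps its own lk-owner on that frame just before winding the unlock, and whichever write lands last wins, so one unlock can be wound with the wrong owner and the corresponding brick-side lock is never released:

/* toy model of two parallel unlocks reusing one shared frame */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

struct fake_frame {
    unsigned long lk_owner;          /* stands in for frame->root->lk_owner */
};

static struct fake_frame shared_frame;

static void fake_unlock(unsigned long expected_owner)
{
    /* the "brick" only sees whatever is in the frame at wind time */
    unsigned long seen = shared_frame.lk_owner;
    if (seen != expected_owner)
        printf("unlock wound with owner %lu, expected %lu -> lock leaks\n",
               seen, expected_owner);
}

static void *unlock_path(void *arg)
{
    unsigned long my_owner = (unsigned long)(uintptr_t)arg;

    /* "set the owner on the shared frame, then wind" -- deliberately racy */
    shared_frame.lk_owner = my_owner;
    usleep(1000);                    /* widen the race window */
    fake_unlock(my_owner);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, unlock_path, (void *)1UL);
    pthread_create(&t2, NULL, unlock_path, (void *)2UL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

In the conversation above, the two racing paths are the xattrop callbacks reaching ec_unlock_lock() from ec_update_size_version_done(): ec_update_info() calls are serialized, but the two unlocks of a rename's locks are not, which is why only one of them ends up wound with the right owner.
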
09:35 ppai joined #gluster-dev
09:45 purpleidea i'm just off to bed, but can someone confirm if the red hat VPN is not working properly or if it's just me? /cc ppai kkeithley vbellur ?
09:45 ppai purpleidea: let me just try that out
09:45 purpleidea ppai: thanks!
09:46 ppai purpleidea: works for me
09:47 purpleidea ppai: which server did you try? (OpenVpn? udp/tcp ?)
09:48 ppai purpleidea: vpnc 66.187.233.55
09:49 purpleidea ppai: weird, vpnc is working for me, just not openvpn
09:49 purpleidea thanks anyways!
09:51 ppai purpleidea: maybe the links to mojo pages in the "openvpn" mail on announce-list will help? check those out
09:53 purpleidea yeah, i think they must have changed a setting related to this "now official" openvpn. Anyways, I gotta sleep! Thanks again!
09:59 ndevos purpleidea: yes, you need a new CA cert, other than that, the old openvpn config should work
10:00 misc yup, but the new CA is behind the VPN :)
10:01 ndevos but it is the same CA cert that is used for other IT supported things, so it may well be on your systems already :)
10:02 ndevos oh, and purpleidea, tls-remote=ovpn.redhat.com instead of the real hostname of the vpn gateway
10:03 * ndevos actually has the openvpn config in an Ansible role!
10:03 ndevos (for NetworkManager even)
10:04 k4n0 joined #gluster-dev
10:05 misc oh, nice
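
A rough OpenVPN client fragment matching what ndevos describes above (a sketch with placeholders only: the gateway hostname and the CA file path are not taken from this log, and the authoritative settings are in the "openvpn" mail on announce-list):

client
dev tun
proto udp
remote <vpn-gateway-hostname>
# the new CA cert ndevos mentions; the same CA used for other IT-supported services
ca /etc/openvpn/redhat-it-ca.crt
# per ndevos: ovpn.redhat.com here, instead of the gateway's real hostname
tls-remote ovpn.redhat.com
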
10:08 pranithk1 xavih: sure xavi, let me know if we can do it in a simpler way
10:09 pranithk1 xavih: I was initially thinking to change ec_inodelk to take lk-owner as well, but stuffed it in gf_lock_t for now because no one else is using it. But there is always a problem if someone later messes with gf_lock_t l_owner in future...
10:10 pranithk1 xavih: Let me know if you're fine with changing ec_inodelk to take lk-owner, that will be better IMO
10:10 pranithk1 xavih: I mean future proof
10:11 xavih pranithk1: I was thinking about this...
10:14 xavih pranithk1: btw, it seems that protocol/client doesn't use the owner set in the frame for inodelk, it takes the one set in flock. Is that right ?
10:18 pranithk1 xavih: huh really
10:18 pranithk1 xavih: checking
10:19 pranithk1 xavih: as per the code it is not doing that...
10:19 pranithk1 xavih: did you see it somewhere?
10:19 pranithk1 xavih: " I was thinking about this..." Are you saying it is better to change the function signature?
10:21 xavih pranithk1: the owner is copied from flock in gf_proto_flock_from_flock(), right ?
10:22 xavih pranithk1: yes, I was thinking to change the signature
10:22 xavih pranithk1: but wanted to check how the owner is used in protocol/client
10:23 pranithk1 xavih: but that won't matter. frame->root->lk_owner is the one locks xlator uses
10:24 xavih pranithk1: yes, but how that owner is passed to the brick ?
10:24 pranithk1 xavih: as part of serialization of frame variables
10:24 pranithk1 xavih: wait, let me find the function
10:25 pranithk1 xavih: rpc_clnt_record
10:25 pranithk1 xavih: that is the function
10:26 xavih pranithk1: thanks, I didn't see that... :/
10:27 xavih pranithk1: so the owner field inside flock is ignored by all xlators ?
10:27 pranithk1 xavih: At the moment, but not sure about future
10:27 xavih pranithk1: or does it have any meaning ?
10:27 xavih pranithk1: ok
10:27 pranithk1 xavih: It is better to take the extra parameter in ec_inodelk IMO
10:27 xavih pranithk1: I also prefer that
10:28 pranithk1 xavih: cool, this is the only doubt I had when I sent the patch :-). I will resend it with the new changes.... Any other things you wanted to discuss about for this patch?
10:29 pranithk1 xavih: Thanks for the extra work you had to put in for finding the RC yet again :-). I should have documented it :-/
10:29 xavih pranithk1: no, that's all. The main problem is that I was unable to identify the real problem. Now it's clear :)
10:29 xavih pranithk1: no problem :)
10:31 pranithk1 xavih: cool. Will resend patch by tomorrow...
10:31 xavih pranithk1: great :)
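
A companion sketch to the race example earlier, again standalone C with invented names (fake_call, fake_inodelk_unlock) rather than the real ec_inodelk signature: each unlock call carries its own lk-owner instead of writing it into state shared between callers, which is the essence of passing lk-owner as an explicit argument. In the real code the owner that actually reaches the brick is frame->root->lk_owner, serialized by rpc_clnt_record, so the explicit argument would be applied to the fop's own frame before winding:

/* toy model: the owner travels with the call, so parallel unlocks cannot clobber it */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

struct fake_call {
    unsigned long lk_owner;          /* per-call copy, set once, never shared */
};

static void fake_inodelk_unlock(const struct fake_call *call)
{
    printf("unlock wound with owner %lu\n", call->lk_owner);
}

static void *unlock_path(void *arg)
{
    unsigned long my_owner = (unsigned long)(uintptr_t)arg;
    struct fake_call call = { .lk_owner = my_owner };

    usleep(1000);                    /* same scheduling pressure as before */
    fake_inodelk_unlock(&call);      /* always wound with the right owner */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, unlock_path, (void *)1UL);
    pthread_create(&t2, NULL, unlock_path, (void *)2UL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
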
10:43 karthik_us|afk joined #gluster-dev
11:20 mchangir joined #gluster-dev
11:21 skoduri_ joined #gluster-dev
11:25 shubhendu joined #gluster-dev
11:28 aravindavk joined #gluster-dev
11:44 ankitraj joined #gluster-dev
12:01 rastar joined #gluster-dev
12:28 atinm joined #gluster-dev
12:30 nbalacha joined #gluster-dev
12:41 rraja joined #gluster-dev
12:50 kkeithley purpleidea: was working fine for me yesterday and this am using the new openvpn
12:52 kdhananjay joined #gluster-dev
12:52 ira joined #gluster-dev
12:55 susant joined #gluster-dev
13:02 kkeithley mchangir++
13:02 glusterbot kkeithley: mchangir's karma is now 3
13:06 BlackoutWNCT1 joined #gluster-dev
13:40 k4n0 joined #gluster-dev
13:44 pranithk1 joined #gluster-dev
13:45 susant joined #gluster-dev
13:53 kotreshhr left #gluster-dev
14:02 shaunm joined #gluster-dev
14:05 gem joined #gluster-dev
14:21 susant left #gluster-dev
14:23 lkoranda joined #gluster-dev
14:40 susant joined #gluster-dev
14:40 k4n0 joined #gluster-dev
14:55 ankitraj joined #gluster-dev
15:21 mchangir joined #gluster-dev
15:27 mchangir joined #gluster-dev
15:36 nishanth joined #gluster-dev
15:36 annettec joined #gluster-dev
15:39 mchangir_ joined #gluster-dev
15:48 k4n0 joined #gluster-dev
15:49 susant joined #gluster-dev
15:49 susant left #gluster-dev
15:52 nbalacha joined #gluster-dev
15:59 wushudoin joined #gluster-dev
16:11 susant joined #gluster-dev
16:11 susant left #gluster-dev
16:11 nbalacha joined #gluster-dev
16:18 riyas joined #gluster-dev
16:37 raghu joined #gluster-dev
16:38 k4n0 joined #gluster-dev
16:42 lpabon joined #gluster-dev
16:52 nbalacha joined #gluster-dev
17:17 rafi joined #gluster-dev
17:25 susant joined #gluster-dev
17:25 susant left #gluster-dev
17:28 k4n0 joined #gluster-dev
17:33 susant joined #gluster-dev
17:33 susant left #gluster-dev
18:11 annettec joined #gluster-dev
18:17 ashiq joined #gluster-dev
18:32 ankitraj joined #gluster-dev
18:40 ashiq joined #gluster-dev
18:42 hchiramm joined #gluster-dev
18:51 susant joined #gluster-dev
18:51 susant left #gluster-dev
18:53 cholcombe vbellur, so i heard you were checking out my rust bindings.  If I can help with anything let me know :).  I'm currently working on async IO, and I think I have an idea of how to implement it
19:05 dlambrig_ joined #gluster-dev
19:17 susant joined #gluster-dev
19:27 lpabon joined #gluster-dev
19:51 k4n0 joined #gluster-dev
20:16 ashiq joined #gluster-dev
20:16 lpabon joined #gluster-dev
20:25 ashiq_ joined #gluster-dev
20:32 skoduri_ joined #gluster-dev
21:00 lpabon joined #gluster-dev
21:20 gem joined #gluster-dev
21:35 k4n0 joined #gluster-dev
21:54 k4n0 joined #gluster-dev
22:46 dlambrig_ joined #gluster-dev
