
IRC log for #gluster-dev, 2017-07-07


All times shown according to UTC.

Time Nick Message
00:04 Alghost joined #gluster-dev
00:39 Alghost joined #gluster-dev
00:39 mchangir_ joined #gluster-dev
00:42 gyadav__ joined #gluster-dev
00:54 gyadav__ joined #gluster-dev
01:00 vbellur joined #gluster-dev
01:07 Alghost joined #gluster-dev
01:14 purpleidea joined #gluster-dev
01:14 purpleidea joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:52 Alghost_ joined #gluster-dev
02:27 gyadav__ joined #gluster-dev
02:44 prasanth joined #gluster-dev
02:49 gyadav__ joined #gluster-dev
02:56 ashiq joined #gluster-dev
03:36 Saravanakmr joined #gluster-dev
03:37 susant joined #gluster-dev
03:38 nbalacha joined #gluster-dev
03:43 mgethers joined #gluster-dev
03:50 itisravi joined #gluster-dev
03:57 atinm joined #gluster-dev
04:03 skumar joined #gluster-dev
04:24 Alghost joined #gluster-dev
04:31 jiffin joined #gluster-dev
04:35 Shu6h3ndu joined #gluster-dev
04:43 ppai joined #gluster-dev
04:52 gyadav__ joined #gluster-dev
05:04 sanoj joined #gluster-dev
05:10 amarts joined #gluster-dev
05:10 ankitr joined #gluster-dev
05:14 karthik_us joined #gluster-dev
05:15 susant joined #gluster-dev
05:19 itisravi atinm: I've sent https://review.gluster.org/#/c/17721 that should fix stats-dump.t
05:22 pkalever joined #gluster-dev
05:36 hgowtham joined #gluster-dev
05:37 kotreshhr joined #gluster-dev
05:40 prasanth joined #gluster-dev
05:42 apandey joined #gluster-dev
05:43 sanoj joined #gluster-dev
05:46 ashiq joined #gluster-dev
05:51 Alghost Can I get any examples for developing a new xlator?
05:54 Saravanakmr joined #gluster-dev
05:55 Alghost I couldn't understand translator-development.md on GitHub... because I'm pretty new
05:58 mchangir_ Alghost, take a look at rot-13 xlator sources
05:59 Alghost mchangir_: Thank you :D
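
For context on the rot-13 pointer above: a translator is a shared object that exposes init/fini plus a table of fop callbacks; anything it does not override falls through to defaults that wind the call to its child. The skeleton below is only a hedged sketch, not code from the GlusterFS tree; exact fop signatures and header names vary between releases, so compare against xlator.h and the rot-13 sources before relying on it.

    /* Minimal pass-through translator sketch (illustrative only; verify
     * against xlator.h for your GlusterFS version). */
    #include "xlator.h"
    #include "defaults.h"

    int32_t
    init(xlator_t *this)
    {
            if (!this->children || this->children->next) {
                    gf_log(this->name, GF_LOG_ERROR,
                           "this translator needs exactly one subvolume");
                    return -1;
            }
            return 0;
    }

    void
    fini(xlator_t *this)
    {
            /* free any private state allocated in init() */
    }

    /* No fops overridden: every operation is wound straight to the child.
     * A real xlator (like rot-13) fills in the fops it cares about, e.g.
     * .readv and .writev, and winds/unwinds with STACK_WIND/STACK_UNWIND. */
    struct xlator_fops fops = { };
    struct xlator_cbks cbks = { };

    struct volume_options options[] = {
            { .key = { NULL } },
    };
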
06:12 sona joined #gluster-dev
06:20 aravindavk joined #gluster-dev
06:25 Saravanakmr joined #gluster-dev
06:31 msvbhat joined #gluster-dev
06:32 skumar joined #gluster-dev
06:32 hgowtham joined #gluster-dev
06:36 kdhananjay joined #gluster-dev
06:43 vbellur joined #gluster-dev
06:47 rafi joined #gluster-dev
06:50 skoduri joined #gluster-dev
06:50 skumar joined #gluster-dev
07:02 susant joined #gluster-dev
07:06 ankitr joined #gluster-dev
07:09 itisravi joined #gluster-dev
07:19 itisravi ndevos: please merge https://review.gluster.org/#/c/17678/ when you have some free cycles. Thanks!
07:31 skumar_ joined #gluster-dev
08:39 aravindavk joined #gluster-dev
08:45 aravindavk joined #gluster-dev
08:55 aravindavk joined #gluster-dev
08:55 skumar_ joined #gluster-dev
08:58 pranithk1 joined #gluster-dev
09:00 itisravi joined #gluster-dev
09:01 aravindavk joined #gluster-dev
09:19 aravindavk joined #gluster-dev
09:27 apandey joined #gluster-dev
09:31 pranithk1 xavih: hey, are you there?
09:34 susant joined #gluster-dev
09:35 xavih pranithk1: hi
09:39 pranithk1 xavih: hey, how are you?
09:39 xavih pranithk1: busy, but fine :)
09:39 xavih pranithk1: today I'm at the office
09:40 xavih pranithk1: I've just answered your last email
09:40 pranithk1 xavih: yeah, I figured :-)
09:40 amarts :O
09:41 pranithk1 xavih: Actually I pinged you for parallel writes
09:41 pranithk1 xavih: I made some decisions because you were not available...
09:41 xavih pranithk1: yes, I saw it, but this morning you were not connected...
09:41 pranithk1 xavih: yeah, I had meetings till like 1PM
09:41 xavih pranithk1: I've seen the comment in the patch
09:42 pranithk1 xavih: you are okay with that?
09:42 pranithk1 xavih: I am a tad bit worried about what we should do if perf drops. Shall we live with it until we also do posix change?
09:43 xavih pranithk1: if I've understood it well, you are returning size as an xdata in inodelk, right ?
09:43 pranithk1 xavih: in xattrop
09:43 xavih pranithk1: ah, ok
09:44 xavih pranithk1: not sure about the performance impact
09:44 pranithk1 xavih: that I will get the data...
09:45 xavih pranithk1: it should be small, as it only involves a local filesystem call, that will be most probably cached
09:45 pranithk1 xavih: even if the perf drop is within 10% I feel this is a neat solution. We can do the posix change and get the perf up for all workloads
09:45 xavih pranithk1: there's no additional network roundtrip or anything else
09:45 pranithk1 xavih: yeah :-)
09:45 pranithk1 xavih: since we would have just done lookup, it should be in caches as well
09:45 pranithk1 xavih: so it won't necessarily hit the disk
09:46 xavih pranithk1: yes. I think this would be the best solution
09:46 pranithk1 xavih: cool. So that part I think is taken care in that case
09:46 xavih pranithk1: and it would also be usable by all other fops that send iatt structures, reducing the number of system calls
09:46 pranithk1 xavih: oh, I am not sending iatt structure, just size
09:46 xavih pranithk1: I mean to make posix cache some metadata
09:47 xavih pranithk1: yes, yes, I know
09:47 pranithk1 xavih: cool :-)
09:47 pranithk1 xavih: posix cache also I will implement. It is long pending. rabhat tried it but I never found time to review it :-(
09:47 xavih pranithk1: I'm saying it would be useful if it's cached in posix xlator
09:47 pranithk1 xavih: One second, let me just gather my thoughts on the lock conflict case. just give me 1 minute
09:47 xavih pranithk1: I've already had this in mind for a long time...
09:51 pranithk1 xavih: you mean caching at posix?
09:51 xavih pranithk1: yes
09:51 pranithk1 xavih: yeah, so that will be done
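
The size-via-xdata idea discussed above amounts to the brick attaching the current on-disk size to the dict it already returns from xattrop, so the EC client does not need an extra lookup to learn it. Below is a rough, hypothetical sketch using the libglusterfs dict API; "example.ec.size" is a placeholder key, not the name the actual patch uses.

    /* Hypothetical helpers, not the actual patch. */
    #include "dict.h"
    #include "iatt.h"

    /* Brick side: stash the file size in the xattrop reply xdata. */
    static int
    put_size_in_xdata(dict_t *xdata_rsp, struct iatt *stbuf)
    {
            return dict_set_uint64(xdata_rsp, "example.ec.size",
                                   stbuf->ia_size);
    }

    /* Client side (xattrop callback): read it back if the brick sent it. */
    static int
    get_size_from_xdata(dict_t *xdata, uint64_t *size)
    {
            if (!xdata)
                    return -1;
            return dict_get_uint64(xdata, "example.ec.size", size);
    }
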
09:51 pranithk1 xavih: as for the locks, this is what I have in mind based on your comments
09:52 msvbhat joined #gluster-dev
09:53 pranithk1 xavih: We store the locks that are granted in ascending order of offset.
09:54 aravindavk joined #gluster-dev
09:54 pranithk1 xavih: I mean fl_start
09:54 xavih pranithk1: right
09:54 pranithk1 xavih: We store the locks that are yet to be granted based on fl_end
09:54 pranithk1 xavih: only then I think we can do what you are asking for...?
09:55 xavih pranithk1: let me think...
09:55 xavih pranithk1: why do you need to sort waiting queue by fl_end ?
09:56 xavih pranithk1: granted locks will be a simple list with no overlaps, right ?
09:56 pranithk1 xavih: Because the comparison is between end of one lock with start of other
09:56 pranithk1 xavih: Not if these are shared locks
09:56 pranithk1 xavih: shared locks are screwing things up :-)
09:57 ndevos nigelb, misc: smoke tests (fedora builds) fail on builder9.rht.gluster.org because of 'At least 120MB more space needed on the / filesystem.'
09:58 pranithk1 xavih: wait, I think my idea was wrong. We need to check both ends with both starts
09:58 xavih pranithk1: right, we need to keep a list of granted regions. Each region can be in read or write mode, but not both.
09:58 xavih pranithk1: yes, the general case cannot be solved with a simple list
09:58 pranithk1 xavih: cool
09:59 xavih pranithk1: we need to have a list of segments with no overlaps
09:59 pranithk1 xavih: hmm... how? because there can be overlaps for shared-locks...
10:00 xavih pranithk1: suppose we have 2 regions: [10..20) and [15..30)
10:00 pranithk1 xavih: okay...
10:01 xavih pranithk1: we can easily build a list of 3 segments: [10..15) with 1 reference, [15..20) with 2 references, and [20..30) with 1 reference
10:01 xavih pranithk1: another region can split some of the segments if needed
10:02 xavih pranithk1: the final result is a sequence of non-overlapping segments
10:02 xavih pranithk1: sorted by fl_start
10:02 xavih pranithk1: this can be easily compared with another list sorted by fl_start, even if there are overlaps
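
To make the segment idea above concrete: the two granted regions [10..20) and [15..30) collapse into the non-overlapping segments [10..15), [15..20) and [20..30) with reference counts 1, 2 and 1. The standalone sketch below (not taken from the EC sources; region ends are exclusive) rebuilds such a list from a set of granted byte-range locks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    typedef struct { uint64_t start, end; } region_t;  /* fl_start / fl_end */

    static int cmp_u64(const void *a, const void *b)
    {
            uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
            return (x > y) - (x < y);
    }

    int main(void)
    {
            region_t locks[] = { { 10, 20 }, { 15, 30 } };  /* example above */
            size_t n = sizeof(locks) / sizeof(locks[0]);

            /* 1. Collect every region boundary and sort them. */
            uint64_t bounds[2 * n];
            for (size_t i = 0; i < n; i++) {
                    bounds[2 * i]     = locks[i].start;
                    bounds[2 * i + 1] = locks[i].end;
            }
            qsort(bounds, 2 * n, sizeof(bounds[0]), cmp_u64);

            /* 2. Each pair of adjacent boundaries is a candidate segment;
             *    its reference count is how many locks fully cover it. */
            for (size_t i = 0; i + 1 < 2 * n; i++) {
                    uint64_t s = bounds[i], e = bounds[i + 1];
                    if (s == e)
                            continue;                   /* skip empty spans */
                    int refs = 0;
                    for (size_t j = 0; j < n; j++)
                            if (locks[j].start <= s && e <= locks[j].end)
                                    refs++;
                    if (refs)
                            printf("[%llu..%llu) refs=%d\n",
                                   (unsigned long long)s,
                                   (unsigned long long)e, refs);
            }
            return 0;
    }

    /* Prints: [10..15) refs=1, [15..20) refs=2, [20..30) refs=1 */
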
10:03 xavih pranithk1: however we must try to avoid starvation of requests with high fl_start values...
10:04 pranithk1 xavih: starvation part is taken care
10:04 xavih pranithk1: how are you preventing starvation ?
10:04 pranithk1 xavih: If one lock conflicts, the locks after it are not allowed to be granted
10:05 xavih pranithk1: but if we have the list sorted, one lock with a high fl_start can be blocked indefinitely, even if other locks are granted
10:06 xavih pranithk1: because we can receive new lock requests with smaller fl_start
10:06 pranithk1 xavih: the newer locks shouldn't be granted until the older ones are granted
10:06 pranithk1 xavih: This part I will check again, but this is how locks xlator prevents starvation...
10:07 aravindavk joined #gluster-dev
10:07 xavih pranithk1: but then we cannot sort the list, going again to the start point
10:08 pranithk1 xavih: oh why? We are only sorting the granted locks...
10:08 pranithk1 xavih: right?
10:08 pranithk1 xavih: To prevent starvation, waiting locks should be first come first serve...?
10:09 xavih pranithk1: if the list of waiting locks is not sorted, it's useless to have the granted locks list sorted. It will take O(n) to find the collision
10:09 xavih pranithk1: unless we use a BST
10:09 pranithk1 xavih: ha ha
10:10 pranithk1 xavih: sorry, I forgot the whole point :-)
10:10 pranithk1 xavih: okay, let us back up a bit
10:10 xavih pranithk1: yes, this way we prevent starvation, but we also prevent parallel execution of some operations that could be executed concurrently
10:11 xavih pranithk1: what do you think about this:
10:11 pranithk1 xavih: tell me
10:11 xavih pranithk1: have the granted list as I explained (list of granted segments linked to the corresponding locks)
10:11 xavih pranithk1: have the waiting list sorted by fl_start
10:12 pranithk1 xavih: okay... and?
10:13 xavih pranithk1: now we compare iteratively (cost O(n)) and launch all compatible requests (no read/write collision)
10:13 xavih pranithk1: but we remember the first entry that cannot be granted
10:13 pranithk1 xavih: okay
10:13 xavih pranithk1: and in the next round of traversing the lists, we start from this item, giving it a bit more priority
10:14 xavih pranithk1: well, there are combinations that also cause starvation, I think... :(
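
A hedged sketch of the scan outlined above: granted segments and waiting requests are both sorted by start offset, every waiting request that does not overlap a conflicting granted segment is launched, and the first one that has to stay blocked is remembered so the next pass can give it priority. Names and types are made up for illustration; a real implementation would also merge newly granted requests back into the segment list.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint64_t start, end; int write; } req_t;

    static int overlaps(const req_t *a, const req_t *b)
    {
            return a->start < b->end && b->start < a->end;
    }

    /* granted[] (non-overlapping segments) and waiting[] are both sorted by
     * .start. Returns the index of the first waiting request that could not
     * be granted, or nw if all of them fit. */
    static size_t
    scan_waiting(const req_t *granted, size_t ng, req_t *waiting, size_t nw,
                 void (*grant)(req_t *))
    {
            size_t g = 0, first_blocked = nw;

            for (size_t w = 0; w < nw; w++) {
                    /* segments ending before this request cannot conflict
                     * with it or with any later (higher-start) request */
                    while (g < ng && granted[g].end <= waiting[w].start)
                            g++;

                    int conflict = 0;
                    for (size_t k = g;
                         k < ng && granted[k].start < waiting[w].end; k++) {
                            /* read/read never conflicts; a write conflicts
                             * with anything it overlaps */
                            if ((granted[k].write || waiting[w].write) &&
                                overlaps(&granted[k], &waiting[w])) {
                                    conflict = 1;
                                    break;
                            }
                    }

                    if (!conflict)
                            grant(&waiting[w]);
                    else if (first_blocked == nw)
                            first_blocked = w;
            }
            return first_blocked;
    }
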
10:14 pranithk1 xavih: yeah. Okay so here is some information
10:15 pranithk1 xavih: Both afr and locks do these O(M*N) checks
10:15 pranithk1 xavih: I have never found this to be the reason for performance degradation for more than 4 years now...
10:15 pranithk1 xavih: Do you think we can park this problem for later when we find this to be the perf bottleneck?
10:16 xavih pranithk1: sure
10:16 pranithk1 xavih: Are you angry or disappointed, or does this suggestion make sense for now?
10:17 xavih pranithk1: it's ok for now
10:18 pranithk1 xavih: okay. The reason I asked is because I felt I was saying No to the ideas you are coming up with. I didn't want to make you angry or something by saying No :-).
10:19 xavih pranithk1: it's ok. I understand that there are timings to be met
10:20 xavih pranithk1: I feel that many times we take the easy solution due to timing restrictions instead of spending some more time in a better, more general solution
10:20 skumar_ joined #gluster-dev
10:20 pranithk1 xavih: Let us do one thing.
10:21 pranithk1 xavih: yeah, nowadays I am also feeling the same way.
10:21 pranithk1 xavih: We will come up with the complete solution. If we can implement it within the time frame for 3.12, we will go ahead and implement it.
10:21 xavih pranithk1: it's ok for the short term, but in the mid or long-term we need to take another "easy" solution to bypass or solve a problem caused by an incomplete initial design...
10:21 pranithk1 xavih: If we can't then we park it for later
10:21 xavih pranithk1: and this never ends...
10:22 xavih pranithk1: no, no, for this particular case, it's ok to have O(N*M) because most use cases won't have big M or N
10:22 xavih pranithk1: I'm talking in general for other areas...
10:23 pranithk1 xavih: yeah, this is the third solution I said we will do later, even I am feeling a bit guilty :-(, that is why I am asking.
10:23 xavih pranithk1: the parallel write thing will already have too many issues to concentrate on this...
10:23 pranithk1 xavih: yeah, testing it will take a while
10:23 xavih pranithk1: today I've seen something we need to take care of...
10:23 xavih pranithk1: let me check...
10:24 pranithk1 xavih: oh, what is it?
10:24 pranithk1 xavih: I am going to concentrate on lk fop after getting these two features out. parallel-writes and posix-cache
10:27 sanoj joined #gluster-dev
10:29 xavih pranithk1: ec-common.c, line 1069
10:29 xavih pranithk1: this will probably be an issue for parallel writes
10:29 pranithk1 xavih: this is on master?
10:29 xavih pranithk1: yes
10:30 xavih pranithk1: probably it's as easy as adding a LOCK(), but we need to check it
10:30 xavih pranithk1: or moving the code inside the previous LOCK'd region
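
The kind of fix being discussed, sketched here with hypothetical names (ctx, pending, dirty, mask) rather than the real ec-common.c code: an update that used to happen after UNLOCK() was safe while writes were serialized, but with parallel writes it has to move inside the existing critical section (or take the lock again around it). LOCK()/UNLOCK() are the libglusterfs macros on a gf_lock_t.

    #include <stdint.h>
    #include "locking.h"        /* gf_lock_t, LOCK(), UNLOCK() */

    struct shared_ctx {
            gf_lock_t lock;
            int       pending;
            uint64_t  dirty;
    };

    static void
    update_ctx(struct shared_ctx *ctx, uint64_t mask)
    {
            LOCK(&ctx->lock);
            {
                    ctx->pending++;      /* was already protected */
                    ctx->dirty |= mask;  /* previously updated outside the
                                            lock; racy once two writes can
                                            run on the same inode at once */
            }
            UNLOCK(&ctx->lock);
    }
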
10:30 shyam joined #gluster-dev
10:31 xavih pranithk1: probably dirty management will also have some issues...
10:32 pranithk1 xavih: I will go through the whole transaction...
10:33 pranithk1 xavih: thanks for finding this
10:33 pranithk1 xavih: all the update-related management of the structures will need a careful re-look
10:34 pranithk1 xavih: We have 2 more weeks for merge.
10:34 pranithk1 xavih: We will be thorough
10:34 pranithk1 xavih: In a week branching will happen, but the merge will happen post that..
10:34 xavih pranithk1: yes, we were doing many things without much care because we were sure we were the only ones working on the data. Now we can have multiple writes...
10:34 pranithk1 xavih: yeah
10:34 pranithk1 xavih: I will do the needful
10:35 xavih pranithk1: I'll try to help you as much as I can, but I have very limited time now...
10:35 pranithk1 xavih: I understand. Don't worry
10:36 pranithk1 xavih: I think we completed critical part of discussion today.
10:36 xavih pranithk1: if you can update the patch in gerrit from time to time, I can take a look
10:36 pranithk1 xavih: Rest is just finding contentions and handling them
10:36 pranithk1 xavih: yeah, I will do one update by Monday
10:36 pranithk1 xavih: With all updates from today along with the update/dirty races
10:37 xavih pranithk1: great
10:37 pranithk1 xavih: that's all I wanted to ask Xavi. Thanks for your time :-)
10:37 xavih pranithk1: you're welcome :)
10:38 pranithk1 xavih: last week I was busy too, so there was not much progress; this week I am almost done with the initial comments.
10:39 pranithk1 xavih: Let us be conservative about merging this patch in 3.12
10:39 xavih pranithk1: I don't think we get a 10% drop... it should be mostly unnoticeable, I think...
10:39 pranithk1 xavih: I am fine even if it doesn't make it to 3.12
10:39 pranithk1 xavih: yeah
10:40 xavih pranithk1: it would be great to have it in 3.12, but we must be absolutely sure it works
10:40 pranithk1 xavih: yeah, the last thing we want is to miss some size/version update
10:40 pranithk1 xavih: dirty is a bit safe :-), nothing bad would happen to data, just extra heals would happen
10:41 xavih pranithk1: but it could impact performance by sending unnecessary requests and starting self-heals
10:41 xavih pranithk1: but yes, it won't be the worst thing :)
10:42 pranithk1 xavih: yeah. Nothing irrecoverable would happen
10:42 pranithk1 xavih: okay cool. I have one more meeting now. I will work on the patch a bit later.
10:42 mchangir_ what's up with ./tests/basic/stats-dump.t ?
10:42 xavih pranithk1: good :)
10:42 pranithk1 mchangir_: itisravi sent a patch to fix it
10:43 xavih pranithk1: See you
10:43 pranithk1 xavih: cya!
10:43 mchangir_ pranithk1, wow! thanks
10:43 pranithk1 mchangir_: https://review.gluster.org/17721
10:43 amarts joined #gluster-dev
10:49 uebera|| joined #gluster-dev
10:49 uebera|| joined #gluster-dev
10:50 uebera|| joined #gluster-dev
10:53 Alghost joined #gluster-dev
11:16 amarts joined #gluster-dev
11:23 xavih joined #gluster-dev
11:35 ankitr joined #gluster-dev
12:02 Saravanakmr joined #gluster-dev
12:08 major joined #gluster-dev
12:11 skoduri joined #gluster-dev
12:50 vbellur joined #gluster-dev
12:51 vbellur1 joined #gluster-dev
12:51 vbellur joined #gluster-dev
12:52 vbellur joined #gluster-dev
12:52 vbellur joined #gluster-dev
12:53 vbellur joined #gluster-dev
12:56 vbellur joined #gluster-dev
13:12 skumar joined #gluster-dev
13:23 susant joined #gluster-dev
13:30 jstrunk joined #gluster-dev
13:32 shyam joined #gluster-dev
13:38 msvbhat joined #gluster-dev
13:49 pranithk1 joined #gluster-dev
13:55 shaunm joined #gluster-dev
13:56 susant left #gluster-dev
13:57 vbellur joined #gluster-dev
14:06 ndevos kkeithley: care to review WebNFS export permission checking? https://review.gluster.org/17718
14:24 vbellur joined #gluster-dev
14:30 ankitr joined #gluster-dev
14:49 nbalacha joined #gluster-dev
14:52 vbellur joined #gluster-dev
15:14 hgowtham joined #gluster-dev
15:18 amarts joined #gluster-dev
15:19 msvbhat joined #gluster-dev
15:19 shyam joined #gluster-dev
15:21 vbellur joined #gluster-dev
15:29 kotreshhr left #gluster-dev
16:06 shaunm joined #gluster-dev
16:07 gyadav__ joined #gluster-dev
16:24 shyam joined #gluster-dev
16:44 vbellur joined #gluster-dev
17:03 ankitr joined #gluster-dev
17:20 major_ joined #gluster-dev
17:21 skoduri joined #gluster-dev
17:30 tannerb3 joined #gluster-dev
17:32 Humble joined #gluster-dev
17:43 rafi joined #gluster-dev
17:52 shyam joined #gluster-dev
17:55 vbellur joined #gluster-dev
18:10 shyam joined #gluster-dev
18:29 vbellur joined #gluster-dev
18:54 vbellur joined #gluster-dev
18:54 vbellur joined #gluster-dev
18:55 vbellur joined #gluster-dev
18:56 vbellur1 joined #gluster-dev
18:57 vbellur joined #gluster-dev
19:01 rafi joined #gluster-dev
19:01 sona joined #gluster-dev
19:19 rafi1 joined #gluster-dev
19:31 rafi joined #gluster-dev
20:02 rafi1 joined #gluster-dev
20:03 lkoranda joined #gluster-dev
21:34 gyadav__ joined #gluster-dev
22:06 gyadav__ joined #gluster-dev
22:18 ankitr joined #gluster-dev
23:06 vbellur joined #gluster-dev
23:16 Alghost joined #gluster-dev
23:34 shaunm joined #gluster-dev
