
IRC log for #gluster-dev, 2017-06-22


All times shown according to UTC.

Time Nick Message
00:39 shyam joined #gluster-dev
00:44 Alghost joined #gluster-dev
00:46 gyadav joined #gluster-dev
00:48 gyadav joined #gluster-dev
00:50 mchangir joined #gluster-dev
00:56 gyadav joined #gluster-dev
00:58 Alghost_ joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:49 gyadav joined #gluster-dev
01:50 prasanth joined #gluster-dev
02:31 susant joined #gluster-dev
02:51 vbellur joined #gluster-dev
03:13 apandey joined #gluster-dev
03:21 susant joined #gluster-dev
03:26 apandey joined #gluster-dev
03:29 ppai joined #gluster-dev
03:35 gyadav joined #gluster-dev
03:39 rastar joined #gluster-dev
03:49 nbalacha joined #gluster-dev
03:49 itisravi joined #gluster-dev
03:51 riyas joined #gluster-dev
03:56 apandey_ joined #gluster-dev
04:01 atinm joined #gluster-dev
04:14 msvbhat joined #gluster-dev
04:18 Shu6h3ndu joined #gluster-dev
04:24 poornima joined #gluster-dev
04:30 pkalever joined #gluster-dev
04:37 aravindavk joined #gluster-dev
04:37 ndarshan joined #gluster-dev
04:40 gyadav joined #gluster-dev
04:46 smit joined #gluster-dev
04:51 ankitr joined #gluster-dev
04:51 kotreshhr joined #gluster-dev
04:55 ndarshan joined #gluster-dev
05:00 jiffin joined #gluster-dev
05:02 skumar joined #gluster-dev
05:10 pranithk1 joined #gluster-dev
05:16 sanoj joined #gluster-dev
05:18 apandey joined #gluster-dev
05:27 karthik_us joined #gluster-dev
05:28 hgowtham joined #gluster-dev
05:37 prasanth joined #gluster-dev
05:41 Saravanakmr joined #gluster-dev
05:59 Saravanakmr joined #gluster-dev
06:05 kdhananjay joined #gluster-dev
06:13 kotreshhr joined #gluster-dev
06:21 Saravanakmr joined #gluster-dev
06:24 rastar joined #gluster-dev
06:26 skoduri_ joined #gluster-dev
06:26 psony joined #gluster-dev
06:48 rafi joined #gluster-dev
06:52 ashiq joined #gluster-dev
07:04 ankitr joined #gluster-dev
07:06 ankitr joined #gluster-dev
07:20 ankitr joined #gluster-dev
07:21 aravindavk joined #gluster-dev
07:26 ankitr joined #gluster-dev
07:28 msvbhat joined #gluster-dev
07:28 sona joined #gluster-dev
07:52 ankitr joined #gluster-dev
07:55 Saravanakmr joined #gluster-dev
08:03 gyadav joined #gluster-dev
08:20 ankitr joined #gluster-dev
08:25 sanoj joined #gluster-dev
08:32 apandey_ joined #gluster-dev
08:38 kotreshhr left #gluster-dev
08:38 rafi1 joined #gluster-dev
08:40 kotreshhr1 joined #gluster-dev
08:45 atinm joined #gluster-dev
08:48 ankitr joined #gluster-dev
08:51 zoja joined #gluster-dev
08:58 apandey__ joined #gluster-dev
09:08 rafi1 joined #gluster-dev
09:13 Saravanakmr joined #gluster-dev
09:13 ankitr joined #gluster-dev
09:15 vbellur joined #gluster-dev
09:28 msvbhat_ joined #gluster-dev
09:42 Shu6h3ndu joined #gluster-dev
09:52 Saravanakmr joined #gluster-dev
10:09 Saravanakmr joined #gluster-dev
10:10 smit joined #gluster-dev
10:16 atinm joined #gluster-dev
10:24 smit joined #gluster-dev
10:24 itisravi joined #gluster-dev
10:27 Saravanakmr joined #gluster-dev
10:29 Saravanakmr joined #gluster-dev
10:30 sanoj joined #gluster-dev
10:35 ankitr joined #gluster-dev
10:38 Shu6h3ndu joined #gluster-dev
10:40 vbellur joined #gluster-dev
10:42 msvbhat joined #gluster-dev
10:58 Saravanakmr joined #gluster-dev
11:00 susant joined #gluster-dev
11:01 apandey joined #gluster-dev
11:02 aravindavk joined #gluster-dev
11:13 ankitr joined #gluster-dev
11:16 pranithk1 skumar: hey where are we deallocating new_key?
11:17 pranithk1 xavih: I am not sure this is the implementation you wanted. I will check for correctness, could you check for code neatness?
11:19 skumar pranithk1: I am not deallocating it
11:19 pranithk1 skumar: won't it be a leak?
11:20 pranithk1 apandey: ^^ your help is much appreciated as well here :-)
11:21 apandey pranithk1: checking..
11:21 skumar pranithk1: is it not going to be released as part of dict_unref?
11:22 pranithk1 skumar: no, keys are allocated again inside dict_set_xxx
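
[Aside: a minimal sketch of the leak under discussion — a reconstruction, not the actual patch; the key name, function name, and header paths are assumptions. The point pranithk1 makes is that dict_set_*() duplicates the key internally, so a key the caller obtained from gf_strdup() stays owned by the caller and must be freed:]

    #include <glusterfs/dict.h>
    #include <glusterfs/mem-pool.h>   /* gf_strdup(), GF_FREE() */

    static int
    set_combined_key(dict_t *dict, uint64_t value)
    {
            char *new_key = gf_strdup("trusted.ec.example"); /* hypothetical key */
            int ret;

            if (!new_key)
                    return -1;
            ret = dict_set_uint64(dict, new_key, value); /* dict copies the key */
            GF_FREE(new_key); /* without this, the strdup'd copy leaks */
            return ret;
    }
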
11:22 ndevos do others also see a broken image in the upper-left corner of review.gluster.org?
11:22 pranithk1 ndevos: yes
11:22 pranithk1 ndevos: more like distracting than broken
11:22 pranithk1 ndevos: I need to pay a lot of attention to see where the links are
11:23 ndevos hmm, I don't even know what image was there before... but the links seem to be in their usual place for me
11:33 xavih pranithk1, skumar: I've just posted some comments
11:33 pranithk1 xavih: saw them :-)
11:50 skoduri joined #gluster-dev
11:55 smit joined #gluster-dev
12:08 rraja joined #gluster-dev
12:10 pranithk1 xavih: seems like skumar has some doubts but is afraid to ask. So I am kind of putting him on the spot with this message :-)
12:10 pranithk1 skumar: ^^
12:11 pranithk1 skumar: please ask your doubts
12:13 pranithk1 skumar: okay... whenever you are ready I guess :-)
12:14 shyam joined #gluster-dev
12:16 susant joined #gluster-dev
12:18 ndevos hi sona, have you seen the comments in https://github.com/gluster/gluster-debug-tools/pull/2 ?
12:19 ndevos just wondering if you plan to send an update for it
12:19 pranithk1 skumar: hey we can remove new_key variable in ec_dict_data_combine() right?
12:21 skumar pranithk1: completely?
12:21 pranithk1 skumar: yeah, as xavih said, we don't need the strdup either.
12:21 skumar pranithk1: if we are not using strdup we can
12:22 sona ndevos, hey, yes, i have seen them, will work on them asap
12:22 pkalever joined #gluster-dev
12:23 Alghost joined #gluster-dev
12:23 pranithk1 skumar: even otherwise we can, I think... anyway, resend the patch with those removed and I think we will be done...
12:24 xavih pranithk1, skumar: we don't really need to gf_strdup() the xattr name. It won't be modified anywhere
12:25 xavih pranithk1, skumar: it could even be declared as 'const char *', but that's not necessary
12:26 skumar xavih: const qualifier was giving warnings
12:29 pranithk1 xavih: I am attempting parallel writes based on the results I sent you, with and without the delay-gen feature
12:30 xavih skumar: do we really need to allocate memory ?
12:31 skumar xavih: updating the patch
12:32 xavih pranithk1: is it working ? the lock sharing logic assumes a single writer in some places
12:32 xavih pranithk1: care should be taken to not break it...
12:32 pranithk1 xavih: So the idea I have is the following. I will look at the code details a bit later.
12:33 pranithk1 xavih: If a write is wound then all other writes/reads that are conflicting with that area will not be wound....
12:34 pranithk1 xavih: If a read is wound then writes that are conflicting with it won't be wound...
12:34 pranithk1 xavih: something similar to afr_transaction_eager_lock_init()
12:37 pranithk1 xavih: I think it is difficult to clearly explain it. I will send a rough patch. the main thing I wanted to know is if you also agree about the results...
12:37 pranithk1 xavih: that the single write is a bottleneck
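
[Aside: a minimal sketch of the conflict test this idea implies — not pranithk1's actual patch; the type and function names are invented. Two in-flight requests conflict when their byte ranges overlap and at least one of them is a write:]

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>

    typedef struct {
            off_t offset;
            size_t size;
            bool is_write;
    } io_req_t;

    /* A new request is wound only if it conflicts with no in-flight one. */
    static bool
    io_reqs_conflict(const io_req_t *a, const io_req_t *b)
    {
            bool overlap = a->offset < (off_t)(b->offset + b->size) &&
                           b->offset < (off_t)(a->offset + a->size);
            return overlap && (a->is_write || b->is_write);
    }
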
12:39 pranithk1 xavih: skumar's code changes look okay to me now. If you also agree, we will take care of the rest.
12:42 xavih pranithk1: sorry. I was away...
12:42 pranithk1 xavih: np :-)
12:43 xavih pranithk1: maybe it's easier if we treat each non-overlapping region the way we currently handle the whole file
12:43 xavih pranithk1: so in each region there will only be a single writer
12:43 xavih pranithk1: the remaining logic is the same
12:44 xavih pranithk1: but I haven't thought about it in detail yet...
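
[Aside: a rough sketch of the per-region idea — the type is invented, nothing like it exists in ec as-is. Each non-overlapping region would carry the same single-writer state the code currently keeps per file:]

    #include <sys/types.h>

    /* One entry per byte range with in-flight I/O; requests that
     * conflict with a region queue behind it instead of winding. */
    struct ec_region {
            off_t start;
            off_t end;                /* exclusive */
            struct ec_region *next;   /* list of active regions */
            /* per-region equivalent of today's per-file lock/owner state,
             * plus a queue of waiting fops, would hang here */
    };
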
12:45 xavih pranithk1: ec suffers a lot from latencies because it manages many more bricks than replicate, so your results could be valid
12:45 xavih pranithk1: I've already seen performance issues as more bricks are added
12:46 xavih pranithk1: but I haven't made any specific tests to see the relation between latencies and performance
12:46 pranithk1 xavih: perfect. I will give it a try.
12:46 xavih pranithk1: since the problem is seen with odirect, it's quite possible that we are losing a lot of IOPS with small requests
12:47 pranithk1 xavih: so my hypothesis is that if we have 10 writes and each brick responds late for 10% of the writes, then quite a few writes will take a hit
12:47 xavih pranithk1: that cannot be merged with other requests on the fly
12:47 pranithk1 xavih: Oh the performance was bad even without odirect
12:47 pranithk1 xavih: so with replica+odirect at the customer site we saw 900MBps
12:47 pranithk1 xavih: with ec+odirect 25MBps
12:47 pranithk1 xavih: with ec without odirect 100MBps
12:48 xavih pranithk1: single writer ?
12:48 pranithk1 xavih: yeah similar command to the one I shared
12:48 xavih pranithk1: I've made tests with ec with a single writer near 1GB/s
12:48 pranithk1 xavih: I was writing 64MB because my tests are on my laptop and theirs is some 1GB or something
12:49 xavih pranithk1: what was the block size ?
12:49 pranithk1 xavih: when the disks are not slow EC outperforms replica on my laptop sometimes
12:49 pranithk1 xavih: 128KB
12:49 pranithk1 xavih: count was 102400
12:49 pranithk1 xavih: 4+2
12:50 pranithk1 xavih: hey 1 sec, just check skumar's patch once, he can go ahead and backport if everything goes smooth from both of us..
12:50 pkalever joined #gluster-dev
12:50 xavih pranithk1: sure, sorry...
12:51 xavih pranithk1, skumar: the patch seems ok to me...
12:52 pranithk1 skumar: port it to 3.11 and 3.10
12:52 xavih pranithk1: I don't understand what's happening here. I tested with 1 MB blocks (from the application side) that are converted to 8 parallel requests of 128KB (I suppose by FUSE)
12:52 pranithk1 xavih: it is write-behind xavi
12:52 xavih pranithk1: in that scenario, tweaking event-threads, I reached more than 900 MB/s
12:53 xavih pranithk1: oh
12:53 xavih pranithk1: I've always thought it was fuse...
12:53 pranithk1 xavih: write-behind also stores odirect :-), you can change the behavior with strict-o-direct on
12:53 pranithk1 xavih: but by default it will cache
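
[Aside: the option referred to here is performance.strict-o-direct; when it is enabled, write-behind honours the application's O_DIRECT instead of caching the writes.]
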
12:53 xavih pranithk1: that's another thing to change... if write-behind sends bigger blocks, ec will benefit a lot
12:54 pranithk1 xavih: True, facebook guys sent a patch for it I think: http://review.gluster.org/16079
12:54 xavih pranithk1: with 128KB blocks, each brick receives 32KB per write (on a 4+x config), worse for bigger configurations
12:54 pranithk1 xavih: Agreed :-)
12:54 xavih pranithk1: this doesn't take the full benefit of each I/O operation
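
[Aside: making the arithmetic explicit — on a 4+2 disperse volume a write is striped across the 4 data bricks, so a 128KB write becomes 128/4 = 32KB per brick (the parity bricks also receive 32KB fragments); on an 8+4 volume the same write shrinks to 16KB per brick, which is why larger blocks from write-behind would help.]
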
12:55 xavih pranithk1: so, if odirect is not present, I don't understand what's happening with your tests. 100 MB/s seems too small to me...
12:56 pranithk1 xavih: do you want to give it a try when you have a chance?
12:56 xavih pranithk1: oh, wait... is it a pure disperse or distribute-dispersed ?
12:56 pranithk1 xavih: I posted the delaygen xlator upstream
12:56 pranithk1 xavih: pure disperse
12:56 pranithk1 xavih: but the customer had distributed disperse I think
12:56 pranithk1 xavih: how does it matter?
12:57 pranithk1 xavih: don't review delay-gen though, it is an ugly patch that gets the job done :-)
12:57 xavih pranithk1: ah, in that case it's ok, but I don't understand how afr can reach 900 MB/s on a pure replicate. Are you using ram disks ?
12:58 pranithk1 xavih: no my theory is that afr sends all the writes over the network without waiting for anything
12:58 xavih pranithk1: well, I lied a bit... the tests I did (long time ago) used a single client but with multithreading...
12:58 pranithk1 xavih: so even if 10% of writes are affected by slow disks, the rest will be acknowledged which leads to more writes from the application
12:59 xavih pranithk1: yes, yes, but maybe the difference is too big to be caused only by this. Anyway I'm talking without any numbers so you might be right
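
[Aside: a back-of-the-envelope check of the hypothesis — on a 4+2 volume a write is wound to all 6 bricks, so if completion has to wait for all of them and each brick independently delays 10% of requests, about 1 - 0.9^6 ≈ 47% of serialized writes hit at least one slow brick. Parallel writes would keep the unaffected ones flowing.]
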
13:00 pranithk1 xavih: I don't understand write-behind very well at the moment, but I am learning it a bit too
13:00 nbalacha joined #gluster-dev
13:00 kdhananjay joined #gluster-dev
13:00 xavih pranithk1: your test seems to indicate a big drop in performance when adding latency to some requests, so it's a possible candidate
13:01 pranithk1 xavih: okay, I take it as a go ahead for an experimental patch for parallel writes?
13:02 xavih pranithk1: I think it'll be a good thing to have, and it must improve performance for some workload, but unfortunately I won't be able to help much for now...
13:03 xavih pranithk1: it's not a simple patch I think. Maybe the easiest approach is what I told you earlier
13:04 xavih pranithk1: I mean to have a list of fragments being processed and keep the same logic we currently have but attached to each segment independently
13:04 pranithk1 xavih: Don't worry, I will send a rough draft if it improves performance. Then we can discuss and improve the patch
13:04 xavih pranithk1: there will be some issues to solve, like correctly managing the iatt returned from each write, since now they could be different and we must not handle it as a failure (at least not all)
13:05 pranithk1 xavih: yeah agreed :-)
13:06 jstrunk joined #gluster-dev
13:06 pranithk1 xavih: I may send something by next weekend
13:06 pranithk1 jstrunk: hey! how are you?
13:06 xavih pranithk1: great, thanks :)
13:07 pranithk1 xavih: I am hoping a week is enough :-), let's see
13:07 jstrunk Hi.
13:07 jstrunk Doing well
13:07 pranithk1 jstrunk: How is gluster treating you so far?
13:07 jstrunk Not bad at all. Still a lot to learn though
13:08 pranithk1 jstrunk: good. which modules did you pick up first?
13:09 jstrunk I haven't really gotten into the code yet. I'm still working on figuring out where I'll be spending my time relative to gluster, heketi, etc
13:09 jstrunk though i did get it built and a cluster running on my laptop. :)
13:10 pranithk1 jstrunk: way to go!
13:11 pranithk1 jstrunk: have you designed any distributed systems before?
13:12 pranithk1 jstrunk: there are quite a few open problems that will add a lot of value to gluster. It will be interesting to see what you will pick up :-). I am eagerly waiting to be honest
13:12 jstrunk Yeah. I built a "brick-based" storage system in grad school very similar to ceph.
13:12 pranithk1 jstrunk: I mean solutions to those problems of course :-)
13:12 pranithk1 jstrunk: neat.
13:13 jstrunk Right now, it looks like my first big challenge is going to be scaling the number of volumes we can service from a cluster
13:13 jstrunk for supporting lots of containers
13:14 pranithk1 jstrunk: ah! brick splitting etc?
13:15 jstrunk well, i think something that builds on that. I've heard the target of supporting 1M containers (~1M vols) from 1 cluster!
13:15 jstrunk got a few zeros to add :)
13:15 pranithk1 jstrunk: he he
13:17 msvbhat joined #gluster-dev
13:17 jstrunk pranithk1: I've been told memory is one of the big limiters at this point, so that's probably a reasonable starting point for me
13:19 pranithk1 jstrunk: yeah, IMO the things that need to be solved are 1) memory, 2) the number of threads per brick, and 3) how much storage each brick should serve, or whether we split a disk into multiple bricks. 3) is going to be a lot of fun.
13:21 jstrunk pranithk1: I'll add those to my list. I'm looking forward to it.
13:21 pranithk1 jstrunk: all the best man. Gotta leave now. cya later
13:21 pranithk1 xavih: thanks for the discussion xavi. Cya!
13:21 jstrunk pranithk1: Have a good evening
13:25 smit joined #gluster-dev
13:45 nishanth joined #gluster-dev
13:56 pkalever joined #gluster-dev
14:29 obnox pkalever: hey. i finally pushed the patches I have on gluster-block to github PRs ... did not have the time to figure out the gerrit-way... :-D
14:29 obnox jstrunk: hey there :-)
14:30 jstrunk obnox: I seem to be popular this morning. How are things?
14:32 obnox jstrunk: going ok. just saw you active on this channel and tossed you a greeting :-)
14:34 nbalacha joined #gluster-dev
14:57 susant joined #gluster-dev
14:57 susant left #gluster-dev
14:59 ndevos obnox: if you cloned from review.gluster.org over ssh, you can use 'git review' to post patches back
15:00 ndevos obnox: 'git review' uses the git-remote that you named 'gerrit', in case you have your github repo as well
15:09 obnox ndevos: of course i cloned from github
15:09 obnox :-)
15:11 wushudoin joined #gluster-dev
15:11 nbalacha joined #gluster-dev
15:11 ndevos obnox: try this then: git remote add gerrit ssh://obnox@review.gluster.org/gluster-block.git
15:11 ndevos obnox: if you 'git fetch' that, you should be able to run 'git review' in the branch where your change is and see it getting posted to gerrit
15:12 ndevos 'git review' is packaged in the git-review RPM, you may need to install that...
15:12 kotreshhr1 left #gluster-dev
15:21 amye joined #gluster-dev
15:23 pkalever joined #gluster-dev
15:24 msvbhat joined #gluster-dev
15:27 pranithk1 joined #gluster-dev
15:27 uebera|| ndevos: Looking at review.gluster.org, the 3.8.13.md release notes mention 13 patches, but the list only contains 8 bug IDs?
15:28 nbalacha joined #gluster-dev
15:40 ndevos uebera||: oh, yes, 8 bugs is correct, I'll update the doc
15:46 atinm rastar, when is the next 3.10 update?
15:59 kkeithley 3.10.x are supposed to be on the 30th of each month
16:00 kkeithley 3.8.x on the 10th, 3.11.x on the 20th, 3.10.x on the 30th.
16:02 kkeithley https://www.gluster.org/community/release-schedule/
16:04 gyadav joined #gluster-dev
16:08 ndevos uebera||: care to review and +1 the 3.8.13 release notes? https://review.gluster.org/17610
16:08 uebera|| give me a sec... ;)
16:11 uebera|| ndevos: Looks good to me.
16:15 ndevos uebera||: thanks! do you have a Gerrit account? if so, you can login and click the blue [reply] button to leave a code-review +1 there
16:15 uebera|| No, I don't have one yet.
16:16 uebera|| Ah, I could sign in via GitHub... using chrome-unstable atm, though, which does not allow me to log in correctly.
16:17 ndevos oh, do you want me to wait for a bit, or shall I go and merge it already?
16:17 Saravanakmr joined #gluster-dev
16:17 uebera|| ndevos: Give me a minute, let's see whether it works with chrome-stable.
16:18 ndevos uebera||: sure, take your time
16:22 uebera|| ndevos: done.
16:23 ndevos uebera||++ thanks!
16:23 glusterbot ndevos: uebera||'s karma is now 1
16:23 ndevos uebera||: I think you can set your real name under https://review.gluster.org/#/settings/contact
16:24 ndevos otherwise your Reviewed-by tag will get added with your username - not *that* important, but it looks nicer when there is a real name :)
16:24 uebera|| right.
16:26 ndevos cool, much better :D
16:27 ndevos oh, that's unfortunate, there is no email associated with you yet
16:30 uebera|| I've registered one, but thanks to greylisting, it will take a while for the confirmation mail to arrive.
16:31 uebera|| Just received it.
16:33 obnox ndevos: i can try that.
16:35 ndevos uebera||: that's something I could not easily see, but next time you review a change, you'll be listed with name+email
16:36 ndevos obnox: yes, you could, its really simple!
16:36 ndevos obnox: you can try it by adding a new CONTRIBUTING file to gluster-block :D
16:41 skoduri joined #gluster-dev
16:47 obnox ndevos: i heard that gluster-block will move to github merges in a few weeks anyways ...
16:50 obnox ndevos: works ... https://review.gluster.org/#/c/17613/
16:50 obnox thx
16:50 obnox ndevos++
16:50 glusterbot obnox: ndevos's karma is now 359
16:50 msvbhat joined #gluster-dev
16:50 obnox whatever weird magic that "git-review" thing did -- scary!
16:51 obnox didn't even need configuration
16:53 ndevos obnox: gluster-block was in GitHub before, and moved to Gerrit, I wonder why pkalever would want to move it back again
16:53 ndevos reviewing in Gerrit is so much simpler than in GitHub...
16:54 ndevos obnox++ for the patch!
16:54 glusterbot ndevos: obnox's karma is now 11
16:54 obnox https://review.gluster.org/#/c/17614/
16:54 obnox second one
16:54 obnox ndevos: dunno if gerrit is easier. it's a matter of what one is used to, i guess
16:57 obnox ha! trying to add pkalever as reviewer... it autocompletes his ID. but then says: "
16:57 obnox Prasanna Kumar Kalever <pkalever@redhat.com> does not identify a registered user or group"
16:57 obnox really? ;-)
16:58 ndevos try prasanna.kalever@redhat.com
16:58 ndevos I have not figured out yet how to compare updated patches with their original in GitHub, once I find the right button, GitHub should be just as easy :)
17:04 obnox ndevos: right, github has the disadvantage of not tracking previous versions of a patch. i.e. if you amend, it gets hairy.
17:05 ndevos obnox: hmm, and that is something I do constantly with Gerrit...
17:11 obnox sure. so do i
17:11 obnox then again, github at least allows some amount of patch*sets* instead of forcing single patches per review request
17:29 rastar joined #gluster-dev
17:33 sona joined #gluster-dev
17:39 skumar joined #gluster-dev
17:51 jiffin joined #gluster-dev
17:59 kotreshhr joined #gluster-dev
18:06 pkalever obnox: this should work prasanna.kalever@redhat.com; thanks ndevos
18:17 kotreshhr left #gluster-dev
18:18 lkoranda joined #gluster-dev
18:21 nishanth joined #gluster-dev
18:47 anoopcs joined #gluster-dev
18:49 msvbhat joined #gluster-dev
19:04 jiffin1 joined #gluster-dev
19:53 mst__ joined #gluster-dev
19:54 mst__ Hi there, can anyone tell me the right way to promote a gluster patch for stable backporting?
21:01 glustin joined #gluster-dev
21:18 lkoranda joined #gluster-dev
23:01 Alghost joined #gluster-dev
23:45 owlbot joined #gluster-dev
23:58 Alghost joined #gluster-dev
