
IRC log for #gluster-dev, 2017-03-07


All times shown according to UTC.

Time Nick Message
00:14 cholcombe joined #gluster-dev
00:16 rastar joined #gluster-dev
01:28 rastar joined #gluster-dev
01:45 vinurs joined #gluster-dev
02:11 mchangir joined #gluster-dev
02:19 rastar joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:59 rastar joined #gluster-dev
03:01 ppai joined #gluster-dev
03:09 rastar joined #gluster-dev
03:27 rastar joined #gluster-dev
03:45 atinm joined #gluster-dev
03:48 prasanth joined #gluster-dev
03:54 itisravi joined #gluster-dev
03:59 rastar joined #gluster-dev
04:04 gyadav joined #gluster-dev
04:25 rastar joined #gluster-dev
04:28 skumar joined #gluster-dev
04:41 Shu6h3ndu joined #gluster-dev
04:44 karthik_us joined #gluster-dev
04:51 jiffin joined #gluster-dev
04:52 nishanth joined #gluster-dev
05:09 aravindavk joined #gluster-dev
05:15 ashiq joined #gluster-dev
05:17 ndarshan joined #gluster-dev
05:17 msvbhat joined #gluster-dev
05:24 Saravanakmr joined #gluster-dev
05:27 apandey joined #gluster-dev
05:28 rafi joined #gluster-dev
05:28 rastar joined #gluster-dev
05:29 atmosphere joined #gluster-dev
05:29 atm0sphere joined #gluster-dev
05:31 rafi1 joined #gluster-dev
05:42 mchangir joined #gluster-dev
05:42 pkalever joined #gluster-dev
05:49 rastar joined #gluster-dev
06:11 rjoseph joined #gluster-dev
06:12 hgowtham joined #gluster-dev
06:13 nbalacha joined #gluster-dev
06:15 nbalacha joined #gluster-dev
06:17 rafi1 joined #gluster-dev
06:18 susant joined #gluster-dev
06:27 atm0sphere joined #gluster-dev
06:37 susant joined #gluster-dev
06:38 sanoj joined #gluster-dev
06:42 atm0s joined #gluster-dev
06:44 atm0sphere joined #gluster-dev
06:47 asengupt joined #gluster-dev
06:49 rastar joined #gluster-dev
06:54 atm0sphere joined #gluster-dev
06:56 ankitr joined #gluster-dev
06:59 Saravanakmr joined #gluster-dev
07:07 msvbhat joined #gluster-dev
07:18 kotreshhr joined #gluster-dev
08:03 pranithk1 joined #gluster-dev
08:05 pranithk1 xavih: hey I have an update about the perf results.
08:06 pranithk1 xavih: the number of calls over the network is back to what it was in 3.8.x
08:07 pranithk1 xavih: but the on-disk latencies are higher, which is leading to a 25% regression for some of the benchmarks. They do deep directory creation and, on the last directory, they do one more mkdir and create a new empty file.
08:07 xavih pranithk1: good :)
08:07 pranithk1 xavih: This used to have 85% regression
08:07 pranithk1 xavih: Now only ~25%
08:08 pranithk1 xavih: I am guessing it is the dirty marking, where we send trusted.ec.dirty over the wire. I think that would be the only change between the two versions with the current patch
08:09 xavih pranithk1: phone...
08:09 pranithk1 xavih: sure sure, let me know
08:13 pranithk1 xavih: Let me find some data. I will ping you in 20. But I am suspecting it to be just that.
08:20 nishanth joined #gluster-dev
08:24 cholcombe_ joined #gluster-dev
08:27 k4n0 joined #gluster-dev
08:33 xavih pranithk1: but trusted.ec.dirty is not sent more times than before. The difference is that we force an unlock if we need to update dirty
08:33 pranithk1 xavih: yes it is :-)
08:33 pranithk1 xavih: I am almost done with my analysis
08:34 pranithk1 xavih: give me 5 minutes I will give you a link
08:34 xavih pranithk1: but this shouldn't happen unless it's detected that something failed. Does the test cause some failure?
08:35 xavih pranithk1: I think the good solution will be to move all dirty management to the background, but this requires more complex changes
08:35 xavih pranithk1: this is what I was trying to do
08:35 pranithk1 xavih: yeah :-)
08:36 pranithk1 xavih: https://paste.fedoraproject.org/paste/vAmYCz~amPk8JrejnzwJY15M1UNdIGYhyRLivL9gydE
08:37 pranithk1 xavih: master: num-syscalls: https://paste.fedoraproject.org/paste/kgbig2Ou7IcZTqZJjysz5V5M1UNdIGYhyRLivL9gydE
08:37 pranithk1 xavih: 3.8.8: https://paste.fedoraproject.org/paste/n8h3vo~9iYJXypPzlF2p9l5M1UNdIGYhyRLivL9gydE
08:37 pranithk1 xavih: ignore write calls, they are network writes not pread/pwrite on the files
08:37 atm0sphere joined #gluster-dev
08:37 xavih pranithk1: all links say 'paste not found' :-/
08:38 pranithk1 xavih: OMG
08:38 pranithk1 xavih: give me a minute
08:38 pranithk1 xavih: could you add '=/' for all these links at the end?
08:38 pranithk1 xavih: I think terminator has a bug :-)
08:39 xavih pranithk1: no, still not found...
08:39 xavih pranithk1: sorry, it works
08:42 xavih pranithk1: the only difference seems the additional 200 setxattr calls
08:43 xavih pranithk1: I'm sorry, but right now I'm in the middle of an urgent work for a customer...
08:43 pranithk1 xavih: no problem
08:43 pranithk1 xavih: I will send out the patch removing that
08:43 xavih pranithk1: I'll look at it later
08:43 xavih pranithk1: ok :)
08:43 pranithk1 xavih: It will work exactly like 3.8.8
08:43 pranithk1 xavih: you just review
08:44 pranithk1 xavih: We just need to remove "+    /* If ctx->have_info is false and lock->query is true, it means that we'll
08:44 pranithk1 +     * send the xattrop anyway, so we can use it to update dirty counts, even
08:44 pranithk1 +     * if it's not necessary to do it right now. */
08:44 pranithk1 +    if (!ctx->have_info && lock->query)
08:44 pranithk1 +            link->optimistic_changelog = _gf_false;
08:44 pranithk1 "
08:44 pranithk1 xavih: Even for setxattr/setattr it is setting dirty and this is the reason for the problem
08:45 pranithk1 xavih: I will prepare a patch and get it tested today. Will send out a patch after confirmation that things look good
08:45 pranithk1 xavih: We thought it was a good optimization, but it is proving to regress some of the benchmarks
08:45 pranithk1 xavih: I will ping you in say 4 hours
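The change pranithk1 describes amounts to deleting the quoted hunk so that a query xattrop no longer forces an extra pre-op trusted.ec.dirty update for fops like setxattr/setattr. A minimal before/after sketch of that decision is below; the stub types and the choose_changelog_* function names are hypothetical stand-ins, and only the field names (have_info, query, optimistic_changelog) come from the quoted hunk.

    /* Sketch only: stand-in types, not the actual EC translator structures. */
    #include <stdbool.h>

    typedef struct { bool have_info; }            ec_inode_ctx_stub_t; /* size/version cached?   */
    typedef struct { bool query; }                ec_lock_stub_t;      /* an xattrop will be sent */
    typedef struct { bool optimistic_changelog; } ec_link_stub_t;

    /* Master before the fix: any fop whose lock needed a query xattrop (setxattr,
     * setattr, ...) also disabled the optimistic changelog, adding an extra
     * trusted.ec.dirty setxattr per call -- the ~200 extra setxattr calls seen in
     * the syscall counts above. */
    static void choose_changelog_master(ec_inode_ctx_stub_t *ctx,
                                        ec_lock_stub_t *lock,
                                        ec_link_stub_t *link)
    {
        if (!ctx->have_info && lock->query)
            link->optimistic_changelog = false;
    }

    /* After removing the hunk (matching 3.8.8 behaviour): the optimistic
     * changelog is left on, so dirty is only forced when a failure is detected. */
    static void choose_changelog_fixed(ec_inode_ctx_stub_t *ctx,
                                       ec_lock_stub_t *lock,
                                       ec_link_stub_t *link)
    {
        (void) ctx; (void) lock; (void) link;   /* nothing to do any more */
    }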
09:04 ppai joined #gluster-dev
09:04 asengupt joined #gluster-dev
09:10 xavih pranithk1: sure. I'll review it
09:11 pranithk1 xavih: cool. I will ping you in 2-3 hours
09:11 pranithk1 xavih: I want it to be verified today itself
09:11 pranithk1 xavih: by our perf guys I mean
09:12 xavih pranithk1: no problem. If the only change is the one you have said, it seems ok
09:12 sanoj joined #gluster-dev
09:12 pranithk1 xavih: yeah
09:16 magrawal joined #gluster-dev
09:17 ankitr joined #gluster-dev
09:23 glusterbot` joined #gluster-dev
09:41 glusterbot joined #gluster-dev
09:42 skumar_ joined #gluster-dev
09:45 Saravanakmr joined #gluster-dev
09:47 pranithk1 joined #gluster-dev
09:50 pranithk1 xavih: Is the locks issue the one you are working on for your customer?
09:59 asengupt joined #gluster-dev
09:59 rastar joined #gluster-dev
10:00 atm0sphere joined #gluster-dev
10:10 ashiq joined #gluster-dev
10:18 skoduri joined #gluster-dev
10:22 rraja joined #gluster-dev
10:22 ndarshan joined #gluster-dev
10:28 pranithk1 joined #gluster-dev
10:29 pranithk1 xavih: seems like I got disconnected and connected back, not sure if you replied for "xavih: Is the locks issue the one you are working on for your customer?"
10:35 skumar__ joined #gluster-dev
10:35 Saravanakmr joined #gluster-dev
10:40 kotreshhr left #gluster-dev
10:44 msvbhat joined #gluster-dev
10:54 skoduri joined #gluster-dev
11:13 ndevos anyone know how I can assign GitHub issues to someone that is not in the assignee list on the right side?
11:13 ndevos karthik_us: https://github.com/gluster/glusterfs/issues/123 should be assigned to you, do you have an option to assign it to yourself?
11:19 mchangir joined #gluster-dev
11:19 ndevos pkalever: btw, there are some qemu patches that touch the block/gluster.c file, maybe you want to review those too?
11:21 skumar joined #gluster-dev
11:21 ndarshan joined #gluster-dev
11:21 pkalever pkalever: sure Niels, will have a look
11:22 pkalever ndevos: ^^
11:23 xavih pranithk1: the urgent problem was another thing, but yes, the locks issue is for another customer
11:24 ndevos pkalever: thanks! the patches (v2) I mean are at https://patchwork.kernel.org/project/qemu-devel/list/?submitter=1274&state=1&q=gluster
11:24 msvbhat joined #gluster-dev
11:25 gyadav joined #gluster-dev
11:25 pkalever ndevos: thanks for the link!
11:26 ndevos pkalever: if you're not subscribed to the qemu-block list, I recommend you do, and just filter anything that does not have 'gluster' in the subject ;-)
11:27 karthik_us ndevos, no I don't have an option to assign it to myself.
11:28 skoduri joined #gluster-dev
11:29 hgowtham #REMINDER: gluster community bug triage to take place in 30 minutes at #gluster-meeting
11:30 karthik_us ndevos, could you please assign that to me?
11:32 nh2 joined #gluster-dev
11:32 karthik_ joined #gluster-dev
11:33 shyam joined #gluster-dev
11:36 pranithk1 xavih: Okay, in that case I will try to complete it today. I am held up because of this performance issue...
11:36 gyadav joined #gluster-dev
11:36 pranithk1 xavih: I will take up the review as soon as I am done with this issue. I think in 2 more hours or so, we will know the results.
11:37 xavih pranithk1: don't worry. We can live with it for now
11:37 pranithk1 xavih: I am going to the US tomorrow and will be busy with other work till end of March. So it is better to complete it today.
11:43 pkalever ndevos: I have subscribed long back ;p just that I don't have a good filter, thanks for the reminder
11:44 pkalever will add one
11:51 ndevos karthik_: sorry, I have no idea how to assign it to you, you're not in the list I can select for assignees :-/
11:52 ndevos karthik_: maybe you can file a bug in bugzilla and add the bug in a comment, I can close the issue for you
11:53 karthik_ ndevos, Sure. I will file a bug
12:06 asengupt_ joined #gluster-dev
12:09 karthik_us joined #gluster-dev
12:40 ira joined #gluster-dev
12:46 ndevos hgowtham++ :)
12:46 glusterbot ndevos: hgowtham's karma is now 57
12:57 pranithk1 xavih: results are out and it works slightly better than old code :-)
12:57 pranithk1 xavih: I am sending the patch out
13:06 Saravanakmr joined #gluster-dev
13:15 mchangir joined #gluster-dev
13:16 atinm joined #gluster-dev
13:20 skoduri joined #gluster-dev
13:37 nishanth joined #gluster-dev
13:40 rastar joined #gluster-dev
13:47 shyam joined #gluster-dev
13:57 susant left #gluster-dev
14:08 mchangir joined #gluster-dev
14:09 mchangir ALL: Gluster RPC Internals - Lecture #2 - starting NOW: Blue Jeans Meeting ID: 1546612044
14:13 pranithk1 xavih: my work is done with the patch :-). I will start reviewing your patch now.
14:18 rraja joined #gluster-dev
14:19 nh2 joined #gluster-dev
14:19 ppai joined #gluster-dev
14:22 nbalacha joined #gluster-dev
14:41 pranithk1 xavih: hey is it okay to talk about the patch now?
14:42 pranithk1 xavih: I didn't look at it fully but calling pl_inode_unref(pinode) outside locks is very dangerous? https://review.gluster.org/#/c/16838/5/xlators/features/locks/src/entrylk.c@803
14:44 pranithk1 xavih: if two threads call this function in parallel it may lead to inode_unref(NULL) which can lead to leaks?
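The hazard pranithk1 is pointing at is the classic pattern of deciding the fate of the last reference outside the lock. A generic illustration of the pattern follows; this is not the actual locks-xlator code, and pl_inode_stub_t plus both functions are made up purely for this sketch.

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct {
        pthread_mutex_t mutex;
        int             refcount;
        void           *inode;       /* stand-in for the held inode reference */
    } pl_inode_stub_t;

    /* Racy variant: the decision to release is taken outside the lock, so two
     * threads can both see refcount == 0; one may then operate on a NULL
     * pointer (leaking the real reference) while the other double-frees. */
    static void unref_racy(pl_inode_stub_t *p)
    {
        pthread_mutex_lock(&p->mutex);
        p->refcount--;
        pthread_mutex_unlock(&p->mutex);

        if (p->refcount == 0) {      /* read without the lock: racy */
            free(p->inode);
            p->inode = NULL;
        }
    }

    /* Safe variant: decide and detach while still holding the lock, then do the
     * actual release outside it. */
    static void unref_safe(pl_inode_stub_t *p)
    {
        void *to_free = NULL;

        pthread_mutex_lock(&p->mutex);
        if (--p->refcount == 0) {
            to_free = p->inode;
            p->inode = NULL;
        }
        pthread_mutex_unlock(&p->mutex);

        free(to_free);               /* free(NULL) is a no-op */
    }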
14:57 pranithk1 xavih: okay I will leave the comments on the patch.
14:57 pranithk1 xavih: I mean after my review is done. It will take some time. I will ping you once tomorrow
15:00 nh2 joined #gluster-dev
15:11 atinm joined #gluster-dev
15:31 rastar joined #gluster-dev
15:34 pkalever left #gluster-dev
15:40 msvbhat joined #gluster-dev
15:43 nh2 joined #gluster-dev
16:01 major so looking at the lvm-snapshot code .. am I right in understanding that glusterd_is_lvm_cmd_available() is being called on the client-side of the protocol?
16:02 susant joined #gluster-dev
16:05 nbalacha joined #gluster-dev
16:08 major seems like a test of some sort would be more appropriate in glusterd_snapshot_create_commit() for validating that each brick can support snapshots
16:10 rastar joined #gluster-dev
16:11 major or something near that end of the protocol
16:12 wushudoin joined #gluster-dev
16:12 wushudoin joined #gluster-dev
16:17 major not even certain an LVM test can be done from the current dict data
16:18 major not prior to glusterd_take_brick_snapshot_task() at least .. I think testing for filesystems that support snapshots can be done earlier ..
16:20 major so maybe at the top of glusterd_take_brick_snapshot_task() a brick-side test needs to be done to validate that the brick in question even supports snapshots
16:27 major actually .. glusterd_snap_create_clone_common_prevalidate() looks like a better landing
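For reference, a brick-side capability probe along the lines major is suggesting could be as simple as a PATH lookup for the LVM tools, executed on the node that hosts the brick (e.g. from the prevalidate phase) rather than on the node that received the CLI request. This is a standalone, hypothetical sketch: command_available() is a made-up helper and not the actual glusterd_is_lvm_cmd_available() implementation.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Search PATH for an executable with the given name. */
    static bool command_available(const char *cmd)
    {
        const char *path = getenv("PATH");
        if (!path)
            path = "/usr/sbin:/usr/bin:/sbin:/bin";

        char *copy = strdup(path);
        if (!copy)
            return false;

        bool found = false;
        for (char *dir = strtok(copy, ":"); dir && !found;
             dir = strtok(NULL, ":")) {
            char candidate[4096];
            snprintf(candidate, sizeof(candidate), "%s/%s", dir, cmd);
            if (access(candidate, X_OK) == 0)
                found = true;
        }

        free(copy);
        return found;
    }

    int main(void)
    {
        /* A brick-side prevalidation could fail the snapshot early when the
         * backing store cannot support it. */
        if (!command_available("lvcreate"))
            fprintf(stderr, "brick does not support LVM snapshots\n");
        return 0;
    }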
16:31 rafi joined #gluster-dev
16:31 nbalacha joined #gluster-dev
16:46 msvbhat joined #gluster-dev
17:04 mchangir joined #gluster-dev
17:25 Humble joined #gluster-dev
17:27 nh2 joined #gluster-dev
17:40 nbalacha joined #gluster-dev
17:52 nh2 joined #gluster-dev
18:00 jiffin joined #gluster-dev
18:19 nh2 joined #gluster-dev
18:31 cholcombe joined #gluster-dev
19:02 rastar joined #gluster-dev
19:08 nh2 joined #gluster-dev
19:20 vbellur joined #gluster-dev
19:37 nh2 joined #gluster-dev
19:38 vbellur joined #gluster-dev
19:38 vbellur joined #gluster-dev
19:39 vbellur joined #gluster-dev
19:39 vbellur joined #gluster-dev
19:40 vbellur joined #gluster-dev
19:41 vbellur joined #gluster-dev
19:43 k4n0 joined #gluster-dev
19:45 jiffin joined #gluster-dev
19:48 msvbhat joined #gluster-dev
19:51 nishanth joined #gluster-dev
19:58 nh2 joined #gluster-dev
19:59 vbellur joined #gluster-dev
20:37 vbellur joined #gluster-dev
21:22 nh2 joined #gluster-dev
23:10 vbellur joined #gluster-dev
23:10 vbellur joined #gluster-dev
23:11 vbellur joined #gluster-dev
23:11 vbellur joined #gluster-dev
23:12 vbellur joined #gluster-dev
23:13 vbellur1 joined #gluster-dev
23:23 amarts joined #gluster-dev
23:28 vbellur joined #gluster-dev
23:50 vbellur joined #gluster-dev
23:52 vbellur joined #gluster-dev
23:55 vbellur joined #gluster-dev
23:55 vbellur joined #gluster-dev
23:56 vbellur joined #gluster-dev
