
IRC log for #gluster-dev, 2015-12-30


All times are shown in UTC.

Time Nick Message
00:12 zhangjn joined #gluster-dev
00:58 zhangjn joined #gluster-dev
00:59 EinstCrazy joined #gluster-dev
01:28 zhangjn joined #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:11 zhangjn joined #gluster-dev
03:38 raghug joined #gluster-dev
03:44 atinm joined #gluster-dev
03:48 sakshi joined #gluster-dev
03:48 nishanth joined #gluster-dev
03:54 kanagaraj joined #gluster-dev
03:56 nbalacha joined #gluster-dev
03:57 shubhendu joined #gluster-dev
04:14 poornimag joined #gluster-dev
04:15 Manikandan joined #gluster-dev
04:17 raghug xavih: pranithk: +1 to lookup and resolve split
04:17 pppp joined #gluster-dev
04:18 raghug as pranith and I discussed, 1. dht should only create missing directories as part of heal; the remaining part of the heal is done in resolve
04:19 raghug 2. In lookup, whenever dht figures out it needs healing, it marks the inode as such before unwinding lookup.
04:20 raghug 3. the need-heal flag will result in the resolve-and-resume code in fuse, protocol/server, nfs and gfapi sending a resolve. The fop is resumed only when no resolution is required
04:22 raghug 4. even resolve-and-resume needs to do a name-less lookup first and then send a resolve (if necessary)
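
A rough sketch in C of the split raghug lays out above; the context struct, flag and helper names here are made up for illustration and are not the actual dht or protocol/server code.

    /* Sketch of the lookup/resolve split described above.  The flag,
     * context struct and helpers are illustrative only. */

    #include <stdbool.h>

    /* per-inode context filled in on the lookup path */
    struct heal_ctx {
        bool needs_heal;   /* set in lookup cbk when dht finds damage */
    };

    /* 2. lookup callback: note the damage, do NOT repair here */
    void
    lookup_cbk_mark(struct heal_ctx *ctx, bool layout_bad, bool dirs_missing)
    {
        if (layout_bad || dirs_missing)
            ctx->needs_heal = true;   /* just mark; unwind lookup as usual */
    }

    /* 3./4. resolve-and-resume in fuse/protocol-server/nfs/gfapi: a
     * name-less lookup has already refreshed ctx; an explicit resolve
     * (which, per point 1, creates the missing directories) is sent only
     * when needed, and the fop resumes only once resolution is done. */
    int
    resolve_and_resume(struct heal_ctx *ctx,
                       int (*send_resolve)(struct heal_ctx *),
                       int (*resume_fop)(void))
    {
        if (ctx->needs_heal) {
            int ret = send_resolve(ctx);
            if (ret != 0)
                return ret;
            ctx->needs_heal = false;
        }
        return resume_fop();
    }
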
04:24 zhangjn joined #gluster-dev
04:25 vmallika joined #gluster-dev
04:31 kshlm joined #gluster-dev
04:54 ggarg joined #gluster-dev
04:54 apandey joined #gluster-dev
05:14 skoduri joined #gluster-dev
05:15 ndarshan joined #gluster-dev
05:17 gem joined #gluster-dev
05:20 skoduri joined #gluster-dev
05:20 rafi joined #gluster-dev
05:21 aravindavk joined #gluster-dev
05:26 asengupt joined #gluster-dev
05:28 overclk joined #gluster-dev
05:28 zhangjn joined #gluster-dev
05:28 gem joined #gluster-dev
05:30 atalur joined #gluster-dev
05:30 pranithk joined #gluster-dev
05:33 kotreshhr joined #gluster-dev
05:34 Bhaskarakiran joined #gluster-dev
05:34 pppp joined #gluster-dev
05:44 kshlm joined #gluster-dev
05:56 RedW joined #gluster-dev
06:06 nishanth joined #gluster-dev
06:09 kshlm joined #gluster-dev
06:11 vimal joined #gluster-dev
06:16 asengupt joined #gluster-dev
06:32 josferna joined #gluster-dev
06:32 kshlm joined #gluster-dev
06:39 rafi1 joined #gluster-dev
06:42 aravindavk joined #gluster-dev
06:52 nbalacha joined #gluster-dev
06:53 josferna joined #gluster-dev
07:00 Saravana_ joined #gluster-dev
07:02 kshlm joined #gluster-dev
07:04 nishanth joined #gluster-dev
07:06 Apeksha joined #gluster-dev
07:09 Humble joined #gluster-dev
07:20 vimal joined #gluster-dev
07:23 josferna joined #gluster-dev
07:27 kshlm atinm, check out https://coreos.com/etcd/docs/latest/runtime-configuration.html for info on how to add new members to an etcd cluster at runtime.
07:28 atinm kshlm, I followed the same
07:28 kshlm with the newer etcd versions?
07:28 rafi joined #gluster-dev
07:29 atinm kshlm, yup
07:29 kshlm did you check with etcd 2.2.2
07:29 atinm kshlm, I always get connection refused error
07:29 atinm kshlm, yes I upgraded to 2.2.2
07:29 kshlm Okay. Let me try.
07:30 * kshlm will attempt it after lunch.
07:32 atinm kshlm, I created an issue also just now https://github.com/coreos/etcd/issues/4101
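
For reference, the flow in that doc is two steps: announce the new member to a running member (via etcdctl member add <name> <peerURL> or the v2 members API), then start the new node with --initial-cluster including itself and --initial-cluster-state existing. A minimal sketch of the announce step using libcurl against the v2 members API follows; the host addresses and peer URL are made-up examples.

    /* Sketch: announce a new member to a running etcd 2.x cluster via
     * the v2 members API.  Addresses below are example values only. */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        CURL *curl;
        CURLcode rc;
        struct curl_slist *hdrs = NULL;
        /* peer URL of the node being added (example) */
        const char *body = "{\"peerURLs\":[\"http://10.0.0.12:2380\"]}";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();
        if (!curl)
            return 1;

        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
        /* client URL of any existing member (example) */
        curl_easy_setopt(curl, CURLOPT_URL, "http://10.0.0.10:2379/v2/members");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            fprintf(stderr, "member add failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }
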
07:46 nbalacha joined #gluster-dev
08:08 kshlm joined #gluster-dev
08:44 atinm joined #gluster-dev
08:45 ggarg joined #gluster-dev
08:58 zhangjn joined #gluster-dev
09:23 apandey joined #gluster-dev
09:24 skoduri pranithk, ping
09:24 glusterbot skoduri: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:29 pranithk skoduri: listen to glusterbot! :-P wassup?
09:30 skoduri pranithk, :P I tried the same valgrind test on my local VM.. I could see some leaks.. I am guessing it could be because we call glfs_fini while exiting the program and not all memory is freed as part of it
09:30 zhangjn joined #gluster-dev
09:31 pranithk skoduri: ah!
09:32 skoduri pranithk, that's the reason it may be showing all the inode_* references, which may well have been in valid address space before fini()
09:33 josferna joined #gluster-dev
09:37 pranithk skoduri: could be
09:37 pranithk skoduri: It would be easier to get the program they use...
09:38 skoduri pranithk, yeah.. so that still doesn't explain the process taking so much RAM..
09:39 skoduri is it because they are doing lookups on millions of entries and each one in turn resulting in lots of ctx/space being created and used?
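
A minimal gfapi program of the sort being discussed; the volume name, server and path are placeholders.

    /* Minimal libgfapi program for a valgrind run: init, one call that
     * drives a lookup, then glfs_fini() before exit.
     * Volume/host/path are example values.  Build: gcc test.c -lgfapi */
    #include <glusterfs/api/glfs.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        glfs_t *fs = glfs_new("testvol");          /* example volume name */
        if (!fs)
            return 1;

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        glfs_set_logging(fs, "/dev/stderr", 7);

        if (glfs_init(fs) != 0) {
            fprintf(stderr, "glfs_init failed\n");
            return 1;
        }

        if (glfs_stat(fs, "/some/file", &st) == 0)  /* drives a lookup */
            printf("size: %lld\n", (long long)st.st_size);

        /* anything glfs_fini() does not release shows up in the
         * valgrind report, which matches what skoduri describes above */
        glfs_fini(fs);
        return 0;
    }

Running it under valgrind --leak-check=full with and without the glfs_fini() call helps separate memory that fini simply never frees from real per-lookup leaks.
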
09:42 skoduri Also, since our inode table lru_limit in the case of gfapi is 131072, if all those inodes hold a reference and are in the active list, what would be the behavior of a lookup coming on a new entry?
09:51 shubhendu joined #gluster-dev
10:00 zhangjn joined #gluster-dev
10:01 skoduri pranithk, ^^ ??
10:03 pranithk skoduri: It will kick one inode out of lru...
10:04 skoduri but as I said, what if all those inodes hold a reference and are in the active list?
10:04 skoduri i.e. not in the lru list
10:04 skoduri pranithk, ^^
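
On the lru question: the lru_limit is enforced only against the lru (unreferenced) list, so active inodes are never purged and a lookup on a new entry simply grows the table past the limit. A simplified model of that rule follows; it is not the real libglusterfs/src/inode.c, just the shape of the pruning logic.

    /* Simplified model of the inode-table pruning rule under discussion.
     * Only inodes on the lru (refcount == 0) list are purge candidates;
     * active (referenced) inodes are never touched, so the table can
     * grow well past lru_limit when everything is active. */

    struct fake_table {
        unsigned int lru_limit;   /* e.g. 131072 for gfapi */
        unsigned int lru_size;    /* inodes with no refs */
        unsigned int active_size; /* inodes somebody still holds a ref on */
    };

    void
    table_prune(struct fake_table *t)
    {
        /* kick unreferenced inodes out until we are under the limit */
        while (t->lru_limit && t->lru_size > t->lru_limit)
            t->lru_size--;        /* purge one lru inode */
        /* active_size is never reduced here: if all 131072 inodes are
         * active, a lookup on a new entry just adds one more */
    }

    void
    on_new_lookup(struct fake_table *t)
    {
        t->active_size++;         /* new inode starts out referenced */
        table_prune(t);
    }
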
10:07 Manikandan joined #gluster-dev
10:15 Manikandan joined #gluster-dev
10:42 rastar kshlm: we need your opinion on this http://review.gluster.org/#/c/12594/33/xlators/mgmt/glusterd/src/glusterd-volume-set.c
10:43 rastar what should be the GD_OP_VERSION here? 3_7_6 or 3_7_7?
10:43 kshlm Gerrit is slow :/
10:44 kshlm 3.7.6 has been released already, so it should be the next version.
10:44 kshlm 30707 or GD_OP_VERSION_3_7_7.
10:45 kshlm You will need to define the macro in libglusterfs/src/globals.h
10:46 rastar atinm: ^^^
10:46 rastar kshlm: thanks!
10:46 rastar kshlm++
10:46 glusterbot rastar: kshlm's karma is now 47
10:47 kshlm rastar, Is the change going to be backported?
10:47 rastar kshlm: yes
10:48 zhangjn joined #gluster-dev
10:48 kshlm Then it should be 3.7.7, else it would be 3.8.0
10:48 rastar kshlm: it has to be backported; it affects the VM use case when using write-behind.
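
What kshlm describes comes down to two small edits; a sketch of both is below, where the option key and voltype are placeholders rather than the actual option from review 12594.

    /* 1. libglusterfs/src/globals.h: new op-version constant, following
     *    the existing pattern (GD_OP_VERSION_3_7_6 is 30706). */
    #define GD_OP_VERSION_3_7_7 30707 /* Op-version for GlusterFS 3.7.7 */

    /* 2. xlators/mgmt/glusterd/src/glusterd-volume-set.c: gate the new
     *    volume option on that op-version in the volume-option map.
     *    The key/voltype below are placeholders only. */
    { .key        = "performance.example-option",
      .voltype    = "performance/write-behind",
      .op_version = GD_OP_VERSION_3_7_7,
    },
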
10:58 EinstCrazy joined #gluster-dev
10:59 atinm joined #gluster-dev
11:05 Manikandan joined #gluster-dev
11:12 shubhendu joined #gluster-dev
11:22 atinm kshlm, I won't be able to attend today's community meeting, could you represent Gluster 4.0 and provide the status?
11:23 kshlm okay
11:32 ggarg joined #gluster-dev
11:45 zhangjn joined #gluster-dev
11:55 kotreshhr left #gluster-dev
11:56 rafi joined #gluster-dev
12:39 atalur joined #gluster-dev
12:47 pranithk joined #gluster-dev
13:03 atalur_ joined #gluster-dev
13:04 pranithk joined #gluster-dev
13:20 josferna joined #gluster-dev
13:28 pranithk dlambrig: Is there a way to stop migrations temporarily?
13:28 pranithk josferna: ^^
13:28 josferna u can
13:28 pranithk josferna: how?
13:29 Bhaskarakiran joined #gluster-dev
13:30 josferna 1. gluster vol set <volname> cluster.tier-promote-frequency <secs>
13:30 josferna similarly cluster.tier-demote-frequency
13:30 josferna set it to a huge number
13:30 pranithk josferna: This doesn't change placement of the ongoing fops right? It just won't do rebalance?
13:30 josferna so that cycles are huge time like in hours
13:31 pranithk josferna: got it
13:31 pranithk josferna: This doesn't change placement of the ongoing fops right? It just won't do rebalance?
13:31 josferna nope .. ongoing fops are immune from this
13:31 josferna yes ... migrations will be delayed
13:32 josferna actually u can kill the tiering migration process also :P
13:32 josferna and to start it again just issue
13:32 josferna gluster vol start <volname> force
13:32 josferna the brute way
13:32 josferna :)
13:33 josferna another point here
13:34 josferna if there are migrations going on when you run gluster vol set <volname> cluster.tier-promote/demote-frequency <secs>, it will not stop the current migration
13:34 pranithk josferna++: this is helpful, thanks
13:34 glusterbot pranithk: josferna's karma is now 2
13:34 pranithk josferna: got it
13:34 josferna cool
13:50 rafi joined #gluster-dev
13:56 nbalacha joined #gluster-dev
14:03 pranithk joined #gluster-dev
14:04 rafi joined #gluster-dev
14:12 rafi joined #gluster-dev
15:18 hagarth joined #gluster-dev
15:37 vmallika joined #gluster-dev
15:46 ndk joined #gluster-dev
16:01 ggarg joined #gluster-dev
16:24 raghug joined #gluster-dev
17:45 nishanth joined #gluster-dev
18:10 dlambrig joined #gluster-dev
19:38 dlambrig joined #gluster-dev
21:20 shaunm joined #gluster-dev
22:19 zoldar joined #gluster-dev
22:24 zoldar joined #gluster-dev
22:45 zoldar joined #gluster-dev
22:50 zoldar joined #gluster-dev
23:58 zhangjn joined #gluster-dev
