IRC log for #gluster-dev, 2016-01-26


All times are shown in UTC.

Time Nick Message
00:16 ira joined #gluster-dev
00:25 baojg joined #gluster-dev
00:46 baojg joined #gluster-dev
01:33 EinstCrazy joined #gluster-dev
02:06 baojg joined #gluster-dev
02:10 EinstCrazy joined #gluster-dev
02:31 EinstCrazy joined #gluster-dev
02:41 Gaurav joined #gluster-dev
03:00 EinstCrazy joined #gluster-dev
03:41 gem joined #gluster-dev
03:42 Humble joined #gluster-dev
03:45 baojg joined #gluster-dev
03:47 nbalacha joined #gluster-dev
04:00 spalai joined #gluster-dev
05:10 overclk joined #gluster-dev
05:23 nishanth joined #gluster-dev
05:26 zhangjn joined #gluster-dev
05:30 nbalacha joined #gluster-dev
05:37 Manikandan joined #gluster-dev
05:40 mchangir_ joined #gluster-dev
05:41 EinstCrazy joined #gluster-dev
05:45 spalai joined #gluster-dev
05:46 vimal joined #gluster-dev
05:47 zhangjn joined #gluster-dev
06:21 spalai joined #gluster-dev
06:24 zhangjn joined #gluster-dev
06:25 zhangjn joined #gluster-dev
06:26 EinstCrazy joined #gluster-dev
06:40 zhangjn joined #gluster-dev
06:41 EinstCrazy joined #gluster-dev
07:01 aravindavk joined #gluster-dev
07:05 gem joined #gluster-dev
07:16 Gaurav joined #gluster-dev
07:37 aravindavk joined #gluster-dev
08:00 EinstCrazy joined #gluster-dev
08:09 zhangjn joined #gluster-dev
08:13 EinstCra_ joined #gluster-dev
08:16 badone joined #gluster-dev
08:23 Manikandan joined #gluster-dev
08:49 spalai joined #gluster-dev
09:19 spalai joined #gluster-dev
09:25 josferna joined #gluster-dev
09:33 Gaurav joined #gluster-dev
09:33 Apeksha joined #gluster-dev
09:59 zhangjn joined #gluster-dev
10:16 csaba joined #gluster-dev
10:38 EinstCrazy joined #gluster-dev
10:42 Gaurav__ joined #gluster-dev
10:51 Gaurav joined #gluster-dev
10:55 josferna joined #gluster-dev
11:03 skoduri joined #gluster-dev
11:41 zhangjn joined #gluster-dev
12:01 skoduri joined #gluster-dev
12:06 nbalacha joined #gluster-dev
12:27 ira joined #gluster-dev
12:31 zhangjn joined #gluster-dev
12:32 zhangjn joined #gluster-dev
12:33 zhangjn joined #gluster-dev
12:34 zhangjn joined #gluster-dev
12:35 zhangjn joined #gluster-dev
12:40 zhangjn joined #gluster-dev
12:43 skoduri joined #gluster-dev
13:01 shubhendu joined #gluster-dev
13:11 spalai left #gluster-dev
14:00 atinm joined #gluster-dev
14:21 raghu joined #gluster-dev
14:30 zhangjn joined #gluster-dev
14:31 zhangjn joined #gluster-dev
14:35 NTmatter joined #gluster-dev
14:52 shyam joined #gluster-dev
15:01 spalai joined #gluster-dev
15:08 zhangjn joined #gluster-dev
15:19 aravindavk joined #gluster-dev
15:36 spalai left #gluster-dev
15:40 aravindavk joined #gluster-dev
15:57 cholcombe joined #gluster-dev
16:20 spalai joined #gluster-dev
16:29 wushudoin joined #gluster-dev
16:43 spalai left #gluster-dev
16:46 spalai joined #gluster-dev
17:03 atinm joined #gluster-dev
17:04 EinstCra_ joined #gluster-dev
17:05 wushudoin| joined #gluster-dev
17:16 vimal joined #gluster-dev
18:06 shaunm joined #gluster-dev
18:09 Manikandan joined #gluster-dev
20:07 spalai left #gluster-dev
21:32 EinstCrazy joined #gluster-dev
21:50 post-factum joined #gluster-dev
21:51 post-factum guys, anyone here following my FUSE client memleak-related spam on the mailing list?
21:52 post-factum I'm doing one last test, valgrind on the target volume while rsyncing, and would like to find out if I can provide any more info you need
22:11 ira joined #gluster-dev
22:35 shyam joined #gluster-dev
22:46 raghu post-factum: I have been following that thread. If you have valgrind output, please share it. Note that while running under valgrind, i/o becomes slower than in normal cases. You can wait till your tests are done and then send the valgrind logs
22:48 post-factum correct, it is much slower, and I'll post the results once they are ready
22:49 post-factum anyway, there are other valgrind results for another type of load already posted
22:50 raghu post-factum: ok. I will take a look at them
22:50 post-factum thx, feel free to mail me for details if needed
22:51 raghu sure
22:51 raghu meanwhile I will also try to reproduce it in my local setup and see what is happening
22:59 hagarth post-factum: I have been following
23:04 post-factum hagarth: ok, thanks, just want to be sure my reports are being delivered well
23:04 hagarth post-factum: the 4 GB leak did look baffling .. looks like there were two requests in transit and the iovec for that seems to take close to 4 G
23:05 hagarth s/leak/memory consumption/
23:06 post-factum hagarth: the 4 GB comes from the target volume of the rsync. I'm rsyncing lots of small (usually under 1 MB) files
23:08 post-factum also, 4 GB comes from a simple "find", but obviously this has a different cause
23:08 post-factum hope valgrind will tell more about rsync, as it already did for the "find" test
23:09 post-factum btw, for another test memory consumption was somewhere around 3 GiB, but the statedump showed 2^32 as well
23:09 post-factum dunno :/
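For context on the statedump figures being discussed: a GlusterFS client statedump is triggered by sending SIGUSR1 to the glusterfs process, and the dump typically lands under /var/run/gluster. A minimal sketch, assuming the client is the valgrind-wrapped glusterfs process shown further down in the log:

    kill -USR1 $(pidof glusterfs)        # ask the FUSE client to dump its state (assumes one glusterfs process)
    ls /var/run/gluster/glusterdump.*    # dumps are usually named glusterdump.<pid>.dump.<timestamp>

Each xlator's memusage section in the dump lists size=, num_allocs= and max_size= counters per allocation type, which is where an implausible 2^32 value would show up.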
23:11 post-factum have you noticed huge values for dht-related items in statedump?
23:12 hagarth post-factum: did notice that
23:12 hagarth post-factum: my suspicion was with a leak in dht locks
23:13 hagarth post-factum: but could not find anything by tracing code ..needs some more analysis
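A quick way to eyeball the dht-related counters mentioned above is to grep the statedump for the distribute xlator's sections; the exact section names include the volume name, so this is only a sketch:

    grep -A5 'cluster/distribute' /var/run/gluster/glusterdump.*.dump.*

If dht locks were leaking, the num_allocs= counter for the lock-related usage types would keep climbing over time instead of returning to a baseline.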
23:13 post-factum that's why I'm running valgrind now.
23:14 post-factum root     16647 86.2  2.5 1560440 1266980 pts/1 Sl+  Jan26 655:20 valgrind --leak-check=full --show-leak-kinds=all --log-file=valgrind_fuse.log /usr/sbin/glusterfs -N --volfile-server=glusterfs.la.net.ua --volfile-id=asterisk_records /mnt/net/glusterfs/asterisk_record
23:14 post-factum already 1.2G for ~10% of copied files
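A note on that invocation, as a reading of standard memcheck behaviour rather than anything gluster-specific: --leak-check=full prints a call stack for every leaked block, and --show-leak-kinds=all additionally reports "still reachable" memory, which matters here because cache that would be freed on shutdown is not a true leak. Memcheck only writes its final LEAK SUMMARY when the process exits, so harvesting the log after the rsync would look something like:

    umount /mnt/net/glusterfs/asterisk_record    # the FUSE client exits, valgrind flushes its report
    grep -A12 'LEAK SUMMARY' valgrind_fuse.log   # the summary section of the log file named above

The -N flag keeps glusterfs in the foreground, which is what lets valgrind stay attached to the client for the whole run.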
23:17 hagarth post-factum: let me know once this run is complete
23:17 post-factum sure, thanks
