IRC log for #gluster-dev, 2013-05-04


All times shown according to UTC.

Time Nick Message
00:39 itisravi joined #gluster-dev
01:58 yinyin joined #gluster-dev
02:17 badone joined #gluster-dev
03:57 itisravi joined #gluster-dev
04:07 hagarth Supermathie: would it be possible to upload the valgrind report somewhere?
04:11 Supermathie hagarth: I think I attached it to the email to gluster-dev
04:13 Supermathie GRNNGHH I've compiled and installed v3.3.1 and I'm still getting that behaviour of locking up on the untar
04:17 hagarth Supermathie: These don't seem to be leaks - they are allocations which have not been freed yet because the operations are not complete. Can you try turning on write-behind in the nfs stack via volume set <volname> performance.nfs.write-behind on?
04:18 hagarth this will certainly reduce the memory footprint
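A minimal sketch of the suggested change, assuming the volume name gv0 that appears later in this log (option names vary between gluster releases, so checking "gluster volume set help" first is reasonable):

    # enable write-behind inside the gluster NFS server stack
    gluster volume set gv0 performance.nfs.write-behind on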
04:20 badone joined #gluster-dev
04:26 Supermathie Isn't write-behind implicit with unstable writes?
04:26 Supermathie That's a valgrind of a process that terminated normally. Either way, 22GB?
04:26 Supermathie Little big isn't it?
04:29 Supermathie Or is that a fix for my tar problem?
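For reference, the distinction being drawn here maps onto valgrind's leak-summary categories: allocations still held by in-flight operations are reported as "still reachable" or "possibly lost", while true leaks show up as "definitely lost". A rough sketch for skimming a large report, with a hypothetical file name:

    # real leaks, with a couple of lines of context from the summary
    grep -A2 'definitely lost' valgrind-nfs.log

    # memory that was still referenced when the process exited
    grep 'still reachable' valgrind-nfs.log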
04:34 Supermathie Well I have a 2.2GB log from turning on nfs trace, trying to figure out why after 10 minutes it only managed to extract 283MB or so
04:41 Supermathie Do I need to specifically enable RDMA for various gluster daemons on the same node to be able to communicate efficiently?
04:44 hagarth Supermathie: enabling RDMA would not be necessary
04:50 hagarth nfs xlator logs excessively with trace.. Is the latest with write-behind on?
04:51 Supermathie Running v3.3.1 now, straight up.
04:52 Supermathie write-behind is on, and a simple ls -al in the mount hangs during the untar of a large file.
04:52 Supermathie This is messed up.
04:53 hagarth something is not right. wait are your bricks on ext4?
04:53 Supermathie No, xfs. It's not *that* problem :)
04:53 hagarth ok :)
04:54 Supermathie but at least it's actually writing out to the disk now, I see the files growing on the other node.
05:04 Supermathie Yeah, the mount responds fairly fine, but if I try to ls the directory that has a big file being untarred, it hangs until the untar is done
05:08 Supermathie the ls completes in between the point where tar finishes writing the file and before tar sets the mode/owner on the new file
05:08 hagarth were both these activities happening from the same mount?
05:11 Supermathie yep
05:13 Supermathie is write-behind documented?
05:14 hagarth the option or the xlator?
05:14 Supermathie the option I guess
05:14 Supermathie from what I read on the wiki, the translator's for io aggregation right? Not useful here.
05:15 hagarth the option enables the translator in the stack
05:16 Supermathie It's odd that would help here - the disks are way faster than gluster
05:18 Supermathie How can I show all the translators/structure of the volume?
05:19 hagarth this is a problem I have noticed. Without write-behind in the picture, all nfs writes go through a transaction in afr. This sometimes causes timeouts on the client side, and re-transmissions do occur. Hence having write-behind in the nfs server stack reduces the number of retries from the client and reduces load on the nfs server/gluster as well.
05:20 hagarth you can take a look at the volume files generated (in /var/lib/glusterd/)
05:20 Supermathie Yeah, I was noticing a huge number of rpc retries; OK, the batching of transactions between nfs and the afr layer does make sense
05:20 Supermathie gv0-fuse.vol?
05:21 hagarth that would be for the fuse client. nfs volume file would be named nfs.vol.
05:21 Supermathie ooohhhh *there* it is, I was looking in .../vols
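A sketch of how to inspect the generated translator graphs, assuming the default glusterd working directory (the exact nfs volfile name and location differ slightly between releases, as found above):

    # list every generated volume file: brick, fuse client and nfs graphs
    find /var/lib/glusterd -name '*.vol'

    # show which translators are stacked in the NFS server graph
    grep -E 'volume |type ' /var/lib/glusterd/nfs/*.vol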
05:23 Supermathie Oh damn, so without me changing anything, nfs only uses 1 io thread?
05:24 hagarth by default, yes.
05:25 Supermathie well crap, no wonder
05:26 Supermathie So I need to edit the volfile to turn those on?
05:28 Supermathie The documentation says that 16 io threads was the default
05:28 hagarth you can volume set performance.nfs.io-threads on .. but I haven't seen workloads that benefit with this option being set.
05:29 hagarth 16 io threads is the default for gluster brick processes.
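For the brick side mentioned here, the thread-pool size is an ordinary volume option; a minimal sketch, assuming the stock performance.io-thread-count option (default 16) rather than the undocumented nfs variant:

    # raise the io-threads pool used by the brick processes
    gluster volume set gv0 performance.io-thread-count 32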
05:30 mohankumar joined #gluster-dev
05:30 Supermathie My gluster nfs daemon is pretty much constant at 100% CPU with load
05:31 Supermathie Where the heck are all these options coming from?
05:31 Supermathie There are *no* performance.nfs options in the admin guide
05:32 hagarth you can check them in glusterd-volume-set.c of xlators/mgmt/glusterd/src.
05:33 hagarth not documented since there haven't been many requests for that .. however write-behind would be enabled by default in 3.4.
05:34 _ilbot joined #gluster-dev
05:35 hagarth darn, it's glusterd-volgen.c in release-3.4. I was referring to the source file in master.
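A sketch of how to enumerate the settable options straight from the source tree, assuming a checkout of the glusterfs repository (which file holds the table depends on the branch, as noted above):

    # release-3.4 and earlier: the option table lives in glusterd-volgen.c
    grep -n 'performance.nfs' xlators/mgmt/glusterd/src/glusterd-volgen.c

    # master at the time: the table moved to glusterd-volume-set.c
    grep -n 'performance.nfs' xlators/mgmt/glusterd/src/glusterd-volume-set.c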
05:35 Supermathie If I have the memory for it, think bumping up the performance.write-behind-window-size from 1MB might help?
05:37 hagarth Supermathie: yeah, that might help.
05:37 Supermathie Does that apply globally to all write-behind translators?
05:39 hagarth applies to all write-behind translators.
05:41 Supermathie volume set gv0 performance.write-behind-window-size 32MB ... OK. I'm guessing that's a size per-io thread
05:42 Supermathie gluster> volume set gv0 performance.nfs.io-threads on
05:42 Supermathie Error, Validation Failed
05:42 Supermathie Set volume unsuccessful
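One way to see which of these set attempts actually stuck, assuming the same gv0 volume:

    # options that were accepted are listed under "Options Reconfigured";
    # anything that failed validation will not appear there
    gluster volume info gv0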
05:49 Supermathie tar: ./fleming2/db0/ALTUS_data/users04.dbf: Cannot close: Input/output error
05:50 Supermathie on the untar... random errors. This was really working pretty well with the src rpms, but this is just not happy.
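The gluster NFS server writes its own log, which is the first place to look for the error behind that failed close; a sketch assuming the default log location:

    # errors returned to NFS clients are logged at severity E
    grep ' E ' /var/log/glusterfs/nfs.log | tail -20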
05:54 itisravi_ joined #gluster-dev
05:54 hagarth Supermathie: got to run now, will bbl.
05:54 Supermathie OK... heading to bed soon, I'm flummoxed
06:25 Supermathie performance.nfs.io-threads actually breaks nfs - doesn't start
06:26 Supermathie so much for that
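Resetting an option that stops the NFS server from starting is usually enough to recover; a minimal sketch, assuming the same volume:

    # drop the offending option and fall back to the built-in default
    gluster volume reset gv0 performance.nfs.io-threads

    # glusterd manages the NFS daemon; force-starting the volume respawns
    # it from the regenerated volfile
    gluster volume start gv0 force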
09:00 bulde joined #gluster-dev
09:08 _ilbot joined #gluster-dev
09:21 _ilbot joined #gluster-dev
11:55 bala2 joined #gluster-dev
11:58 nickw joined #gluster-dev
14:29 gbrand__ joined #gluster-dev
14:56 fabien joined #gluster-dev
14:57 fabien Hi ! Any chance to get macosx compile for 3.4 beta ?
15:20 hagarth joined #gluster-dev
15:20 fabien https://bugzilla.redhat.com/show_bug.cgi?id=919916
15:20 glusterbot Bug 919916: unspecified, medium, ---, amarts, NEW , glusterd compilation failure on OSX due to AT_SYMLINK_NOFOLLOW
15:21 fabien thanks sir Glusterbot !
16:16 itisravi_ joined #gluster-dev
18:58 itisravi_ joined #gluster-dev
19:02 hagarth joined #gluster-dev
19:21 hagarth joined #gluster-dev
21:16 hagarth joined #gluster-dev
