
IRC log for #gluster-dev, 2013-05-09


All times shown according to UTC.

Time Nick Message
00:29 yinyin joined #gluster-dev
00:36 yinyin joined #gluster-dev
01:32 bala joined #gluster-dev
01:42 jbrooks joined #gluster-dev
01:45 portante|ltp joined #gluster-dev
01:58 hagarth joined #gluster-dev
01:58 johnmark howdy
01:59 hagarth johnmark: hey there
01:59 jdarcy joined #gluster-dev
02:02 jdarcy Anybody else here for 3.4 readiness?
02:03 avati_ me
02:03 johnmark me, too
02:03 johnmark hagarth is here
02:03 johnmark kkeithley: hey you
02:03 * hagarth too
02:04 johnmark so beta's out. slap yourself on the back :)
02:04 johnmark now what are the remaining issues? and would it be too much to ask to kick off beta 2 on May 14?
02:04 avati_ we need a beta2
02:04 hagarth my pet peeves - rdma and op-version need to be in
02:05 jdarcy I noticed that none of the bugs on the beta1 tracker had been updated.  Were bugs being tracked some other way?
02:05 hagarth op-version has a backport. rdma still needs to be backported.
02:05 hagarth jdarcy: transitions to ON_DEV?
02:06 avati_ is rdma even fixed (to be backported)?
02:06 hagarth avati_: I think initial tests are showing much better behavior than 3.3.0
02:07 johnmark avati_: last I checked we were waiting for those to be reviewed
02:07 jdarcy hagarth: IIRC, there were bugs on that list that weren't even MODIFIED, perhaps not even POST.  Not quite sure what the beta1 content really was, TBH.
02:07 johnmark *sigh*
02:07 johnmark jdarcy: are folks just not updating the bugs?
02:07 johnmark hagarth: does that include all the backports?
02:07 avati_ hagarth: ah ok, wasn't aware of that
02:08 hagarth I think we can put out rdma with appropriate caveats.
02:08 johnmark hagarth: which are... ?
02:08 jdarcy https://bugzilla.redhat.com/showdependencytree.cgi?id=952693&hide_resolved=1
02:09 hagarth jdarcy: when I checked the bug tracker, there were patches for bugs listed as beta blockers. such patches made it into the branch.
02:09 hagarth johnmark: volume cannot be exported by both rdma and tcp simultaneously, limited testing for rdma.
02:10 bharata joined #gluster-dev
02:10 hagarth I think we can leave out the ufo issues from our tracking, as it is going to have its own cadence.
02:11 bharata a2_, avati_ r u here ?
02:11 johnmark hagarth: agreed
02:11 avati_ hagarth: yes, will be abandoning all the UFO patches against glusterfs.git
02:11 johnmark hagarth: re: rdma - right, forgot about that limitation. I think that's acceptable
02:11 jdarcy hagarth: So it's a matter of people not updating the bugs, but you did check each one?
02:12 hagarth jdarcy: I went through most (leaving out rdma and ufo).
02:12 johnmark hagarth: do we need to lean on bug owners to update things? or are we waiting on patch review?
02:12 jdarcy If UFO is going to have its own cadence, should it still be in the same specfile?
02:13 hagarth johnmark: yes, bug owners would be the right folks to update state.
02:13 hagarth jdarcy: peter just sent in a patch to remove ufo from the repo, I hope it covers the specfile changes too.
02:15 bharata avati_, So you are suggesting different interfaces for FUSE and gfapi, which means applications have to be written differently to run on FUSE and run via gfapi
02:15 hagarth we have missed 950056 and 953887 in beta1
02:19 hagarth shall we change beta1 tracker to beta2 tracker?
02:21 avati_ bharata: applications have to _anyways_ be rewritten to work with gfapi
02:22 jdarcy hagarth: What I've been doing is creating new trackers each time, moving bugs from older to newer.
02:22 avati_ jdarcy: kkeithley peter and luis are working out the details of the spec file.. UFO stuff will move out of glusterfs.spec
02:23 hagarth jdarcy: sounds good, we can move the rdma, op-version and the above two bugs to the new tracker.
02:23 johnmark avati_: cool. was happy to see his commit just now
02:23 yinyin joined #gluster-dev
02:24 avati_ hagarth, jdarcy: there are glusterd backports pending, including a crash kp is working on
02:24 jdarcy avati_: OK, can you work with Vijay to make sure they get added to the new tracker?
02:24 hagarth avati_: agree, we need to track those backports too.
02:24 avati_ yeah.. i'm checking if kp is working against a bug id
02:25 johnmark hagarth: jdarcy: ok, so where does this leave us for issuing a beta2 next week?
02:25 jdarcy The benefit here is not just that *we* know what's going on, but it leaves a record for the release too.
02:25 bharata avati_, while that is true, I couldn't see a problem with having a front ending ioctl() from gfapi too. Do you see any issues there ?
02:25 johnmark It *sounds* like the action items needed to be taken by then are reasonable to ask
02:25 lalatenduM joined #gluster-dev
02:25 jdarcy johnmark: Can't say I have a good feel for that.  Vijay's a lot closer to the items that are still in progress.
02:26 hagarth johnmark, jdarcy: AI on me to populate the beta2 tracker today. Once that's done, let us collectively review and update whatever else is needed in the tracker.
02:27 jdarcy hagarth: Sounds excellent.
02:27 avati_ bharata: that's a bad reason to inherit a clunky API :-) variable args of ioctl() makes it really a mess to use as a standardized API
02:28 avati_ bharata: really, how hard is it to change an application (which you are bound to change anyways) to replace ioctl(CMD_NAME) with cmd_name() ?
02:28 jdarcy avati_: Would it be reasonable to auto-convert ioctls we know about, and reject anything else?
02:28 avati_ jdarcy: this is for GFAPI, there is no reason to have an ioctl-like API
02:29 bharata avati_, fair enough, will give more thought and see
02:29 avati_ for FUSE we would probably have no option but to implement fuse_ioctl() and have some translation into our FOPs
02:29 jdarcy avati_: What about VMs running in other hypervisors that don't have gfapi-based support but might know about some of these ioctls?
02:30 avati_ jdarcy: right.. for them we will need to implement and map fuse_ioctl() into our FOPs
02:30 avati_ ioctl through FUSE is clunky (especially marshalling variable args into fuse protocol), but there is no other option
02:31 jdarcy Yeah, I really don't want to do a fully general ioctl implementation.  Insecure, aside from other problems.
02:31 hagarth necessary but not clean.
02:32 avati_ there is no reason to spread the ioctl disease beyond fuse-bridge.c
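For readers following the API argument above, here is a minimal sketch of the contrast avati_ is drawing: an ioctl-style entry point that funnels everything through one command code plus an untyped argument blob, versus a named, typed call of the kind he suggests ("replace ioctl(CMD_NAME) with cmd_name()"). The command code, argument struct, and the glfs_preallocate() prototype below are hypothetical illustrations, not actual GlusterFS or gfapi symbols.

    /* Hedged sketch only: all names below are made up for illustration. */
    #include <sys/ioctl.h>
    #include <sys/types.h>

    /* ioctl style: one entry point, a command code, and an untyped argument blob. */
    struct prealloc_args { off_t offset; off_t len; };
    #define MYFS_IOC_PREALLOC 0x4d01                 /* hypothetical command code */

    static int preallocate_via_ioctl(int fd, off_t offset, off_t len)
    {
            struct prealloc_args args = { .offset = offset, .len = len };
            /* only reachable through a kernel filesystem such as a FUSE mount */
            return ioctl(fd, MYFS_IOC_PREALLOC, &args);
    }

    /* Named-call style: a typed, self-describing library entry point. */
    struct glfs_fd;                                                     /* opaque gfapi handle */
    int glfs_preallocate(struct glfs_fd *fd, off_t offset, off_t len);  /* hypothetical */

The point being argued is that gfapi, as a fresh library API, can expose the second form directly, while only fuse-bridge.c has to deal with translating real ioctl traffic into internal fops.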
02:32 jdarcy Story of my life.  ;)
02:32 bharata avati_, :)
02:32 avati_ btw, bfoster has posted fallocate() support yday..
02:32 avati_ bharata: that solves half your problem, i think!
02:32 bharata avati_, oh ok. let me check
02:33 avati_ #4969
02:33 johnmark jdarcy: +1 to reference to other hypervisors
02:34 johnmark avati_: can you explain to me what fallocate does?
02:34 avati_ it would have been nice to separate fallocate() and discard() into two fops instead of a CMD argument to differentiate
02:35 johnmark avati_: I saw that and was wondering about its significance
02:35 bharata avati_, need to look at in detail, but don't see gfapi support there ?
02:35 avati_ johnmark: fallocate preallocates diskspace to guarantee no "out of space" errors for future writes in the region
02:35 avati_ bharata: yes, gfapi support is missing in that patch, it needs to come in before merge
02:36 bharata avati_, ok
02:36 johnmark avati_: aha, thanks
02:36 avati_ bharata: we should also discuss w/ brian separating monolithic fallocate() into fallocate()+discard()
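Since fallocate() comes up several times here, a minimal Linux-only sketch of the two operations being discussed may help; it uses the fallocate(2) wrapper directly and is not the GlusterFS fop interface from bfoster's patch.

    /* Minimal sketch, assuming Linux and glibc with _GNU_SOURCE. */
    #define _GNU_SOURCE
    #include <fcntl.h>      /* fallocate(), FALLOC_FL_* */

    /* Preallocate: reserve blocks so later writes in [offset, offset+len) cannot fail with ENOSPC. */
    static int preallocate(int fd, off_t offset, off_t len)
    {
            return fallocate(fd, 0, offset, len);
    }

    /* "Discard": punch a hole, deallocating the range while keeping the file size unchanged. */
    static int discard(int fd, off_t offset, off_t len)
    {
            return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, len);
    }

The suggestion in the conversation is to expose these as two separate fops on the GlusterFS side rather than one fop switched on a CMD/mode argument.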
02:36 johnmark hagarth: ok, sounds like we have a plan
02:36 johnmark any other things to address?
02:37 bharata avati_, right, let me take a detailed look, but better to have a separate discard()
02:37 hagarth johnmark: sounds good for now.
02:37 johnmark w00t
02:37 johnmark thanks, guys
02:37 johnmark bharata: you are always welcome to come here once a week and discuss release plans
02:37 avati_ once a week only :p
02:38 bharata johnmark, sure, avati_ :)
02:38 avati_ bharata: do you have plans of making progress on the zero copy work? if not i plan to pick it up in a couple of weeks
02:39 avati_ we need it for samba vfs module as well
02:39 bharata avati_, Got busy with various other things
02:40 avati_ no probs.. i'll probably pick up from your last patch
02:40 bharata avati_, thanks
02:40 johnmark avati_: heh heh
02:40 bharata avati_, wanted to see the real benefit before continuing further, but none of the tests I did gave me any benefits
02:42 avati_ bharata: if the size of the "working memory" window (addresses you have referred to recently and will refer to again) is small enough to fit in the L1/L2 caches, you won't see much benefit
02:42 avati_ bharata: if all the memory copies end up forcing a full memory cycle with L1/L2 overflows, that's when zero-copy will show the advantages
02:43 avati_ it is very likely with simple tests like a file copy you will see no benefits at all
02:43 bharata avati_, I remember doing gigabytes worth of copy in multiple threads - didn't see any benefit
02:44 bharata avati_, but we should revisit that anyway
02:45 avati_ bharata: yes.. again depends on the exact workload.. large quantities of data could be transferred with a very small working memory window and you will be fine
02:45 avati_ even multithreaded
02:45 avati_ we should revisit it for sure
02:47 bharata ok
03:02 Supermathie Heyyyyyyyyy guys. Does the gluster NFS server treat filehandles obtained on one tcp connection as specific to that connection? Does gluster/nfs not like it if you use those filehandles on another connection?
03:06 avati_ Supermathie: that would be a major fail.. gluster nfs filehandles are durable.. across clients, across servers, and across reboots
03:09 Supermathie avati_: Pretty sure that's what I'm seeing here... I'm seeing calls against a FH obtained earlier failing with NFS3ERR_STALE
03:31 Supermathie avati_: I am most assuredly seeing the NFS3ERR_STALE coming back on a valid file handle, but that could be because gluster/nfs is failing
03:32 shubhendu joined #gluster-dev
03:33 Supermathie I know the FH is still good since it matches the gfid of the file on disk
03:33 Supermathie so the STALE results are a symptom, not the problem. :/
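For context on why the file handle can be compared against a gfid at all: gluster NFS handles are built from volume and file identifiers rather than per-connection state, which is what makes them durable as avati_ says. The layout below is a guess for illustration only; the authoritative definition lives in the gluster NFS xlator sources.

    /* Hypothetical sketch of a durable NFS3 file handle; field layout is assumed, not quoted. */
    typedef unsigned char gf_uuid[16];

    struct hypothetical_nfs3_fh {
            char     ident[4];    /* magic/version marking this as a gluster handle */
            gf_uuid  exportid;    /* which exported volume the handle belongs to */
            gf_uuid  gfid;        /* the file's gfid, stable across connections and reboots */
    };

Because nothing in such a handle refers to a TCP connection, ESTALE on a handle whose gfid still exists on disk points at a server-side problem rather than handle scoping, which is the conclusion Supermathie draws above.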
03:38 Supermathie avati_: I couldn't figure out which context has the *this->ctx->pool to examine :) But I have a corefile.
03:40 Supermathie "0-gv0-io-threads: READ scheduled as slow fop" what does this "mean"?
03:43 Supermathie ... looks like it means that glusterfs still hasn't processed the read request after 6 seconds. So it gets resubmitted...
04:04 avati_ "this" is any translator
04:05 Supermathie soooooo dht_readv_cbk fer instance?
04:08 avati_ yes
04:08 avati_ this->ctx->pool
04:08 avati_ that is of type call_pool_t
04:10 Supermathie (gdb) print (call_pool_t)this->ctx->pool
04:10 Supermathie $12 = {{all_frames = {next = 0x25f96c0, prev = 0x25f9620}, all_stacks = {next_call = 0x25f96c0, prev_call = 0x25f9620}}, cnt = 39705952, lock = 0, frame_mem_pool = 0x0, stack_mem_pool = 0x0}
04:10 Supermathie (gdb) print ((call_pool_t)this->ctx->pool)->all_stacks
04:10 Supermathie $14 = {next_call = 0x25f96c0, prev_call = 0x25f9620}
04:11 Supermathie how do i find out how many there are?
04:12 Supermathie from the trace & pcap, looks like reads just aren't completing
04:12 Supermathie sometimes
04:13 Supermathie Oh! Is that 'cnt'? 39705952?
04:20 yinyin joined #gluster-dev
04:23 bala joined #gluster-dev
04:32 avati_ Supermathie: cnt is the count of outstanding call stacks (roughly number of outstanding rpc)
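For readers following the gdb session, here is a rough sketch of the structure being printed, reconstructed only from the field names visible in the output above; the real definition lives in libglusterfs, and the types here are assumptions.

    /* Approximate shape of call_pool_t as suggested by the gdb output; types are guesses. */
    struct hypothetical_list { void *next; void *prev; };
    struct mem_pool;                                       /* opaque here */

    struct hypothetical_call_pool {
            union {
                    struct hypothetical_list all_frames;   /* every live call frame */
                    struct { void *next_call; void *prev_call; } all_stacks;
            };
            long long        cnt;              /* outstanding call stacks, roughly outstanding RPCs */
            long             lock;             /* guards the list and the counter */
            struct mem_pool *frame_mem_pool;   /* preallocated call frames */
            struct mem_pool *stack_mem_pool;   /* preallocated call stacks */
    };

The 39-million figure quoted in this session is that cnt field: call stacks created but not yet destroyed, which is why the participants read it as tens of millions of outstanding RPCs.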
04:32 Supermathie lol poor wireshark is getting confused by all the rpc retransmits in this dump... "Time from request: -62.594355000 seconds"
04:33 Supermathie avati_: So having 40 million outstanding RPCs... that's bad, eh?
04:43 Supermathie I think glusterfs is just getting seriously seriously bogged down. 21:16:00.564817 received rpc-message (XID: 0x8879128a), finally has reply at 21:17:40.996628
04:49 Supermathie whoops 21:17:03.
04:49 Supermathie gluster is taking MORE THAN A MINUTE to reply to this request.
04:52 Supermathie and the disks (enterprise SSDs) are more or less idle
04:55 Supermathie OK... gluster/nfs is spending ALL of its time sitting and spinning on its thumb, and no time actually doing reads
05:00 bala joined #gluster-dev
05:01 mohankumar joined #gluster-dev
05:07 hagarth_ joined #gluster-dev
05:12 Supermathie Can I upgrade to 3.4.0b and still use the same vol files? i.e. stop gluster, install new gluster ver, start gluster and go?
05:15 bulde joined #gluster-dev
05:18 yinyin joined #gluster-dev
05:18 bala joined #gluster-dev
05:41 aravindavk joined #gluster-dev
05:43 raghu joined #gluster-dev
05:46 vshankar joined #gluster-dev
05:47 JoeJulian holy crap... 39 million outstanding rpc calls???
05:48 Supermathie ... yeah.
05:48 lalatenduM joined #gluster-dev
05:48 JoeJulian Supermathie: From what I've seen so far, yes. You should be able to drop 3.4 in. I'd back up /var/lib/glusterd first though just to be safe.
05:49 Supermathie JoeJulian: Just want to see if I get the same behaviour on 3.4. I suspect I will...
05:50 JoeJulian I'm going to continue conversing in #gluster. Too hard to follow two conversations with one person.
05:57 rastar joined #gluster-dev
06:03 lalatenduM joined #gluster-dev
06:33 mohankumar joined #gluster-dev
06:55 jclift_ joined #gluster-dev
07:30 johnmark Supermathie: are you the Discourse guy?
07:55 yinyin joined #gluster-dev
08:45 deepakcs joined #gluster-dev
09:31 bulde1 joined #gluster-dev
09:32 fabien joined #gluster-dev
09:34 fabien Hi !
09:34 shubhendu joined #gluster-dev
09:35 fabien sudo prove bugs/*
10:05 bulde joined #gluster-dev
10:48 jclift_ That's weird.  git master head atm is broken for NFS mounting for me.
10:59 bulde joined #gluster-dev
11:10 kkeithley1 joined #gluster-dev
11:11 jclift_ Yeah, there's definitely something breaking NFS in git.
11:12 jclift_ Dropping back to git about a month ago, things work
11:12 * jclift_ doesn't have time to look into it atm
11:12 jclift_ This commit (apr 14) works a1db18cf7a6cde96f2e5b920ffbbf88e72a21fd4, if anyone wants to triage
11:12 fabien I'm installing a machine to try it
11:13 jclift_ fabien: Cool. :)
11:13 fabien (without using Finder) ;)
11:13 jclift_ :)
11:13 edward1 joined #gluster-dev
11:18 lpabon joined #gluster-dev
11:28 yinyin joined #gluster-dev
11:42 nickw joined #gluster-dev
11:51 bulde joined #gluster-dev
11:53 shubhendu joined #gluster-dev
12:07 mohankumar joined #gluster-dev
12:09 shubhendu joined #gluster-dev
12:18 mohankumar joined #gluster-dev
12:35 bulde joined #gluster-dev
12:56 Supermathie johnmark: Yep
13:30 Supermathie http://review.gluster.com/#/c/4049/ looks interesting to me... has anybody seriously tried it out?
13:30 Supermathie I wonder if that would help me.
13:31 Supermathie Also, would it be suitable for glusterfs/nfs to drop duplicate incoming RPCs iff one is already queued and hasn't been replied to yet?
13:33 JoeJulian Found what I suspect is a replica 3 heal race: http://paste.fedoraproject.org/11273/68105410/
13:37 Supermathie Also possibly 6f6744730e34fa8a161b5f7f2a8ad3f8a7fc30fa...
13:39 fabien jclift_: I tried a1db18cf7a6cde96f2e5b920ffbbf88e72a21fd4 054c1d7eb3782c35fc0f0ea3a5fd25337d080294 b6e10801bee030fe7fcd1ec3bfac947ce44d023d d3e3a849ddce1ade85ddb885474b66299e98744d 8923c14151d646ab90f05addc9e6c3ed178fee10
13:39 jclift_ fabien: And... ?
13:40 fabien jclift_: all NFS mounts are working (but at first I didn't see that portmap wasn't started)
13:40 jclift_ fabien: Damn, I wonder what it was?
13:40 jclift_ fabien: It's possible it was something in my setup.  I just blew away the VM I was doing it in, then went and used one that I know for sure works instead.
13:41 jclift_ fabien: I'll look into it later on.  Try and figure it out.
13:41 jclift_ fabien: But, it's good news that it's not something in git after all. :D
13:41 jclift_ fabien: Thanks for checking this btw. :)
13:42 fabien at first I saw in vol status that the NFS server was up, but it was impossible to mount
13:43 fabien portmap wasn't started; port 111 must be reachable from outside
13:43 fabien jclift_: you're welcome
13:44 jclift_ fabien: Yeah, maybe that was it.  Will check.
13:44 hagarth joined #gluster-dev
14:04 wushudoin joined #gluster-dev
14:25 rastar joined #gluster-dev
15:03 deepakcs joined #gluster-dev
15:55 bala joined #gluster-dev
16:48 aravindavk joined #gluster-dev
17:03 bulde joined #gluster-dev
17:24 lbalbalba joined #gluster-dev
17:26 lbalbalba hi. when i try to do a build for gcov code coverage using *FLAGS+='-fprofile-arcs -ftest-coverage', i run into this error: http://fpaste.org/11321/36811943/
17:26 lbalbalba -fprofile-arcs implies -lgcov, so i shouldnt be getting that error :(
17:26 lbalbalba does anyone have an idea whats going wrong here ?
17:26 aravindavk joined #gluster-dev
17:42 rastar joined #gluster-dev
18:25 jbrooks joined #gluster-dev
18:29 bulde1 joined #gluster-dev
18:33 bulde joined #gluster-dev
19:03 aravindavk joined #gluster-dev
19:16 lbalbalba joined #gluster-dev
19:18 lbalbalba hi im trying to run 'prove ./tests/basic/bd.t', but run into errors: http://fpaste.org/11340/27051136/ is there a way to debug/troubleshoot what could be going wrong ?
19:20 lbalbalba actually, its './run-tests.sh', but thats the 1st test that gets run.
19:20 a2_ jclift_, NFS tests seem to pass in the regression tests.. it's probably because of the change in port number
19:20 a2_ maybe you have a stale portmap registration
19:21 jclift_ a2_: Yeah, it could have been anything.
19:21 jclift_ I just spun up a "known good" VM instead that worked for the small amount of time I needed. :)
19:22 jclift_ lbalbalba: Does this help? http://www.youtube.com/watch?v=dy8lPtZ7B14
19:22 jclift_ (actually for real, not a joke thing :>)
19:22 jclift_ It's a YouTube video introducing Gluster's test framework
19:22 lbalbalba jclift_ thanks. let me watch that
19:23 jclift_ Worst case scenario, it at least gives you the name/contact details for one of the guys really into GlusterFS test framework.  If nothing else comes up, email the guy. :)
19:23 * jclift_ actually does things like that
19:23 jclift_ (often effective)
19:25 lbalbalba well it looks to me like the scripts expect some pre-req setup, like the creation of volume patchy. but it could be that the test fails to create that, too.
19:26 lbalbalba so whats the email for krishnan parthasarathi , then ...
19:34 lbalbalba ah. there we go.... kparthas@redhat.com
19:47 jclift_ Oops, just saw your question there.  Glad you found him.
19:47 jclift_ Hopefully he's helpful. :)
19:49 lbalbalba google knows all ;) awaiting response to my email now ...
19:51 a2_ lbalbalba, patchy volume is created as part of the test script
19:51 a2_ you dont need to precreate any volumes
19:51 lbalbalba a2_: well, not on my system its not :(
19:51 a2_ how are you trying to run the test?
19:52 a2_ do you have gluster, glusterd and glusterfs in your $PATH?
19:52 lbalbalba I run ./run-tests.sh from the top level directory. or, just run the 1st one in the bunch:  prove ./tests/basic/bd.t
19:52 lbalbalba gluster, glusterd and glusterfs are in my $PATH
19:53 a2_ ./tests/basic/bd.t .. 1/26 Wrong brick type: device, use <HOSTNAME>:<export-dir-abs-path>
19:53 a2_ Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force]
19:53 a2_ it tried to create the volume, and failed
19:53 lbalbalba ypu
19:53 lbalbalba me too
19:53 lbalbalba so its not me ! yippie
19:53 lbalbalba oh, wait...
19:53 a2_ so, again, you don't need to create the volume.. the script is failing for some reason to create it
19:54 a2_ you can ignore bd.t for now if it is troubling
19:54 lbalbalba ah.... should i file a bug report ?
19:54 a2_ you need to setup LVM first for that to work I think
19:54 lbalbalba i am running lvm
19:55 lbalbalba oh, well, ill ignore it for now then
19:56 lbalbalba but it seems more tests depend on that ... like: ./tests/basic/mount.t gives '1/29 No volumes present'
19:56 lbalbalba same for ./tests/basic/quota.t
19:56 lbalbalba seems kinda silly to continue
19:56 a2_ there can be error messages.. but at the end, does the prove report give you a failed test count?
19:57 lbalbalba dunno, lemme see when it finishes....
19:57 lbalbalba still, ' 1/xx No volumes present' doesnt seem to be exactly right, either
19:58 a2_ does it say "not ok" ?
19:58 a2_ you're usually good as long as it doesn't say "not ok".. you can ignore some of the errors
19:58 lbalbalba ah, here we go: ./tests/basic/bd.t ................................ Failed 23/26 subtests
19:58 lbalbalba ./tests/basic/rpm.t ............................... Failed 2/5 subtests
20:00 lbalbalba again, '1/14 No volumes present' and '13/14 No open fds'. this is no use. ill await krishnan's response
20:04 bulde1 joined #gluster-dev
20:15 lbalbalba anyway... i need a way to run the tests on the source directory. gcov expects to find .gcno files, that were created during the compile in the src dir.
20:37 a2_ Supermathie, the DRC does exactly that
20:37 a2_ if a request is already in progress, just drops the duplicate request
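A rough sketch of the behaviour a2_ describes — drop a retransmitted request while the original is still in flight — assuming a cache keyed on the RPC XID plus client address. This is illustrative only, not the code in the DRC patch under review.

    /* Hedged sketch; hypothetical types and names, not the actual DRC implementation. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct drc_entry {
            uint32_t xid;            /* RPC transaction id from the request header */
            char     client[64];     /* client address, e.g. "10.0.0.5:912" */
            bool     in_progress;    /* accepted but reply not yet sent */
    };

    /* Return true if the request should be processed, false if it is a duplicate to drop. */
    static bool drc_admit(struct drc_entry *cache, size_t n, uint32_t xid, const char *client)
    {
            for (size_t i = 0; i < n; i++) {
                    if (cache[i].in_progress && cache[i].xid == xid &&
                        strcmp(cache[i].client, client) == 0)
                            return false;   /* same XID already in flight: drop the retransmit */
            }
            return true;                    /* new request: caller records it and proceeds */
    }

A full duplicate-request cache would typically also keep completed replies around for a while, so a late retransmit can be answered from the cache instead of being re-executed.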
20:44 Supermathie a2_: ORLY? http://fpaste.org/11359/81322791/ (or is that in post-3.3.1?)
20:48 a2_ Supermathie, DRC is still in review (#4909), not even available in master branch
20:49 a2_ just mentioned what that patch was supposed to do
20:49 a2_ you might want to give that patch a spin to see if it improves the situation!
20:49 a2_ or if it horribly crashes ;)
20:51 Supermathie Oh 4049, yeah I saw that one and mentioned it above as well :) Looks possibly useful, but ultimately the underlying problem is something much more severe I suspect. (glusterfs/nfs took >1s to respond and the disks were IDLE)
20:52 a2_ what call took > 1s to respond?
20:53 Supermathie the link I pasted
20:53 Supermathie also:
20:53 Supermathie https://bugzilla.redhat.com/show_bug.cgi?id=960141
20:53 glusterbot Bug 960141: urgent, unspecified, ---, vraman, NEW , NFS no longer responds, get  "Reply submission failed" errors
20:53 Supermathie 99.98 8454338.20 us      99.00 us 115469515.00 us         417475        READ
20:54 Supermathie glusterfs/nfs is running a little hot, full-blast on 16 cores.
20:55 a2_ wow, can you attach gdb and do 'thread apply all bt full' ?
20:55 Supermathie a2_: already in the bug report
20:55 a2_ ok
20:55 a2_ let me check
20:56 Supermathie a2_: also, this is impressive: print ((call_pool_t)this->ctx->pool) .... cnt = 39705952 ...
20:57 a2_ Supermathie, i don't see output of 'thread apply all bt full' in that bug?
20:59 Supermathie argh... I pasted it for JoeJulian... lemme track down that link
21:01 a2_ 99.98 8454338.20 us      99.00 us 115469515.00 us         417475        READ
21:01 a2_
21:02 a2_ 8sec average time PER CALL
21:02 Supermathie LOTS OF PAIN
21:03 Supermathie http://paste.fedoraproject.org/11222/80799531/
21:03 Supermathie oh it's not full
21:04 Supermathie <- full http://fpaste.org/11361/68133498/
21:05 a2_ wait.. you have io-threads in the NFS server?
21:10 Supermathie a2_: Yeah, without it glusterfs/nfs was choking on the traffic. It fails in the same way with nfs.iothreads off, by the way.
21:24 bulde joined #gluster-dev
22:28 bulde joined #gluster-dev
23:04 a2_ Supermathie, can you remove write-behind in the NFS graph and test?
23:05 a2_ having io-cache in the NFS graph might help.. it helps by paginating requests into larger page sizes (128KB), decreasing the number of nfs<-->brick RPC calls
23:41 yinyin joined #gluster-dev
