Why is NFSv4 so slow?
Rick C. Petty
rick-freebsd2009 at kiwi-computer.com
Tue Jun 29 03:06:35 UTC 2010
On Mon, Jun 28, 2010 at 07:48:59PM -0400, Rick Macklem wrote:
> Ok, it sounds like you found some kind of race condition in the delegation
> handling. (I'll see if I can reproduce it here. It could be fun to find:-)
Good luck with that! =)
> >I can try it again with v3 client and v4 server, if you think that's
> >worthy of pursuit. If it makes any difference, the server's four CPUs are
> >pegged at 100% (running "nice +4" cpu-bound jobs). But that was the case
> >before I enabled v4 server too.
> It would be interesting to see if the performance problem exists for
> NFSv3 mounts against the experimental (nfsv4) server.
Hmm, I couldn't reproduce the problem. Once I unmounted the nfsv4 client
and tried v3, the jittering stopped. Then I unmounted v3 and tried v4
again, no jitters. I played with a couple of combinations back and forth
(toggling the presence of "nfsv4" in the options) and sometimes I saw
jittering but only with v4, but nothing like what I was seeing before.
Perhaps this is a result of Jeremy's TCP tuning tweaks.
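For anyone following along, the toggling described above amounts to switching the "nfsv4" mount option on and off in /etc/fstab. A hypothetical sketch (server name and paths are placeholders, not my actual config):

```
# NFSv3: one fstab entry per exported filesystem on the server
server:/export/media   /mnt/media   nfs   rw,tcp        0   0
# NFSv4: a single entry at the server root; subdirectories that are
# separate mountpoints on the server are crossed automatically
server:/               /mnt         nfs   rw,nfsv4,tcp  0   0
```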
This is also a difficult thing to test, because the server and client have
so much memory that they cache the data blocks. So if I try my stutter test
on the same video a second time, I only notice stutters if I skip to parts
I haven't skipped to before. I can comment that it seemed like more of a
latency issue than a throughput issue to me, and the disks are never under
a high load, though it's hard to measure load accurately while the disks
are seeking. Oh, I'm using the AHCI controller mode/driver on those disks
instead of ATA, if that matters.
One time when I mounted the v4 again, it broke subdirectories like I was
talking about before. Essentially it would give me a readout of all the
top-level directories but wouldn't descend into subdirectories which
reflect different mountpoints on the server. An unmount and a remount
(without changes to /etc/fstab) fixed the problem. I'm wondering if there
isn't some race condition affecting the crossing of mountpoints on the
server. When the situation happens, it affects all mountpoints equally
and persists for the duration of that mount. And of course, I can't
reproduce the problem when I try.
I saw the broken mountpoint crossing on another client (without any TCP
tuning) but each time it happened I saw this in the logs:
nfscl: consider increasing kern.ipc.maxsockbuf
Once I doubled that value, the problem went away, at least with this
particular v4 server mountpoint.
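For reference, that sysctl can be inspected and raised like this on FreeBSD (a sketch; the 4 MB figure below is just an illustrative doubling of a 2 MB setting, not a recommendation):

```shell
# Show the current socket-buffer ceiling, in bytes
sysctl kern.ipc.maxsockbuf
# Double it on the running system (example value only)
sysctl kern.ipc.maxsockbuf=4194304
# Make the change persist across reboots
echo 'kern.ipc.maxsockbuf=4194304' >> /etc/sysctl.conf
```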
At the moment, things are behaving as expected. The v4 file system seems
just as fast as v3 did, and I don't need a dozen mountpoints specified
on each client thanks to v4. Once again, I thank you, Rick, for all your
help.
-- Rick C. Petty
More information about the freebsd-stable mailing list