What is the PREEMPTION option good for?
dillon at apollo.backplane.com
Fri Dec 1 20:54:27 PST 2006
:the client. The difference is entirely due to dead time somewhere in
:nfs. Unfortunately, turning on PREEMPTION and IPI_PREEMPTION didn't
:recover all the lost performance. This is despite the ~current kernel
:having slightly lower latency for flood pings and similar optimizations
:for nfs that reduce the RPC count by a factor of 4 and the ping latency
:by a factor of 2.
The single biggest NFS client performance issue I have encountered
in an environment where most of the data can be cached from earlier
runs is with negative name lookups. Due to the large number of -I
options used in builds, the include search path is fairly long and
this usually results in a large number of negative lookups, all of
which introduce synchronous dead times while the stat() or open()
waits for the over-the-wire transaction to complete.
The #1 solution is to cache negative namecache hits for NFS clients.
You don't have to cache them for long... just 3 seconds is usually
enough to remove most of the dead time. Also make sure your access
cache timeout is something reasonable.
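The idea can be sketched in a few lines. This is a toy user-space model,
not the kernel namecache: a fixed-size hash table where a failed lookup is
remembered for a few seconds, so repeated misses on the same name can be
answered locally with ENOENT instead of going over the wire. The names,
sizes, and 3 second TTL are illustrative.

```c
#include <string.h>
#include <time.h>

#define NEG_TTL   3         /* seconds; per the text, 3s removes most dead time */
#define NEG_SLOTS 64

struct neg_entry {
    char   name[64];
    time_t expires;         /* 0 = slot unused */
};

static struct neg_entry neg_cache[NEG_SLOTS];

static unsigned neg_hash(const char *name)
{
    unsigned h = 5381;
    while (*name)
        h = h * 33 + (unsigned char)*name++;
    return h % NEG_SLOTS;
}

/* Record that a lookup for 'name' failed at time 'now'. */
static void neg_enter(const char *name, time_t now)
{
    struct neg_entry *e = &neg_cache[neg_hash(name)];
    strncpy(e->name, name, sizeof(e->name) - 1);
    e->name[sizeof(e->name) - 1] = '\0';
    e->expires = now + NEG_TTL;
}

/* Return 1 if ENOENT may be answered locally, 0 if we must hit the wire. */
static int neg_hit(const char *name, time_t now)
{
    struct neg_entry *e = &neg_cache[neg_hash(name)];
    return e->expires > now && strcmp(e->name, name) == 0;
}
```

The short TTL is the point: it bounds how stale a cached miss can be, so a
file created on the server becomes visible within a few seconds, while the
burst of identical misses a build generates never leaves the client.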
It is possible to reduce the number of over-the-wire transactions to
zero but it requires seriously nerfing the access and negative cache
timeouts. It isn't usually worth doing.
Here are some test results:
make buildkernel, /usr/src mounted via NFS, 10 second access cache
timeout, multiple runs to pre-cache data and tcpdump used to verify
that only access RPCs were being sent over the wire for all tests.
No negative cache - 440 seconds real
3 second neg cache timeout - 411 seconds real
10 second neg cache timeout - 410 seconds real (6% improvement)
30 second neg cache timeout - 409 seconds real
<dillon at backplane.com>