ZFS, NFS and Network tuning

Bruce Evans brde at optusnet.com.au
Thu Jan 29 12:19:31 PST 2009


On Thu, 29 Jan 2009, Brent Jones wrote:

> On Wed, Jan 28, 2009 at 11:21 PM, Brent Jones <brent at servuhome.net> wrote:

>> ...
>> The issue I am seeing, is that for certain file types, the FreeBSD NFS
>> client will either issue an ASYNC write, or an FSYNC.
>> However, NFSv3 and v4 both support "safe" ASYNC writes in the TCP
>> versions of the protocol, so that should be the default.
>> Issuing FSYNC's for every complete block transmitted adds substantial
>> overhead and slows everything down.

I use some patches by Bjorn Gronwall (mainly for nfs write clustering
on the server) and some local fixes (mainly for vfs write clustering
on the server, turning off excessive nfs[io]d daemons which get in
each other's way due to poor scheduling, and things that only help for
lots of small files), and see reasonable performance in all cases:
~90% of disk bandwidth with all-async mounts, and half that with the
client mounted noasync on an old version of FreeBSD (the client in
-current is faster).  Writing is actually faster than reading here.

>> ...
>> My NFS mount command lines I have tried to get all data to ASYNC write:
>>
>> $ mount_nfs -3T -o async 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/
>> $ mount_nfs -3T 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/
>> $ mount_nfs -4TL 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/

Also try -r16384 -w16384, and udp, and async on the server.  I think
block sizes default to 8K for udp and 32K for tcp.  8K is too small,
and 32K may be too large (it increases latency for little benefit
if the server fs block size is 16K).  udp gives lower latency.  async
on the server makes little difference provided the server block size
is not too small.
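
A concrete example (untested here; it just reuses the paths from your
command lines above) for a udp mount with 16K block sizes would be
something like:

$ mount_nfs -3 -r16384 -w16384 -o async 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/

Omitting -T should give a udp mount (the old default), and -r/-w set
the read and write data sizes used for each rpc.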

> I have found a 4 year old bug, which may be related to this. cp uses
> mmap for small files (and I imagine lots of things use mmap for file
> operations) and causes slowdowns via NFS, due to the fsync data
> provided above.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/87792

mmap apparently breaks the async mount preference in the following code
from vnode_pager.c:

% 	/*
% 	 * pageouts are already clustered, use IO_ASYNC to force a bawrite()
% 	 * rather then a bdwrite() to prevent paging I/O from saturating 
% 	 * the buffer cache.  Dummy-up the sequential heuristic to cause
% 	 * large ranges to cluster.  If neither IO_SYNC or IO_ASYNC is set,
% 	 * the system decides how to cluster.
% 	 */
% 	ioflags = IO_VMIO;
% 	if (flags & (VM_PAGER_PUT_SYNC | VM_PAGER_PUT_INVAL))
% 		ioflags |= IO_SYNC;

This apparently gives lots of sync writes.  (Sync writes are the default for
nfs, but we mount with async to try to get async writes.)
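
(For reference, the "safe" async writes mentioned above are NFSv3
UNSTABLE writes followed by a COMMIT; a rough sketch of the protocol's
stable_how values from RFC 1813:)

/*
 * NFSv3 WRITE stable_how values (RFC 1813).  The client asks for one of
 * these in each WRITE rpc; an UNSTABLE write must later be flushed with
 * a COMMIT rpc before the client may throw away its copy of the data.
 */
enum stable_how {
	UNSTABLE  = 0,	/* server may reply before data hits stable storage */
	DATA_SYNC = 1,	/* data on stable storage, metadata perhaps not */
	FILE_SYNC = 2	/* data and metadata both committed */
};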

% 	else if ((flags & VM_PAGER_CLUSTER_OK) == 0)
% 		ioflags |= IO_ASYNC;

nfs doesn't even support this flag.  In fact, ffs is the only file
system that supports it, and here is the only place that sets it.  This
might explain some slowness.
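
For comparison, a file system that does honour the flag ends its write
path with a decision roughly like the following (a simplified sketch of
the ffs-style logic, not the actual code):

	if (ioflag & IO_SYNC)
		(void)bwrite(bp);	/* synchronous: start the i/o and wait */
	else if (ioflag & IO_ASYNC)
		bawrite(bp);		/* asynchronous: start the i/o, don't wait */
	else
		bdwrite(bp);		/* delayed: just mark the buffer dirty and
					   let the syncer or clustering push it */

nfs (like everything except ffs) simply ignores the middle case, so
pageouts that the pager wanted pushed out immediately get whatever
handling nfs picks by default.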

One of the bugs in vfs clustering that I don't have is related to this.
IIRC, mounting the server with -o async doesn't work as well as it
should because the buffer cache becomes congested with i/o that should
have been sent to the disk.  Some writes must be done async as explained
above, but one place in vfs_cache.c is too aggressive in delaying async
writes for file systems that are mounted async.  This problem is more
noticeable for nfs, at least with networks not much faster than disks,
since it results in the client and server taking turns waiting for
each other.  (The names here are very confusing -- the async mount
flag normally delays both sync and async writes for as long as possible,
except that for nfs it doesn't affect delays but asks for async writes
instead of sync writes on the server, while the IO_ASYNC flag asks for
async writes and thus often has the opposite sense to the async mount
flag.)

% 	ioflags |= (flags & VM_PAGER_PUT_INVAL) ? IO_INVAL: 0;
% 	ioflags |= IO_SEQMAX << IO_SEQSHIFT;

Bruce

