ZFS: How to enable cache and logs.

Rick Macklem rmacklem at uoguelph.ca
Fri May 13 00:03:40 UTC 2011


> On Thu, 12 May 2011, Rick Macklem wrote:
> >> The large write feature of the ZIL is a reason why we should
> >> appreciate modern NFS's large-write capability and avoid ancient
> >> NFS.
> >>
> > The size of a write for the new FreeBSD NFS server is limited to
> > MAXBSIZE, which is currently 64K, but I would like to see it much
> > larger. I am going to try increasing MAXBSIZE soon to see what
> > happens.
> 
> ZFS would certainly appreciate 128K, since that is its default block
> size. When existing file content is overwritten, writing in properly
> aligned 128K blocks is much faster because ZFS's copy-on-write (COW)
> design can then replace whole blocks without reading the existing
> ones. With a partial "overwrite", if the existing block is not
> already cached in the ARC, it must be read from the underlying
> storage before the replacement block can be written. This effect is
> readily apparent in benchmarks. In my own benchmarking I have found
> that 128K is sufficient and that larger multiples of 128K do not
> gain much more performance.
> 
> When creating a file from scratch, ZFS performs well for async
> writes even if a process writes in chunks smaller than 128K. That
> might not be the case for sync writes.
> 
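To make the aligned-overwrite point above concrete, here is a minimal
sketch (assuming FreeBSD headers and a file that already exists on a
ZFS dataset with the default 128K recordsize; the 8-block count and
the file argument are just placeholders). It prints MAXBSIZE and then
overwrites the file in recordsize-aligned 128K chunks, the case where
COW can replace whole blocks without reading the old data first:

#include <sys/param.h>	/* MAXBSIZE */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define RECORDSIZE	(128 * 1024)	/* ZFS default recordsize */

int
main(int argc, char **argv)
{
	static char buf[RECORDSIZE];
	off_t off;
	int fd, i;

	printf("MAXBSIZE = %d\n", MAXBSIZE);

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file-on-zfs>\n", argv[0]);
		return (1);
	}
	if ((fd = open(argv[1], O_WRONLY)) == -1) {
		perror("open");
		return (1);
	}
	memset(buf, 'x', sizeof(buf));

	/*
	 * Each pwrite() covers exactly one 128K record at an aligned
	 * offset, so COW can allocate a fresh block.  A write that
	 * only partially covers a record forces a read-modify-write
	 * when the old block is not already in the ARC.
	 */
	for (i = 0; i < 8; i++) {
		off = (off_t)i * RECORDSIZE;
		if (pwrite(fd, buf, sizeof(buf), off) !=
		    (ssize_t)sizeof(buf)) {
			perror("pwrite");
			close(fd);
			return (1);
		}
	}
	close(fd);
	return (0);
}

Shifting the offsets by 64K, so that each write straddles two records,
is enough to make the read-modify-write penalty show up on uncached
data.
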
Yep, I think sizes greater than 128K might only benefit WAN
connections with a large bandwidth * delay product.
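
Some back-of-the-envelope arithmetic illustrates why (the 100Mbps
bandwidth and 50ms RTT below are just assumed figures for
illustration):

#include <stdio.h>

int
main(void)
{
	double mbps = 100.0;	/* link bandwidth in Mbit/s (assumed) */
	double rtt = 0.050;	/* round-trip time in seconds (assumed) */
	double bdp = mbps * 1e6 / 8.0 * rtt;	/* bytes "in flight" */

	printf("BDP = %.0f bytes (~%.0f Kbytes)\n", bdp, bdp / 1024.0);
	printf("one 128K request covers %.0f%% of the pipe\n",
	    100.0 * (128.0 * 1024.0) / bdp);
	return (0);
}

With those numbers the pipe holds about 610Kbytes, so a single
outstanding 128K request covers only about a fifth of it, whereas on
a low-latency LAN 128K already fills the pipe.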

Using large sizes also helps to find "not so great" network
interfaces/drivers. When I used 128K on the Mac OS X port, it worked
great for some Macs and horribly for others. Some Macs would drop
packets when they saw a burst of read traffic (the Mac was the client
and the server was Solaris 10, which handles NFS read/write sizes up
to 1Mbyte) and wouldn't perform well above 32Kbytes (this was for a
now rather old port to Leopard).

rick

