ZFS and large directories - caveat report

Freddie Cash fjwcash at gmail.com
Thu Jul 21 18:32:24 UTC 2011


On Thu, Jul 21, 2011 at 11:25 AM, Martin Matuska <mm at freebsd.org> wrote:

> Quoting:
> ... The default record size ZFS utilizes is 128K, which is good for many
> storage servers that will harbor larger files. However, when dealing
> with many files that are only a matter of tens of kilobytes, or even
> bytes, considerable slowdown will result. ZFS can easily alter the
> record size of the data to be written through the use of attributes.
> These attributes can be set at any time through the use of the "zfs set"
> command. To set the record size attribute perform "zfs set
> recordsize=32K pool/share". This will set the recordsize to 32K on share
> "share" within pool "pool". This type of functionality can even be
> implemented on nested shares for even more flexibility. ...
>
>
The recordsize property in ZFS is the "max" block size used.  It is not the
only block size used for a dataset.  ZFS will use any block size from 0.5 KB
up to $recordsize, as determined by the size of the file being written (it
tries to find the block size that most closely matches the file size, so as
to use the fewest blocks per write).
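As a rough illustration, here's a simplified model of that block-size choice in Python. This is a sketch of the behaviour described above, not the actual ZFS allocator (it ignores ashift, compression, and metadata):

```python
# Simplified model: a file smaller than recordsize is stored in a
# single block just large enough to hold it (rounded up to the
# 512-byte sector size); files at or above recordsize are split
# into recordsize-sized blocks.

SECTOR = 512  # 0.5 KB, the minimum block size in this model

def zfs_block_size(file_size, recordsize=128 * 1024):
    """Return the block size this model would pick for a file."""
    if file_size >= recordsize:
        return recordsize
    # Round up to a whole number of sectors.
    sectors = max(1, (file_size + SECTOR - 1) // SECTOR)
    return sectors * SECTOR

print(zfs_block_size(100))          # 512    -- tiny file, one sector
print(zfs_block_size(40 * 1024))    # 40960  -- 40 KB file, 40 KB block
print(zfs_block_size(1024 * 1024))  # 131072 -- large file, full recordsize
```

So with the default 128K recordsize, a 40 KB file costs one 40 KB block, not one 128 KB block -- small files are not penalized the way the quoted text implies.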

It's only on ZVols that the block size is fixed: the volblocksize property
sets a single block size, and all writes use it.
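For reference, a zvol's fixed block size is set at creation time via the volblocksize property; the pool/dataset names below are placeholders, and these commands need a live pool:

```shell
# Create a 10 GB zvol with a fixed 8 KB block size
# ("pool/vol" is an example name).
zfs create -V 10G -o volblocksize=8K pool/vol

# volblocksize cannot be changed after creation; check it with:
zfs get volblocksize pool/vol
```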

Have a look through "zdb -dd poolname" to see the spread of block sizes in
the pool.
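For example (pool name is a placeholder; zdb's exact output format varies between versions):

```shell
# Dump object metadata for every dataset in the pool; the lsize/psize
# columns show the logical and physical sizes actually in use, which
# reveals the spread of block sizes rather than a single fixed size.
zdb -dd tank
```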

-- 
Freddie Cash
fjwcash at gmail.com


More information about the freebsd-fs mailing list