ZFS performance of various vdevs (long post)

Bob Friesenhahn bfriesen at simple.dallas.tx.us
Mon Jun 7 23:19:18 UTC 2010


On Mon, 7 Jun 2010, Bradley W. Dutton wrote:
> So the normal vdev performs closest to raw drive speeds. Raidz1 is slower and 
> raidz2 even more so. This is observable in the dd tests and when viewing 
> gstat. Any ideas why the raid numbers are slower? I've tried to account for 
> the fact that the raid vdevs have fewer data disks. Would a faster CPU help 
> here?

The sequential throughput of your new drives is faster than that of 
the old drives, but it is likely that their seek and rotational 
latencies are longer.  ZFS is transaction-oriented and must tell all 
of the drives to sync their write caches before proceeding to the 
next transaction group, so drives with more latency slow down this 
step.  Likewise, ZFS always reads and writes full filesystem blocks 
(128K by default), and with raidz each block is split across all of 
the data disks in the vdev, so every disk participates in every read 
and write; this causes more overhead than a plain vdev.
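
If you want to see how much of this is the 128K block size 
interacting with the raidz stripe width, you could experiment with 
the recordsize property on a test filesystem.  A quick sketch (the 
pool and filesystem names are just placeholders):

   zfs get recordsize tank/test
   zfs set recordsize=64K tank/test

Keep in mind that recordsize only applies to files written after the 
change, so rewrite your test files after setting it.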

Using 'dd' from /dev/zero is not a very good benchmark since zfs 
could potentially compress zero-filled blocks down to just a few 
bytes (I think recent versions of zfs do this) and of course Unix 
supports files with holes, so what you measure may not reflect real 
disk I/O.
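
To rule that out, you could write from a file of incompressible 
(random) data instead of straight from /dev/zero, and confirm that 
compression is off on the filesystem being tested.  Something along 
these lines (paths, sizes, and dataset names are only illustrative):

   zfs get compression tank/test
   dd if=/dev/random of=/var/tmp/random.dat bs=1m count=1024
   dd if=/var/tmp/random.dat of=/tank/test/outfile bs=1m

Staging the random file somewhere outside of the pool under test 
keeps the source reads from competing with the writes you are 
measuring.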

The higher CPU usage might be due to the device driver or the 
interface card being used.

If you can afford to do so, you will likely see considerably better 
performance by using mirrors instead of raidz, since each 128K block 
is then written whole to a single mirror vdev rather than being 
split across the raidz data disks, which means fewer seeks per I/O.
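
For example, instead of one wide raidz vdev, a layout along these 
lines gives you striped mirrors (the device names here are made up):

   zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

Random-read performance then scales with the number of mirror vdevs, 
since each block only involves one pair of disks.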

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

