ZFS resize disk vdev

James R. Van Artsdalen james-freebsd-fs2 at jrv.org
Tue Dec 9 00:53:53 PST 2008

Bryan Alves wrote:
> I'm thinking about using a hardware raid array with ZFS, using a single disk
> vdev zpool.  I want the ability to add/remove disks to an array, and I'm
> still unsure of the stability of zfs as a whole.  I'm looking for an easy
> way to resize and manage disks that are greater than 2 terabytes.
> If I have a single block device, /dev/da0, on my system that is represented
> by a zfs disk vdev, and the size of this block device grows (because the
> underlying hardware raid expands), will zfs correctly expand?  And will it
> correctly expand in place?
I see no benefit to using hardware RAID for a vdev.  If there is any
concern over ZFS stability then you're running a filesystem you suspect
on top of, at best, a really reliable disk: not a step forward!  I think
best practice is to configure the disk controller to present the disks
as JBOD and let ZFS handle redundancy: avoid fancy hardware RAID
controllers altogether and use the fastest JBOD controller configuration
available.
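With the controller in JBOD mode the disks show up individually and ZFS
supplies the redundancy itself.  A minimal sketch (da0 through da3 are
hypothetical JBOD-attached drives and "tank" is a made-up pool name):

```shell
# Build a raidz vdev directly from the individual disks, so ZFS
# owns the redundancy instead of a RAID controller.
zpool create tank raidz da0 da1 da2 da3

# Confirm the pool sees four separate disks, not one RAID volume.
zpool status tank
```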

Using a hardware RAID seems likely to hurt performance since the
hardware RAID must issue extra reads for partial parity-stripe updates:
ZFS never does in-place disk writes and rarely if ever does partial
parity-stripe updates.  Block allocation will suffer since the
filesystem allocator can't know the geometry of the underlying storage
array when laying out a file.  Parity rebuilds ("resilvering") can be
much faster in ZFS since only things that are different need to be
recomputed when a disk is reattached to a redundant vdev (and if a disk
is replaced free space need not have parity computed).  And hardware
RAID just adds another layer of processing to slow things down.

I'm not sure how ZFS reacts to an existing disk drive suddenly becoming
larger.  Real disk drives don't do that, and ZFS is intended to use real
disks.  ZFS keeps copies of its labels, which hold the uberblocks (pool
superblocks), at the end of the disk as well as at the front, and ZFS
probably won't be able to find the end-of-disk copies if the
front-of-disk copies are clobbered and the "end of the disk" has moved
away from where the remaining copies sit.
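You can see where the labels sit with zdb.  A sketch, assuming the
hypothetical device /dev/da0 backs a pool member:

```shell
# Dump the ZFS labels on the device.  Labels 0 and 1 live at the
# front of the device and labels 2 and 3 at the end, so if the
# device grows, the last two are no longer where ZFS expects them.
zdb -l /dev/da0
```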

You can replace all of the members of a redundant vdev one-by-one with
larger disks and increase the storage capacity of that vdev and hence
the pool.
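Roughly, the disk-by-disk replacement looks like this; a sketch assuming
a pool named "tank" and hypothetical device names, where you wait for
each resilver to finish before touching the next disk:

```shell
# Replace each member of the redundant vdev with a larger disk,
# one at a time, letting each resilver complete in between.
zpool replace tank da0 da4
zpool status tank        # wait here until the resilver finishes
# ...repeat for the remaining members of the vdev...

# After the last replacement, re-import the pool so the vdev grows
# to the new size (newer ZFS versions can instead use the
# autoexpand pool property or "zpool online -e").
zpool export tank
zpool import tank
```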

I routinely run zpools of 4TB and 5TB, which isn't even warming up for
some people.  Sun has had customers with ZFS pools in the petabytes. 
"disks that are greater than 2 terabytes" are pocket change.

More information about the freebsd-fs mailing list