Replacing disks in a ZFS pool

Steve Bertrand steve at
Fri Jan 8 18:04:48 UTC 2010

Steve Bertrand wrote:
> krad wrote:
>>>> the idea of using this type of label instead of the disk names
>>> themselves.
>>> I personally haven't run into any bad problems using the full device, but
>>> I suppose it could be a problem. (Side note - geom should learn how to
>>> parse zfs labels so it could create something like /dev/zfs/<uuid> for
>>> device nodes instead of using other trickery)
>>>> How should I proceed? I'm assuming something like this:
>>>> - add the new 1.5TB drives into the existing, running system
>>>> - GPT label them
>>>> - use 'zpool replace' to replace one drive at a time, allowing the pool
>>>> to rebuild after each drive is replaced
>>>> - once all four drives are complete, shut down the system, remove the
>>>> four original drives, and connect the four new ones where the old ones
>>> were
>>> If you have enough ports to bring all eight drives online at once, I would
>>> recommend using 'zfs send' rather than the replacement. That way you'll
>>> get something like a "burn-in" on your new drives, and I believe it will
>>> probably be faster than the replacement process. Even on an active system,
>>> you can use a couple of incremental snapshots and reduce the downtime to a
>>> bare minimum.
>> Surely it would be better to attach the drives either individually or as a
>> matching vdev (assuming they can all run at once), then break the mirror
>> after it's resilvered. Far less work and far less likely to miss something.
>> What I have done with my system is label the drives up with a coloured
>> sticker then create a glabel for the device. I then add the glabels to the
>> zpool. Makes it very easy to identify the drives.
> Ok. Unfortunately, the box only has four SATA ports.
> Can I:
> - shut down
> - replace a single existing drive with a new one (breaking the RAID)
> - boot back up
> - gpt label the new disk
> - import the new gpt labelled disk
> - rebuild array
> - rinse, repeat three more times

This seems to work ok:

# zpool offline storage ad6
  (halt, physically replace the disk, then boot the machine)
# zpool online storage ad6
# zpool replace storage ad6
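Once the replace is issued, the resilver it kicks off can be watched with zpool status, and the pool stays usable while it runs. A quick sketch (the pool name `storage` is just the one from the commands above):

```shell
# Watch resilver progress; "zpool status" shows the scan/resilver state
# and an estimated completion time while the rebuild is in flight.
zpool status storage

# Once the resilver finishes, a scrub is a reasonable way to verify the
# newly written data on the replacement disk:
zpool scrub storage
```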

I don't know enough about gpt/gpart to be able to work that into the
mix. I would much prefer to have gpt labels as opposed to disk names,
but alas.
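For the record, here is how I would expect the gpart labels to slot into the same replace cycle, assuming the gpart invocation actually succeeds on the box in question. The label `disk1` is an arbitrary placeholder; untested sketch:

```shell
# Take the old disk out of the pool, then halt and swap in the new drive.
zpool offline storage ad6

# After booting with the new disk in place, put a fresh GPT on it and add
# a single freebsd-zfs partition carrying a label:
gpart create -s gpt ad6
gpart add -t freebsd-zfs -l disk1 ad6

# Replace the old vdev with the labelled partition rather than the raw
# device, so the pool references /dev/gpt/disk1 from here on:
zpool replace storage ad6 gpt/disk1
```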

fwiw, can I label an entire disk (such as ad6) with gpt, without having
to install boot blocks etc?

I was hoping it would be as easy as:

# gpt create -f ad6
# gpt label -l disk1 ad6

...but it doesn't work.

Neither does:

# gpart create -s gpt ad6
# gpart add -t freebsd-zfs -l disk1 ad6

I'd like to do this so I don't have to manually specify a size to use. I
just want the system to Do The Right Thing, which in this case, would be
to just use the entire disk.
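As I read gpart(8), the two commands above should already do the whole-disk thing: when no -s is given, gpart add takes all remaining free space. One guess as to why the create fails is a stale partition table left on the disk; a sketch of wiping and retrying, then verifying (label name `disk1` is arbitrary):

```shell
# If a leftover MBR/GPT is blocking "gpart create", force-destroy it first.
# WARNING: this wipes the partition table on ad6.
gpart destroy -F ad6

gpart create -s gpt ad6
gpart add -t freebsd-zfs -l disk1 ad6

# Verify: the partition should span the disk, and the label should appear.
gpart show ad6
gpart show -l ad6     # show partition labels instead of types
ls /dev/gpt/          # the label should turn up here as /dev/gpt/disk1
```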


> If so, is there anything I should do prior to the initial drive
> replacement, or will simulating the drive failure be ok?
> Steve

More information about the freebsd-questions mailing list