Replacing disks in a ZFS pool

Steve Bertrand steve at
Fri Jan 8 16:48:58 UTC 2010

krad wrote:

>>> the idea of using this type of label instead of the disk names
>>> themselves.
>> I personally haven't run into any bad problems using the full device, but
>> I suppose it could be a problem. (Side note - geom should learn how to
>> parse zfs labels so it could create something like /dev/zfs/<uuid> for
>> device nodes instead of using other trickery)
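If I follow, the appeal is that a partition created with a GPT label gets a
stable node under /dev/gpt/, so the pool doesn't care which port the disk
lands on. Roughly (label and device names below are placeholders):

  # a partition added with 'gpart add ... -l disk0' shows up as:
  ls /dev/gpt/disk0
  # and the pool can be built from those stable names:
  zpool create tank raidz gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3
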
>>> How should I proceed? I'm assuming something like this:
>>> - add the new 1.5TB drives into the existing, running system
>>> - GPT label them
>>> - use 'zpool replace' to replace one drive at a time, allowing the pool
>>> to rebuild after each drive is replaced
>>> - once all four drives are complete, shut down the system, remove the
>>> four original drives, and connect the four new ones where the old ones
>>> were
>> If you have enough ports to bring all eight drives online at once, I would
>> recommend using 'zfs send' rather than the replacement. That way you'll
>> get something like a "burn-in" on your new drives, and I believe it will
>> probably be faster than the replacement process. Even on an active system,
>> you can use a couple of incremental snapshots and reduce the downtime to a
>> bare minimum.
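If I understand the send/receive suggestion, it would go something like this
(pool and snapshot names here are made up, assuming the old pool is 'tank'
and the new drives form a second pool 'tank2'):

  # first full pass while the box stays up
  zfs snapshot -r tank@move1
  zfs send -R tank@move1 | zfs receive -Fdu tank2
  # quiesce writes, then send only what changed since the first snapshot
  zfs snapshot -r tank@move2
  zfs send -R -i tank@move1 tank@move2 | zfs receive -Fdu tank2
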
> Surely it would be better to attach the drives either individually or as a
> matching vdev (assuming they can all run at once), then break the mirror
> after it's resilvered. Far less work and far less likely to miss something.
> What I have done with my system is label the drives up with a coloured
> sticker then create a glabel for the device. I then add the glabels to the
> zpool. Makes it very easy to identify the drives.
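If I read that right, per disk it would look something like the following
(device and label names are placeholders, and this assumes a mirror-style
attach is possible for the pool layout):

  # colour-coded sticker on the disk, matching geom label on the device
  glabel label disk-red /dev/ada4
  # attach the labelled disk as a mirror of an existing member
  zpool attach tank ada0 label/disk-red
  # after the resilver completes, break the mirror
  zpool detach tank ada0
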

Ok. Unfortunately, the box only has four SATA ports.

Can I:

- shut down
- replace a single existing drive with a new one (leaving the pool degraded)
- boot back up
- GPT-label the new disk
- 'zpool replace' the old device with the new GPT-labelled disk
- let the pool resilver
- rinse, repeat three more times (rough sketch below)
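
Per drive, I'm picturing something like this (the pool name 'tank', device
'ada1', and label 'disk1' are all placeholders):

  # partition and label the replacement disk
  gpart create -s gpt ada1
  gpart add -t freebsd-zfs -l disk1 ada1
  # point the pool at the labelled partition and resilver
  zpool replace tank ada1 gpt/disk1
  # wait for the resilver to finish before swapping the next drive
  zpool status tank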

If so, is there anything I should do prior to the initial drive
replacement, or will simulating the drive failure be ok?
