Replacing disks in a ZFS pool

krad kraduk at googlemail.com
Fri Jan 8 11:10:30 UTC 2010


> > Also, I've been loosely following some of the GPT threads, and I like
> > the idea of using this type of label instead of the disk names
> > themselves.
>
> I personally haven't run into any bad problems using the full device, but
> I suppose it could be a problem. (Side note - geom should learn how to
> parse zfs labels so it could create something like /dev/zfs/<uuid> for
> device nodes instead of using other trickery)
>
> > How should I proceed? I'm assuming something like this:
> >
> > - add the new 1.5TB drives into the existing, running system
> > - GPT label them
> > - use 'zpool replace' to replace one drive at a time, allowing the pool
> > to rebuild after each drive is replaced
> > - once all four drives are complete, shut down the system, remove the
> > four original drives, and connect the four new ones where the old ones
> > were
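
For the record, that per-drive cycle looks something like this with
gpart (the device names, the "newdisk0" label, and the pool name "tank"
are only examples; repeat for each drive, letting the resilver finish
in between):

    gpart create -s gpt ad8
    gpart add -t freebsd-zfs -l newdisk0 ad8   # appears as /dev/gpt/newdisk0
    zpool replace tank ad0 gpt/newdisk0
    zpool status tank                          # watch the resilver progress
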
>
> If you have enough ports to bring all eight drives online at once, I would
> recommend using 'zfs send' rather than the replacement. That way you'll
> get something like a "burn-in" on your new drives, and I believe it will
> probably be faster than the replacement process. Even on an active system,
> you can use a couple of incremental snapshots and reduce the downtime to a
> bare minimum.
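
For completeness, that send/recv route with an incremental catch-up
boils down to roughly the following, assuming a ZFS version new enough
for 'zfs send -R' ("tank" and "newtank" are placeholder pool names):

    zfs snapshot -r tank@migrate1
    zfs send -R tank@migrate1 | zfs recv -F newtank
    # quiesce writers, then ship the (small) delta
    zfs snapshot -r tank@migrate2
    zfs send -R -i tank@migrate1 tank@migrate2 | zfs recv -F newtank
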
Surely it would be better to attach the drives, either individually or
as a matching vdev (assuming they can all run at once), then break the
mirror after it has resilvered. Far less work, and far less likely to
miss something.
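
Assuming the pool is made of plain or mirrored vdevs ('zpool attach'
does not work on raidz), the cycle per disk is just the following; ad0
and ad8 stand in for an old and a new drive:

    zpool attach tank ad0 ad8    # new drive resilvers as a mirror of the old
    zpool status tank            # wait until the resilver completes
    zpool detach tank ad0        # then drop the old half of the mirror
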

What I have done with my system is label each drive with a coloured
sticker and then create a matching glabel for the device. I then add
the glabels to the zpool, which makes it very easy to identify the
drives.
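
Roughly like this; "red" matches the sticker, and ad8 is just an
example device. Note that glabel writes its metadata to the disk's last
sector, so label the drive before handing it to ZFS:

    glabel label red /dev/ad8         # device then appears as /dev/label/red
    zpool replace tank ad0 label/red  # pool refers to the label, not ad8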

