zfs drive replacement issues

Wes Morgan morganw at chemikals.org
Wed May 19 05:06:07 UTC 2010


On Mon, 17 May 2010, Todd Wasson wrote:

> > Hello,
> > You could try exporting and importing the pool with three disks.
> > Then make sure the "new" drive isn't part of any zpool (low-level format?).
> > Then try a "replace" again.
> > Have fun!
> >
>
>
> Hi Mark, I was about to try this, but I just tried putting the "old"
> (damaged) drive back in the pool and detaching the "new" drive from the
> pool, which I've tried before, but for some reason this time it
> succeeded.  I was then able to "zpool offline" the old drive, physically
> replace it with the new one, and "zpool replace" the old one with the
> new one.  It just finished successfully resilvering, and apparently
> everything is working well.  I'm going to initiate a scrub to be sure
> that everything is alright, but I'm fairly sure that the problem is
> solved.  I didn't do anything that I hadn't already tried, so I don't
> know why it worked this time, but I'm not complaining.  Thanks to
> everyone for your help; at the very least, the idea of putting the
> original drive back into the machine and mucking around with it led me
> in the right direction.  Next time I'll be sure to issue an offline
> command before replacing a device!

I'm not certain that you really always want to do that. When you offline a
device in a redundant pool you lose that redundancy. If the drive is
completely dead, offlining it is obviously the right thing to do, but
otherwise perhaps not. Were you to have another failure during the
rebuild, or an error on another device, you wouldn't be able to recover
that data because of the missing device. This is the same reason why
offlining and replacing each device in a raidz1 to "grow" it isn't as
safe as you might think -- any error could lead to data loss.
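A rough sketch of the sequence that keeps the old disk contributing
redundancy during the resilver (the pool name "tank" and device names
da2/da4 here are just placeholders for your own):

```shell
# Connect the new disk while the old one is still attached to the pool,
# so redundancy is preserved throughout the resilver.
zpool status tank                # confirm the pool's current state
zpool replace tank da2 da4      # copy da2's data onto da4; da2 stays in service
zpool status tank                # watch resilver progress
# ZFS detaches the old disk automatically once the resilver completes.
zpool scrub tank                 # optionally verify the pool afterwards
```

Only offline the old disk first if it is truly dead or is actively
corrupting reads, since an offlined disk can no longer help reconstruct
data if something else fails mid-resilver.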

Just food for thought.


More information about the freebsd-fs mailing list