Frustration: replace not doing what I expected.
Baldur Gislason
baldur at foo.is
Sun Mar 21 16:10:53 UTC 2010
I got it working. What I had to do was delete a file that the
resilver process reported as corrupted, then run a scrub again;
after that the pool status was upgraded to healthy.
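
For reference, a rough sketch of that sequence (pool name taken from the
status output quoted below; the file path is only a placeholder for
whatever zpool status -v lists under the permanent errors):

    # list the file(s) the resilver/scrub flagged as corrupted
    zpool status -v zirconium
    # delete (or restore from backup) the affected file -- path is illustrative
    rm /zirconium/path/to/corrupted-file
    # scrub again and re-check the pool state
    zpool scrub zirconium
    zpool status zirconium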
Baldur
On Sun, Mar 21, 2010 at 10:34:44AM -0500, Wes Morgan wrote:
> On Wed, 17 Mar 2010, Baldur Gislason wrote:
>
> > A drive failed in a pool and I had to replace it.
> > I did zpool replace ad18 ad18; the pool resilvered for 5 hours
> > and finished, but did not return from degraded mode.
> > I tried removing the cache file and reimporting the pool, but nothing
> > changed; it hasn't gotten rid of the old drive, which does not exist anymore.
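
Roughly, the sequence described above would look like this (pool name from
the status output below; the FreeBSD cache file location
/boot/zfs/zpool.cache is an assumption):

    # replace the failed disk with the new one at the same device node
    zpool replace zirconium ad18 ad18
    # remove the cache file and reimport the pool
    zpool export zirconium
    rm /boot/zfs/zpool.cache
    zpool import zirconium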
>
> Hmmm. I've successfully replaced a drive that way before, and I'm sure
> many other people have. Did you offline ad18 before doing both the
> physical drive replacement and the zpool replace? I can't recall if that
> is necessary or not. Can you send the relevant output from zpool history?
>
> The "old" device is part of the metadata on the drive labels, so there is
> no way to remove it like you're wanting without either zfs deciding to
> remove it or rewriting the labels by hand.
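
Two ways to dig into that, roughly (pool and device names taken from this
thread; output details vary by ZFS version):

    # show the commands that have been run against the pool
    zpool history zirconium
    # dump the on-disk vdev labels for ad18 to see the replacing entry
    zdb -l /dev/ad18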
>
>
> >   pool: zirconium
> >  state: DEGRADED
> > status: One or more devices has experienced an error resulting in data
> >         corruption. Applications may be affected.
> > action: Restore the file in question if possible. Otherwise restore the
> >         entire pool from backup.
> >    see: http://www.sun.com/msg/ZFS-8000-8A
> >  scrub: none requested
> > config:
> >
> >         NAME                         STATE     READ WRITE CKSUM
> >         zirconium                    DEGRADED     0     0     0
> >           raidz1                     DEGRADED     0     0     0
> >             ad4                      ONLINE       0     0     0
> >             ad6                      ONLINE       0     0     0
> >             replacing                DEGRADED     0     0     0
> >               2614810928866691230    UNAVAIL      0   962     0  was /dev/ad18/old
> >               ad18                   ONLINE       0     0     0
> >             ad20                     ONLINE       0     0     0