Frustration: replace not doing what I expected.
Wes Morgan
morganw at chemikals.org
Mon Mar 22 00:12:38 UTC 2010
On Sun, 21 Mar 2010, Baldur Gislason wrote:
> I got it working, what I had to do was to delete a file that the
> resilver process reported as being corrupted. Then run a scrub again
> and it would upgrade the pool status to healthy.
>
Good, I'm glad it worked out. Those errors can be extremely frustrating.
One thing I'm curious about, though: a single device failure on a
raidz1 shouldn't have resulted in corruption. Were you running degraded
for a long period of time? If it was a read failure, you might want to
check your other disks as well.
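The cleanup Baldur describes can be sketched roughly as follows. This is
an illustrative sequence, not a transcript from the thread: the pool name
zirconium is real, but the file path is a placeholder, and the smartctl
step assumes sysutils/smartmontools is installed.

```shell
# Rough sketch of the recovery steps described above.
zpool status -v zirconium      # -v lists the files affected by corruption
rm /pool/path/to/damaged-file  # delete (or restore from backup) each one;
                               # placeholder path, not from the thread
zpool scrub zirconium          # re-verify the whole pool
zpool status zirconium         # should report ONLINE once the scrub is clean

# Checking the remaining disks for read errors, as suggested above
# (assumes smartmontools; ad4 is one of the pool's disks):
smartctl -a /dev/ad4
```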
> Baldur
>
> On Sun, Mar 21, 2010 at 10:34:44AM -0500, Wes Morgan wrote:
> > On Wed, 17 Mar 2010, Baldur Gislason wrote:
> >
> > > A drive failed in a pool and I had to replace it.
> > > I did zpool replace ad18 ad18, the pool resilvered for 5 hours
> > > and finished but did not return from degraded mode.
> > > I tried removing the cache file and reimporting the pool, no change, it
> > > hasn't gotten rid of the old drive which does not exist anymore.
> >
> > Hmmm. I've successfully replaced a drive that way before, and I'm sure
> > many other people have. Did you offline ad18 before doing both the
> > physical drive replacement and the zpool replace? I can't recall if that
> > is necessary or not. Can you send the relevant output from zpool history?
> >
> > The "old" device is part of the metadata on the drive labels, so there is
> > no way to remove it like you're wanting without either zfs deciding to
> > remove it or rewriting the labels by hand.
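For reference, a typical offline-then-replace sequence looks something
like this. It is a sketch only: the pool and device names are the ones
from the thread, and whether the offline step is strictly required is
exactly the open question above.

```shell
zpool offline zirconium ad18   # take the failing disk out of service
# ...physically swap the drive; the new disk appears as ad18 again...
zpool replace zirconium ad18   # resilver onto the replacement
zpool status zirconium         # watch until the resilver completes
zpool history zirconium        # the log requested above
```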
> >
> >
> > > pool: zirconium
> > > state: DEGRADED
> > > status: One or more devices has experienced an error resulting in data
> > > corruption. Applications may be affected.
> > > action: Restore the file in question if possible. Otherwise restore the
> > > entire pool from backup.
> > > see: http://www.sun.com/msg/ZFS-8000-8A
> > > scrub: none requested
> > > config:
> > >
> > > 	NAME                       STATE     READ WRITE CKSUM
> > > 	zirconium                  DEGRADED     0     0     0
> > > 	  raidz1                   DEGRADED     0     0     0
> > > 	    ad4                    ONLINE       0     0     0
> > > 	    ad6                    ONLINE       0     0     0
> > > 	    replacing              DEGRADED     0     0     0
> > > 	      2614810928866691230  UNAVAIL      0   962     0  was /dev/ad18/old
> > > 	      ad18                 ONLINE       0     0     0
> > > 	    ad20                   ONLINE       0     0     0
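When a finished replace leaves the old device's GUID stuck inside a
"replacing" vdev like the one above, detaching the stale half by GUID is
often enough to clear it. A sketch only; the GUID is the one shown in
the status output above:

```shell
zpool detach zirconium 2614810928866691230  # drop the stale "was /dev/ad18/old" member
zpool status zirconium                      # the replacing vdev should collapse to ad18
```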