ZFS replacing drive issues

Henrik Hudson lists at rhavenn.net
Fri Jan 9 19:27:07 UTC 2015


On Tue, 06 Jan 2015, Da Rock wrote:

> On 05/01/2015 11:07, William A. Mahaffey III wrote:
> > On 01/04/15 18:25, Da Rock wrote:
> >> I haven't seen anything specifically on this when googling, but I'm 
> >> having a strange issue in replacing a degraded drive in ZFS.
> >>
> >> The drive has been REMOVED from ZFS pool, and so I ran 'zpool replace 
> >> <pool> <old device> <new device>'. This normally just works, and I 
> >> have checked that I have removed the correct drive via serial number.
> >>
> >> After resilvering, it still shows that it is in a degraded state, and 
> >> that the old and the new drive have been REMOVED.
> >>
> >> No matter what I do, I can't seem to get the zfs system online and in 
> >> a good state.
> >>
> >> I'm running a raidz1 on 9.1 and zfs is v28.
> >>
> >> Cheers
> >>
> >
> > Someone posted a similar problem a few weeks ago; rebooting fixed it 
> > for them (as opposed to trying to get zfs to fix itself w/ management 
> > commands), might try that if feasible .... $0.02, no more, no less ....
> >
> Sorry, that didn't work unfortunately. I had to wait a bit until I could 
> do it, in between the resilvering attempts and the workload. It came 
> online at first, but then went back to REMOVED when I checked again later.
> 
> Any other diags I can do? I've already run smartctl on all the drives 
> (5hrs+) and they've come back clean. There's not much to go on in the 
> logs either. Do a small number of drives just naturally error when 
> placed in a raid or something?

a) Try a 'zpool clear' to force it to clear the errors, but to be safe
I'd still do (c) below as well.
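For example, assuming the pool is named "tank" (substitute your own pool
and device names):

    zpool clear tank
    zpool status -v tank    # check whether the device still shows as REMOVED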

b) Did you physically remove the old drive and swap in the replacement
before running the zpool replace? Did the new device come up with the
same device ID, or are you using GPT ids?
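The usual sequence when physically swapping a disk looks roughly like
this; "tank" and "da3" below are only placeholders, so check your actual
device nodes with "camcontrol devlist" and "zpool status" first:

    zpool offline tank da3        # if the disk isn't already REMOVED/OFFLINE
    # ...physically swap the disk and confirm the new device node...
    zpool replace tank da3 da3    # same node reused; otherwise: zpool replace tank <old> <new>
    zpool status tank             # watch the resilver progress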

c) If it's a mirror, try just detaching the device ("zpool detach pool
device") and then re-attaching it via "zpool attach", for example:
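Again with placeholder names, where "da2" is a surviving member of the
mirror and "da3" is the disk being re-added:

    zpool detach tank da3
    zpool attach tank da2 da3    # attaches da3 as a mirror of da2 and resilvers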

henrik

-- 
Henrik Hudson
lists at rhavenn.net
-----------------------------------------
"God, root, what is difference?" Pitr; UF 


