ZFS replacing drive issues

Da Rock freebsd-questions at herveybayaustralia.com.au
Tue Jan 6 00:00:47 UTC 2015


On 05/01/2015 11:07, William A. Mahaffey III wrote:
> On 01/04/15 18:25, Da Rock wrote:
>> I haven't seen anything specifically on this when googling, but I'm 
>> having a strange issue in replacing a degraded drive in ZFS.
>>
>> The drive has been marked REMOVED in the ZFS pool, so I ran 'zpool 
>> replace <pool> <old device> <new device>'. This normally just works, 
>> and I checked via serial number that I pulled the correct drive.
>>
>> After resilvering, the pool still shows as degraded, and both the 
>> old and the new drive are listed as REMOVED.
>>
>> No matter what I do, I can't seem to get the pool back online and 
>> into a healthy state.
>>
>> I'm running a raidz1 pool on FreeBSD 9.1, and the ZFS pool version is 28.
>>
>> Cheers
>
> Someone posted a similar problem a few weeks ago; rebooting fixed it 
> for them (as opposed to trying to get ZFS to fix itself w/ management 
> commands), might try that if feasible .... $0.02, no more, no less ....
>
Sorry, that didn't work, unfortunately. I had to wait a while for a 
window between resilver attempts and the workload before I could 
reboot. The drive came online at first, but had gone back to REMOVED 
when I checked again later.
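
For reference, these are roughly the management commands I've been 
trying between reboots (the pool and device names here are only 
placeholders, not my actual layout):

  # see which vdev ZFS currently thinks is REMOVED
  zpool status -v tank

  # try to bring the replacement back and clear the error counters
  zpool online tank ada3
  zpool clear tank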

Any other diagnostics I can do? I've already run smartctl on all the 
drives (5+ hours) and they've come back clean. There's not much to go 
on in the logs either. Do a small number of drives just naturally 
error out when placed in a RAID or something?
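
For concreteness, this is the kind of checking I've been doing (the 
device name is only an example, and would differ for each member disk):

  # long SMART self-test, then the full attribute/error dump
  smartctl -t long /dev/ada3
  smartctl -a /dev/ada3

  # confirm the controller still sees the disk, and look for CAM errors
  camcontrol devlist
  dmesg | grep -i ada3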

