ZFS weird issue...

Will Andrews will at firepipe.net
Sun Dec 7 01:29:19 UTC 2014


On Fri, Dec 5, 2014 at 6:40 PM, Michelle Sullivan <michelle at sorbs.net> wrote:
> Days later, the new drive to replace the dead one arrived and was
> inserted.  The system refused to re-add it because there was data in
> the cache, so I rebooted and cleared the cache (as per many web FAQs),
> then reconfigured it to match the others.  I can't do a zpool replace
> mfid8 because that's already in the pool (it was mfid9); I can't use
> mfid15 because zpool reports it's not part of the config; and I can't
> use the unique ID it received (can't find vdev)...  HELP!! :)
[...]
> root at colossus:~ # zpool status -v
[...]
>   pool: sorbs
>  state: DEGRADED
> status: One or more devices could not be opened.  Sufficient replicas exist for
>     the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>    see: http://illumos.org/msg/ZFS-8000-2Q
>   scan: scrub in progress since Fri Dec  5 17:11:29 2014
>         2.51T scanned out of 29.9T at 89.4M/s, 89h7m to go
>         0 repaired, 8.40% done
> config:
>
>     NAME              STATE     READ WRITE CKSUM
>     sorbs             DEGRADED     0     0     0
>       raidz2-0        DEGRADED     0     0     0
>         mfid0         ONLINE       0     0     0
>         mfid1         ONLINE       0     0     0
>         mfid2         ONLINE       0     0     0
>         mfid3         ONLINE       0     0     0
>         mfid4         ONLINE       0     0     0
>         mfid5         ONLINE       0     0     0
>         mfid6         ONLINE       0     0     0
>         mfid7         ONLINE       0     0     0
>         spare-8       DEGRADED     0     0     0
>           1702922605  UNAVAIL      0     0     0  was /dev/mfid8
>           mfid14      ONLINE       0     0     0
>         mfid8         ONLINE       0     0     0
>         mfid9         ONLINE       0     0     0
>         mfid10        ONLINE       0     0     0
>         mfid11        ONLINE       0     0     0
>         mfid12        ONLINE       0     0     0
>         mfid13        ONLINE       0     0     0
>     spares
>       933862663       INUSE     was /dev/mfid14
>
> errors: No known data errors
> root at colossus:~ # uname -a
> FreeBSD colossus.sorbs.net 9.2-RELEASE FreeBSD 9.2-RELEASE #0 r255898:
> Thu Sep 26 22:50:31 UTC 2013
> root at bake.isc.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
[...]
> root at colossus:~ # ls -l /dev/mfi*
> crw-r-----  1 root  operator  0x22 Dec  5 17:18 /dev/mfi0
> crw-r-----  1 root  operator  0x68 Dec  5 17:18 /dev/mfid0
> crw-r-----  1 root  operator  0x69 Dec  5 17:18 /dev/mfid1
> crw-r-----  1 root  operator  0x78 Dec  5 17:18 /dev/mfid10
> crw-r-----  1 root  operator  0x79 Dec  5 17:18 /dev/mfid11
> crw-r-----  1 root  operator  0x7a Dec  5 17:18 /dev/mfid12
> crw-r-----  1 root  operator  0x82 Dec  5 17:18 /dev/mfid13
> crw-r-----  1 root  operator  0x83 Dec  5 17:18 /dev/mfid14
> crw-r-----  1 root  operator  0x84 Dec  5 17:18 /dev/mfid15
> crw-r-----  1 root  operator  0x6a Dec  5 17:18 /dev/mfid2
> crw-r-----  1 root  operator  0x6b Dec  5 17:18 /dev/mfid3
> crw-r-----  1 root  operator  0x6c Dec  5 17:18 /dev/mfid4
> crw-r-----  1 root  operator  0x6d Dec  5 17:18 /dev/mfid5
> crw-r-----  1 root  operator  0x6e Dec  5 17:18 /dev/mfid6
> crw-r-----  1 root  operator  0x75 Dec  5 17:18 /dev/mfid7
> crw-r-----  1 root  operator  0x76 Dec  5 17:18 /dev/mfid8
> crw-r-----  1 root  operator  0x77 Dec  5 17:18 /dev/mfid9
> root at colossus:~ #

Hi,

From the above it appears your replacement drive's current name is
mfid15, and the spare is now mfid14.
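
If you want to double-check which disk is the blank newcomer before
touching anything, zdb should fail to find any ZFS label on it.  A quick
sanity check (device name assumed from your listing above):

    # zdb -l /dev/mfid15

On a freshly inserted drive this should just complain that it cannot
unpack any of the four labels, whereas the existing pool members will
dump a full label.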

What commands did you run that failed?  Can you provide a copy of the
first label from 'zdb -l /dev/mfid0'?
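
While you gather that, a rough idea of what to look for (structure from
memory, so treat it as a sketch): the label's vdev_tree section has a
children[] entry per member disk, each with a guid and a path, and the
guid you want is on the entry whose path was /dev/mfid8 (it may now sit
nested under a 'spare'-type child since mfid14 was attached).  Something
like this should pull each guid out together with the path line that
follows it:

    # zdb -l /dev/mfid0 | grep -A 1 guid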

The label will give you the full vdev guid of the failed member, which
is what you need to name when replacing the original drive with the new
one.
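
With that guid in hand, the replace itself would be roughly as follows
(the number below is the one zpool status printed for the missing
device; substitute whatever the label actually reports, and mfid15 is
assumed to be the new blank disk):

    # zpool replace sorbs 1702922605 mfid15

If zpool refuses because it finds a stale label on the new disk, 'zpool
labelclear -f /dev/mfid15' should clear it, but check zpool(8) on 9.2 to
confirm that subcommand is available there.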

Alternatively, you could wait for the spare to finish resilvering, then
promote it to permanently replace the original drive and make the new
disk your spare.  Given the time required to resilver a pool of this
size, that may be preferable.
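
A rough sketch of that route, once the resilver finishes (names and the
guid come straight from your status output, so double-check them first):

    # zpool detach sorbs 1702922605
    # zpool add sorbs spare mfid15

Detaching the failed half of spare-8 is what turns the hot spare into a
permanent member and frees its slot in the spares list; the new drive
then takes over as the pool's hot spare.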

--Will.

