ZFS re-attaching failed device to pool

Alejandro Imass aimass at yabarana.com
Tue Nov 6 18:16:53 UTC 2018


On Tue, Nov 6, 2018 at 9:05 AM Philipp Vlassakakis <freebsd-en at lists.vlassakakis.de> wrote:

> Hi Alex,
>
> Did you try „zpool online zroot NAME-OF-DEGRADED-DISK“ and „zpool clear
> zroot“ ?
>
> Regards,
> Philipp
>

Hey Philipp, thanks for the suggestion.

I just tried it and it says:

Device xxx onlined but remains in faulted state
And the "Action" field suggests running replace. Then I tried clear and waited
for the scrub to finish, but the device still says UNAVAIL.
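
Concretely, what I ran (it is in the pool history below) was:

# zpool online zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
# zpool clear zroot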

So I went ahead and RTFM'd again and did "detach" and then "add", like the
handbook suggests (exact commands below).
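
Again from the history, that was:

# zpool detach zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
# zpool add zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2

Or should that last step have been "zpool attach" instead of "zpool add"? The
handbook's mirror procedure attaches the new device to a surviving one, which
here (using the device names from the status below) would presumably look
something like:

# zpool attach zroot diskid/DISK-WD-WCC4N2YTRX40p3 \
    /dev/diskid/DISK-WD-WCC4N6XZY8C2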

Now the pool says ONLINE. BUT, why is the first disk labeled with a "p3"
suffix and not the second one???
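
My guess is that the "p3" is the original GPT partition from when the pool was
created on ada0p3/ada1p3 (see the history below), while the re-added device
went in as the whole disk. If so, something like this should confirm whether
the second disk still carries a partition table:

# gpart show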

# zpool status -v
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h6m with 0 errors on Tue Nov  6 07:24:56 2018
config:

        NAME                             STATE     READ WRITE CKSUM
        zroot                            ONLINE       0     0     0
          diskid/DISK-WD-WCC4N2YTRX40p3  ONLINE       0     0     0
          diskid/DISK-WD-WCC4N6XZY8C2    ONLINE       0     0     0
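
Also, compared to the original "zpool create ... mirror ada0p3 ada1p3" in the
history, there is no mirror-0 line in that output anymore, which makes me
wonder whether the pool is even still a mirror.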

# zpool history
History for 'zroot':
2017-06-30.21:38:33 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot mirror ada0p3 ada1p3
2017-06-30.21:38:33 zfs create -o mountpoint=none zroot/ROOT
2017-06-30.21:38:33 zfs create -o mountpoint=/ zroot/ROOT/default
2017-06-30.21:38:33 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2017-06-30.21:38:33 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2017-06-30.21:38:33 zfs create zroot/usr/home
2017-06-30.21:38:34 zfs create -o setuid=off zroot/usr/ports
2017-06-30.21:38:34 zfs create zroot/usr/src
2017-06-30.21:38:34 zfs create -o mountpoint=/var -o canmount=off zroot/var
2017-06-30.21:38:34 zfs create -o exec=off -o setuid=off zroot/var/audit
2017-06-30.21:38:34 zfs create -o exec=off -o setuid=off zroot/var/crash
2017-06-30.21:38:34 zfs create -o exec=off -o setuid=off zroot/var/log
2017-06-30.21:38:35 zfs create -o atime=on zroot/var/mail
2017-06-30.21:38:35 zfs create -o setuid=off zroot/var/tmp
2017-06-30.21:38:35 zfs set mountpoint=/zroot zroot
2017-06-30.21:38:35 zpool set bootfs=zroot/ROOT/default zroot
2017-06-30.21:38:35 zpool export zroot
2017-06-30.21:38:37 zpool import -o altroot=/mnt zroot
2017-06-30.21:38:42 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2018-11-06.05:18:34 zpool clear zroot
2018-11-06.07:13:34 zpool online zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
2018-11-06.07:18:01 zpool clear zroot
2018-11-06.07:35:55 zpool detach zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2
2018-11-06.07:36:24 zpool add zroot /dev/diskid/DISK-WD-WCC4N6XZY8C2

> > On 6. Nov 2018, at 14:53, Alejandro Imass <aimass at yabarana.com> wrote:
> >
> >> On Tue, Nov 6, 2018 at 8:50 AM Alejandro Imass <aimass at yabarana.com> wrote:
> >>
> >> Dear Beasties,
> >>
> >> I have a simple 2 disk pool and one disk started failing and zfs put it in
> >

[...]


