Unavailable pool cannot be cleared, remains across reboots

Mel Pilgrim list_freebsd at bluerosetech.com
Sat Jun 23 21:42:06 UTC 2018


I have a FreeBSD 11.1 system where a drive failed and was physically 
removed while a ZFS pool was imported and mounted.  The system now shows 
a stuck pool in the unavailable state that I am unable to remove:

# zpool status
   pool: backupA
  state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
         replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
    see: http://illumos.org/msg/ZFS-8000-3C
   scan: none requested
config:

         NAME                    STATE     READ WRITE CKSUM
         backupA                 UNAVAIL      0     0     0
           10529238916776142171  UNAVAIL      0     0     0  was /dev/gpt/backupA

But the zpool clear command doesn't work:

# zpool clear -F backupA
cannot clear errors for backupA: no such pool or dataset

The zpool destroy -f and zpool export -f commands get stuck in 
uninterruptible wait.  The stale pool also persists across reboots, 
even though no device belonging to it is still attached to the system.
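
In case the exact invocations matter, the stuck attempts look like 
this; the ps line is only how I'd expect the hang to show up (a D in 
the STAT column), not output I captured:

# zpool destroy -f backupA
# zpool export -f backupA
# ps -axl | grep zpool

Both zpool processes just sit there indefinitely and can't be killed.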

A strings search of /boot/zfs/zpool.cache still shows the pool, so I'm 
guessing this is a stale cachefile issue?  How do I fix that on a 
root-on-ZFS system?
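
If it is just a stale cachefile entry, my best guess at a fix is 
something like the following, but I haven't tried it because I don't 
know whether regenerating the cache is safe when the root pool lives 
in it too ("zroot" below is only a stand-in for my actual root pool 
name, and the grep is simply how I spotted the stale entry):

# strings /boot/zfs/zpool.cache | grep backupA
# mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak
# zpool set cachefile=/boot/zfs/zpool.cache zroot

Is that reasonable, or is there a proper way to drop a single stale 
entry from the cache?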

