clear old pool remnants from active vdevs

Eugene M. Zheganin eugene at zhegan.in
Thu Apr 26 07:28:16 UTC 2018


Hello,


I have some active vdev disk members that used to be in pools that 
clearly have not been destroyed properly, so I'm seeing something like 
this in the "zpool import" output:


# zpool import
    pool: zroot
      id: 14767697319309030904
   state: UNAVAIL
  status: The pool was last accessed by another system.
  action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
  config:

         zroot                    UNAVAIL  insufficient replicas
           mirror-0               UNAVAIL  insufficient replicas
             5291726022575795110  UNAVAIL  cannot open
             2933754417879630350  UNAVAIL  cannot open

    pool: esx
      id: 8314148521324214892
   state: UNAVAIL
  status: The pool was last accessed by another system.
  action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
  config:

         esx                       UNAVAIL  insufficient replicas
           mirror-0                UNAVAIL  insufficient replicas
             10170732803757341731  UNAVAIL  cannot open
             9207269511643803468   UNAVAIL  cannot open


Is there any _safe_ way to get rid of this? I'm asking because the 
gptzfsboot loader in recent -STABLE stumbles upon this and refuses to 
boot the system 
(https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227772). The 
workaround is to use the 11.1 loader, but I'm afraid this behavior is 
now the intended one.
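
For reference, a minimal sketch of how the stale labels can be located 
with zdb, assuming the leftover metadata sits on the vdev partitions 
themselves (the da0p3/da1p3 device names below are just placeholders 
for the actual partitions, not my real layout):

    # dump any ZFS labels (LABEL 0-3) present on a partition; the
    # names/GUIDs of the stale zroot/esx pools should show up here
    # if this is where the leftover metadata lives
    zdb -l /dev/da0p3
    zdb -l /dev/da1p3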


Eugene.


