Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"

From: joe mcguckin <joe_at_via.net>
Date: Thu, 16 Sep 2021 20:26:16 UTC
I experienced the same thing yesterday. I grabbed an old disk that was previously part of a pool, stuck it in the chassis, ran 'zpool import', and got the same output you did.
Since the other drives of the pool were missing, the pool could not be imported.

zpool status reports 'everything ok' because all the currently imported pools are ok. zpool destroy can't destroy the pool because it has never been imported.
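For what it's worth: since zpool destroy only works on an imported pool, the way to get rid of just the stale label is zpool labelclear, which operates on the device rather than the pool. Roughly something like this (device name taken from the gpart output quoted below; -f is probably needed because the label still claims membership in a pool, and this of course wipes any ZFS label on that partition):

  # zpool labelclear -f /dev/nda0p2
  # zpool import     (the stale zopt0 entry should no longer be listed)
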

I simply created a new pool on the same device address - ZFS happily overwrote the old, incomplete pool info.
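
In case it helps, a rough sketch of that approach (the pool name and device are the ones from your output; -f is likely required because zpool create will notice the old label on the device):

  # zpool create -f zopt0 /dev/nda0p2
  # zpool status zopt0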

joe


Joe McGuckin
ViaNet Communications

joe@via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax



> On Sep 16, 2021, at 1:01 PM, Mark Millard via freebsd-current <freebsd-current@freebsd.org> wrote:
> 
> What do I do about:
> 
> QUOTE
> # zpool import
>   pool: zopt0
>     id: 18166787938870325966
>  state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
>   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> config:
> 
>        zopt0       FAULTED  corrupted data
>          nda0p2    UNAVAIL  corrupted data
> 
> # zpool status -x
> all pools are healthy
> 
> # zpool destroy zopt0
> cannot open 'zopt0': no such pool
> END QUOTE
> 
> (I had attempted to clean out the old zfs context on
> the media and delete/replace the 2 freebsd swap
> partitions and 1 freebsd-zfs partition, leaving the
> efi partition in place. Clearly I did not do everything
> required [or something is very wrong]. zopt0 had been
> a root-on-ZFS context and would be again. I have a
> backup of the context to send/receive once the pool
> in the partition is established.)
> 
> For reference, as things now are:
> 
> # gpart show
> =>       40  937703008  nda0  GPT  (447G)
>         40     532480     1  efi  (260M)
>     532520       2008        - free -  (1.0M)
>     534528  937166848     2  freebsd-zfs  (447G)
>  937701376       1672        - free -  (836K)
> . . .
> 
> (That is not how it looked before I started.)
> 
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1300139 1300139
> 
> I have also tried under:
> 
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1400032 1400032
> 
> after reaching this state. It behaves the same.
> 
> The text presented by:
> 
> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> 
> does not deal with what is happening overall.
> 
> ===
> Mark Millard
> marklmi at yahoo.com
> ( dsl-only.net went
> away in early 2018-Mar)
> 
>