zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Date: Thu, 16 Sep 2021 20:01:16 UTC
How do I go about fixing the following:
QUOTE
# zpool import
   pool: zopt0
     id: 18166787938870325966
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        zopt0         FAULTED  corrupted data
          nda0p2      UNAVAIL  corrupted data
# zpool status -x
all pools are healthy
# zpool destroy zopt0
cannot open 'zopt0': no such pool
END QUOTE
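My understanding is that "zpool status -x" only reports pools that are currently imported, while "zpool import" scans the on-disk labels, so the mismatch presumably means stale ZFS labels from the old pool are still readable on nda0p2. If that guess is right, something along these lines should show whatever label copies survive on the partition (just my working assumption, not something I have confirmed):

# zdb -l /dev/nda0p2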
(I had attempted to clean out the old zfs context on
the media and delete/replace the 2 freebsd swap
partitions and 1 freebsd-zfs partition, leaving the
efi partition in place. Clearly I did not do everything
required [or something is very wrong]. zopt0 had been
a root-on-ZFS context and would be again. I have a
backup of the context to send/receive once the pool
in the partition is established.)
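If stale labels are the problem, my assumption is that the cleanup needed to include clearing the ZFS label areas on the partition before reusing it, roughly along the lines of the sketch below; this is what I believe should be involved, not a record of what I actually ran:

# zpool labelclear -f /dev/nda0p2

(ZFS keeps two label copies in the first 512 KiB of the device and two more in the last 512 KiB, so zeroing those regions with dd should amount to the same thing if labelclear refuses.)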
For reference, as things now are:
# gpart show
=>        40  937703008  nda0  GPT  (447G)
          40     532480      1  efi  (260M)
      532520       2008         - free -  (1.0M)
      534528  937166848      2  freebsd-zfs  (447G)
   937701376       1672         - free -  (836K)
. . .
(That is not how it looked before I started.)
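For reference, the send/receive plan mentioned above would be roughly the following once the stale state is cleared and a pool exists in the partition again; the backup pool and snapshot names below are placeholders, not my actual ones:

# zpool create -f zopt0 nda0p2
# zfs send -R backuppool/zopt0@backup | zfs receive -F zopt0

(Plus, presumably, re-setting the bootfs pool property and the related boot bits for the root-on-ZFS use.)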
# uname -apKU
FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1300139 1300139
I have also tried under:
# uname -apKU
FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1400032 1400032
after reaching this state. It behaves the same.
The text presented by:
https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
does not deal with what is happening overall.
===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)