Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Date: Thu, 16 Sep 2021 20:56:16 UTC
On 2021-Sep-16, at 13:01, Mark Millard <marklmi at yahoo.com> wrote:
> What do I do about:
>
> QUOTE
> # zpool import
> pool: zopt0
> id: 18166787938870325966
> state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
> see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> config:
>
> zopt0 FAULTED corrupted data
> nda0p2 UNAVAIL corrupted data
>
> # zpool status -x
> all pools are healthy
>
> # zpool destroy zopt0
> cannot open 'zopt0': no such pool
> END QUOTE
>
> (I had attempted to clean out the old zfs context on
> the media and delete/replace the 2 freebsd swap
> partitions and 1 freebsd-zfs partition, leaving the
> efi partition in place. Clearly I did not do everything
> required [or something is very wrong]. zopt0 had been
> a root-on-ZFS context and would be again. I have a
> backup of the context to send/receive once the pool
> in the partition is established.)
>
> For reference, as things now are:
>
> # gpart show
> => 40 937703008 nda0 GPT (447G)
> 40 532480 1 efi (260M)
> 532520 2008 - free - (1.0M)
> 534528 937166848 2 freebsd-zfs (447G)
> 937701376 1672 - free - (836K)
> . . .
>
> (That is not how it looked before I started.)
>
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1300139 1300139
>
> I have also tried under:
>
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1400032 1400032
>
> after reaching this state. It behaves the same.
>
> The text presented by:
>
> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>
> does not deal with what is happening overall.
>
I finally seem to have stomped on enough to have gotten
past the issue (last actions):
# gpart add -tfreebsd-swap -s440g /dev/nda0
nda0p2 added
# gpart add -tfreebsd-swap /dev/nda0
nda0p3 added
. . .
7384907776 bytes transferred in 5.326024 secs (1386570546 bytes/sec)
# dd if=/dev/zero of=/dev/nda0p3 bs=4k conv=sync status=progress
  . . .972 MiB) transferred 55.001s, 133 MB/s
dd: /dev/nda0p3: end of device
1802957+0 records in
1802956+0 records out
7384907776 bytes transferred in 55.559644 secs (132918559 bytes/sec)
# gpart delete -i3 /dev/nda0
nda0p3 deleted
# gpart delete -i2 /dev/nda0
nda0p2 deleted
# gpart add -tfreebsd-zfs -a1m /dev/nda0
nda0p2 added
# zpool import
no pools available to import
# gpart show
. . .
=> 40 937703008 nda0 GPT (447G)
40 532480 1 efi (260M)
532520 2008 - free - (1.0M)
534528 937166848 2 freebsd-zfs (447G)
937701376 1672 - free - (836K)
# zpool create -O compress=lz4 -O atime=off -f -tzpopt0 zopt0 /dev/nda0p2
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zpopt0 444G 420K 444G - - 0% 0% 1.00x ONLINE -
zroot 824G 105G 719G - - 1% 12% 1.00x ONLINE -
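
With the pool in place (imported under the temporary name zpopt0
because of the -t option), the send/receive restore mentioned in my
earlier message would presumably look something like the following.
This is only a sketch: the source pool, dataset, and snapshot names
are placeholders, not the actual names used for my backup.

# zfs send -R backuppool/zopt0-backup@restore | zfs recv -Fdu zpopt0

(-R sends the snapshot's whole dataset tree, -F rolls the target back
if needed, -d discards the sent pool name and recreates the rest of
the path under zpopt0, and -u keeps the received datasets from being
mounted over the running system.)
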
I've no clue why my original zpool labelclear -f attempt
left material behind before the repartitioning. It still
could have been operator error of some kind.
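
For what it's worth, the reason the three commands disagreed is that
zpool import scans the on-disk labels directly, while zpool status and
zpool destroy only operate on pools that are actually imported. ZFS
keeps four copies of its label per vdev, two in the first 512 KiB and
two in the last 512 KiB, so if partition boundaries shift between a
labelclear and the new layout, stale copies can survive at offsets a
later import scan still finds. Next time, something along the lines of
the following sketch might be more thorough (the commands here are
illustrative, not what I actually ran):

# zpool labelclear -f /dev/nda0p2
# dd if=/dev/zero of=/dev/nda0p2 bs=1m count=4
# sz_mib=$(( $(diskinfo /dev/nda0p2 | awk '{print $3}') / 1048576 ))
# dd if=/dev/zero of=/dev/nda0p2 bs=1m oseek=$(( sz_mib - 4 ))

The first dd zeroes the leading 4 MiB and the second zeroes roughly the
last 4 MiB (diskinfo's third field is the media size in bytes; dd just
stops at the end of the device), so both the front and back label areas
are gone before the partition is deleted and recreated.
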
===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)