Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"

From: Mark Millard via freebsd-current <freebsd-current_at_freebsd.org>
Date: Fri, 17 Sep 2021 00:14:42 UTC

On 2021-Sep-16, at 16:27, Freddie Cash <fjwcash at gmail.com> wrote:
> 
> [message chopped and butchered, don't follow the quotes, it's just to show some bits together from different messages]
> 
> On Thu, Sep 16, 2021 at 3:54 PM Mark Millard via freebsd-current <freebsd-current@freebsd.org> wrote:
> > > For reference, as things now are:
> > > 
> > > # gpart show
> > > =>       40  937703008  nda0  GPT  (447G)
> > >          40     532480     1  efi  (260M)
> > >      532520       2008        - free -  (1.0M)
> > >      534528  937166848     2  freebsd-zfs  (447G)
> > >   937701376       1672        - free -  (836K)
> > > . . .
>  
> > > So you just want to clean nda0p2 in order to reuse it?  Do "zpool labelclear -f /dev/nda0p2"
> > > 
> >> 
> >> I did not extract and show everything that I'd tried but
> >> there were examples of:
> >> 
> >> # zpool labelclear -f /dev/nda0p2
> >> failed to clear label for /dev/nda0p2
> 
> The start of the problem looked like (console context,
> so messages interlaced):
> 
> # zpool create -O compress=lz4 -O atime=off -f -tzopt0 zpopt0 /dev/nvd0
> GEOM: nda0: the primary GPT table is corrupt or invalid.
> GEOM: nda0: using the secondary instead -- recovery strongly advised.
> cannot create 'zpopt0': no such pool or dataset
> # Sep 16 12:19:31 CA72_4c8G_ZFS ZFS[1111]: vdev problem, zpool=zopt0 path=/dev/nvd0 type=ereport.fs.zfs.vdev.open_failed
> 
> The GPT table was okay just prior to the command.
> So I recovered it.
> 
> It looks like you're trying to use a disk partition for a ZFS pool (nda0p2), but then you turn around and use the entire drive (nvd0) for the pool which clobbers the GPT.

I'd not noticed my lack of a "p2" suffix. Thanks. That explains how
I got things messed up, with GPT and ZFS conflicting. (Too many
distractions at the time, I guess.)
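
For reference, the intended sequence, using the partition rather than
the whole drive, would look roughly like the following. It is only a
sketch: gpart recover is the normal way to restore the primary GPT from
the secondary, and the create just repeats my original flags with the
device corrected to nda0p2:

# gpart recover nda0
# zpool create -O compress=lz4 -O atime=off -f -tzopt0 zpopt0 /dev/nda0p2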

> You need to be consistent in using partitions for all commands.

Yep: dumb typo that I'd not noticed.

> You're also mixing up your disk device nodes for the different commands; while they are just different names for the same thing, it's best to be consistent.

Once I had commands failing, I deliberately tried alternatives that
I thought should be equivalent, in case they were not in some way.
Not my normal procedure.

> GEOM is built out of layers (or more precisely, "containers", as each one specifies a new start and end point on the disk), which is very powerful.  But it's also very easy to make a mess of things when you start accessing things outside of the layers.  :)  And the ZFS labelclear option is the nuclear option that tends to remove everything ZFS-related and everything GPT-related; although I've never seen it used on a partition before, usually just the whole disk.
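
As an aside (my note, not from the quoted text): before reaching for
labelclear, the labels it would remove can be inspected with zdb, e.g.:

# zdb -l /dev/nda0p2

which dumps any ZFS vdev labels present on the partition.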

> Best bet in this situation is to just zero out the entire disk (dd if=/dev/zero of=/dev/nda0 bs=1M), and start over from scratch.  Create a new GPT.  Create new partitions.  Use the specific partition with the "zpool create" command.

I ended up writing rather less than the full ~480 GB; what I did write
preserved /dev/nda0p1.
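
For completeness, the full start-over Freddie describes would be
roughly the following (a sketch only; the sizes just echo the earlier
gpart show output, and a redone efi partition would also need its
filesystem and loader contents put back):

# dd if=/dev/zero of=/dev/nda0 bs=1m
# gpart create -s gpt nda0
# gpart add -t efi -a 1m -s 260m nda0
# gpart add -t freebsd-zfs -a 1m nda0
# zpool create -O compress=lz4 -O atime=off zpopt0 /dev/nda0p2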

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)