From: Alan Somers <asomers@gmail.com>
Date: Thu, 16 Sep 2021 16:16:25 -0600
Subject: Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
To: Mark Millard
Cc: freebsd-current

On Thu, Sep 16, 2021 at 4:02 PM Mark Millard wrote:

>
> On 2021-Sep-16, at 13:39, Alan Somers wrote:
>
> > On Thu, Sep 16, 2021 at 2:04 PM Mark Millard via freebsd-current <
> freebsd-current@freebsd.org> wrote:
> > What do I do about:
> >
> > QUOTE
> > # zpool import
> >    pool: zopt0
> >      id: 18166787938870325966
> >   state: FAULTED
> >  status: One or more devices contains corrupted data.
> >  action: The pool cannot be imported due to damaged devices or data.
> >         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> >  config:
> >
> >         zopt0         FAULTED  corrupted data
> >           nda0p2      UNAVAIL  corrupted data
> >
> > # zpool status -x
> > all pools are healthy
> >
> > # zpool destroy zopt0
> > cannot open 'zopt0': no such pool
> > END QUOTE
> >
> > (I had attempted to clean out the old zfs context on
> > the media and delete/replace the 2 freebsd swap
> > partitions and 1 freebsd-zfs partition, leaving the
> > efi partition in place. Clearly I did not do everything
> > required [or something is very wrong]. zopt0 had been
> > a root-on-ZFS context and would be again. I have a
> > backup of the context to send/receive once the pool
> > in the partition is established.)
> >
> > For reference, as things now are:
> >
> > # gpart show
> > =>        40  937703008  nda0  GPT  (447G)
> >           40     532480     1  efi  (260M)
> >       532520       2008        - free -  (1.0M)
> >       534528  937166848     2  freebsd-zfs  (447G)
> >    937701376       1672        - free -  (836K)
> > . . .
> >
> > (That is not how it looked before I started.)
> >
> > # uname -apKU
> > FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4
> releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021
> root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72
> arm64 aarch64 1300139 1300139
> >
> > I have also tried under:
> >
> > # uname -apKU
> > FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12
> main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021
> root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72
> arm64 aarch64 1400032 1400032
> >
> > after reaching this state. It behaves the same.
> >
> > The text presented by:
> >
> > https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> >
> > does not deal with what is happening overall.
> >
> > So you just want to clean nda0p2 in order to reuse it?  Do "zpool
> labelclear -f /dev/nda0p2"
> >
>
> I did not extract and show everything that I'd tried but
> there were examples of:
>
> # zpool labelclear -f /dev/nda0p2
> failed to clear label for /dev/nda0p2
>
> from when I'd tried such. So far I've not
> identified anything with official commands
> to deal with the issue.
>

That is the correct command to run.  However, the OpenZFS import in
FreeBSD 13.0 brought in a regression in that command.  It wasn't really a
code bug, more of a UI bug: OpenZFS simply had a less useful labelclear
command than FreeBSD did.  The regression has now been fixed upstream:
https://github.com/openzfs/zfs/pull/12511

> Ultimately I zeroed out areas of the media that
> happened to span the zfs related labels. After
> that things returned to normal. I'd still like
> to know a supported way of dealing with the
> issue.
>
> The page at the URL it listed just says:
>
> QUOTE
> The pool must be destroyed and recreated from an appropriate backup source
> END QUOTE
>

It advised you to "destroy and recreate" the pool because you ran "zpool
import", so ZFS thought that you actually wanted to import the pool.  The
error message would have been appropriate if that had been the case.

> But the official destroy commands did not work:
>

Because "zpool destroy" only works for imported pools.  The error message
meant "destroy" in a more generic sense.

> same sort of issue of reporting that nothing
> appropriate was found to destroy and no way to
> import the problematical pool.
>
> Note: I use ZFS because I want to use bectl, not
> for redundancy or such. So the configuration is very
> simple.
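
For completeness, the by-hand version of what you ended up doing (zeroing
the label areas) can be written down roughly like this.  Treat it as an
untested sketch rather than a supported procedure: it assumes the usual
vdev label layout of two 256 KiB labels at the front of the partition and
two in its last 512 KiB, a partition size that is an exact multiple of
256 KiB, and that the third diskinfo field is the media size in bytes
(double-check all of that before running anything destructive, and only on
a partition you intend to recycle anyway):

# sz=$(diskinfo /dev/nda0p2 | awk '{print $3}')   # media size in bytes
# dd if=/dev/zero of=/dev/nda0p2 bs=256k count=2  # zero the front labels (L0, L1)
# dd if=/dev/zero of=/dev/nda0p2 bs=256k count=2 seek=$(( sz / 262144 - 2 ))  # zero the tail labels (L2, L3)
# zdb -l /dev/nda0p2                              # should no longer find a valid label

After that, "zpool import" should stop offering the stale zopt0 pool and
the partition can be reused for a new pool.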
>
> ===
> Mark Millard
> marklmi at yahoo.com
> ( dsl-only.net went
> away in early 2018-Mar)