From nobody Thu Sep 16 22:50:17 2021
Subject: Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Date: Thu, 16 Sep 2021 15:50:17 -0700
From: Mark Millard via freebsd-current
Reply-To: marklmi@yahoo.com
To: Alan Somers
Cc: freebsd-current
Message-Id: <37A64EF6-C638-41A6-9304-3C11550B811E@yahoo.com>

On 2021-Sep-16, at 15:16, Alan Somers wrote:

> On Thu, Sep 16, 2021 at 4:02 PM Mark Millard wrote:
>
>
> On 2021-Sep-16, at 13:39, Alan Somers wrote:
>
> > On Thu, Sep 16, 2021 at 2:04 PM Mark Millard via freebsd-current wrote:
> > What do I do about:
> >
> > QUOTE
> > # zpool import
> >    pool: zopt0
> >      id: 18166787938870325966
> >   state: FAULTED
> >  status: One or more devices contains corrupted data.
> >  action: The pool cannot be imported due to damaged devices or data.
> >     see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> >  config:
> >
> >         zopt0     FAULTED  corrupted data
> >           nda0p2  UNAVAIL  corrupted data
> >
> > # zpool status -x
> > all pools are healthy
> >
> > # zpool destroy zopt0
> > cannot open 'zopt0': no such pool
> > END QUOTE
> >
> > (I had attempted to clean out the old zfs context on
> > the media and delete/replace the 2 freebsd swap
> > partitions and 1 freebsd-zfs partition, leaving the
> > efi partition in place. Clearly I did not do everything
> > required [or something is very wrong]. zopt0 had been
> > a root-on-ZFS context and would be again. I have a
> > backup of the context to send/receive once the pool
> > in the partition is established.)
> >
> > For reference, as things now are:
> >
> > # gpart show
> > =>        40  937703008  nda0  GPT  (447G)
> >           40     532480     1  efi  (260M)
> >       532520       2008        - free -  (1.0M)
> >       534528  937166848     2  freebsd-zfs  (447G)
> >    937701376       1672        - free -  (836K)
> > . . .
> >
> > (That is not how it looked before I started.)
> >
> > # uname -apKU
> > FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1300139 1300139
> >
> > I have also tried under:
> >
> > # uname -apKU
> > FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1400032 1400032
> >
> > after reaching this state. It behaves the same.
> >
> > The text presented by:
> >
> > https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> >
> > does not deal with what is happening overall.
> >
> > So you just want to clean nda0p2 in order to reuse it? Do "zpool labelclear -f /dev/nda0p2"
> >
>>
>> I did not extract and show everything that I'd tried but
>> there were examples of:
>>
>> # zpool labelclear -f /dev/nda0p2
>> failed to clear label for /dev/nda0p2
>>
>> from when I'd tried such. So far I've not
>> identified anything with official commands
>> to deal with the issue.
>>
> That is the correct command to run. However, the OpenZFS import in FreeBSD 13.0 brought in a regression in that command. It wasn't a code bug really, more like a UI bug. OpenZFS just had a less useful labelclear command than FreeBSD did. The regression has now been fixed upstream.
> https://github.com/openzfs/zfs/pull/12511

Cool.
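(Side note, in case it helps someone else in this spot: zdb -l reads
the four vdev labels straight off a device and reports which of them
are still readable, so it would show just what a failed labelclear
left behind. A diagnostic sketch, not a command from the exchange
above:)

# zdb -l /dev/nda0p2

(That prints each label it can still read, and a failed-to-unpack
note for each label that is already gone.)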
>> Ultimately I zeroed out areas of the media that
>> happened to span the zfs related labels. After
>> that things returned to normal. I'd still like
>> to know a supported way of dealing with the
>> issue.
>>
>> The page at the URL it listed just says:
>>
>> QUOTE
>> The pool must be destroyed and recreated from an appropriate backup source
>> END QUOTE
>
> It advised you to "destroy and recreate" the pool because you ran "zpool import", so ZFS thought that you actually wanted to import the pool. The error message would be appropriate if that had been the case.

The start of the problem looked like (console context,
so messages interlaced):

# zpool create -O compress=lz4 -O atime=off -f -tzopt0 zpopt0 /dev/nvd0
GEOM: nda0: the primary GPT table is corrupt or invalid.
GEOM: nda0: using the secondary instead -- recovery strongly advised.
cannot create 'zpopt0': no such pool or dataset
# Sep 16 12:19:31 CA72_4c8G_ZFS ZFS[1111]: vdev problem, zpool=zopt0 path=/dev/nvd0 type=ereport.fs.zfs.vdev.open_failed

The GPT table was okay just prior to the command.
So I recovered it.

The import was the only command that I tried that
referenced what to do about what was being reported.
(Not that it was useful for my context.) I discovered
the zpool status via the import reporting what it did
after doing the GPT recovery first.

I've still no clue what was wrong with my labelclear
before the repartitioning. But it appeared that the
GPT tables and the zfs related labels were stomping
on each other after the repartitioning.

So, yes, I was trying to import when I first got the
message in question. But I could not do as indicated:
it told me to do a type of activity that I could not
actually do. That was confusing.
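(For reference, a sketch of the sort of zeroing involved, not the
exact commands used: ZFS keeps four 256 KiB vdev labels, two in the
first 512 KiB of the partition and two in the last 512 KiB, so a
manual wipe has to hit both ends. diskinfo's third field is the
mediasize in bytes, and dd's oseek counts bs-sized blocks, so,
assuming the partition size is a multiple of 256 KiB:)

# sz=$(diskinfo /dev/nda0p2 | awk '{print $3}')
# dd if=/dev/zero of=/dev/nda0p2 bs=256k count=2
# dd if=/dev/zero of=/dev/nda0p2 bs=256k count=2 oseek=$(( sz / 262144 - 2 ))

(The first dd covers labels 0 and 1, the second labels 2 and 3. GEOM
can refuse raw writes to an in-use provider unless
kern.geom.debugflags=0x10 is set.)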
>> But the official destroy commands did not work:
>
> Because "zpool destroy" only works for imported pools. The error message meant "destroy" in a more generic sense.
>
>> same sort of issue of reporting that nothing
>> appropriate was found to destroy and no way to
>> import the problematical pool.
>>
>> Note: I use ZFS because of wanting to use bectl, not
>> for redundancy or such. So the configuration is very
>> simple.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)