From nobody Tue Jul 22 23:31:16 2025
Date: Wed, 23 Jul 2025 00:31:16 +0100
From: void <void@f-m.fm>
To: freebsd-current@freebsd.org
Subject: Re: zfs panic VERIFY3U
List-Id: Discussions about the use of FreeBSD-current
List-Archive: https://lists.freebsd.org/archives/freebsd-current
On Tue, Jul 22, 2025 at 11:46:11AM -0600, Alan Somers wrote:

>From the panic, it looks like your vdev is smaller than what is recorded
>in the label. I can think of a few reasons why that might be:
>* It's a VM, and you shrunk the size of the VM's disk. ZFS can't
>tolerate that.

no, this is bare metal

>* You shrunk the size of the disk using some exotic SCSI commands.

no :)

>* The disk is broken in such a way that it reports mediasize 0. I've
>seen that happen. You can check with "geom disk list".

Might be, I can't tell just yet.

>* ZFS found an old label. Perhaps it dates from before you expanded a
>vdev. You might've pulled out a disk, then expanded the other disks in
>that RAID or mirror, then reinserted the old disk. These problems are
>annoying, but solvable. This is also one case where you might get
>different results if you import during boot vs after boot.
Possibly/probably the case.

>Is this still a problem for you, or is it all solved?

The problem has changed: there is no more zfs panic, but the disk (it
was /dev/da7) has "disappeared" from /dev. It *is*, however, accessible
via smartctl as 'cciss,7 /dev/ciss0'. I can't check geom because the
disk isn't visible in /dev at the moment.

The disk (da7) is SAT; the others are SATA. This has worked before; the
card (ciss0) is in HBA mode and can use SAT or SATA. I'm not sure
whether mixing SAT *and* SATA might cause undocumented behaviour,
though.

The disk becoming unavailable in the pool has happened before, in
similar circumstances (a power outage), but zfs never panicked. That
part is new.

Right now, zfs (deliberately) doesn't autoload. I'm going to try to make
the system see da7; if that works, I'll look at geom, clear that, then
load zfs.ko and run zpool import -a. The last time that was done (sans
clearing geom), da7 resilvered and rejoined the pool.

All the other disks, apart from da0 (which isn't part of the pool), show
no geom (I understand that to be normal with raw disks).

thank you for your help & ideas,
--
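[Editor's note: the recovery sequence described in the message above can
be sketched as a small shell script. This is only a sketch of the steps
the author outlines; the device name da7 and the use of camcontrol to
rescan the bus are assumptions, and DRYRUN defaults to printing each
command rather than running it.]

```shell
#!/bin/sh
# Sketch of the recovery steps described above. DRYRUN=1 (the default)
# only prints each command; set DRYRUN=0 to actually run them as root.
run() {
    if [ "${DRYRUN:-1}" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run camcontrol rescan all   # ask CAM to rescan so da7 can reappear in /dev
run geom disk list da7      # confirm the disk reports a non-zero mediasize
run kldload zfs             # zfs deliberately doesn't autoload here
run zpool import -a         # import pools; da7 should then resilver
run zpool status            # watch the resilver progress
```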