Re: zfs panic VERIFY3U
- Reply: void : "Re: zfs panic VERIFY3U"
- Reply: void : "Re: zfs panic VERIFY3U"
- In reply to: void : "Re: zfs panic VERIFY3U"
Date: Tue, 22 Jul 2025 17:46:11 UTC
On Tue, Jul 22, 2025 at 6:50 AM void <void@f-m.fm> wrote:
> On Tue, Jul 22, 2025 at 02:38:52AM +0100, void wrote:
> >On Tue, Jul 22, 2025 at 02:28:19AM +0100, void wrote:
> >>
> >>zpool import causes the following output at the console:
> >>https://void.f-m.fm.user.fm/panic/zpool-broken.png
> >
> >agh, should read http://void.f-m.fm.user.fm/panic/zpool-broken.png
> >*not* https, sorry.
>
> booting without zfs.ko loaded, then building kernel from
> main-stabweek-2025-Jul
> rebooting again then loading zfs.ko then zpool import -a went normally
> and zpool status showed an unavailable disk in the raidz2 pool.
> --

From the panic, it looks like your vdev is smaller than what is recorded
in the label. I can think of a few reasons why that might be:

* It's a VM, and you shrank the size of the VM's disk. ZFS can't
  tolerate that.
* You shrank the size of the disk using some exotic SCSI commands.
* The disk is broken in such a way that it reports mediasize 0. I've
  seen that happen. You can check with "geom disk list" (see the
  commands sketched at the end of this message).
* ZFS found an old label. Perhaps it dates from before you expanded a
  vdev. You might've pulled out a disk, then expanded the other disks
  in that RAID or mirror, then reinserted the old disk.

These problems are annoying, but solvable. This is also one case where
you might get different results if you import during boot vs. after
boot. Is this still a problem for you, or is it all solved?
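To check what size the disk is actually reporting, something like the
following should work. The device name da0 is just an example here;
substitute one of the disks from your pool:

    # show the size GEOM sees for the disk; a broken disk may report 0
    geom disk list da0 | grep -i mediasize

    # diskinfo gives the same information in a more compact form
    diskinfo -v /dev/da0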
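And to see what size is recorded in the ZFS label (to rule out the
stale-label case), you can dump the labels with zdb. Again, the device
path is an example; point it at the actual vdev, i.e. the whole disk or
the partition ZFS uses:

    # dump the vdev labels; look for the asize field in the vdev tree
    zdb -l /dev/da0

If the size recorded in the label is larger than the mediasize GEOM
reports, that would be consistent with the panic you saw.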