boot fail (panic: free: guard2 fail @ 0x7c237010 + 2061 from unknown:0)

From: mike tancsa <mike_at_sentex.net>
Date: Wed, 16 Jun 2021 16:41:16 UTC
As part of our backup strategy, I am trying to do bare-metal restores
of our servers. So far so good, but one of our older ones is failing
with:

                                                                          
BTX loader 1.00  BTX version is 1.02
Consoles: internal video/keyboard
BIOS drive C: is disk0
BIOS 607kB/1975840kB available memory

FreeBSD/x86 ZFS enabled bootstrap loader, Revision 1.1
(Tue Jul 17 15:23:15 EDT 2018 mdt@fileserver1.sentex.ca)
panic: free: guard2 fail @ 0x7c237010 + 2061 from unknown:0
--> Press a key on the console to reboot <--


This is an older releng11 r336457 box that I am trying to restore. I am
doing the restoration on RELENG13, doing a zpool create -d and enabling
just lz4. I am guessing there is something about that old pool that will
not work? Every other server I have done has worked really well, but
none are as old as this guy. Any ideas how to narrow this down?
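
One way I can think of to narrow it down is to compare the feature
flags on the old pool against the freshly created one. A sketch (the
pool names are the ones from this report, and the awk filter is just
illustrative):

# on the live releng11 box
zpool get all fileserver1 | awk '$2 ~ /^feature@/ && $3 != "disabled" { print $2, $3 }'

# on the restore box, after the import below
zpool get all $POOLNAME | awk '$2 ~ /^feature@/ && $3 != "disabled" { print $2, $3 }'

Anything enabled or active on the new pool but not on the old one would
be a candidate for tripping up the 2018-era gptzfsboot.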

I am creating the destination disk and pool as follows:

# partition the destination disk: boot code, swap, then ZFS
gpart create -s gpt $DESTDEVICE
gpart add -a 4k -s 512k -t freebsd-boot $DESTDEVICE
gpart add -a 4k -s 8G -t freebsd-swap -l swap1 $DESTDEVICE
gpart add -a 4k -t freebsd-zfs -l $DISKNAME $DESTDEVICE
# install the releng11 pmbr and gptzfsboot so the box boots with the
# loader it originally shipped with
gpart bootcode -b /boot-releng11/pmbr -p /boot-releng11/gptzfsboot -i 1 $DESTDEVICE

# put a 4k gnop on top of the partition so zpool create picks
# ashift=12, matching the old pool
gnop create -S 4096 /dev/gpt/$DISKNAME

# create the pool with every feature disabled (-d) except lz4
zpool create -d -f -o altroot=/mnt -o feature@lz4_compress=enabled \
    -o cachefile=/var/tmp/zpool.cache $POOLNAME /dev/gpt/$DISKNAME.nop
# export and drop the gnop so the pool sits directly on the partition
zpool export $POOLNAME
gnop destroy /dev/gpt/$DISKNAME.nop

zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache $POOLNAME
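
(To double-check that the ashift stuck at 12 across the re-import, I
can dump the pool config from the custom cache file; a sketch using
standard zdb flags:)

# -C prints the pool config, -U points zdb at the non-default cachefile
zdb -C -U /var/tmp/zpool.cache $POOLNAME | grep ashift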

<then a bunch of zfs send | zfs recv>
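
(Roughly like the following; the snapshot name "migrate", the -R/-duF
flags, and the ssh transport are illustrative, not the exact commands
I ran:)

# take a recursive snapshot on the live box, then replicate it into
# the new pool on the restore box
ssh fileserver1 zfs snapshot -r fileserver1@migrate
ssh fileserver1 zfs send -R fileserver1@migrate | zfs recv -duF $POOLNAME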

The live machine shows:

 zdb -v
fileserver1:
    version: 5000
    name: 'fileserver1'
    state: 0
    txg: 23498113
    pool_guid: 10657256123199195564
    hostid: 3022456803
    hostname: ''
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 10657256123199195564
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 18123299127602057786
            nparity: 1
            metaslab_array: 40
            metaslab_shift: 34
            ashift: 12
            asize: 1931688804352
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 35
            children[0]:
                type: 'disk'
                id: 0
                guid: 9163277686889746146
                path: '/dev/ada0p3'
                whole_disk: 1
                DTL: 197
                create_txg: 4
                com.delphix:vdev_zap_leaf: 36
            children[1]:
                type: 'disk'
                id: 1
                guid: 8474459820264337096
                path: '/dev/ada1p3'
                whole_disk: 1
                DTL: 196
                create_txg: 4
                com.delphix:vdev_zap_leaf: 37
            children[2]:
                type: 'disk'
                id: 2
                guid: 14194255123576696065
                path: '/dev/ada2p3'
                whole_disk: 1
                DTL: 195
                create_txg: 4
                com.delphix:vdev_zap_leaf: 38
            children[3]:
                type: 'disk'
                id: 3
                guid: 15226426451126911594
                path: '/dev/ada3p3'
                whole_disk: 1
                DTL: 194
                create_txg: 4
                com.delphix:vdev_zap_leaf: 39
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
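
If it helps, I can also dump the on-disk label of the restored disk to
see exactly which features_for_read the old gptzfsboot is being asked
to handle (a sketch, using the GPT label from the script above):

# run on the restore box; the label nvlist includes features_for_read
zdb -l /dev/gpt/$DISKNAME

Anything in that list beyond the two entries shown above would be my
prime suspect.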