New ZFSv28 patchset for 8-STABLE

Chreo chreo at chreo.net
Mon Jan 31 15:26:54 UTC 2011


Hello Martin,

On 2010-12-16 13:44, Martin Matuska wrote:
> Following the announcement of Pawel Jakub Dawidek (pjd at FreeBSD.org) I am
> providing a ZFSv28 testing patch for 8-STABLE.
>
> Link to the patch:
>
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
>

I've tested 
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20110116-nopython.patch.xz
with 8-STABLE from 2011-01-18

Seems to work nicely except for a panic when importing a degraded pool 
on GELI vdevs:
(captured from the screen and OCR'd)
vdev.geom_detach:156[1]: Closing access to label/Disk.4.eli.
vdev.geom_detach:160[1]: Destroyed consumer to label/Disk.4.eli.
vdev.geom_detach:156[1]: Closing access to label/Disk.5.eli.
vdev.geom_detach:160[1]: Destroyed consumer to label/Disk.5.eli.
Solaris: WARNING: can't open objset for Ocean/Images
panic: solaris assert: bpobj_iterate(defer_bpo, spa_free_sync_cb, zio, 
tx) == 0 (0x6 == 0x0), file: 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c, 
line: 5576
cpuid = 1
KDB: stack backtrace:
#0 0xffffffff802f14ce at kdb_backtrace+0x5e
#1 0xffffffff802bf877 at panic+0x187
#2 0xffffffff808e0c48 at spa_sync+0x978
#3 0xffffffff808f1011 at txg_sync_thread+0x271
#4 0xffffffff802960b7 at fork_exit+0x117
#5 0xffffffff804b7a7e at fork_trampoline+0xe
GEOM_ELI: Device label/Disk.5.eli destroyed.
GEOM_ELI: Device label/Disk.4.eli destroyed.

The command run was:
# zpool import -F Ocean
which worked under ZFS v15.

The panic is 100% reproducible. The reason for the import was that I 
wanted to try to clear the log (something which seems possible on v28 
but not on v15) with: zpool clear Ocean, and that also caused a panic. 
I then exported the pool and attempted the import. The same import 
command on v15 works and imports the pool, but it is faulted (due to 
the log).

Is there anything I can test or do about this? I've also tried importing 
with -o failmode=continue, but that does nothing to prevent the panic.

The other pool on the same system works perfectly so far with v28. Many 
thanks to you and PJD for your work on ZFS.

Regards,
Christian Elmerot

