New ZFSv28 patchset for 8-STABLE

Nik A Azam freebsd-list at nikazam.com
Sat Feb 12 19:01:14 UTC 2011


OK, looking at the SVN log, there was a recent change to vdev_geom.c after mm's
patch. I synced to revision r218540 and all is good now. Sorry for the noise!

nik

On Sat, Feb 12, 2011 at 9:54 AM, Nik A Azam <freebsd-list at nikazam.com> wrote:

> Hi Martin, all
>
> I'm testing ZFS v28 on FreeBSD 8-STABLE (r218583M, ZFS patch from
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20110208-nopython.patch.xz)
> and have been getting this panic every time I issue any zfs/zpool command.
> This is 100% reproducible.
>
> panic: _sx_xlock_hard: recursed on non-recursive sx GEOM topology @
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:380
>
>
> cpuid = 1
> KDB: stack backtrace:
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
> kdb_backtrace() at kdb_backtrace+0x37
> panic() at panic+0x182
> _sx_xlock_hard() at _sx_xlock_hard
> _sx_xlock() at _sx_xlock+0xa9
> vdev_geom_open_by_path() at vdev_geom_open_by_path+0x45
> vdev_geom_open() at vdev_geom_open+0x100
> vdev_open() at vdev_open+0xc9
> vdev_open_children() at vdev_open_children+0x39
> vdev_raidz_open() at vdev_raidz_open+0x4f
> vdev_open() at vdev_open+0xc9
> vdev_open_children() at vdev_open_children+0x39
> vdev_root_open() at vdev_root_open+0x40
> vdev_open() at vdev_open+0xc9
> spa_load() at spa_load+0x23f
> spa_load_best() at spa_load_best+0x4a
> pool_status_check() at pool_status_check+0x19
> zfsdev_ioctl() at zfsdev_ioctl+0x208
> devfs_ioctl_f() at devfs_ioctl_f+0x73
> kern_ioctl() at kern_ioctl+0x8b
> ioctl() at ioctl+0xec
> syscall() at syscall+0x41
> Xfast_syscall() at Xfast_syscall+0x2e
>
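> For anyone curious why this panics rather than deadlocks: the GEOM
> topology lock is an sx(9) lock initialized without SX_RECURSE, so a
> second exclusive acquisition by the same thread trips the assertion in
> _sx_xlock_hard(). A minimal sketch of the pattern (hypothetical names,
> not the actual vdev_geom code):
>
>     #include <sys/param.h>
>     #include <sys/lock.h>
>     #include <sys/sx.h>
>
>     static struct sx topo_lock;
>
>     static void
>     open_by_path(void)
>     {
>             /* Takes the lock again on the same thread; because the
>              * lock was initialized non-recursive, this panics with
>              * "recursed on non-recursive sx". */
>             sx_xlock(&topo_lock);
>             /* ... look up the GEOM provider ... */
>             sx_xunlock(&topo_lock);
>     }
>
>     static void
>     caller(void)
>     {
>             sx_init(&topo_lock, "GEOM topology"); /* no SX_RECURSE */
>             sx_xlock(&topo_lock);   /* caller already holds it ... */
>             open_by_path();         /* ... so this recursion panics */
>             sx_xunlock(&topo_lock);
>     }
>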
> I'm more than happy to investigate this further if given instructions on
> how to do so. I really appreciate the work you guys have put into
> FreeBSD/ZFS!
>
> Thanks,
> Nik
>
>
> On Mon, Jan 31, 2011 at 3:10 PM, Chreo <chreo at chreo.net> wrote:
>
>> Hello Martin,
>>
>> On 2010-12-16 13:44, Martin Matuska wrote:
>>
>>> Following the announcement of Pawel Jakub Dawidek (pjd at FreeBSD.org) I am
>>> providing a ZFSv28 testing patch for 8-STABLE.
>>>
>>> Link to the patch:
>>>
>>>
>>> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
>>>
>>>
>> I've tested
>> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20110116-nopython.patch.xz
>> with 8-STABLE from 2011-01-18
>>
>> Seems to work nicely except for a panic when importing a degraded pool on
>> GELI vdevs:
>> (captured from the screen and OCR'd)
>> vdev.geom_detach:156[1]: Closing access to label/Disk.4.eli.
>> vdev.geom_detach:160[1]: Destroyed consumer to label/Disk.4.eli.
>> vdev.geom_detach:156[1]: Closing access to label/Disk.5.eli.
>> vdev.geom_detach:160[1]: Destroyed consumer to label/Disk.5.eli.
>> Solaris: WARNING: can't open objset for Ocean/Images
>> panic: solaris assert: bpobj_iterate(defer_bpo, spa_free_sync_cb, zio, tx)
>> == 0 (0x6 == 0x0), file:
>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c,
>> line: 5576
>> cpuid = 1
>> KDB: stack backtrace:
>> #0 0xffffffff802f14ce at kdb_backtrace+0x5e
>> #1 0xffffffff802bf877 at panic+0x187
>> #2 0xffffffff808e0c48 at spa_sync+0x978
>> #3 0xffffffff808f1011 at txg_sync_thread+0x271
>> #4 0xffffffff802960b7 at fork_exit+0x117
>> #5 0xffffffff804b7a7e at fork_trampoline+0xe
>> GEOM_ELI: Device label/Disk.5.eli destroyed.
>> GEOM_ELI: Device label/Disk.4.eli destroyed.
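>>
>> For reference, the "solaris assert" above is ZFS's VERIFY-style check:
>> bpobj_iterate() returned 0x6, which is errno 6 (ENXIO on FreeBSD) and
>> is consistent with the "can't open objset" warning just before it. A
>> minimal sketch (an assumption, not the exact ZFS macro) of how such an
>> assertion panics:
>>
>>     #include <sys/param.h>
>>     #include <sys/systm.h>          /* panic() */
>>
>>     /* Hypothetical VERIFY0-style macro: panic unless expr returns 0. */
>>     #define VERIFY0(expr) do {                                      \
>>             int _err = (expr);                                      \
>>             if (_err != 0)                                          \
>>                     panic("solaris assert: %s == 0 (0x%x == 0x0), " \
>>                         "file: %s, line: %d",                       \
>>                         #expr, _err, __FILE__, __LINE__);           \
>>     } while (0)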
>>
>> The command run was:
>> # zpool import -F Ocean
>> and that worked with ZFS v15
>>
>> The panic is 100% reproducible. The reason for this import was that I
>> wanted to try to clear the log (something that seems to be possible on
>> v28 but not on v15) with "zpool clear Ocean", and that caused a panic.
>> An export was done and then the import was tried. Using the same
>> command on v15 works and imports the pool, but it is faulted (due to
>> the log).
>>
>> Is there anything I can test or do about this? I've also tried
>> importing with -o failmode=continue, and that does absolutely nothing
>> to prevent the panic.
>>
>> The other pool on the same system works perfectly so far with v28. Many
>> thanks to you and PJD for your work on ZFS.
>>
>> Regards,
>> Christian Elmerot
>>
>
>

