ZFS Panic (3rd time)
Mike Carlson
mike at bayphoto.com
Tue Sep 9 19:19:19 UTC 2014
Thanks, Steve. That is troubling news!
zdb without parameters:
# zdb
data:
    version: 5000
    name: 'data'
    state: 0
    txg: 26
    pool_guid: 17275286806962484556
    hostid: 1566810261
    hostname: 'zfs-2.discdrive.bayphoto.com'
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 17275286806962484556
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 18387238254393289487
            nparity: 2
            metaslab_array: 33
            metaslab_shift: 37
            ashift: 12
            asize: 26005123629056
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 12919111950950057118
                path: '/dev/da0p1.nop'
                phys_path: '/dev/da0p1.nop'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 6204755784205312226
                path: '/dev/da1p1.nop'
                phys_path: '/dev/da1p1.nop'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 2095825539678825225
                path: '/dev/da2p1.nop'
                phys_path: '/dev/da2p1.nop'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 13288853293260483450
                path: '/dev/da3p1.nop'
                phys_path: '/dev/da3p1.nop'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 17651239728056787408
                path: '/dev/da4p1.nop'
                phys_path: '/dev/da4p1.nop'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 10121035556839569555
                path: '/dev/da5p1.nop'
                phys_path: '/dev/da5p1.nop'
                whole_disk: 1
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 8141814630245447282
                path: '/dev/da6p1.nop'
                phys_path: '/dev/da6p1.nop'
                whole_disk: 1
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 12107673881548157163
                path: '/dev/da7p1.nop'
                phys_path: '/dev/da7p1.nop'
                whole_disk: 1
                create_txg: 4
            children[8]:
                type: 'disk'
                id: 8
                guid: 13722803165058102841
                path: '/dev/da8p1.nop'
                phys_path: '/dev/da8p1.nop'
                whole_disk: 1
                create_txg: 4
            children[9]:
                type: 'disk'
                id: 9
                guid: 3812972074943976242
                path: '/dev/da9p1.nop'
                phys_path: '/dev/da9p1.nop'
                whole_disk: 1
                create_txg: 4
            children[10]:
                type: 'disk'
                id: 10
                guid: 3387379670710299146
                path: '/dev/da10p1.nop'
                phys_path: '/dev/da10p1.nop'
                whole_disk: 1
                create_txg: 4
            children[11]:
                type: 'disk'
                id: 11
                guid: 17209778087441255883
                path: '/dev/da11p1.nop'
                phys_path: '/dev/da11p1.nop'
                whole_disk: 1
                create_txg: 4
            children[12]:
                type: 'disk'
                id: 12
                guid: 14155860984589567095
                path: '/dev/da12p1.nop'
                phys_path: '/dev/da12p1.nop'
                whole_disk: 1
                create_txg: 4
        children[1]:
            type: 'raidz'
            id: 1
            guid: 7358812460992449364
            nparity: 2
            metaslab_array: 37
            metaslab_shift: 37
            ashift: 12
            asize: 24004729503744
            is_log: 0
            create_txg: 16
            children[0]:
                type: 'disk'
                id: 0
                guid: 16184992168562751178
                path: '/dev/da13p1.nop'
                phys_path: '/dev/da13p1.nop'
                whole_disk: 1
                create_txg: 16
            children[1]:
                type: 'disk'
                id: 1
                guid: 17273520967287205930
                path: '/dev/da14p1.nop'
                phys_path: '/dev/da14p1.nop'
                whole_disk: 1
                create_txg: 16
            children[2]:
                type: 'disk'
                id: 2
                guid: 17965068062958146105
                path: '/dev/da15p1.nop'
                phys_path: '/dev/da15p1.nop'
                whole_disk: 1
                create_txg: 16
            children[3]:
                type: 'disk'
                id: 3
                guid: 6440721779503392985
                path: '/dev/da16p1.nop'
                phys_path: '/dev/da16p1.nop'
                whole_disk: 1
                create_txg: 16
            children[4]:
                type: 'disk'
                id: 4
                guid: 5129596340557895557
                path: '/dev/da17p1.nop'
                phys_path: '/dev/da17p1.nop'
                whole_disk: 1
                create_txg: 16
            children[5]:
                type: 'disk'
                id: 5
                guid: 13197465381631225536
                path: '/dev/da18p1.nop'
                phys_path: '/dev/da18p1.nop'
                whole_disk: 1
                create_txg: 16
            children[6]:
                type: 'disk'
                id: 6
                guid: 13521709969101776408
                path: '/dev/da19p1.nop'
                phys_path: '/dev/da19p1.nop'
                whole_disk: 1
                create_txg: 16
            children[7]:
                type: 'disk'
                id: 7
                guid: 7379733698654539430
                path: '/dev/da20p1.nop'
                phys_path: '/dev/da20p1.nop'
                whole_disk: 1
                create_txg: 16
            children[8]:
                type: 'disk'
                id: 8
                guid: 10498685535964391283
                path: '/dev/da21p1.nop'
                phys_path: '/dev/da21p1.nop'
                whole_disk: 1
                create_txg: 16
            children[9]:
                type: 'disk'
                id: 9
                guid: 12185894059804382853
                path: '/dev/da22p1.nop'
                phys_path: '/dev/da22p1.nop'
                whole_disk: 1
                create_txg: 16
            children[10]:
                type: 'disk'
                id: 10
                guid: 6545374147807002239
                path: '/dev/da23p1.nop'
                phys_path: '/dev/da23p1.nop'
                whole_disk: 1
                create_txg: 16
            children[11]:
                type: 'disk'
                id: 11
                guid: 1183756296391348826
                path: '/dev/da24p1.nop'
                phys_path: '/dev/da24p1.nop'
                whole_disk: 1
                create_txg: 16
    features_for_read:
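If it would help, I can also dump the on-disk labels from the individual providers. As far as I know, zdb -l only reads the label directly from the device and should not trigger an import of the pool itself (assuming the plain da0p1 partition is the right thing to point it at now that the .nop providers from creation are gone):

# zdb -l /dev/da0p1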
Is there a way to "roll back" to an earlier transaction (txg)?

The behavior I can see at the moment is that any zpool or zfs command that
touches 'data' causes all of the drives to "scan", and then after about a
minute the system panics.
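From what I have read, zpool import has a recovery mode that can discard the last few transactions at import time, which may be the closest thing to a rollback. A rough sketch of what I am considering, assuming the pool can first be exported cleanly (-F and -n are the documented recovery and dry-run flags; I have also seen an undocumented -T <txg> rewind mentioned on the lists, which I would only use as a last resort):

# zpool export data
# zpool import -F -n data
# zpool import -F -o readonly=on data

The -n pass should only report which txg a recovery import would roll back to without changing anything, and importing read-only should at least avoid writing to the pool while we investigate.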
On Tue, Sep 9, 2014 at 12:04 PM, Steven Hartland <smh at freebsd.org> wrote:
> Your panic is being caused by the dereference of a null vdev
> in vdev_rele, but the issue seems to start earlier, as the
> zio value passed to vdev_mirror_scrub_done also looks invalid.
>
> The call point for this should be in zio_done, specifically:
>
>     if (zio->io_done)
>         zio->io_done(zio);
>
> So if zio is actually invalid then something is really wrong,
> which is unfortunately not much help :(
>
> With regards to zdb, try not passing a pool.
>
> ----- Original Message ----- From: "Mike Carlson" <mike at bayphoto.com>
>
> snip...
>
>>
>> #7  0xffffffff81860336 in vdev_rele (vd=0x0)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:1556
>> #8  0xffffffff81866800 in vdev_mirror_scrub_done (zio=0x3)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:193
>> #9  0xffffffff81866344 in vdev_mirror_io_start (zio=0xfffff80142733d00)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:286
>> #10 0xffffffff818803c4 in zio_vdev_io_start (zio=0xfffff8013eb20b10)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2571
>> #11 0xffffffff8187d796 in zio_suspend (spa=0xfffff8000e122000, zio=0xfffff8013eb20b10)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1456
>> #12 0xffffffff8180ddec in arc_read (pio=0x0, spa=0xfffff8001e37d000, bp=<value optimized out>,
>>     done=0x2, private=0x0, priority=6, zio_flags=0, arc_flags=<value optimized out>,
>>     zb=0xfffff8001ed06558)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3371
>> #13 0xffffffff818268f1 in traverse_prefetcher (spa=0xfffff8001e37d000, zilog=0xf01ff,
>>     bp=<value optimized out>, zb=<value optimized out>, dnp=0xfffff80142733d00,
>>     arg=<value optimized out>)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:451
>> #14 0xffffffff81825d14 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffffe000fe64800,
>>     bp=0xfffffe000fe64980, zb=0xfffffe104c762e88)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:250
>> #15 0xffffffff8182677f in traverse_dnode (td=0xfffffe104c763900, dnp=0xfffffe000fe64800,
>>     objset=110, object=26823324)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:417
>> #16 0xffffffff81826487 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffffe000fe61000,
>>     bp=0xfffffe001285ea00, zb=0xfffffe104c7630a8)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:309
>> #17 0xffffffff81825ee3 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     bp=0xfffffe0012867a00, zb=0xfffffe104c7631d8)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
>> #18 0xffffffff81825ee3 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     bp=0xfffffe0012842980, zb=0xfffffe104c763308)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
>> #19 0xffffffff81825ee3 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     bp=0xfffffe0012848000, zb=0xfffffe104c763438)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
>> #20 0xffffffff81825ee3 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     bp=0xfffffe000fe6d000, zb=0xfffffe104c763568)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
>> #21 0xffffffff81825ee3 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     bp=0xfffffe001282d000, zb=0xfffffe104c763698)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
>> #22 0xffffffff81825ee3 in traverse_visitbp (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     bp=0xfffff8013ed8f040, zb=0xfffffe104c763758)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
>> #23 0xffffffff81826714 in traverse_dnode (td=0xfffffe104c763900, dnp=0xfffff8013ed8f000,
>>     objset=110, object=0)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:407
>> #24 0xffffffff81826190 in traverse_visitbp (td=0xfffffe104c763900, dnp=0x0,
>>     bp=0xfffff8013ed7ea80, zb=0xfffffe104c7638e0)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:338
>> #25 0xffffffff81825af6 in traverse_prefetch_thread (arg=0xfffffe104cd2f0e0)
>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:470
>> #26 0xffffffff817fcc00 in taskq_run (arg=0xfffff801445bef30, pending=983551)
>>     at /usr/src/sys/modules/zfs/../../cddl/compat/opensolaris/kern/opensolaris_taskq.c:109
>> #27 0xffffffff808f5c26 in taskqueue_run_locked (queue=0xfffff8000e0eba00)
>>     at /usr/src/sys/kern/subr_taskqueue.c:333
>> #28 0xffffffff808f64a8 in taskqueue_thread_loop (arg=<value optimized out>)
>>     at /usr/src/sys/kern/subr_taskqueue.c:535
>> #29 0xffffffff80881a4a in fork_exit (callout=0xffffffff808f6400 <taskqueue_thread_loop>,
>>     arg=0xfffff8000e10aac0, frame=0xfffffe104c763a40)
>>     at /usr/src/sys/kern/kern_fork.c:995
>> #30 0xffffffff80c75a6e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
>> #31 0x0000000000000000 in ?? ()
>> Current language: auto; currently minimal