(ZFS?): panic: lockmgr: locking against myself
Peter Schuller
peter.schuller at infidyne.com
Mon Jul 30 21:25:55 UTC 2007
> vnode 0xffffff00037473e0: tag devfs, type VDIR
> usecount 0, writecount 0, refcount 1 mountedhere 0xffffff0003745ca0
> flags (VV_ROOT)
> lock type devfs: EXCL (count 1) by thread 0xffffff00010e6680 (pid 1)
Some additional facts:
Looking at the printouts, there is always a sequence of three or more vrele()
calls on the same vnode (three at least twice, more than three at least once),
in both the successful case and the panicking case. There are no vrele() calls
on any other vnodes in either case.
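(For reference, the kind of per-vrele() logging behind those printouts is
sketched below; the placement relative to vrele() in sys/kern/vfs_subr.c and
the message format are my own assumptions, not the exact patch.)

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/vnode.h>

/*
 * Sketch only: log each release of a vnode before handing it to the
 * real vrele(), so repeated releases of the same vnode show up as
 * consecutive lines with the same pointer.
 */
void
vrele_traced(struct vnode *vp)
{
        printf("vrele: vp=%p usecount=%d holdcnt=%d\n",
            vp, vp->v_usecount, vp->v_holdcnt);
        vrele(vp);
}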
Inserting enter/exit debug printouts in mountcheckdirs() confirms that all of
these calls occur within the bounds of a single call to mountcheckdirs(). Does
this not imply that there is some locking mismatch in the non-ZFS-specific
code? I must admit I find the locking confusing, with several locking/unlocking
functions and macros intermixed at different levels of the call stack. My
(incorrect) reading was that this panic should always happen, which is
obviously not the case.
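For completeness, the enter/exit instrumentation was roughly of the shape
below (a sketch assuming the mountcheckdirs(struct vnode *, struct vnode *)
in sys/kern/vfs_mount.c of this vintage; only the two printf() lines are
additions and their exact wording is illustrative):

void
mountcheckdirs(struct vnode *olddp, struct vnode *newdp)
{
        printf("mountcheckdirs: enter olddp=%p newdp=%p\n", olddp, newdp);
        /*
         * ... existing body: for every process whose current or root
         * directory is olddp, vref() newdp and vrele() olddp ...
         */
        printf("mountcheckdirs: exit olddp=%p newdp=%p\n", olddp, newdp);
}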
Running with vfs.zfs.debug=1 confirms that the vdev_geom open/attach/detach
sequence happens prior to any vrele(), even in the panicking case (i.e., ZFS
pool discovery seems to complete).
In the case of an expected provider not being found, vd->vdev_devid is NULL in
vdev_geom_open(), judging by the "provider not found" debug printout (perhaps
this is normal?).
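To be clear about what I mean by the last point: a devid-then-path lookup of
roughly the shape below (an illustrative sketch only, not the actual
vdev_geom.c code; both helper functions are hypothetical) would print
"provider not found" with vd->vdev_devid == NULL whenever the vdev is
identified by its path alone, which might make a NULL devid here unremarkable.

/*
 * Illustrative sketch, not the real code: try the stored device id
 * first, fall back to the configured path, and log when neither
 * yields a GEOM provider.
 */
static struct g_provider *
vdev_geom_find_provider(vdev_t *vd)
{
        struct g_provider *pp = NULL;

        if (vd->vdev_devid != NULL)
                pp = lookup_provider_by_devid(vd->vdev_devid);  /* hypothetical */
        if (pp == NULL && vd->vdev_path != NULL)
                pp = lookup_provider_by_path(vd->vdev_path);    /* hypothetical */
        if (pp == NULL)
                printf("vdev_geom_open: provider not found (devid=%s)\n",
                    vd->vdev_devid != NULL ? vd->vdev_devid : "NULL");
        return (pp);
}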
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller at infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey at scode.org
E-Mail: peter.schuller at infidyne.com Web: http://www.scode.org