kern/131573: lock_init() assumes zero-filled struct

Brad Huntting huntting at glarp.com
Tue Feb 10 14:10:03 PST 2009


>Number:         131573
>Category:       kern
>Synopsis:       lock_init() assumes zero-filled struct
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    freebsd-bugs
>State:          open
>Quarter:        
>Keywords:       
>Date-Required:
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Tue Feb 10 22:10:02 UTC 2009
>Closed-Date:
>Last-Modified:
>Originator:     Brad Huntting
>Release:        7.1-STABLE
>Organization:
>Environment:
FreeBSD spork.glarp.com 7.1-STABLE FreeBSD 7.1-STABLE #1: Tue Feb 10 11:58:44 MST 2009     root at spork.glarp.com:/usr/src/sys/amd64/compile/SPORK  amd64

>Description:
In /sys/kern/subr_lock.c, the function lock_init() starts out with:

        /* Check for double-init and zero object. */
        KASSERT(!lock_initialized(lock), ("lock \"%s\" %p already initialized",
            name, lock));

This is fine for locks allocated with MALLOC(..., M_ZERO), but locks allocated on the stack (of some kernel thread) are garbage-filled, which triggers a panic here (assuming the kernel is built with options INVARIANTS).

The solution is either: (a) document that lock structs must be zero-filled prior to calling {mtx,rw,sx}_init(), or (b) remove the KASSERT().

Normally, (b) would be the obvious choice, since adding a bzero() before every call to {mtx,rw,sx}_init() defeats the check anyway.  But in most cases, locks are allocated in softc's, which are zero-filled, so there is no need to bzero().  Even in that case, though, the check is of questionable value, since locks are normally initialized in an attach() function and destroyed in a detach() function.  If your attach() method is being called twice, you have bigger problems than a reinitialized lock.


brad
>How-To-Repeat:

>Fix:


>Release-Note:
>Audit-Trail:
>Unformatted:
