misc/129174: [nfs][zfs][panic] NFS v3 Panic when under high load exporting ZFS file system

Weldon Godfrey wgodfrey at ena.com
Tue Nov 25 05:30:02 PST 2008

>Number:         129174
>Category:       misc
>Synopsis:       [nfs][zfs][panic] NFS v3 Panic when under high load exporting ZFS file system
>Confidential:   no
>Severity:       critical
>Priority:       low
>Responsible:    freebsd-bugs
>State:          open
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Tue Nov 25 13:30:01 UTC 2008
>Originator:     Weldon Godfrey
>Release:        FreeBSD 7.1-PRERELEASE (also happens under 8-CURRENT and 7.x)
FreeBSD store1.mail.ena.net 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE #2: Thu Nov 20 10:41:36 CST 2008     root at store1.mail.ena.net:/usr/obj/usr/src/sys/GENERIC  amd64
This issue was originally reported under another ticket that turned out to be unrelated.  I am using two IBM EXP 3000 SAS chassis with a total of 24 300GB 25K SAS drives on a 3ware 9690SE-8 controller in a Dell PowerEdge 2950-III with two 4-core 2.5GHz Intel CPUs and 16GB of RAM.  I am testing by running 9 instances of Postmark from CentOS 5.2 x86_64 (previously 3.x i386) NFS clients.  Whether I run one instance on each of 9 machines or all 9 on one machine, after about 2-4 hours of running the benchmark the system panics and reboots.  With help from the freebsd-fs group, it was determined that the vnode is becoming invalid.  I have tried several things, including FreeBSD 8-CURRENT and the ZFS v11 patch, but nothing helps except forcing NFS v2, so this appears to be a bug in the v3 code.

This is my loader.conf; the commented-out lines are values I have tried that had no effect:


The RAID is configured as RAID 10; each chassis is one side of the mirror (the first 11 drives of each chassis, with the final drive of each marked as a spare).

Run multiple instances of the Postmark benchmark over NFS v3 from NFS clients for several hours.
Workaround: don't use NFS v3; use the -2 option with mountd.
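A minimal sketch of applying the workaround on the server, assuming mountd is started through the standard rc.conf knobs (the exact set of other nfs_* variables in use on the reporter's machine is not shown in this PR):

```shell
# /etc/rc.conf (sketch) -- force clients down to NFSv2 as a workaround.
# mountd's -2 flag restricts clients to the version 2 NFS protocol only.
nfs_server_enable="YES"
mountd_enable="YES"
mountd_flags="-2"
```

After editing rc.conf, restart the daemon with `/etc/rc.d/mountd restart` (or reboot) so the new flags take effect; clients that had mounted with NFSv3 will need to remount.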
