8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic
doug at polands.org
Wed Jan 13 20:26:43 UTC 2010
On Wed, January 13, 2010 13:57, Ivan Voras wrote:
> 2010/1/13 Doug Poland <doug at polands.org>:
>> This is the state of the machine when it panicked this time:
>> panic: kmem_malloc(131072): kmem_map too small: 1296957440 total
>> cpuid = 1
>> /boot/loader.conf: vfs.zfs.arc_max=512M
>> vfs.numvnodes: 660
>> vfs.zfs.arc_max: 536870912
>> vfs.zfs.arc_meta_limit: 134217728
>> vfs.zfs.arc_meta_used: 7006136
>> vfs.zfs.arc_min: 67108864
>> vfs.zfs.zil_disable: 0
>> vm.kmem_size: 1327202304
>> vm.kmem_size_max: 329853485875
> (from the size of arc_max I assume you did remember to reboot after
> changing loader.conf and before testing again but just checking - did you?)
Yes, I did reboot.
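For reference, the arc_max cap is set via the standard loader tunable; a minimal /boot/loader.conf fragment would look like the following (the vm.kmem_size line is an optional extra knob with an illustrative value, not something taken from my actual config):

```
# /boot/loader.conf -- ZFS memory tuning (illustrative)
vfs.zfs.arc_max="512M"      # cap the ARC, as reported above
#vm.kmem_size="1536M"       # optional: pin the kernel map size (illustrative value)
```

These are loader tunables, so a reboot is required for changes to take effect.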
> Can you monitor and record kstat.zfs.misc.arcstats.size sysctl while
> the test is running (and crashing)?
> This looks curious - your kmem_max is ~1.2 GB, arc_max is 0.5 GB and
> you are still having panics. Is there anything unusual about your
> system? Like unusually slow CPU, unusually fast or slow drives?
Don't think there is anything unusual. This is a 5-year-old HP DL385.
It has two 2.6GHz Opteron 252 CPUs. The disks are 6x36GB P-SCSI.
They are behind an HP Smart Array 6i controller. I had to configure
each drive as "RAID0" in order to make it visible to the OS. Kinda hokey
if you ask me.
dmesg | grep -i CPU
CPU: AMD Opteron(tm) Processor 252 (2605.92-MHz K8-class CPU)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
smartctl -a /dev/da0
Device: COMPAQ RAID 0 VOLUME Version: OK
Device type: disk
Local Time is: Wed Jan 13 14:21:44 2010 CST
Device does not support SMART
dmesg | grep -i smart
ciss0: <HP Smart Array 6i> port 0x5000-0x50ff mem
0xf7ef0000-0xf7ef1fff,0xf7e80000-0xf7ebffff irq 24 at device 4.0 on
> I don't have any ideas smarter than reducing arc_max by half, then
> trying again, and continuing to reduce it until it works. It would be very
> helpful if you could monitor the kstat.zfs.misc.arcstats.size sysctl
> while you are doing the tests to document what is happening to the
> system. If it by any chance stays the same you should probably monitor
> "vmstat -m".
OK, will monitor on the next run. Thanks for your help so far.
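A sketch of such a monitor, written as a small sh function (the name `arcmon`, the defaults, and the log path in the usage comment are my own invention; assumes FreeBSD's sysctl(8)):

```shell
#!/bin/sh
# arcmon: sample kstat.zfs.misc.arcstats.size at a fixed interval.
# Each output line is a timestamp followed by the sysctl's own output.
arcmon() {
    interval=${1:-5}    # seconds between samples (default 5)
    count=${2:-12}      # number of samples to take (default 12)
    n=0
    while [ "$n" -lt "$count" ]; do
        printf '%s ' "$(date '+%Y-%m-%d %H:%M:%S')"
        sysctl kstat.zfs.misc.arcstats.size || break
        n=$((n + 1))
        sleep "$interval"
    done
}

# Example usage during the unixbench run (path illustrative):
#   arcmon 5 720 > /var/tmp/arcstats.log
```

If the ARC size stays flat while kmem usage climbs, running `vmstat -m` alongside (as suggested above) should show which malloc type is actually growing.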
More information about the freebsd-questions mailing list