ZFS committed to the FreeBSD base.

Craig Boston craig at xfoil.gank.org
Tue Apr 10 00:35:11 UTC 2007

On Sat, Apr 07, 2007 at 11:24:14PM +0200, Bernd Walter wrote:
> On Sat, Apr 07, 2007 at 09:15:17PM +0200, Pawel Jakub Dawidek wrote:
> > Just committed a change. You can tune max and min ARC size via
> > vfs.zfs.arc_max and vfs.zfs.arc_min tunables.
> Thanks - I'd set c_max to 80M now and will see what happens, since
> I had such a panic again with 240M kmem.
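In case it helps anyone else following along: as I understand it these are boot-time tunables, so they go in /boot/loader.conf, something like the below (the sizes here are just examples, not recommendations):

```shell
# /boot/loader.conf -- example values only, pick sizes to suit your RAM
vfs.zfs.arc_max="83886080"    # cap the ARC at 80 MB
vfs.zfs.arc_min="16777216"    # let the ARC shrink down to 16 MB
```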

Hi, just wanted to chime in that I'm experiencing the same panic with
a fresh -CURRENT.

I'm seriously considering trying out ZFS on my home file server (this
should tell you how much I've come to trust pjd's work ;).  Anyway,
since it's a repurposed desktop with a crappy board, it's limited to
512MB RAM.  So I've been testing in a VMware instance with 512MB.  My
vm.kmem_size is defaulting to 169758720.

Works fine up until the point I start copying lots of files onto the ZFS
partition.  I tried the suggestion of reducing the tunables.  After
modifying the source to accept these values, I have it set to:

kstat.zfs.misc.arcstats.p: 33554432
kstat.zfs.misc.arcstats.c: 67108864
kstat.zfs.misc.arcstats.c_min: 33554432
kstat.zfs.misc.arcstats.c_max: 67108864
kstat.zfs.misc.arcstats.size: 20606976

This is after a clean boot before trying anything.  arcstats.size floats
right at the max for quite a while before the panic happens, so I
suspect something else is causing it to run out of kvm, perhaps the
normal buffer cache since I'm copying from a UFS filesystem.

panic: kmem_malloc(131072): kmem_map too small: 131440640 total allocated
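For reference, here's the back-of-the-envelope arithmetic on the numbers above (a quick shell sketch; the constants are just the values reported in this mail):

```shell
# Rough accounting of the values reported in this mail, in whole MB.
kmem_size=169758720     # vm.kmem_size default on this 512MB box
arc_max=67108864        # kstat.zfs.misc.arcstats.c_max
panic_total=131440640   # "total allocated" figure from the panic

echo "kmem_size:      $((kmem_size / 1048576)) MB"     # ~161 MB
echo "arc_max:        $((arc_max / 1048576)) MB"       # 64 MB
echo "kmem at panic:  $((panic_total / 1048576)) MB"   # ~125 MB
```

So the ARC is capped at 64 MB, yet kmem is around 125 MB of a ~162 MB map when the panic fires -- which is why I suspect something besides the ARC is eating kvm.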

Though the backtrace (assuming I'm loading the module symbols correctly)
seems to implicate ZFS.

#0  doadump () at pcpu.h:172
#1  0xc06bbaab in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:409
#2  0xc06bbd38 in panic (
    fmt=0xc094f28c "kmem_malloc(%ld): kmem_map too small: %ld total allocated")
    at /usr/src/sys/kern/kern_shutdown.c:563
#3  0xc0821e70 in kmem_malloc (map=0xc145408c, size=131072, flags=2)
    at /usr/src/sys/vm/vm_kern.c:305
#4  0xc0819d56 in page_alloc (zone=0x0, bytes=131072, pflag=0x0, wait=2)
    at /usr/src/sys/vm/uma_core.c:955
#5  0xc081bfcf in uma_large_malloc (size=131072, wait=2)
    at /usr/src/sys/vm/uma_core.c:2709
#6  0xc06b0eb1 in malloc (size=131072, mtp=0xc0bd0080, flags=2)
    at /usr/src/sys/kern/kern_malloc.c:364
#7  0xc0b66f67 in zfs_kmem_alloc (size=131072, kmflags=2)
    at /usr/src/sys/modules/zfs/../../compat/opensolaris/kern/opensolaris_kmem.c:67
#8  0xc0bb23ad in zio_buf_alloc (size=131072)
    at /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/zio.c:211
#9  0xc0ba4487 in vdev_queue_io_to_issue (vq=0xc3424ee4, pending_limit=Unhandled dwarf expression opcode 0x93
    at /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:213
    at /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:312
#11 0xc0bc69fd in vdev_geom_io_done (zio=0xc4435400)
    at /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:412
#12 0xc0b6ad19 in taskq_thread (arg=0xc2dfa0cc)
    at /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/os/taskq.c:833
#13 0xc06a54ba in fork_exit (callout=0xc0b6ac18 <taskq_thread>, 
    arg=0xc2dfa0cc, frame=0xd62cdd38) at /usr/src/sys/kern/kern_fork.c:814
#14 0xc08a8c10 in fork_trampoline () at /usr/src/sys/i386/i386/exception.s:205

I haven't tried increasing kmem yet -- I'm a bit leery of devoting so
much memory (presumably nonpageable, nonreclaimable) to the kernel.
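(If I do end up raising it, my understanding is that it's another loader tunable, i.e. something along these lines in /boot/loader.conf -- hypothetical values, I haven't actually tried this:)

```shell
# /boot/loader.conf -- hypothetical, untested values
vm.kmem_size="335544320"       # 320 MB
vm.kmem_size_max="335544320"   # kmem_size is clamped by this ceiling
```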

Admittedly I'm somewhat confused as to why ZFS needs its own special
cache rather than sharing the system's -- or at least why it doesn't
just use free physical pages allocated as VM objects rather than
precious kmem.  But I'm no VM guru :)

