Repeatable ZFS "kmem map too small" panic on 8.0-STABLE

Doug Poland doug at polands.org
Fri Jan 22 04:28:47 UTC 2010


On Thu, Jan 21, 2010 at 05:27:31PM -0500, Adam McDougall wrote:
> On 01/21/10 17:21, Artem Belevich wrote:
> >On Thu, Jan 21, 2010 at 12:46 PM, Gary Corcoran<gcorcoran at rcn.com>  wrote:
> >>Adam McDougall wrote:
> >>>
> >>>Put this in /boot/loader.conf:
> >>>vm.kmem_size="20G"
> >>>
> >>>It is intentionally higher than your amount of RAM.
> >>
> >>Would you mind explaining...
> >>1) why this fixes the kmem_map too small problem ?
> >
> >Because it explicitly makes kmem_map larger.
> >
> >>2) why it should be larger than the amount of RAM, and by how much ?
> >
> >ZFS needs access to a lot of memory for ARC and it allocates/frees
> >memory fairly randomly. That raises two issues.
> >
> >The first issue is that the kernel is, by default, fairly
> >conservative about its memory needs. vm.kmem_size, which limits the
> >address space for in-kernel memory allocations, is by default set to
> >a fairly low value that works reasonably well in most cases. For
> >ZFS, however, it needs to be bumped up to allow ZFS to allocate
> >large amounts of memory.
> >
> >The second problem is memory fragmentation. If you set vm.kmem_size
> >equal to the physical memory size, over time you may end up in a
> >situation where there is plenty of physical memory available, but no
> >single contiguous block of address space to map that memory into.
> >The FreeBSD allocator is pretty good at avoiding fragmentation, but
> >you still need more address space than the amount of memory that
> >could potentially be allocated. I'd say vm.kmem_size should be a few
> >multiples of the amount of memory you're planning to allocate.
> >
> >Just my $0.02
> >
> 
> Exactly what I would have said, thanks :)  I'd imagine kmem_size
> could be much larger still, closer to kmem_size_max, but I just picked
> 20G as a default for my servers that have 8G or less, and I haven't
> seen an out-of-kmem panic since I've been able to raise kmem_size
> sufficiently high (a change made around 6 months ago).  kmem_size
> doesn't seem to "grow" (much?) towards kmem_size_max; it is what it
> is, and you need to make sure it is big enough for your needs.  I have
> systems with just one gig and they run fine.
>
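As a rough sketch of the rule of thumb above, the arithmetic behind
picking 20G for an 8G box works out like this (the 2.5x multiplier is
my assumption to match the numbers in this thread, not a FreeBSD
default):

```shell
# Pick vm.kmem_size as a small multiple of RAM (multiplier is an assumption).
ram_gb=8
kmem_gb=$(( ram_gb * 25 / 10 ))    # ~2.5x RAM, using integer math
echo "vm.kmem_size=\"${kmem_gb}G\""   # prints vm.kmem_size="20G"
```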
Interesting discussion :)  I added vm.kmem_size="20G" to
/boot/loader.conf per your instructions.  This time it didn't panic at
the same point in the test; however, the filesystem appears to be
"hanging".
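For reference, the entry as applied (value taken from the suggestion
above; it's an example, not a recommendation for every box):

```
# /boot/loader.conf
vm.kmem_size="20G"   # kernel address space; deliberately larger than RAM
```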

On the fdisk test, I hit <CTRL>-T and get:
cmd: fsdisk 37066 [zio->io_cv] 245.62r 0.12u 25.10s 0

My various metrics are still running, but anything that needs the
filesystem appears "stuck".

The memory usage of the "solaris" malloc type (vmstat -m | grep
solaris) peaked at 3334781952 bytes (3180.30 MiB).

On `zpool iostat 2`, a <CTRL>-T shows:
load: 0.00  cmd: zpool 934 [tx->tx_quiesce_done_cv] 2052.45r 0.06u 0.39s 0% 0k

I was also logging `vmstat -m | grep solaris` to disk every second,
and it's hung at:
load: 0.00  cmd: sh 38551 [zfs] 909.85r 0.00u 0.00s 0% 16k

Any suggestions?

-- 
Regards,
Doug
