Fatal trap 12: page fault panic with recent kernel with ZFS

Adam McDougall mcdouga9 at egr.msu.edu
Tue May 19 01:57:14 UTC 2009

On Mon, May 18, 2009 at 06:26:51PM -0700, Kip Macy wrote:

  On Mon, May 18, 2009 at 6:22 PM, Adam McDougall <mcdouga9 at egr.msu.edu> wrote:
  > On Mon, May 18, 2009 at 07:06:57PM -0500, Larry Rosenman wrote:
  > > On Mon, 18 May 2009, Kip Macy wrote:
  > > > The ARC cache allocates wired memory. The ARC will grow until there is
  > > > vm pressure.
  > > My crash this AM was with 4G real, and the ARC seemed to grow and grow, then
  > > we started paging, and then crashed.
  > > Even with the VM pressure it seemed to grow out of control.
  > > Ideas?
  > Before that but since 191902 I was having the opposite problem,
  > my ARC and thus Wired would grow up to approx arc_max until my
  > Inactive memory put pressure on ARC making it shrink back down
  > to ~450M where some aspects of performance degraded.  A partial
  > workaround was to add an arc_min, which isn't entirely successful,
  > and I found I could restore ZFS performance by temporarily squeezing
  > down Inactive memory by allocating a bunch of it myself; after
  > freeing that, ARC had no pressure and could grow towards arc_max
  > again until Inactive eventually rose.  Reported to Kip last night
  > and some cvs commit lists.  I never did run into Swap.
  That is a separate issue. I'm going to try adding a vm_lowmem event
  handler to drive reclamation instead of the current paging target.
  That shouldn't cause inactive pages to shrink the ARC.
  Most people consider out-of-the-box stability more important than getting
  the maximum ARC. However, for people like you who want the safety
  catches removed, I should make it possible to disable back-pressure.
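
A vm_lowmem registration of the kind described above would look roughly
like the following. This is only a hypothetical sketch of the approach,
not the actual commit; the function and tag names are illustrative:

```c
/*
 * Sketch: register a vm_lowmem event handler so the ARC reclaims
 * only when the VM signals genuine memory pressure, rather than
 * tracking the paging target.  Illustrative names, kernel context.
 */
#include <sys/param.h>
#include <sys/eventhandler.h>

static eventhandler_tag arc_event_lowmem_tag;

/* Invoked by the VM when free memory runs low. */
static void
arc_lowmem(void *arg __unused, int howto __unused)
{
        /* Shrink the ARC here, e.g. wake the reclaim thread. */
}

static void
arc_register_hooks(void)
{
        arc_event_lowmem_tag = EVENTHANDLER_REGISTER(vm_lowmem,
            arc_lowmem, NULL, EVENTHANDLER_PRI_FIRST);
}
```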

Thanks, I appreciate all this work.  Not allowing inactive pages to
shrink the ARC sounds great as an option.  I would be willing to bet
that allowing inactive pages to shrink the ARC would be far less
detrimental to most people who aren't running a constantly busy file
server load, and it's definitely important to try to protect untuned
systems.

Do you have any suggestions for increasing the amount of memory ARC
can use?  I've had difficulty increasing kmem past a few gigs on any
of my recent builds (all past the point where kmem was changed so it
could be more than ~2g) because at some point the kernel would stop
booting.  If I increase the tunables too far, a few lines of the
booting kernel print, followed by a long stream of page fault panics
or a sudden reboot.  With the recent change allowing the use of the
direct map, the ARC could easily use ample memory, except it turned
out not to be stable.
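
For reference, the tunables in question go in /boot/loader.conf; the
values below are purely illustrative (for a machine with a few GB of
RAM), not a recommendation from this thread:

```
# /boot/loader.conf -- illustrative values only
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="1024M"
vfs.zfs.arc_min="512M"
```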

Thanks again.

More information about the freebsd-current mailing list