ZFS system lockup

Dave Duchscher daved at nostrum.com
Mon Jul 6 21:23:23 UTC 2015


In the process of diagnosing an IO performance problem with our virtual environment, we found that the FreeBSD instances used for testing were locking up and needing to be reset.  Moving to real hardware and running the same tests, we are able to reproduce the lockup.

We are testing with fio, running a few read and write tests over and over again.  Watching via top, the system locks up, and the last update from top reports that wired memory has taken nearly all the memory (2G in the system, top shows 1947M wired).  ARC size at the time of the latest lockup was around 437M.  I can keep the system from locking up if I reduce the maximum ARC size to 512M, in which case wired memory floats around 1G.  With the maximum ARC set to 768M or higher, we get consistent lockups after running for a few hours.
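For reference, the tests are along these lines (a minimal sketch only; the job names, dataset path, sizes, and block sizes here are illustrative, not our exact jobs):

```
; illustrative fio job file -- paths and sizes are placeholders
[global]
directory=/tank/fiotest   ; hypothetical ZFS dataset mountpoint
size=4g                   ; per-job file size
bs=128k                   ; matches default ZFS recordsize
ioengine=psync
runtime=300
time_based

[seqwrite]
rw=write

[seqread]
rw=read
```

We loop jobs like these back to back until the machine wedges.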

What is using this wired memory?
Is there a way to keep wired memory under control with ZFS besides shrinking the ARC cache?
Is there any guidance on how much wired memory will be used for various ARC sizes?
Is 2G just too little memory to run ZFS?

We understand that the maximum ARC size will need to be tuned in some cases, but shrinking it down to 512M seems low.
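To cap the ARC, we are setting the standard tunable in /boot/loader.conf (512M shown here, the value that currently keeps the box alive):

```
vfs.zfs.arc_max="512M"
```

The effective value can be checked after boot with sysctl vfs.zfs.arc_max.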

This test hardware has a single 250G disk and 2G of RAM.  The OS is FreeBSD 10.1-RELEASE.  Upgrading the system to stable, we saw similar results.  Currently, the system is running 10.1-RELEASE since that is what is used elsewhere.

We have also seen a lockup on one of our database nodes, which has 20G of RAM, that we thought was caused by a SAN switch in our VM system.  Now we are not so sure.

--
Dave
