7.2 dies in zfs

Adam McDougall mcdouga9 at egr.msu.edu
Sun Nov 22 05:20:33 UTC 2009


On Sat, Nov 21, 2009 at 11:36:43AM -0800, Jeremy Chadwick wrote:

  
  On Sat, Nov 21, 2009 at 08:07:40PM +0100, Johan Hendriks wrote:
  > Randy Bush <randy at psg.com> wrote:
  > > imiho, zfs can not be called production ready if it crashes if you
  > > do not stand on your left leg, put your right hand in the air, and
  > > burn some eye of newt.
  > 
  > This is not a rant, but where did you read that ZFS on FreeBSD 7.2
  > has been marked as production ready?
  > As far as I know, ZFS is called production ready on FreeBSD 8.0.
  > 
  > If you boot your system, it probably tells you it is still experimental.
  > 
  > Try running FreeBSD 7-STABLE to get the latest ZFS version, which on
  > FreeBSD is v13.
  > On 7.2 it is still at v6 (if I remember right).
  
  RELENG_7 uses ZFS v13, RELENG_8 uses ZFS v18.
  
  RELENG_7 and RELENG_8 both, more or less, behave the same way with
  regards to ZFS.  Both panic on kmem exhaustion.  No one has answered my
  question as far as what's needed to stabilise ZFS on either 7.x or 8.x.

I have a stable public ftp/http/rsync/cvsupd mirror that runs ZFS v13.
It has been stable since mid-May.  I have not had a kmem panic on any
of my ZFS systems for a long time; it's a matter of making sure there is
enough kmem at boot (not depending on kmem_size_max) and that it is big
enough that fragmentation does not cause a premature allocation failure
due to the lack of a large-enough contiguous chunk.  This requires the
platform to support a kmem size that is "big enough"... i386 can barely
muster 1.6G, and sometimes that might not be enough.  I'm pretty sure all
of my currently existing ZFS systems are amd64, where the kmem can now be
huge.  On the busy fileserver with 20 gigs of RAM running FreeBSD 8.0-RC2
#21: Tue Oct 27 21:45:41 EDT 2009, I currently have:
vfs.zfs.arc_max=16384M
vfs.zfs.arc_min=4096M
vm.kmem_size=18G
The ARC settings here try to encourage the system to favor the ARC cache
over whatever else the Inactive memory shown in 'top' contains.
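For context, these are boot-time loader tunables; a sketch of how they
would sit in /boot/loader.conf (placing them in that file is my assumption
here, but it is where such tunables normally go on FreeBSD):

```shell
# /boot/loader.conf -- sketch of the tunables described above.
# Values match the 20G-RAM fileserver example; scale to your hardware.
vm.kmem_size="18G"          # fix kmem at boot; do not rely on kmem_size_max
vfs.zfs.arc_max="16384M"    # let the ARC use most of the kmem
vfs.zfs.arc_min="4096M"     # keep the ARC from shrinking too far under pressure
```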

On other systems that are hit less hard, I simply set:
vm.kmem_size="20G"
I even do this on systems with much less RAM; it doesn't seem to matter,
and it works (this is on an amd64 with only 8G).  Most of my ZFS systems
are 7.2-stable; some are 8.0-something.  Anything with v13 is much better
than v6, but 8.0 has additional fixes that have not been backported to 7
yet.  I don't consider the additional fixes in 8 required for my uses yet,
although I'm planning on moving forward eventually.  I would consider 2G
kmem a realistic minimum on a system that will see some serious disk I/O
(regardless of how much RAM the system actually contains, as long as the
kmem size can be set that big without the system blowing chunks).  Hope
this personal experience helps.

  
  The people who need to answer the question are those who are familiar
  with the code.  Specifically: Kip Macy, Pawel Jakub Dawidek, and anyone
  else who knows the internals.  Everyone else in the user community is
  simply guessing + going crazy trying to figure out a solution.
  
  As much as I appreciate all the work that has been done to bring ZFS to
  FreeBSD -- and I do mean that! -- we need answers at this point.
  
  -- 
  | Jeremy Chadwick                                   jdc at parodius.com |
  | Parodius Networking                       http://www.parodius.com/ |
  | UNIX Systems Administrator                  Mountain View, CA, USA |
  | Making life hard for others since 1977.              PGP: 4BD6C0CB |
  _______________________________________________
  freebsd-stable at freebsd.org mailing list
  http://lists.freebsd.org/mailman/listinfo/freebsd-stable
  To unsubscribe, send any mail to "freebsd-stable-unsubscribe at freebsd.org"
  

