Areca vs. ZFS performance testing.

Jeremy Chadwick koitsu at FreeBSD.org
Thu Oct 30 21:34:14 PDT 2008


On Fri, Oct 31, 2008 at 02:07:56PM +1000, Danny Carroll wrote:
> Jeremy Chadwick wrote:
> > I think these sets of tests are good.  There are some others I'd like to
> > see, but they'd only be applicable if the 1231-ML has hardware cache.  I
> > can mention what those are if the card does have hardware caching.
> 
> The card comes standard with 256MB of cache.

I'd like to see the performance difference between these scenarios:

- Memory cache enabled on Areca, write caching enabled on disks
- Memory cache enabled on Areca, write caching disabled on disks
- Memory cache disabled on Areca, write caching enabled on disks
- Memory cache disabled on Areca, write caching disabled on disks

I don't know whether the controller will let you disable its memory
cache, but I'm hoping it does.  I'm fairly sure it lets you disable disk
write caching in its BIOS or via the CLI utility.

> >> I do have some concern about the size of the eventual array and ZFS' use
> >> of system memory.  Are there guidelines available that give advice on
> >> how much memory a box should have with large ZFS arrays?
> > 
> > The general concept is: "the more RAM the better".  However, if you're
> > using RELENG_7, then there's not much point (speaking solely about ZFS)
> > to getting more than maybe 3 or 4GB; you're still limited to a 2GB kmap
> > maximum.
> > 
> > Regarding size of the array vs. memory usage: as long as you tune kmem
> > and ZFS ARC, you shouldn't have much trouble.  There have been some
> > key people reporting lately that they run very large ZFS arrays without
> > issue, with proper tuning.
> 
> I followed the recommendations here:
> http://wiki.freebsd.org/ZFSTuningGuide
> 
> vm.kmem_size="1024M"
> vm.kmem_size_max="1024M"
> vfs.zfs.debug=1
> 
> And : kern.maxvnodes=400000
> 
> I have not added the following because they were listed in the i386
> section.  (These values were quoted for a machine with 768MB of RAM)
> vfs.zfs.arc_max="40M"
> vfs.zfs.vdev.cache.size="5M"
> 
> Am I right in assuming these do not apply to amd64?  The article was not
> specific.

All of the tuning variables apply to i386 and amd64.

You do not need the vfs.zfs.debug variable; I'm not sure why you enabled
it.  I imagine it will have some performance impact, so I'd leave it out
when benchmarking.

I do not know anything about kern.maxvnodes or vfs.zfs.vdev.cache.size.

The tuning variables I advocate for a system with 2GB of RAM or more,
on RELENG_7, are:

vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="64M"
vfs.zfs.prefetch_disable="1"

You can gradually increase arc_min and arc_max in ~16MB increments as
you see fit; you should see general performance improvements as they
get larger (more data being kept in the ARC), but don't get too crazy.
I've tuned arc_max up to 128MB before with success, but I don't want
to try anything larger without decreasing kmem_size_*.
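
Once the machine is up with those settings, it's worth confirming what
actually took effect.  Something along these lines should do it (the
arcstats sysctl is from memory and may not exist on older ZFS bits, so
don't be surprised if it isn't there):

sysctl vm.kmem_size vm.kmem_size_max
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
# Current ARC size in bytes, if your ZFS version exports it:
sysctl kstat.zfs.misc.arcstats.size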

> > Also, just a reminder: do not pick a value of 2048M for kmem_size or
> > kmem_size_max; the machine won't boot/work.  You shouldn't go above
> > something like 1536M, although some have tuned slightly above that
> > with success.  (You need to remember that there is more to kernel
> > memory allocation than just this, so you don't want to exhaust it all
> > assigning it to kmap.  Hope that makes sense...)
> 
> It makes sense.   I'm using 1024 at the moment, but I've never really
> looked into what memory is actually being used.
> 
> Tuning advice here would be well received :-)

The only reason you need to adjust kmem_size and kmem_size_max is to
increase the amount of available kmap memory, which ZFS relies on
heavily.  If the values are too low, the kernel will panic under heavy
I/O with kmem exhaustion messages (see the ZFS Wiki, or my Wiki, for
what those look like).
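
If you want to keep an eye on how close you're getting to that ceiling
while you run the benchmarks, the kernel malloc statistics are a decent
rough gauge; as far as I know the ZFS allocations are lumped under the
"solaris" type on FreeBSD:

# The kmap ceiling you configured:
sysctl vm.kmem_size
# Per-type kernel malloc usage; ZFS memory shows up under "solaris":
vmstat -m | grep -i solaris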

I would recommend you stick with a consistent set of loader.conf
tuning variables, and focus entirely on comparing the performance of
ZFS on the Areca controller vs. the ICH controller.

You can perform a "ZFS tuning comparison" later.  One step at a time;
don't over-exert yourself quite yet.  :-)

You can add raidz2 to this comparison list too if you feel it's
worthwhile, but I think most people will be using raidz1.
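
For what it's worth, the pool creation side of the comparison is the
easy part; something like the following, with device names depending
entirely on how your disks show up (Areca volumes as da(4), ICH disks
as ad(4)), and destroying the previous pool between runs:

# raidz1 across four disks behind the Areca (example device names):
zpool create areca raidz1 da0 da1 da2 da3

# Same layout on the ICH ports for the comparison run:
zpool create ich raidz1 ad4 ad6 ad8 ad10

# raidz2 variant, if you decide it's worth including:
zpool create areca2 raidz2 da0 da1 da2 da3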

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |


