Areca vs. ZFS performance testing.

Danny Carroll fbsd at dannysplace.net
Thu Oct 30 21:47:49 PDT 2008


Jeremy Chadwick wrote:
> On Fri, Oct 31, 2008 at 02:07:56PM +1000, Danny Carroll wrote:
> - Memory cache enabled on Areca, write caching enabled on disks
> - Memory cache enabled on Areca, write caching disabled on disks
> - Memory cache disabled on Areca, write caching enabled on disks
> - Memory cache disabled on Areca, write caching disabled on disks

Does it matter what type of array we are talking about?  What I mean
is, do you want to see this with both RAID5 and RAID6 arrays?

Also, I'm pretty sure that in JBOD mode the cache (on the card) will do
nothing.  But I am not certain, so I'll run the tests there as well.

What about stripe sizes?  I mainly work with big files, so I was going
to use a correspondingly large stripe size.  But the bonnie++ tests
might give strange results in that case.
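
For what it's worth, here is roughly how I plan to invoke bonnie++ for
each configuration (just a sketch; the target directory and file size
are assumptions for my box):

    # working set well beyond RAM so caching can't hide the disks;
    # -n 0 skips the small-file creation phase
    bonnie++ -d /tank/bench -s 8g -n 0 -u root

The -s size needs to be at least a couple of times RAM, otherwise
cache effects dominate the numbers.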

> I don't know if the controller will let you disable use of memory cache,
> but I'm hoping it does.  I'm pretty sure it lets you disable disk
> write caching in its BIOS or via the CLI utility.
> 

It's been a while since I've had a hardware RAID card.  I'll see what
is available.
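
At minimum, the per-disk write cache should be visible from FreeBSD
itself.  For disks that attach through CAM (da devices, which is how
the Areca presents them), something along these lines should work
(device name is just an example):

    # display the SCSI caching mode page; WCE is the write-cache bit
    camcontrol modepage da0 -m 8
    # edit the page interactively to flip WCE
    camcontrol modepage da0 -m 8 -e

For the ata(4)-attached ICH disks there is the hw.ata.wc loader
tunable instead, if I remember right.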

> All of the tuning variables apply to i386 and amd64.
> 
> You do not need the vfs.zfs.debug variable; I'm not sure why you enabled
> that.  I imagine it will have some impact on performance.

Consider it gone.

> I do not know anything about kern.maxvnodes, or vfs.zfs.vdev.cache.size.
> 

At the moment I am not hitting anywhere near the kern.maxvnodes limit,
so I think it is irrelevant here.
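
(For the record, I am just comparing the live counter against the
ceiling with the standard sysctls:

    # current vnode usage vs. the configured maximum
    sysctl vfs.numvnodes kern.maxvnodes

and vfs.numvnodes is nowhere close.)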

> The tuning variables I advocate for a system with 2GB of RAM or more,
> on RELENG_7, are:
> 
> vm.kmem_size="1536M"
> vm.kmem_size_max="1536M"
> vfs.zfs.arc_min="16M"
> vfs.zfs.arc_max="64M"
> vfs.zfs.prefetch_disable="1"
> 
> You can gradually increase arc_min and arc_max by ~16MB increments as
> you see fit; you should see general performance improvements as they
> get larger (more data being kept in the ARC), but don't get too crazy.
> I've tuned arc_max up to 128MB before with success, but I don't want
> to try anything larger without decreasing kmem_size_*.

What is the ARC?  Is it the ZFS file cache?
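
Whatever it is, I can at least watch it during the runs.  Assuming the
usual FreeBSD sysctl names, something like:

    # current ARC size in bytes, plus the configured floor and ceiling
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_min vfs.zfs.arc_max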

> The only reason you need to adjust kmem_size and kmem_size_max is to
> increase the amount of available kmap memory which ZFS relies heavily
> on.  If the values are too low, under heavy I/O, the kernel will panic
> with kmem exhaustion messages (see the ZFS Wiki for what some look
> like, or my Wiki).
> 
> I would recommend you stick with a consistent set of loader.conf
> tuning variables, and focus entirely on comparing the performance of
> ZFS on the Areca controller vs. the ICH controller.

Once I am settled on a 'starting point' I won't be altering it for the
tests.
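
For the record, that starting point will be your suggested set, i.e.
/boot/loader.conf containing:

    vm.kmem_size="1536M"
    vm.kmem_size_max="1536M"
    vfs.zfs.arc_min="16M"
    vfs.zfs.arc_max="64M"
    vfs.zfs.prefetch_disable="1"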

> You can perform a "ZFS tuning comparison" later.  One step at a time;
> don't over-exert yourself quite yet.  :-)

Yeah, this is weekend stuff for me at the moment, so it will take me
some time to get things done.  First I need to figure out how I am
going to hook up 10 drives to my system.  I don't have the drive-bay
space, and I am not shelling out for a new case, so I am hunting around
for an ancient external disk cabinet.

> You can add raidz2 to this comparison list too if you feel it's
> worthwhile, but I think most people will be using raidz1.

I might as well do both.
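
For reference, the two layouts would be created along these lines (pool
and device names are only placeholders):

    # single-parity raidz across five disks
    zpool create tank raidz1 da0 da1 da2 da3 da4
    # double-parity raidz2 across the same set
    zpool create tank raidz2 da0 da1 da2 da3 da4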

-D

