Areca vs. ZFS performance testing.

Scott Long scottl at samsco.org
Thu Nov 13 09:15:19 PST 2008


Danny Carroll wrote:
> Danny Carroll wrote:
>> Jeremy Chadwick wrote:
>>> I'd like to see the performance difference between these scenarios:
>>>
>>> - Memory cache enabled on Areca, write caching enabled on disks
>>> - Memory cache enabled on Areca, write caching disabled on disks
>>> - Memory cache disabled on Areca, write caching enabled on disks
>>> - Memory cache disabled on Areca, write caching disabled on disks
>>>
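
A harness for stepping through those four scenarios might look like the
sketch below.  This is only a sketch: the Areca's onboard cache has to
be toggled in its BIOS or CLI, and the device names (da0-da4 behind the
Areca), the pool path /tank/bench, and the bonnie++ invocation are all
assumptions, not details taken from this thread.

#!/usr/bin/env python3
# Scenario harness (sketch).  Records the per-disk write-cache state
# via camcontrol, then runs bonnie++ and tags the log with it.  The
# controller's own cache must be toggled out-of-band (Areca BIOS/CLI).
import subprocess, datetime

DISKS = ["da0", "da1", "da2", "da3", "da4"]   # assumed device names
POOL_DIR = "/tank/bench"                      # assumed benchmark dir

def write_cache_enabled(disk):
    """Best-effort check of the on-disk write-cache setting."""
    if disk.startswith("ada"):
        # Directly attached ATA disk: `camcontrol identify` prints a
        # "write cache" row with supported/enabled columns.
        out = subprocess.run(["camcontrol", "identify", disk],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.strip().startswith("write cache"):
                return line.split()[-1] == "yes"
    else:
        # SCSI-attached disk (e.g. JBOD behind the Areca): read the
        # caching mode page and check the WCE bit.
        out = subprocess.run(["camcontrol", "modepage", disk, "-m", "8"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.strip().startswith("WCE"):
                return line.split()[-1] == "1"
    return None

def run_bonnie(tag):
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    with open("bonnie-%s-%s.log" % (tag, stamp), "w") as log:
        subprocess.run(["bonnie++", "-d", POOL_DIR, "-s", "16g",
                        "-u", "root"], stdout=log, stderr=subprocess.STDOUT)

if __name__ == "__main__":
    states = dict((d, write_cache_enabled(d)) for d in DISKS)
    print("disk write cache:", states)
    run_bonnie("wc-on" if all(states.values()) else "wc-off")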
> 
> 
> The initial results for an ICH9 vs Areca in JBOD mode can be found here:
> http://www.dannysplace.net/ZFS-JBODTests.html
> 
> Summary:
> 	5-disk ZFS RAID-Z array with atime turned off.
> 	ICH9      - block reads  avg 400MByte/Sec
> 	ICH9      - block writes avg 150MByte/Sec
> 	ArecaJBOD - block reads  avg 300MByte/Sec
> 	ArecaJBOD - block writes avg 160MByte/Sec
> 
> 
> The Areca seems to be slower in all except char and block writes.  Block
> reads are 75% as fast as the ICH9, and rewrites are about 85% as fast.
> 
> There seems to be little difference between enabling and disabling the
> disk write cache on the Areca.  This leads me to one of two conclusions
> (the sync-write probe sketched below can help tell these apart):
> 	1. Disabling the write cache does nothing on Seagate drives.
> 	2. IO to the drives is so slow that a write cache is irrelevant.
> 
> These are just some quick tests that I started with, mainly to compare
> the Areca bus against the ICH9 bus.  If anyone has tuning suggestions,
> now is the time to make them, before I migrate the ICH9 drives to the
> Areca bus.
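
Whether the cache toggle is actually reaching the drives can be probed
directly: time a burst of small fsync'd writes.  With the write cache
on (and the flush not making it to the platters) a drive acknowledges
them by the thousands per second; with the cache genuinely off, the
rate drops to roughly the rotational rate, a few hundred per second for
a 7200rpm disk.  If the number doesn't move when the setting is
toggled, the toggle isn't taking effect.  A minimal sketch, assuming
the pool is mounted at /tank; the path and write count are made up:

#!/usr/bin/env python3
# Sync-write probe: how many 4 KB fsync'd writes per second does the
# stack really complete?
import os, time

PATH = "/tank/bench/syncfile"   # assumed location on the pool
COUNT = 500                     # number of 4 KB synchronous writes

def sync_write_rate():
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    buf = b"\0" * 4096
    start = time.perf_counter()
    for _ in range(COUNT):
        os.write(fd, buf)
        os.fsync(fd)            # force the write (and a cache flush) out
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(PATH)
    return COUNT / elapsed

if __name__ == "__main__":
    print("%.0f fsync'd 4 KB writes/sec" % sync_write_rate())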

The Areca controller likely doesn't buffer/cache for disks in JBOD mode,
as others in this thread have stated.  Without buffering, simple disk
controllers will almost always be faster than accelerated RAID
controllers, because the accelerated controllers add more latency
between the host and the disk.  A simple controller funnels data
directly from the host to the disk as soon as it receives a command.
An accelerated controller, however, has a CPU and a mini-OS on board
that must schedule the work coming from the host while also handling
its own tasks and interrupts.  That extra latency accumulates quickly
under benchmarks.  Your numbers clearly demonstrate this.
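
That per-command overhead can be made visible without a full benchmark
run.  Reading the same block from the raw device over and over means
that, after the first read, the drive answers from its own buffer, so
the measured time per command is mostly transport plus controller
firmware overhead rather than mechanics.  A minimal sketch; the device
name is an assumption, and it needs root:

#!/usr/bin/env python3
# Per-command latency probe: re-read one 4 KB block and average the
# round-trip time.  Run against a disk on each controller and compare.
import os, time

DEV = "/dev/da0"        # assumed: one disk behind the controller under test
IOSIZE = 4096
COUNT = 10000

def avg_latency_us(dev):
    fd = os.open(dev, os.O_RDONLY)   # FreeBSD disk devices are unbuffered
    start = time.perf_counter()
    for _ in range(COUNT):
        os.lseek(fd, 0, os.SEEK_SET)
        os.read(fd, IOSIZE)          # same block every time: served from
                                     # the drive's buffer after the first
    os.close(fd)
    return (time.perf_counter() - start) / COUNT * 1e6

if __name__ == "__main__":
    print("%s: %.0f us per 4 KB read" % (DEV, avg_latency_us(DEV)))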

Scott

