Areca vs. ZFS performance testing.
koitsu at FreeBSD.org
Thu Oct 30 20:32:10 PDT 2008
Cross-posting this to freebsd-fs, as I'm sure people there will have
other recommendations. (This is one of those rare cases where
cross-posting makes sense.)
On Fri, Oct 31, 2008 at 01:14:55PM +1000, Danny Carroll wrote:
> I've just become the proud new owner of an Areca 1231-ML which I plan to
> use to set up an office server.
> I'm very curious as to how ZFS compares to a hardware solution so I plan
> to run some tests before I put this thing to work.
> The purpose of this email is to find out if anyone would like to see
> specific things tested as well as perhaps get some advice on how to get
> the most information out of the tests.
> My setup:
> Supermicro X7SBE board with 2GB of RAM and an E6550 Core 2 Duo.
> FreeBSD 7.0-STABLE, built from amd64 sources from mid-August.
> 1 x ST9120822AS 120GB disk (for the OS)
> For the array(s):
> 9 x ST31000340AS 1TB disks
> 1 x ST31000333AS 1TB disk (trying to swap this for a ST31000340AS)
> My thoughts are to do the following tests with bonnie++:
> 1  5-disk Areca RAID5
> 2  5-disk ZFS RAID-Z1 (connected to Areca in JBOD mode)
> 3  5-disk ZFS RAID-Z1 (connected to ICH9 on-board SATA controller)
> 4  5-disk Areca RAID6
> 5  5-disk ZFS RAID-Z2 (connected to Areca in JBOD mode)
> 6  5-disk ZFS RAID-Z2 (connected to ICH9 on-board SATA controller)
> 7  10-disk Areca RAID5
> 8  10-disk ZFS RAID-Z1 (connected to Areca in JBOD mode)
> 9  10-disk Areca RAID6
> 10 10-disk ZFS RAID-Z2 (connected to Areca in JBOD mode)
> My aim is to see what sort of performance gain you get by buying an
> Areca card for use in JBOD, as well as how ZFS compares to the
> hardware solution, which offers write caching etc. I'm really only
> interested in testing ZFS's volume-management performance, so for that
> reason I will also put ZFS on the Areca RAID volumes. I'm not sure
> whether it's better to create two RAID volumes and stripe them in
> ZFS, or simply to present one large disk to ZFS.
> Any thoughts on this setup as well as advice on what options to give to
> bonnie++ (or suggestions on another disk testing package) are very welcome.
I think this set of tests is good. There are some others I'd like to
see, but they'd only be applicable if the 1231-ML has a hardware cache;
I can describe those if the card does have hardware caching.
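On the question of bonnie++ options: the main thing is to size the test
file well above physical RAM so ZFS's ARC can't serve reads from cache.
A minimal sketch (the /tank mount point is hypothetical; point -d at
wherever each test filesystem is mounted, and adjust RAM_MB to the
machine):

```shell
#!/bin/sh
# Sketch: build a bonnie++ command line sized to defeat caching.
# Assumptions: pool mounted at /tank (hypothetical), 2GB of RAM.
RAM_MB=2048                 # physical memory in MB
SIZE_MB=$((RAM_MB * 2))     # file size: at least 2x RAM so the ARC can't cache it
CMD="bonnie++ -d /tank -s ${SIZE_MB} -n 64 -u nobody -x 3 -q"
# -d: test directory   -s: file size (MB)   -n: small-file count (x1024)
# -u: run as this user -x: repeat 3 times   -q: quiet mode, CSV to stdout
echo "$CMD"
```

Repeating each run (-x 3) and keeping the CSV output makes it much
easier to spot outliers and to collate results across the ten
configurations later.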
> I do have some concern about the size of the eventual array and ZFS' use
> of system memory. Are there guidelines available that give advice on
> how much memory a box should have with large ZFS arrays?
The general concept is: "the more RAM the better". However, if you're
using RELENG_7, then (speaking solely about ZFS) there's not much point
in going beyond 3 or 4GB; you're still limited to a 2GB kernel memory
map.
Regarding the size of the array vs. memory usage: as long as you tune
kmem and the ZFS ARC, you shouldn't have much trouble. Several people
have reported recently that, with proper tuning, they run very large
ZFS arrays without issue.
Also, just a reminder: do not pick a value of 2048M for vm.kmem_size or
vm.kmem_size_max; the machine won't boot/work. You shouldn't go above
something like 1536M, although some have tuned slightly above that
with success. (Remember that there is more to kernel memory allocation
than just ZFS, so you don't want to exhaust the entire kernel memory
map by handing it all to the ARC. Hope that makes sense...)
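For reference, the tuning described above lives in /boot/loader.conf.
A sketch of what such an entry might look like on a 2GB machine; these
values are illustrative starting points I'm assuming here, not tested
recommendations:

```
# /boot/loader.conf -- illustrative ZFS tuning for a 2GB RELENG_7 box
vm.kmem_size="1536M"        # stay below the ~2GB kmem ceiling
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"      # cap the ARC so kmem isn't exhausted
```

Reboot after changing these, and watch for kmem_malloc panics under
load; if they appear, lower vfs.zfs.arc_max further.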
> Can an AMD64 kernel make use of memory above 2GB?
Only on CURRENT; 7.x cannot, and AFAIK, will never be able to, as the
engineering efforts required to fix it are too great.
I look forward to seeing your numbers. Someone here might be able to
compile them into some graphs and other whatnots to make things easier
for future readers.
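On graphing: bonnie++'s -q mode emits a machine-readable CSV line,
which makes collation straightforward. A small sketch, assuming
bonnie++ 1.03's field layout, where (counting from 1) field 5 is block
write and field 11 is block read, both in K/s; verify the positions
against your bonnie++ version before trusting the numbers:

```shell
#!/bin/sh
# Sketch: pull block-write and block-read throughput (K/s) out of a
# bonnie++ CSV line, for feeding into gnuplot or a spreadsheet.
# Field positions are an assumption based on bonnie++ 1.03's layout.
extract_throughput() {
    # $1 = one CSV line captured from "bonnie++ -q"
    echo "$1" | awk -F, '{ printf "write=%s read=%s\n", $5, $11 }'
}
```

Running each of the ten configurations through this and tagging the
result with the configuration name gives a table that graphs almost
make themselves from.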
Thanks for doing all of this!
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP: 4BD6C0CB |