Areca vs. ZFS performance testing.
morganw at chemikals.org
Tue Dec 2 04:04:37 PST 2008
On Tue, 2 Dec 2008, Jan Mikkelsen wrote:
> Wes Morgan wrote:
>> On Sun, 16 Nov 2008, Matt Simerson wrote:
>>> The Areca cards do NOT have the cache enabled by default. I ordered the
>>> optional battery and RAM upgrade for my collection of 1231ML cards. Even
>>> with the BBWC, the cache is not enabled by default. I had to go out of
>>> my way to enable it, on every single controller.
>> Are you using these Areca cards successfully with large arrays? I found
>> a 1680i card for a decent price and installed it this weekend, but since
>> then I'm seeing the raidz2 pool that it's running hang so frequently
>> that I can't even trust using it. The hangs occur in both 7-stable and
>> 8-current with the new ZFS patch. The exact same settings that have been
>> rock solid for me before now don't want to work at all. The drives are
>> just set as JBOD -- the controller actually defaulted to this, so I
>> didn't have to make any real changes in the BIOS.
>> Any tips on your setup? Did you have any similar problems?
> I am seeing I/O related lockups on 7.1-PRE with an Areca ARC-1220 controller
> and eight drives in a RAID-6 array. The same hardware works fine with 6.3.
> When I run gstat while it is happening I see I/O performance drop and the
> time to service each write (ms/w) goes up, and then suddenly goes back down
> to a sensible value. I have seen it get to about 22000ms.
> The system is essentially unusable for writes, which limits the utility a
> bit. Reads seem fine.
> Is this similar to the behaviour you saw?
Not quite. The ZFS deadlock/hang affected both reads and writes, blocking
either of them indefinitely. They were "fixed" by the most recent set of
patches in -current.
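For anyone watching for the symptom Jan describes (ms/w climbing into the
thousands in gstat output), the check can be automated. Below is a minimal
sketch, not part of the original thread: it assumes the typical column
layout of FreeBSD `gstat -b` batch output (L(q), ops/s, r/s, kBps, ms/r,
w/s, kBps, ms/w, %busy, Name) and flags devices whose per-write service
time exceeds a threshold. The sample text and threshold are illustrative
only.

```python
# Sketch: flag devices with abnormally high write latency (ms/w) in
# gstat-style batch output. The column layout below is an assumption
# based on typical FreeBSD `gstat -b` output, not verified here.

SAMPLE = """\
dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps    ms/w  %busy Name
    0      5      5    320    1.2      0      0     0.0    1.1 ada0
   10    210      0      0    0.0    210  26880 22000.0   99.9 da0
"""

def slow_writers(text, threshold_ms=1000.0):
    """Return (device, ms/w) pairs for devices above threshold_ms."""
    hits = []
    for line in text.splitlines():
        fields = line.split()
        # Skip the "dT:" timing line and the column-header line.
        if len(fields) != 10 or fields[0] == "L(q)":
            continue
        try:
            ms_w = float(fields[7])  # ms/w column
        except ValueError:
            continue
        if ms_w > threshold_ms:
            hits.append((fields[9], ms_w))
    return hits

print(slow_writers(SAMPLE))
```

Feeding it live `gstat -b` snapshots in a loop would catch the transient
22000 ms spikes without staring at the screen.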
More information about the freebsd-fs mailing list