Areca vs. ZFS performance testing.

Jeremy Chadwick koitsu at FreeBSD.org
Sun Nov 16 23:08:20 PST 2008


On Sun, Nov 16, 2008 at 10:06:42PM -0800, Matt Simerson wrote:
>
> On Nov 16, 2008, at 7:15 PM, Danny Carroll wrote:
>
>> Eirik Øverby wrote:
>>> I have noticed that my 3ware controllers, after updating firmware
>>> recently, have removed the JBOD option entirely, classifying it as
>>> something you wouldn't want to do with that kind of hardware
>>> anyway.  I believed then, and even more so now, that they are
>>> correct.
>>
>> It kinda depends.  If there were a good 8 or 16+ port SATA card out
>> there that *simply* did SATA with no bells and whistles, then there
>> would be no point buying a RAID adaptor when you want to use things
>> like ZFS.
>>
>> But there are no such cards available.
>
> Allow me to introduce you to Marvell.  They sell the SATA controller
> used in the Sun Thumper (X4500).  I've used that same SATA controller
> under OpenSolaris and FreeBSD.  Unfortunately, that controller doesn't
> use multi-lane cables, so when you pack in 3 controllers and 24 disks
> it's a cabling disaster.
>
> 	http://freebsd.monkey.org/freebsd-fs/200808/msg00027.html

I participated in that thread.

	http://freebsd.monkey.org/freebsd-fs/200808/msg00028.html

The questions I asked never got answered.  The most important one being:
have you actually performed a hard failure or forced disk swap with both
the Areca and Marvell controllers?  And how does FreeBSD behave when you
do this?
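For what it's worth, that kind of drill can be scripted before a box
ever goes into production.  The sketch below is hypothetical -- the pool
name "tank" and the device names da2/da3 are placeholders, and each
command is echoed rather than executed so the sequence can be
sanity-checked on a machine without the hardware:

```shell
#!/bin/sh
# Hypothetical ZFS disk-failure drill.  POOL/OLD/NEW are placeholders;
# substitute real values and drop the echo wrapper to actually run it.
POOL=tank
OLD=da2
NEW=da3

run() {
    # Echo each step instead of executing it, so the drill itself can
    # be reviewed and tested without touching real disks.
    echo "$@"
}

run zpool status "$POOL"                 # baseline: is the pool healthy?
run zpool offline "$POOL" "$OLD"         # take the disk out of service
run camcontrol devlist                   # does the controller still see it?
# ... physically pull the disk here; watch the console for driver noise ...
run zpool replace "$POOL" "$OLD" "$NEW"  # resilver onto the replacement
run zpool status "$POOL"                 # confirm the resilver kicked off
```

The interesting part is what happens between the offline and the
replace: whether the controller and driver survive the hot-pull is
exactly what the unanswered question above is asking.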

I've a feeling it works fine on the Areca (since CAM/da(4) are used),
but if the Marvell card uses ata(4) (and I'm guessing it does) I'm
concerned.  Why?
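As an aside, the device name itself tells you which stack a disk
attached under: da(4) devices sit on CAM (Areca's arcmsr(4) driver
attaches its disks there), while ad(4) devices come from the ata(4)
stack.  A throwaway sketch of that mapping -- the device names here are
invented; on a real box you'd feed it the output of
`sysctl -n kern.disks`:

```shell
#!/bin/sh
# Rough classification sketch: given a FreeBSD disk device name, guess
# which driver stack it sits on.  da* = CAM/da(4); ad* = ata(4).
classify() {
    case "$1" in
        da[0-9]*) echo "$1: CAM/da(4)" ;;
        ad[0-9]*) echo "$1: ata(4)" ;;
        *)        echo "$1: unknown" ;;
    esac
}

# Example device names (invented for illustration):
for disk in da0 da1 ad4 ad6; do
    classify "$disk"
done
```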

For the sake of comparison: Promise controllers are considered among
the best-supported under FreeBSD, mainly thanks to Soren having access
to their documentation; yet when I attempted an actual disk upgrade,
the Promise controller did nothing but cause me grief, forcing me to
yank the entire card from my system.

http://wiki.freebsd.org/JeremyChadwick/ZFS_disk_upgrade_gone_bad

Users should read this story and the follow-up.  And in my situation,
the disk wasn't even bad/failed.

What was supposed to be a simple procedure (and it was with Intel AHCI,
as you'll read) turned into a complete nightmare.  Take my story and
apply it to a production datacentre -- but with an 8 or 16-port card and
a shelf of disks.  What're you going to tell your boss when this stuff
fails the way I documented?  "Yeah, so I need US$600 to replace the
card."  "Why?  We don't have that kind of budget.  Is the card bad?
Can we RMA it?"  "No, the card isn't bad."  "Then what is the problem?"
"Well, you see......"

So when I see someone say "Yeah, try the <XXX card>, it works great", my
first response is "Just how well have you actually tested failure or
upgrade scenarios?"  Most haven't, and instead just *assume* that come
fail-time everything will "just work" -- and they find out the horrible
truth when it's already too late.

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |


