Dell hardware raid 0 (sas5ir) or gmirror?
infofarmer at FreeBSD.org
Tue Jan 16 10:01:18 UTC 2007
On 1/16/07, Stefan Lambrev <stefan.lambrev at sun-fish.com> wrote:
> Andrew Pantyukhin wrote:
> > On 1/15/07, Josef Karthauser <joe at tao.org.uk> wrote:
> >> I'm purchasing a new server, and was wondering what anyone thought about
> >> whether to pay extra for the SAS5IR card so I can RAID0 the two drives,
> >> or whether to just rely on gmirror. My worry about the former is that I
> >> can't seem to find management tools for controlling the hardware
> >> controller. What if one of the drives fails? How would I know?
> > By all means I would go the gmirror way, and I always do
> > even when a hardware raid controller is already present.
> I really do not understand this. :)
> When you say something like this, it would be good to explain why you
> think so.
> I have a few servers with good hw raid controllers and I'm very happy
> with them. I also use gmirror on my desktop PC, but it is not as good
> as hw raid on servers, for sure.
> It is also harder to support (during OS updates, etc.).
> Also, gmirror will put some load on the CPU, while hw raid has its own
> CPU/memory for this.
> LSI has a nice tool to monitor/config RAID arrays that just works under
> fbsd in my case, so I'm happy with it.
> There are a lot of reasons to use hw raids on mission-critical servers...
As a matter of fact Jonathan was also surprised by
my answer. Here's a part of my response to him:
raid3, raid5 and other computation-hungry configurations
are CPU hogs; that's why people prefer hardware
controllers for them. I'm quite sure at some point
FreeBSD will gain the ability to use crypto/XOR
hardware for the benefit of software raid performance,
and maybe then software raid5 will become a popular choice.
As for raid0/raid1 - there's no CPU penalty at all.
gmirror/gstripe in FreeBSD might need further tweaks
and optimizations, but benchmarks show that with 2-4
drives performance almost equals the theoretical maximum.
Reliability of OS-integrated software raid is
expected to be even higher than that of a hardware one,
because there's no extra hardware to fail, and software
bugs might be found in either kind of solution.
What I really like about software raid is its very
high flexibility and manageability. There's no issue
of having the right driver or the right userland tool;
it just works. It's also a snap to set up. And you are
free to experiment with virtual (file-backed, for one)
devices before you implement a solution.
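To illustrate that last point, here is a minimal sketch (file paths and
unit numbers are arbitrary, all commands need root on FreeBSD) of trying
gmirror on file-backed md(4) devices before touching real disks:

```shell
# Load the mirror class if it isn't already in the kernel
kldload geom_mirror

# Create two small backing files and attach them as md(4) devices
truncate -s 64m /tmp/d0.img /tmp/d1.img
mdconfig -a -t vnode -f /tmp/d0.img -u 10    # appears as /dev/md10
mdconfig -a -t vnode -f /tmp/d1.img -u 11    # appears as /dev/md11

# Label a mirror across the two virtual disks and check it
gmirror label -v gm0 /dev/md10 /dev/md11
gmirror status

# Simulate a failure by removing one provider, then reattach it
gmirror remove gm0 md11
gmirror insert gm0 md11                      # triggers a resync

# Tear everything down
gmirror stop gm0
mdconfig -d -u 10 && mdconfig -d -u 11
rm /tmp/d0.img /tmp/d1.img
```

The same exercise with a hardware controller would need spare physical
drives and a reboot into the card's BIOS, which is exactly the
flexibility gap being described.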
As always, there is more than one correct opinion.
I just expressed my own, and I hope my explanation
answers some of your questions.
What I would add to answer some of your claims,
Stefan, is that there's no single correct solution
here. I would argue that money spent on the main
CPU/RAM is a better investment than a hardware raid
0/1 solution; OS buffers are there anyway, so why not
make them larger/faster if you need that. As for CPU
load, I'd argue it's negligible, at least with two
SATA drives (a popular configuration in all markets).
For larger configurations of five drives or more, I
would advise against raid 0/1, and therefore against
software raid. If you do continue to use
gmirror/gstripe, I would expect some tweaks to be
needed, but in general such systems should scale very
well, especially on SMP systems, now that FreeBSD 6.x
has brought MPSAFE file system access to the table.
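On the original question of how you'd know a drive failed: gmirror
reports its state through the standard userland tool, so a periodic
check is enough. A hedged sketch (the mirror name gm0 is assumed):

```shell
# One-shot health check; a failed component shows the mirror as DEGRADED
gmirror status gm0

# Change the read balance algorithm (round-robin, load, split, prefer)
# as one of the tweaks mentioned above
gmirror configure -b load gm0

# Cron-friendly alert: mail root whenever the mirror is not COMPLETE
gmirror status gm0 | grep -q COMPLETE || \
    gmirror status gm0 | mail -s "gm0 degraded" root
```

That one-liner in root's crontab is roughly the software-raid
equivalent of the vendor monitoring tool Stefan mentions.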
All in all, there are reasons to use hw raids and
reasons not to. For my part, I hold our homegrown
(FreeBSD) solutions closer to my heart and choose
them over 3rd-party ones.