Fast SCSI RAID controller
tedm at toybox.placo.com
Sun Feb 4 05:37:20 UTC 2007
----- Original Message -----
From: "Josef Grosch" <jgrosch at juniper.net>
To: "Ted Mittelstaedt" <tedm at toybox.placo.com>
Cc: "Philippe Lang" <philippe.lang at attiksystem.ch>; <questions at freebsd.org>;
<jgrosch at MooseRiver.com>
Sent: Saturday, February 03, 2007 5:18 PM
Subject: Re: Fast SCSI RAID controller
> On Sat, Feb 03, 2007 at 09:19:27AM -0800, Ted Mittelstaedt wrote:
> > he still ought to look at them, cheaper faster disk arrays are nothing
> > to sneeze at.
> > SCSI is only a win these days if you're running the most expensive 10K
> > RPM drives in a mirrored configuration, which is common on database
> > servers. And with RAID-5 in particular, unless you have a minimum of 5
> > drives in your array, you are just going to throw the performance edge
> > of the more expensive SCSI drives into the toilet, so what is the point
> > of buying them?
> > If you're doing RAID-5 for redundancy, there's no argument, SATA is the
> > clear winner on the 3ware or Highpoint cards.
> > Ted
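(An aside on the RAID-5 point above. The usual back-of-the-envelope reasoning is that every small write to RAID-5 costs four disk operations: read old data, read old parity, write new data, write new parity. The drive counts and per-drive IOPS below are my own illustrative numbers, not anything from this thread:)

```python
# Illustrative RAID-5 arithmetic only -- drive counts and IOPS figures
# are hypothetical examples, not measurements.

def raid5_usable(drives, drive_gb):
    """RAID-5 spends one drive's worth of space on parity,
    so usable capacity is (n - 1) drives."""
    assert drives >= 3, "RAID-5 needs at least 3 drives"
    return (drives - 1) * drive_gb

def raid5_write_iops(drives, drive_iops):
    """Each small random write costs ~4 disk ops (read data, read parity,
    write data, write parity), so random-write IOPS ~= n * iops / 4."""
    return drives * drive_iops // 4

# A 3-drive array of fast drives (~280 IOPS each) barely beats one bare disk
# on random writes, which is why small RAID-5 arrays waste expensive drives:
print(raid5_write_iops(3, 280))   # 210
# With 5 drives the parallelism starts to pay for the parity overhead:
print(raid5_write_iops(5, 280))   # 350
print(raid5_usable(5, 146))       # 584 GB usable from five 146 GB drives
```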
> This system is going to be a testbuild server to answer the question,
> "Did this commit break the build?" It needs to be really fast.
Have you tried the Qlogic ISP12160?
I don't believe there are any stable RAID cards supported under FreeBSD that
are much faster than the 3ware/Highpoint cards with 7200 RPM SATA drives.
If it was me building the machine, I would dispense with RAID entirely and
just stripe the disks. You're dealing with transient info, and who gives a
poop if a disk dies; you just replace, restore, and march onward.
I suspect, though, you're dealing with the same problem I ran into a decade
ago when I was admining at the now-defunct Central Point Software: you've
got a house full of developers who all want the quickest things under their
desks so they can build their own trees. For a while they got that, but code
ultimately slowed because too much time was wasted repairing developers'
blown-up personal systems. Finally the department head forced all of
them to give up local storage on their systems and store everything on the
network servers, and they put into effect several build machines that did
nothing but builds all day long.
> I'm looking at
> 15K RPM drives, most likely either RAID 10 or RAID 0. Most of our systems
> that have local disk are RAID 10. We have used SCSI disks because they are
> fast and reliable. For data that we cannot lose we use NetApps, either
> attached via GigE copper or fiber.
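(On the RAID 10 versus RAID 0 choice mentioned above: the tradeoff is capacity and redundancy, not just speed. A quick sketch with hypothetical drive sizes of my own choosing:)

```python
# Illustrative RAID 0 vs RAID 10 comparison -- drive sizes are
# hypothetical examples, not from this thread.

def raid0_usable(drives, drive_gb):
    """RAID 0 stripes across all drives: full capacity, zero redundancy.
    Any single drive failure loses the whole array."""
    return drives * drive_gb

def raid10_usable(drives, drive_gb):
    """RAID 10 mirrors pairs of drives and stripes across the mirrors:
    half the raw capacity, but one failure per mirror pair is survivable."""
    assert drives % 2 == 0 and drives >= 4, "RAID 10 needs an even count, >= 4"
    return (drives // 2) * drive_gb

# Four 146 GB 15K drives:
print(raid0_usable(4, 146))    # 584 GB raw speed, no fault tolerance
print(raid10_usable(4, 146))   # 292 GB, survives a single-drive failure
```

For a throwaway testbuild box, RAID 0 buys the most speed per dollar; RAID 10 only makes sense if rebuilding the machine after a disk failure costs more than the lost capacity.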
Just my opinion, but I think you ought to use NASes for everything and
dispense with local storage entirely. For speed and redundancy, GigE
is the future.
In the corporate arena that I play in these days we have been doing that for
years. Most companies have really crunched down very hard on laptops:
you can't use a laptop at work unless you're a roaming salesperson, and
policies are set up on those so that local storage is replicated to the
servers without user control when they dock to the network. And laptops are
about the only reason you can justify local storage on a computer.
The liabilities today, and federal reporting and document retention laws,
are such that it's a huge problem to allow people to create and save work
on their local machines. Everything is put on the servers, and the machines
are configured so that the users are pretty well locked down and can't
store data anywhere else BUT the servers.