FW: 20TB Storage System

Brooks Davis brooks at one-eyed-alien.net
Tue Sep 2 13:09:07 PDT 2003


[This isn't really a performance issue, so I trimmed it.]

On Tue, Sep 02, 2003 at 12:48:29PM -0700, Max Clark wrote:
> I need to attach 20TB of storage to a network (as low cost as
> possible), and I need to sustain 250Mbit/s (30MByte/s) of IO from
> the storage to the network.
> 
> I have found external Fibre Channel -> ATA-133 RAID enclosures.
> These enclosures house 16 drives, so with 250GB drives that is a
> total of 3.5TB each after a RAID 5 format. These enclosures have an
> advertised sustained IO of 90-100MByte/s each.
> 
> One solution we are thinking about is to use an Intel Xeon server
> with 3x FC HBA controller cards in the server, each attached to a
> separate storage enclosure. In any event we would be required to use
> ccd or vinum to stripe multiple storage enclosures together into one
> logical volume.
> 
> I can partition this system into two separate 10TB storage pools.
> 
> Given the above:
> 1) What would my expected IO be using vinum to stripe the storage enclosures
> detailed above?
> 2) What is the maximum size of a filesystem that I can present to the host
> OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Paul Saab recently demonstrated a 2.7TB ccd, so you shouldn't hit any
major limits there (I'm not sure where the next barrier is, but it
should be a ways off).  I'm not sure about UFS.
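
For reference, striping the enclosures with ccd would look something
like this (device names and the interleave are hypothetical -- I'm
assuming each enclosure shows up as a single da device with an 'e'
partition already labeled on it):

  # stripe the three enclosures with a 64-sector (32KB) interleave,
  # no mirroring flags
  ccdconfig ccd0 64 none /dev/da1s1e /dev/da2s1e /dev/da3s1e

  # write a label on the new device and put a filesystem on it
  disklabel -r -w ccd0 auto
  newfs /dev/ccd0c

vinum would want the equivalent: one drive per enclosure and a single
striped plex across them.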

> 3) Could I put all 20TB on one system, or will I need two to sustain the IO
> required?

In theory you should be able to do 250Mbps on a single system, but I'm
not sure how well you will do in practice.  You'll need to make sure
you have sufficient PCI bus bandwidth.
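
Back of the envelope, using theoretical peak numbers:

  required stream:        250Mbit/s ~= 30MByte/s
  32-bit/33MHz PCI bus:   ~133MByte/s peak, less in practice
  64-bit/66MHz PCI bus:   ~533MByte/s peak

Keep in mind the data crosses PCI twice (DMA in from the FC HBA, DMA
back out to the NIC), so a 30MByte/s stream is more like 60MByte/s of
bus traffic.  That still fits on a single 33MHz bus on paper, but
you'll want the HBAs on a 64-bit segment with the NIC on another bus
if you can manage it.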

> 4) If you were building this system how would you do it? (The
> installed $/GB must be below $5.00.)

If you are willing to accept the management overhead of multiple
volumes, you will have a hard time beating 5U 24-disk boxes with three
8-port 3ware controllers driving arrays of 300GB disks.  That gets you
6TB per box (controller limitations restrict you to 2TB per
controller) for a bit under $15000, or $2.50/GB.  The raw read speed
of the arrays is around 85MBps, so each array alone meets your
throughput requirement.  Since you'd have 12 arrays across 4 machines,
you'd easily meet your bandwidth requirements.  If you can't accept
multiple volumes, you may still be able to use a configuration like
this with either target mode drivers or the disk-over-network GEOM
module that was posted recently.
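
To spell out the arithmetic behind that:

  per box:    24 disks x 300GB = 7.2TB raw, capped at 3 x 2TB = 6TB
  capacity:   4 boxes x 6TB    = 24TB, covering the 20TB requirement
  cost:       4 x ~$15000      = ~$60000, or ~$2.50 per usable GB
  bandwidth:  12 arrays x ~85MByte/s raw, far above the ~30MByte/s
              you need to sustain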

You will need to use 5.x to make this work.

-- Brooks