5.8TB RAID5 SATA Array Questions - UPDATE

Benson Wong tummytech at gmail.com
Fri Apr 22 11:40:04 PDT 2005


Hi Edgar, 

Good to hear you finally got it running. Sounds like you went through
the same challenges I did. I wound up getting FreeBSD 5.4-STABLE
running and it's been stable for weeks. I've put it through quite a
bit of load lately and it seems to be running well.

Comments below: 

> 
> As much loved as BSD is to me... it simply isn't up to the challenge at
> all... it's far too difficult to get into a properly working state... and
> the limitations imposed are just too difficult to overcome easily.

Sounds like you hit the same 2TB limit on both FreeBSD and Linux. What
were the limitations that were too difficult to overcome?

> 
> I ended up using Ubuntu, which not only had all the driver support for
> all the devices and controllers... but also had little to no problem
> getting the system installed properly. It does not, however, like/want
> to boot to the array... so I installed additional drives (Seagate SATA)
> and created a mirror (300GB) for the system to live on and bring up the
> array (/dev/md0) using mdadm. Overall it was easy and nice... there are
> several caveats left to wrestle with.

I wonder why it wouldn't boot off of your large array. It could be
that the array is simply too big for an old PC BIOS to recognize. I
think you could get around this by creating a small partition at the
beginning of the array. I tried this too, but had no luck. My arrays
were over Fibre Channel, though, where the FC card should have taken
care of that.
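
In case it helps, the workaround I attempted looked roughly like this
(device name and sizes are from memory, so treat them as guesses):

    # /dev/sda = the array as the OS sees it
    fdisk /dev/sda
    #  -> create a small (~100MB) primary partition at the very start
    #     of the disk for /boot, low enough for the BIOS to address,
    #     then a second partition covering the rest
    mke2fs -j /dev/sda1    # small ext3 filesystem for /boot
    # finally point the bootloader (grub/lilo) at that partition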

> 
> Currently, although the 3ware controller can create a huge 4TB RAID5
> array, nothing exists that I am aware of that can utilize the entire
> container. Every single OS that exists seems to share the 2TB
> limitation... so while the BIOS can "see" it, everything else will only
> see 2TB... this includes NFS on OSX (don't get me started on the
> horrible implementation mistakes from Apple and their poor NFS
> support... I mean, NFSv4, come on! Why is that hard!!)

It is strange that OSX can't see partitions larger than 2TB over NFS.
I would assume an OSX client talking to an Xserve would be able to see
one, but I haven't tested this, so I can't say for sure.

I'm more curious about the 2TB limit on Linux. I figured Linux, with
its great filesystem support, would be able to handle a partition
larger than 2TB. What were the limitations you ran into?
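
(For what it's worth, my understanding is that the classic 2TB wall
comes from 32-bit sector addressing, e.g. in the MSDOS partition table
and older block interfaces:

    2^32 sectors x 512 bytes/sector = 2,199,023,255,552 bytes ~= 2TB

so anything that passes around 32-bit block numbers tops out there.)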

> So to get past Ubuntu's 2TB problem, I created 2xRAID5 2TB (1.8TB
> reporting) containers on the array... and then, using software RAID,
> created 1xRAID0 from the 2xRAID5 containers... which yields 1xRAID0
> @ 4TB.

How did software RAID0 help you get past the 2TB limitation? Wouldn't
it still appear as one filesystem that is way too big to use?
Something doesn't add up here. Pun not intended. :)
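
If I'm reading you right, the mdadm side of that would be something
like the following (device names are pure guesses for your setup):

    # stripe the two 2TB hardware RAID5 containers into one md device
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    # then build a filesystem on top of the 4TB stripe
    mkfs -t ext3 /dev/md0

...but the resulting /dev/md0 is still a single block device over 2TB,
which is exactly where I'd expect the same limit to bite.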

> 
> Utterly horrible... probably the WORST half-assed installation
> imaginable, in my honest opinion. Here are my desires:
>
I chose to break my 4.4TB system into 4 x 1.1TB arrays. This is very
well supported by FreeBSD. The downside is that I had to modify my
email system configuration and maintenance scripts to work with four
smaller arrays rather than a single large one.
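
In practice that just meant four separate filesystems and mount
points, along these lines (device names and paths are illustrative,
not my exact config):

    # one UFS filesystem per 1.1TB hardware array
    newfs /dev/da0s1e
    newfs /dev/da1s1e
    # ...and one /etc/fstab entry per array:
    # /dev/da0s1e   /mail0   ufs   rw   2   2
    # /dev/da1s1e   /mail1   ufs   rw   2   2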

I purposely avoided software RAID because it makes maintenance of the
array a lot more complex. It usually doesn't take much skill or time
to fix a hardware array, but the learning curve for fixing a software
array is a lot higher. Plus, I don't think software RAID on Linux is
any good, or on FreeBSD for that matter.
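
To give a sense of what I mean: replacing a failed disk in a Linux
software array is a multi-step mdadm dance instead of a hot swap
(device names here are hypothetical):

    cat /proc/mdstat                   # spot the failed member
    mdadm /dev/md0 --fail /dev/sdb1    # mark it failed, if it isn't yet
    mdadm /dev/md0 --remove /dev/sdb1  # pull it out of the array
    # ...physically swap the disk and repartition it to match...
    mdadm /dev/md0 --add /dev/sdb1     # add it back; the rebuild starts
    mdadm --detail /dev/md0            # check resync progress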

> Create 1xRAID5 @ 4TB... install the OS TO the array... boot to the array
> and then share out 4TB via NFS/SMB. Was that too much to ask?? Obviously
> it was.
> 
> So in response, I modified the requirements:
> 
> Create 1xRAID5 at 4TB... install an OS TO a 1xRAID1 at 300GB... BOOT to
> the RAID1... and SHARE out the 4TB.

This is essentially what I did as well. I didn't know about the
limitations when I first started.
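
The sharing side, at least, is painless. On FreeBSD my NFS exports
boil down to a couple of lines like these (paths and network are
hypothetical):

    # /etc/exports -- one line per array, exported to the local net
    /mail0 -network 192.168.1.0 -mask 255.255.255.0
    /mail1 -network 192.168.1.0 -mask 255.255.255.0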

ben

