terabyte limit problem with vinum/ccd

Shawn Ostapuk flagg at slumber.org
Wed Sep 10 11:40:13 PDT 2003


I've tried freebsd-questions but didn't find much help there, so now I'm
trying here.

I have around 10 IDE drives which add up to over a terabyte. My goal
is to use them all as one big drive by any means necessary (I have a
backup, so redundancy is not needed -- only space in this situation).

I used to use vinum (and still would like to). I hit the terabyte limit
with UFS and was told I would have to upgrade to 5.1 in order to take
advantage of UFS2 and > 1TB filesystems -- so that's what I've done.
However, I still seem to have exactly the same problems. I'm now trying
it on a whole new box and set of drives, with the same results.

It doesn't matter if I use vinum or ccdconfig -- both work fine and
predictably until I make the volume larger than a terabyte, at which
point I get the following on FreeBSD 5.1-RELEASE:

With ccdconfig:

# ccdconfig -cv ccd0 16 none /dev/ad1s1e .. /dev/ad10s1e
ccd0: 10 components (ad1s1e, .., ad10s1e), 2223956864 blocks interleaved at 16 blocks
# newfs /dev/ccd0
newfs: wtfs: 512 bytes at sector 2223956863: Invalid argument

With vinum:

# newfs /dev/vinum/vinum0
/dev/vinum/vinum0: 1085915.5MB (2223954992 sectors) block size 16384, fragment size 2048
using 5910 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
newfs: can't read old UFS1 superblock: read error from block device: Invalid argument
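
One thing I noticed while staring at the numbers (this is just my own
arithmetic, so I may be off base): 2^31 512-byte sectors is exactly 1TB,
and both of the failing sector counts above are just past 2^31. If
anything in the path still carries the block number in a signed 32-bit
value, it wraps negative right at the terabyte boundary, and the offset
computed from it would look bogus to the layer below -- which seems
consistent with the "Invalid argument". A throwaway C program makes the
wrap obvious (the sector number is just the one from the ccd error above):

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	int64_t sector = 2223956863LL;	/* sector newfs fails on above */
	int32_t wrapped;

	/* 2^31 512-byte sectors is exactly 1TB */
	printf("sectors in 1TB: %lld\n", (long long)1 << 31);

	/* squeeze the sector number through a signed 32-bit type */
	wrapped = (int32_t)sector;
	printf("sector %lld as a signed 32-bit value: %d\n",
	    (long long)sector, wrapped);
	printf("byte offset computed from it: %lld\n",
	    (long long)wrapped * 512);

	return (0);
}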

If I remove any single drive from the configuration, making it < 1TB, it
works fine: I can newfs and so on. Add the drive back so it's > 1TB and
newfs breaks again.

It seems to break when it calls wtfs, which just calls bwrite, which in
turn seems to be a libufs front end for pwrite. Unfortunately, at this
point I'm at a loss as to what the values should be and why it's still
failing. (Is newfs wrong? The vinum backend?)
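
A quick way to narrow down whether the problem is newfs itself or the
volume driver underneath might be to skip newfs and libufs entirely and
just pwrite() a single 512-byte sector at the offset newfs chokes on.
Something like the rough sketch below (untested; takes the device and
sector number as arguments, and obviously scribbles on that sector):

#include <sys/types.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	char buf[512];
	off_t offset;
	ssize_t n;
	int fd;

	if (argc != 3)
		errx(1, "usage: %s device sector", argv[0]);

	/* byte offset of the sector newfs trips over */
	offset = (off_t)strtoll(argv[2], NULL, 10) * 512;

	if ((fd = open(argv[1], O_RDWR)) == -1)
		err(1, "open %s", argv[1]);

	memset(buf, 0, sizeof(buf));

	/*
	 * Roughly what newfs's wtfs/bwrite path boils down to:
	 * one write just past the 1TB mark.
	 */
	n = pwrite(fd, buf, sizeof(buf), offset);
	if (n == -1)
		err(1, "pwrite at offset %lld", (long long)offset);

	printf("wrote %d bytes at offset %lld\n", (int)n,
	    (long long)offset);
	close(fd);
	return (0);
}

If that fails with the same "Invalid argument" when pointed at /dev/ccd0
or /dev/vinum/vinum0 and the failing sector, the problem is below newfs;
if it succeeds, then newfs (or libufs) is miscomputing something.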

I've heard vinum has no practical limits and should work fine beyond a
terabyte, and I've spoken directly with people who have newfs'd over a
terabyte using hardware RAID (3ware) -- so I'm just baffled. Am I doing
something wrong, or is this configuration simply not supported yet?

It may be interesting to note that if I do newfs -N for testing, I get no
errors and the output looks great -- mainly because it never calls bwrite.

Any suggestions, confirmation on whether or not this should work, or
alternative ideas? A 12-channel IDE RAID controller is about $700 more
than I've got right now :)

Thanks for any help -- this has been driving me nuts for a while now.

