Vinum / 5.2 can't mount/growfs/newfs terabyte

Shawn Ostapuk flagg at slumber.org
Wed Jan 21 13:56:20 PST 2004


Well, it's been quite a few months, and FreeBSD 5.2 was released today, so
I quickly went to see if my problem had been fixed yet, especially since
the release notes were quite hopeful:

"The sizes of some members of the statfs structure have changed from 32
bits to 64 bits in order to better support multi-terabyte filesystems."

I was hoping that was the fix... but it wasn't.

And now my problem... I use vinum and a number of IDE drives on two
boxes, a primary server and a backup. Around 5 or 6 months ago my storage
needs grew to a terabyte or more, so I did the usual: added a new drive
to the vinum configuration and ran growfs (which has saved me more time
than you can imagine, thanks FreeBSD). No luck. newfs? No luck. Mounting
the old volume without modifying it? No luck. It just breaks once the
volume is larger than 1 TB. I've tried this on two boxes with various
drive configurations; I can rearrange them all I want and they all work
fine, but the second I make the volume larger than 1 terabyte, everything
fails on it.
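
For concreteness, the change I make each time is roughly the following.
The drive name, subdisk name and device below are placeholders (not my
real config file), and the exact sd keywords are from memory, so check
vinum(8) before copying them:

    # newdrive.conf -- hypothetical names/device, just to show the shape
    drive vinumdrive10 device /dev/ad12s1e
    sd name vinum0.p0.s10 drive vinumdrive10 plex vinum0.p0

and then:

    vinum create newdrive.conf
    growfs /dev/vinum/pr0n

That procedure has worked every time the volume stays under 1 TB.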

bash-2.05b# vinum list
9 drives:
D vinumdrive1           State: up       /dev/ad1s1e     A: 0/156327 MB (0%)
D vinumdrive2           State: up       /dev/ad2s1e     A: 0/76316 MB (0%)
D vinumdrive3           State: up       /dev/ad4s1e     A: 0/117239 MB (0%)
D vinumdrive4           State: up       /dev/ad5s1e     A: 0/114470 MB (0%)
D vinumdrive5           State: up       /dev/ad6s1e     A: 0/76292 MB (0%)
D vinumdrive6           State: up       /dev/ad7s1e     A: 0/76292 MB (0%)
D vinumdrive7           State: up       /dev/ad8s1e     A: 0/156327 MB (0%)
D vinumdrive8           State: up       /dev/ad9s1e     A: 0/78159 MB (0%)
D vinumdrive9           State: up       /dev/ad11s1e    A: 0/286102 MB (0%)
 
1 volumes:
V pr0n                  State: up       Plexes:       1 Size: 1110 GB
   
1 plexes:
P vinum0.p0           C State: up       Subdisks:     9 Size: 1110 GB
      
9 subdisks:
S vinum0.p0.s1          State: up       D: vinumdrive1  Size: 152 GB
S vinum0.p0.s2          State: up       D: vinumdrive2  Size: 74 GB
S vinum0.p0.s3          State: up       D: vinumdrive3  Size: 114 GB
S vinum0.p0.s4          State: up       D: vinumdrive4  Size: 111 GB
S vinum0.p0.s5          State: up       D: vinumdrive5  Size: 74 GB
S vinum0.p0.s6          State: up       D: vinumdrive6  Size: 74 GB
S vinum0.p0.s7          State: up       D: vinumdrive7  Size: 152 GB
S vinum0.p0.s8          State: up       D: vinumdrive8  Size: 76 GB
S vinum0.p0.s9          State: up       D: vinumdrive9  Size: 279 GB

(No, it's not really porn; just a stupid name I made up too long ago to
change =) All the drives work. In fact, I've run two separate vinum
configurations at the same time, using all the drives, because of this
terabyte limit :(

Now, with the above setup, vinum thinks everything is fine, and it looks
fine as far as I can tell. I've also heard of vinum being used in
configurations larger than 1 TB, which makes me think vinum itself is not
the problem. But the second I try to do anything...

bash-2.05b# mount /dev/vinum/pr0n
mount: /dev/vinum/pr0n: unknown special file or file system
----
bash-2.05b# growfs  /dev/vinum/pr0n
growfs: rdfs: read error: 128: Invalid argument
----
bash-2.05b# newfs /dev/vinum/pr0n
/dev/vinum/pr0n: 1137530.4MB (2329662200 sectors) block size 16384, fragment size 2048
	using 6191 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
newfs: can't read old UFS1 superblock: read error from block device: Invalid argument
----

No, it is NOT using UFS1, it is UFS2; in fact, the code path newfs hits
there is only reached when dealing with UFS2 partitions. The second I
remove enough drives to bring the volume back under the 1 TB barrier, I
can mount/newfs it again with no problems. Again: in any order, with any
drives, on two different machines with a whole new set of drives.
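
For what it's worth, the cutoff lines up exactly with a 32-bit sector
count: with 512-byte sectors, 2^31 sectors is exactly 1 TiB. I don't know
where the truncation would actually happen (this is pure guesswork on my
part), but here is a minimal standalone C sketch, not FreeBSD code, of
what the sector count of this volume looks like if something along the
path still stores it in a signed 32-bit field:

    /* Hypothetical illustration only -- not FreeBSD source. */
    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        int64_t sectors = 2329662200LL;        /* sector count newfs printed above */
        int32_t truncated = (int32_t)sectors;  /* what a 32-bit field would keep */

        printf("1 TiB boundary: 2^31 * 512 = %jd bytes\n",
            (intmax_t)(INT64_C(1) << 31) * 512);
        printf("volume size, 64-bit sectors: %jd\n", (intmax_t)sectors);
        printf("same value in an int32_t:    %d\n", truncated); /* wraps negative */
        return (0);
    }

A negative block number or offset getting passed down would plausibly be
rejected as EINVAL, which at least matches the "Invalid argument" errors
above, but again, that's speculation on my part.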

So, thoughts, anyone? Is no one else in the world using vinum to
concatenate IDE drives past 1 TB? I've heard of mount handling UFS2
partitions on hardware RAIDs larger than 1 TB with no problem...

It just seems to be some sort of problem with the two together. I'll be
eternally grateful to anyone who can help me past this barrier; it is
getting more and more difficult to work around it with separate
partitions, and I see this kind of setup becoming more and more common
now that 300 GB drives are $299 retail...

I should also note that I've had roughly a dozen responses, from my
recent post to -questions and again from my post to -fs/-questions about
6 months back, from people who have the same problem and want to know if
I've found a way around it. Hopefully someone knows what's up :)

Thanks,
Shawn.

