question on vinum

secmgr security at jim-liesl.org
Tue Oct 26 22:33:45 PDT 2004


On Tue, 2004-10-26 at 18:27, Mikhail P. wrote:

> I haven't worked with Vinum previously, but hear a lot about it. My question 
> is how to implement the above (unite four drives into a single volume) using 
> Vinum, and what will happen if, let's say, one drive fails in the volume? Am I 
> losing all the data, or can I just unplug the drive and tell vinum to use the 
> remaining drives with the data each drive holds? I'm not looking for a fault 
> tolerance solution.
Since you don't care about fault tolerance, you probably want to do
striping, also known as raid0.

> From my understanding, in the above scenario, Vinum will first fill up the 
> first drive, then the second, etc.
That's called concatenation, which is different from striping.  Striping
balances the load across all the spindles.

> I have read the handbook articles, and I got a general understanding of Vinum.
> I'm particularly interested to know if I will still be able to use the volume 
> in case of a failed drive.
If you want to do that, then you want raid5.  If either a concat or
stripe set loses a drive, the data will need to be restored.

> Some minimal configuration examples would be greatly appreciated!
Read the following.  Really!
http://www.vinumvm.org/vinum/vinum.ps  
http://www.vinumvm.org/cfbsd/vinum.txt
Both of these have examples and will clear up your confusion about
concat vs stripe vs raid5.

Concat is the easiest to add to, stripe has the best performance, and
raid5 trades write speed (plus n+1 drives for n drives' worth of space)
for resilience.  raid10 gets the performance back at the cost of 2*n
drives.
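As a rough worked example (drive size picked purely for illustration):
with four 200 GB drives, a concat or stripe volume gives about 800 GB
but loses everything if any single drive dies, raid5 gives about 600 GB
and survives one drive failure, and raid10 gives about 400 GB and also
survives a drive failure.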

Broken down:
volume  - the top level; what the filesystem talks to.  Mirroring is
          defined at the volume level, as is raid10 (mirrored striped
          plexes; there's an example config below).
plex    - a virtual storage area made up of 1 or more subdisks for
          concat, 2 or more for stripe, or 3 or more for raid5.
subdisk - an area delegated from a bsd partition
drive   - the actual bsd partition (as in /dev/da1s1h)
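To make the hierarchy concrete, here is a rough sketch of a raid10-style
config (a volume mirrored across two striped plexes).  The device names,
the volume name, and the subdisk sizes are made up for illustration;
substitute whatever your disklabels actually provide:

drive d0 device /dev/ad0s1h
drive d1 device /dev/ad1s1h
drive d2 device /dev/ad2s1h
drive d3 device /dev/ad3s1h
volume mirror0
  plex org striped 512k
    sd length 70000m drive d0
    sd length 70000m drive d1
  plex org striped 512k
    sd length 70000m drive d2
    sd length 70000m drive d3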

Generally, the order is as follows:
- fdisk the drives to be used so they each have at least one bsd slice.
- Use disklabel to edit the slice label so you have at least one
  partition of type vinum (one that isn't the "c" partition); a sample
  label is sketched after this list.
- In an editor, create the configuration:
  drives
  volume
    plex
      sd
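Roughly, the vinum partition in the label editor (disklabel -e ad1s1,
trimmed to the relevant lines) might look like this; the sizes and the
"h" partition letter are just examples, the fstype column is what
matters:

  8 partitions:
  #        size   offset    fstype
    c: 78140160        0    unused
    h: 78140160        0     vinum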

When you define the subdisks, don't use the whole drive.  Leave at least
64 blocks unused.
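Put together, a config file for a plain four-drive stripe might look
roughly like this (again, device names, the volume name, the stripe
size, and the subdisk lengths are illustrative only):

drive d0 device /dev/ad0s1h
drive d1 device /dev/ad1s1h
drive d2 device /dev/ad2s1h
drive d3 device /dev/ad3s1h
volume bigvol
  plex org striped 512k
    sd length 38000m drive d0
    sd length 38000m drive d1
    sd length 38000m drive d2
    sd length 38000m drive d3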

Use the file you created as input to vinum:
vinum create -v -f config
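Assuming the create succeeds, you can sanity-check the result with
"vinum list" (or just "vinum l"), which shows the drives, volume,
plexes, and subdisks and their states.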

Or you can cheat and just say, 
"vinum stripe -n volname /dev/ad0s1h /dev/ad1s1h /dev/ad2s1h
/dev/ad3s1h" (should be all on one line)

Raid5 plexes have to be init'ed before they can be used.
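Something along the lines of the following, assuming the volume is
named bigvol so its first (and only) plex is bigvol.p0 (vinum names
plexes volname.p0, volname.p1, and so on):

vinum init bigvol.p0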

newfs -v /dev/vinum/volname
mount /dev/vinum/volname /mnt

Hopefully I haven't made your understanding worse.




