geom - help ...

John Nielsen lists at jnielsen.net
Thu Sep 21 11:52:50 PDT 2006


On Thursday 21 September 2006 01:37, Matthew Seaman wrote:
> Marc G. Fournier wrote:
> > So, again, if I'm reading through things correctly, I'll have to do
> > something like:
> >
> > gstripe st1 da1 da2
> > gstripe st2 da3 da4
> > gmirror drive st1 st2
> > newfs drive
>
> That's the wrong way round, I think.  If you lose a drive, then you've
> lost the whole of one of your stripes and have no resilience.  Shouldn't
> you rather stripe the mirrors:
>
>    gmirror gm0 da1 da2
>    gmirror gm1 da3 da4
>    gstripe gs0 gm0 gm1
>    newfs gs0
>
> This way if you lose a drive then only one of your gmirrors loses
> resilience and the other half of your disk space is unaffected.

I would recommend the 1+0 approach as well. In addition to increasing your 
odds of surviving a multi-disk failure, it makes replacing a failed component 
easier and faster--you only need to rebuild the component mirror (which involves 
one command and duplication of half of the total volume) instead of 
recreating a component stripe and then rebuilding the whole mirror (which 
involves at least two commands and duplication of the entire volume).
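To make the difference concrete, here is a sketch of both recovery paths, using the device names from the examples above and assuming the failed disk has been swapped for a new da2:

```shell
# Sketch only: device names (gm0, st1, da1, da2, drive) follow the
# examples above; assume the dead disk has been replaced by a new da2.

# 1+0 (stripe of mirrors): one command pair, and only the affected
# mirror resynchronizes -- half of the total volume is copied.
gmirror forget gm0          # drop the dead component from gm0's metadata
gmirror insert gm0 da2      # attach the replacement; only gm0 resyncs

# 0+1 (mirror of stripes): the broken stripe must be recreated first,
# and then the entire volume is copied to resynchronize the mirror.
gmirror forget drive
gstripe label st1 da1 da2   # rebuild the broken stripe with the new disk
gmirror insert drive stripe/st1
```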

Regarding the spare, I think you're right that there isn't (yet) a way to 
configure a system-wide hot spare, but it would not be hard to write a 
monitoring script that gives you essentially the same thing. Assuming the 1+0 
approach: every N seconds, check the health of both mirrors (using "gmirror 
status" or similar). If volume V is degraded, do a "gmirror forget V; gmirror 
insert V sparedev", e-mail the administrator, and mark the spare as 
unavailable. After the failed drive is replaced, the script (or better, a 
knob that the script knows how to check) should be updated with the 
device name of the new spare.
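A minimal sketch of such a watcher in sh (the spare device, mail recipient, and polling interval are assumptions; the parsing matches the usual "gmirror status" column layout):

```shell
#!/bin/sh
# Hot-spare watcher sketch.  SPARE, ADMIN, and INTERVAL are assumptions;
# adjust them for the real system.
SPARE="da5"
ADMIN="root"
INTERVAL=60

# Print the name of the first DEGRADED volume in the "gmirror status"
# output passed as $1; exit non-zero if every volume is healthy.
degraded_volume() {
    printf '%s\n' "$1" | awk '
        $2 == "DEGRADED" { sub("^mirror/", "", $1); print $1; found = 1; exit }
        END { exit !found }'
}

# Poll until a mirror degrades, then attach the spare and tell the admin.
watch_spare() {
    while :; do
        vol=$(degraded_volume "$(gmirror status)") || {
            sleep "$INTERVAL"; continue
        }
        gmirror forget "$vol"            # drop the dead component
        gmirror insert "$vol" "$SPARE"   # resync onto the spare
        echo "spare $SPARE inserted into $vol" |
            mail -s "gmirror: $vol degraded" "$ADMIN"
        return 0    # spare consumed; an operator must supply a new one
    done
}
```

The loop exits once the spare is used, which is the "mark the spare as unavailable" step; restarting the watcher with a new SPARE is the manual knob mentioned above.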

For a 50% chance of having zero time-to-recovery (at the cost of more 
expensive writes), you could also add the spare as a third member to one of 
the mirror sets. If a member of that set fails, you still have a redundant 
mirror. If a member of the other set fails, you just do a "gmirror remove" to 
free the spare from the 3-way mirror and then add it to the failed set.
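As a sketch of that sequence (da5 standing in for the spare, gm0/gm1 for the two mirror sets from the example above):

```shell
# da5 is the assumed spare; gm0 and gm1 are the mirrors from above.
# Start with the spare as a third member of gm0:
gmirror insert gm0 da5

# If a member of gm0 fails: nothing to do -- gm0 is still redundant.

# If a member of gm1 fails instead, move the spare across:
gmirror remove gm0 da5      # detach the healthy third copy from gm0
gmirror forget gm1          # drop gm1's dead component
gmirror insert gm1 da5      # resync the spare into the degraded set
```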

From my own experience, I've been very happy with both gmirror and gstripe, 
and in fact I just finished setting up a rather unorthodox volume on my 
desktop at work. I have three drives (two of which were scavenged from other 
machines): one 60GB and two 40GB. I wanted fault tolerance for both / 
and /usr, I wanted /usr to be as big as possible, and I wanted reasonable 
performance. I ruled out graid3 and gvinum raid5: I wanted to be able to 
boot easily from /, and performance would have been poor since the 40GB 
drives share a controller. I made / a mirror of two 10GB partitions on the 
40GB drives, 
made a stripe out of the remaining 30GB from the 40GB drives, and added the 
stripe into a mirror set with the 60GB drive. It's working quite nicely so 
far.
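For the curious, the layout could be labelled roughly like this (the disk and partition names -- ad0 for the 60GB drive, ad1/ad2 for the 40GB drives, and the "a"/"d" partitions -- are assumptions, not my actual device names):

```shell
# Assumed devices: ad0 = 60GB drive, ad1/ad2 = 40GB drives.

# / : mirror of two 10GB partitions on the 40GB drives
gmirror label -v root ad1s1a ad2s1a

# stripe the remaining 30GB of each 40GB drive into a 60GB volume
gstripe label -v st0 ad1s1d ad2s1d

# /usr : mirror that stripe against a 60GB partition on the big drive
gmirror label -v usr stripe/st0 ad0s1d
```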

JN


More information about the freebsd-questions mailing list