Looking for a Text on ZFS

Wojciech Puchar wojtek at wojtek.tensor.gdynia.pl
Mon Feb 4 04:41:43 PST 2008


> /usr to spread the load while making worlds and I mount /usr/obj
> asynchronously to increase write speed. With several filesystems I can
> spread the load the way I want it and decide where the data goes. And one
> broken fs doesn't screw up the others in the process.

Did you ever get your UFS filesystem broken for any reason other than a drive failure?

I don't. UFS is not FAT; it doesn't just break up.

>
> I do know the drawbacks of this: Storage is pretty static. Correcting
> wrong estimates about the needed fs-sizes is a big problem. That is why I

You CAN'T estimate well how much space you will need in the longer term.
In practice, partitioning like yours means at least 100% more disk space
is required.

Of course, these days the whole system often needs only a few gigs while
the smallest new drive is 80GB, so it will still work.

Still, putting everything in / is much easier and works fine.

Putting everything in / and /lessused, with / on the first part of the disk
and /lessused on the second, gives a big performance improvement (shorter
seeks!).
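Purely as an illustration (device names like ad0s1a are made up; adjust them
to your own disk and slices), such a two-filesystem layout could look like
this in /etc/fstab:

    # first part of the disk (short seeks) -> everything frequently used
    /dev/ad0s1a   /           ufs   rw   1 1
    # second part of the disk -> rarely used data
    /dev/ad0s1d   /lessused   ufs   rw   2 2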

>> 2) it takes many drives into the pool and you may then add new drives.
>> Same as gconcat+growfs.
>
> I read about this. However, I didn't find anything conclusive as to how
> well the drives can still live on their own if they are ever separated.
> Now I don't think they will be addressed as a RAID0 with all the risks of
> that. But what happens if one of four drives breaks down? Does it make a
> difference if the broken drive is the first one, the last one or a middle
> one?


If it's just concat, you will lose lots of data, just like with any other
filesystem.

With concat+mirror, you replace the single drive that failed and rebuild
the mirror. That's all.
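A rough sketch of that with gmirror(8), assuming a mirror named gm0 and a
replacement disk ad2 (both names are only examples; check the man page
before running anything like this):

    # drop the dead, disconnected component from the mirror's metadata
    gmirror forget gm0
    # add the replacement disk; the rebuild starts automatically
    gmirror insert gm0 /dev/ad2
    # watch synchronization progress
    gmirror status gm0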



After reading your answer to the 3rd question I will end the topic, because
you treat quotas as a workaround for the problem of creating 1000
partitions. Or, simply, it looks like you don't understand them at all,
because quota is not a workaround. It's an excellent tool.
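To illustrate, here is a minimal sketch of per-user quotas on a UFS
filesystem (the /home mount point and the user name are just examples,
and the kernel needs "options QUOTA"; see the Handbook for the full
procedure):

    # /etc/rc.conf
    quota_enable="YES"

    # /etc/fstab entry for the filesystem that should enforce quotas
    /dev/ad0s1e   /home   ufs   rw,userquota   2 2

    # set block/inode limits for one user, then enable quotas
    edquota -u someuser
    quotaon /home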

