Looking for a Text on ZFS

Christian Baer christian.baer at uni-dortmund.de
Mon Feb 4 06:44:46 PST 2008

On Mon, 4 Feb 2008 13:39:52 +0100 (CET) Wojciech Puchar wrote:

> did you ever got your UFS filesystem broken not because your drive failed?

That is not the point here. I have been using FreeBSD since version 3.3,
which was released in 1999. Before that I used Linux. So I can't even look
back on 10 years of FreeBSD yet, and I don't have that many drives to
worry about. So whether one of my file systems ever broke isn't really
representative.

To answer the question: yes, it did happen, and not only once. This was at
the time when I was setting up a new computer with 6.0-RELEASE and a new
S-ATA controller. There was a bug in the driver which the developer
managed to fix after we exchanged a few e-mails. Before the error was fixed,
my machine crashed several times with a kernel panic. There were something
like two dozen crashes in that time. Twice the filesystem could be
salvaged by fsck, but the data on it was pretty messed up. I don't know
how that happened and frankly, I don't care either. The rest of the
time, fsck did get the fs back into normal working order with just the
file that was last being written broken. Since the boot drive wasn't
connected to the new controller and I was using this machine as a
platform to debug the driver, no real damage was caused.

> i don't. UFS it's not FAT, and doesn't break up.

That's ok to believe if you want to. UFS is designed to minimize errors.
There is no guarantee that there will be none.

> you CAN't estimate well how much space you need in longer term.
> in practice partitioning like yours means at least 100% more disk space 
> requirements.

I wouldn't be that pessimistic. True, you can't be sure you allocated
enough space to X, so you leave a safety margin. But the fact that the
HDD doesn't grow limits your space anyway. I am not denying that you might
waste space this way, but it's still nothing I'd lose any sleep over.

> of course - there are often cases today that whole system needs few gigs, 
> but smallest new drive is 80GB - it will work..

I work with lots of drives that are a lot smaller than that. And the
systems still work. :-)

> still - making all in / is much easier and works fine.

Maybe I'm just too conservative for that.
Mind you, I don't break up all drives by default. I have some 500GB drives
that have only one large partition. This partition is for data (which
means everything but system stuff). All I break up into pieces are the
default system areas.

> making all in / and /lessused, where / is at first part on disk, and 
> /lessused on second - make big performance improvements (shorter seeks!).

There are about 10 things I can think of that I'd do before I tried
something like that. I'm a little surprised to see a suggestion like this
coming from you, because you seem to be a great advocate of dynamic
systems. And here you have to decide what is used often and what isn't.
That is an estimate you could also mess up - I'm sure I probably
would. :-) And moving a file from the seldom-used to the often-used area
isn't that trivial either.

I increase performance by mounting /tmp and /usr/obj async, and I mount
filesystems I want to be fast with noatime.
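On FreeBSD those mount options go in /etc/fstab; a minimal sketch (the
device names ad0s1* are assumptions, adjust to your own layout):

```
# /etc/fstab sketch -- device names are assumptions
# Device       Mountpoint  FStype  Options      Dump  Pass#
/dev/ad0s1d    /tmp        ufs     rw,async     2     2
/dev/ad0s1f    /usr/obj    ufs     rw,async     2     2
/dev/ad0s1e    /usr        ufs     rw,noatime   2     2
```

async trades crash safety for speed, which is acceptable for /tmp and
/usr/obj since their contents are disposable; noatime just skips the
access-time writes on reads.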

But ok, no one will judge either of us for working with our systems the
way we please. :-) Anyone with Unix knowledge will find his way around my
boxes and the same should be true for you. The rest are just details. :-)

>> I read about this. However, I didn't find anything conclusive as to how
>> well the drives can still live on their own if they are ever separated.
>> Now I don't think they will be addressed as a RAID0 with all the risks of
>> that. But what happens if one of four drives breaks down? Does it make a
>> difference, if the broken drive is the first one, the last one or a middle
>> one?
> if it's just concat, you will lose lots of data, just like any other 
> filesystem.
> with concat+mirror - you replace single drive that failed and rebuild 
> mirror. that's all.

Which doesn't really address the issue of what happens if a drive that is
part of a big ZFS is removed (because it's broken).
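For the mirror case at least, the replacement procedure itself is simple;
a sketch of the commands involved, where the pool name "tank" and the
device names are assumptions:

```
# Sketch: replacing a failed disk in a ZFS mirror vdev.
# Pool name "tank" and devices ad4/ad6 are assumptions.
zpool status tank            # shows which device is FAULTED/UNAVAIL
zpool replace tank ad4 ad6   # swap failed ad4 for new ad6; resilver starts
zpool status tank            # watch the resilver until the pool is ONLINE
```

For a plain concat (a pool of single-disk vdevs with no redundancy) there
is nothing to rebuild from, which is exactly the question left open above.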

> after reading your answer on 3-rd question i will end the topic, because 
> you understand quota as workaround of problems creating 1000 partitions.
> or simply - looks like you don't understand it at all, because it is not 
> workaround. it's excellent tool.

Maybe you just don't understand my English? :-)

I understand quota very well and also what it can do. It is a very useful
tool, but it is not the holy grail. I actually use both block and file
quotas on some of the systems I have to watch, and I use both hard and
soft limits at that. Quota does eliminate the need to create one partition
for each home directory, even if you think it is not meant for that. And
actually, it is used a lot for just that purpose. ISPs with shared hosting
products usually don't allow direct write access outside the user's ~
anyway, so the quota just stops him from uploading too much. But I know
quota is also very useful in "mixed environments", where several users
have access to a directory or a directory structure and you want to stop
one user from filling everything up with his stuff. So even if quota is a
pretty simple tool (as its only purpose is to limit the resources a user
can use), its field of application is large.
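The hard/soft and block/inode combinations mentioned above are set per
user with FreeBSD's quota tools; a sketch, where the user name and the
filesystem are assumptions:

```
# Sketch -- user "alice" and /home are assumptions.
# Quotas must be enabled first (userquota option in /etc/fstab,
# quota_enable="YES" in /etc/rc.conf), then:
edquota -u alice     # edit soft/hard limits for blocks and inodes
edquota -t           # set the grace period for exceeded soft limits
repquota /home       # report all users' usage against their limits
quota -v alice       # what the user herself sees
```

A soft limit can be exceeded for the grace period; the hard limit is an
absolute stop, for both disk blocks and inodes independently.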

Quota does not address the different needs of certain applications. With
quota you can limit the number of inodes a user may grab, but you cannot
create areas with more inodes and others with less. Quota solves many
problems and is a great tool, no doubt about that, but it doesn't make
your computer faster or you less thirsty, and it doesn't improve your sex
life either - at least that didn't happen here. :-)


More information about the freebsd-questions mailing list