What are the limits for FFS file systems and assorted questions

Bruce Evans brde at optusnet.com.au
Mon Jan 14 02:38:01 UTC 2013


On Sun, 13 Jan 2013, Eitan Adler wrote:

> Can anyone provide an up to date answer for the following:
>
> If these are all already perfect and correct can you please tell me so?
>
> On 18 December 2012 23:13, Eitan Adler <lists at eitanadler.com> wrote:
>
>> http://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/book.html#ffs-limits
>> Are the bugs listed still bugs?

This is almost useless, since it pre-dates ffs2.

It seems to be derived from something I wrote in a mailing list.  The
"Should Work" column in the table is now implemented, but it is only
for ffs1 and is buggy for a block size of 8K (the limit should be
16TB, not 32TB).  The wording of the descriptions could be improved.

All related known bugs for ffs1, including the ones described there,
were fixed 5-15 years ago.  But recent work on ext2fs turned up a new
one, a very minor one that only recently became reachable and has
already been fixed in Linux-ext2fs: there is a block count (di_nblocks in
ffs[1-2]) that is only 32 bits in ffs1 and in ext2fs (actually it only
has 31 bits in ffs1 and in FreeBSD-ext2fs, since it is signed).  Fs
block numbers in these fs's are also 32 (or 31) bits, but this block
counter doesn't suffice for counting them because it has units of
512-blocks while fs block numbers have larger units.  When this block
counter overflows, the only (?) thing broken is st_nblocks in stat(2).
One way of fixing this is to limit the file size to 1TB - 1.  This
would also simplify describing the limit.  This is only a serious
restriction for sparse files.  With the default block size of 32K,
ffs1 can only handle file systems of size 64TB.  It can only handle 1
non-sparse file of size nearly 64TB, or 63 non-sparse files of size
1TB-1.  It is now barely reasonable to have non-sparse files
of these sizes, but systems with such files probably wouldn't be using
ffs1.  Sparse files are more interesting.  You can fit a large number
of sparse files of size 64TB-1 on a file system of size just a few
GB, and write them all in less than a day or two.  Also, the
potentially-overflowing block counter is for physical blocks, so it
can't overflow for fairly sparse files.  Thus restricting the file
size to 1TB-1 would break some cases unnecessarily.

ffs2 generally gives much larger limits for file system sizes but
halves the limits for file sizes (since block numbers are twice as
large, the block size must be twice as large to fit the same number
of block numbers in an indirect block).

>> http://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/book.html#mount-foreign-fs
>> Is this completely true?  Should it be updated?

This doesn't give much detail, so there is less to go wrong in it.

>> http://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/book.html#alternate-directory-layout
>> Does this still deserve to be listed as a FAQ?

I think it never did, since it is about a technical problem that can't
really be solved outside of the file system, especially with today's
disk sizes allowing tens if not thousands of times as many files as
when it was written in 1998, or thousands if not millions of times as
many files as when ffs was written in ~1983.  With millions of files,
you just can't make much difference with a few changes to the
directory layout.  It was
written by mckusick in 1998, so it is also out-of-date with respect to
the better layout policies that he implemented in ffs in 2001.

BTW, cp(1) still has bogus sorting related to this.  It sorts files
so that non-directory files are copied before directory files, because
it knows too much about ffs's internals and about ffs being the only
file system.  Perhaps this is still good if the file system is ffs,
but I think it is better to preserve any existing order that you get
from the command line or from a directory traversal (use fts and specify
pre- or post-order).  But the sorting function is of low quality and
tends to destroy any existing order:
- it uses qsort(), which gives an unstable sort for items that compare
  equal
- everything except directories vs non-directories compares equal.
The result is that if you have a perfectly sorted list on the command
line, say consisting of all regular files in alphabetical order, then
the order is very unstable.  Except the instability is very stable --
it is usually close to a perfect inversion of the order.  Anyway, this
instability makes it impossible to either preserve existing orders in
file hierarchies or to specify optimal orders on the command line.

Bruce


More information about the freebsd-fs mailing list