newfs and mount vs. half-baked disks

Bruce Evans bde at zeta.org.au
Sun Nov 9 23:02:16 PST 2003


On Sun, 9 Nov 2003, Kirk McKusick wrote:

> > From: Bruce Evans <bde at zeta.org.au>

> > The block count is in units of sector size, so disks much larger than
> > 2TB can be supported by disklabel using (fake if necessary) sector sizes
> > larger than 512.  File systems need to use similarly large block (fragment
                                                                      ^^^^^^^^
> > for ffs) sizes, and some patches are needed for reading superblocks if
    ^^^^^^^
> > the sector size is larger than 8K.  Since ffs uses a block size of 16K
> > by default, a sector size of 16K is not unreasonable, and this is sufficient
> > for disks smaller than 64TB.
>
> Actually, FFS requires its fragment size be no smaller than the sector size
> (since it presumes that it cannot do read/write in smaller than sector
> sizes). So, on a 16K filesystem, you get 2K fragments. So your hack only
> gets you to 8TB which is not going to last long at current disk growth
> rates.

This point was noted in the underlined phrase.  The block size for ffs is
actually the fragment size in this context.  So fragments would be as
large as necessary (16K if that is the sector size), and the block size
(the one given by newfs's -b parameter) would be larger.  A fragment size
of 16K may even be the right size for very large disks.  My benchmarks
say that a 16K/8K block/fragment size is not much slower than 16K/2K on
a 60GB disk, but 16K/16K and 32K with any fragment size are significantly
slower.
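
To put numbers on this, here is a small sketch (assuming the disklabel
sector count is a 32-bit quantity, which is where the 2TB limit for
512-byte sectors comes from) that prints the capacity ceiling for the
sector/fragment sizes discussed above:

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch only: the largest addressable device is 2^32 sectors times the
 * (possibly fake) sector size, and since ffs cannot use a fragment
 * smaller than a sector, the sector size is also the minimum fragment
 * size.
 */
int
main(void)
{
        static const unsigned sizes[] = { 512, 2048, 16384 };
        unsigned i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
                uint64_t limit = (uint64_t)sizes[i] << 32;

                printf("%5u-byte sectors/fragments -> %2juTB limit\n",
                    sizes[i], (uintmax_t)(limit >> 40));
        }
        return (0);
}

This prints 2TB, 8TB and 64TB for 512, 2K and 16K respectively.  With a
fake sector size of 16K, the fragment size (newfs's -f parameter) would
also be 16K and the block size (-b) would be some larger multiple of it.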

Bruce


More information about the freebsd-arch mailing list