Writing contiguously to UFS2?

Bruce Evans brde at optusnet.com.au
Wed Sep 26 13:20:05 PDT 2007

On Wed, 26 Sep 2007, Rick C. Petty wrote:

> On Wed, Sep 26, 2007 at 06:37:18PM +1000, Bruce Evans wrote:
>> On Tue, 25 Sep 2007, Rick C. Petty wrote:
>>> On Sat, Sep 22, 2007 at 04:10:19AM +1000, Bruce Evans wrote:
>>>> of disk can be mapped.  I get 180MB in practice, with an inode bitmap
>>>> size of only 3K, so there is not much to be gained by tuning -i but
>>> I disagree.  There is much to be gained by tuning -i: 224.50 MB per CG vs.
>>> 183.77 MB; that's a 22% difference.
>> That's a 22% reduction in seeks, where the cost of seeking every 187 MB
>> is a few ms every second.  Say the disk speed is 61 MB/s and the seek cost
>> is 15 ms.  Then we waste 15 ms every 3 seconds with 183 MB cgs, or 2%.
>> After saving 22%, we waste only 1.8%.
> I'm not sure why this discussion has moved into speed/performance
> comparisons.  I'm saying 22% difference in CG size.

Size is uninteresting except where it affects speed.  "-i large" saves some
disk space but not 22%, and disk space is almost free.  "-b large -f large"
costs disk space.
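The arithmetic quoted above can be sketched as a toy model.  This is only a sketch: it assumes a fixed number of seeks per cylinder group crossed (the per-group seek count is an assumption, not something measured in the thread), and uses the thread's figures of 61 MB/s transfer and 15 ms per seek.

```python
# Toy model of seek overhead when streaming across cylinder groups.
# The seeks_per_cg count is an assumption (hypothetical), not a measurement.
def seek_overhead(cg_mb, disk_mbps=61.0, seek_ms=15.0, seeks_per_cg=1):
    """Fraction of streaming time lost to seeks between cylinder groups."""
    transfer_s = cg_mb / disk_mbps            # time to read one cg's data
    seek_s = seeks_per_cg * seek_ms / 1000.0  # seek cost per cg crossed
    return seek_s / (transfer_s + seek_s)

default_cg = seek_overhead(183.77)  # default cg size from the thread
tuned_cg = seek_overhead(224.50)    # -i-tuned cg size from the thread
print(f"{default_cg:.3%} vs {tuned_cg:.3%}")
```

With one seek per group the overhead comes out well under 1% either way, which if anything strengthens the point that there is little speed left to gain from enlarging the groups.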

>> Since I
>> got to within 1% of the raw disk speed, there is little more to be
>> gained in speed here.  (The OP's problem was not speed.)
> I agree-- why are you discussing speed?  I mean, it's interesting.  But I
> was only discussing CG sizes and suggesting using the inode density option
> to reduce the amount of space "wasted" with filesystem metadata.

The OP's problem was that, due to maxbpg apparently being untuned and/or
not actually working, data was scattered over all cgs and thus over all
disks, when it was expected/wanted to be packed onto a small number of disks.
Packing into a large number of small cg's should give the same effect on
the number of disks used as packing into a small number of large cg's, but
apparently doesn't, due to the untuned maxbpg and/or bugs.
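The effect of maxbpg on packing can be illustrated with a toy calculation.  This is a sketch under stated assumptions: the block counts are made-up, and the real UFS allocator is more complicated than a simple ceiling division.

```python
import math

def cgs_touched(file_blocks, maxbpg):
    """Simplified model: the allocator moves a file to a new cylinder
    group once it has allocated maxbpg blocks for it in the current one."""
    return math.ceil(file_blocks / maxbpg)

# Hypothetical numbers: a 65536-block file (1 GB at 16 KB blocks).
scattered = cgs_touched(65536, 2048)   # small maxbpg: file spread widely
packed = cgs_touched(65536, 65536)     # maxbpg >= file size: one cg
```

In this model a small maxbpg spreads one file across dozens of cgs (and hence, in a concatenated multi-disk setup, across many disks), while a large-enough maxbpg keeps it in one group, which is the packing behavior the OP wanted.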

> I do think the performance differences are interesting, but how much of the
> differences are irrelevant when looking at modern drives with tagged
> queuing, large I/O caches, and reordered block operations?

It depends on how big the seeks are (except that a really modern drive would
be RAM, with infinitely fast seeks :-).  I think cylinder groups of any
useful size are large enough for the seek time between them to be significant.

