Writing contiguously to UFS2?
brde at optusnet.com.au
Wed Sep 26 01:37:22 PDT 2007
On Tue, 25 Sep 2007, Rick C. Petty wrote:
> On Sat, Sep 22, 2007 at 04:10:19AM +1000, Bruce Evans wrote:
>> of disk can be mapped. I get 180MB in practice, with an inode bitmap
>> size of only 3K, so there is not much to be gained by tuning -i but
> I disagree. There is much to be gained by tuning -i: 224.50 MB per CG vs.
> 183.77 MB ... that's a 22% difference.
That's a 22% reduction in seeks, where the cost of seeking every 187 MB
is a few ms every few seconds. Say the disk speed is 61 MB/s and the
seek cost is 15 ms. Then we waste 15 ms every 3 seconds with 183 MB
cg's, or 2%. After saving 22%, we waste only 1.8%.
These estimates are consistent with numbers I gave in previous mail.
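The overhead arithmetic above can be sketched in a few lines of Python. This is only a rough model: the cg sizes and 15 ms seek cost are the figures quoted above, and since the 2% figure implies more than one seek per cg boundary, the number of seeks per boundary is left as a parameter rather than assumed:

```python
def seek_overhead(cg_mb, disk_mb_s, seek_ms, seeks_per_cg=1):
    """Fraction of streaming time lost to seeks at cylinder-group boundaries."""
    transfer_ms = cg_mb / disk_mb_s * 1000.0  # time to stream one cg of data
    return seeks_per_cg * seek_ms / transfer_ms

# One 15 ms seek per cg at 61 MB/s, before and after tuning -i:
small = seek_overhead(183.77, 61, 15)   # default cg size
big = seek_overhead(224.50, 61, 15)     # tuned cg size
print(f"{small:.2%} vs {big:.2%}")
```

Whatever the seek count per boundary, the ratio of the two overheads is just the ratio of the cg sizes, which is why a 22% larger cg only shaves the wastage by the same factor.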
With the broken default of -e 2048 for 16K-blocks for ffs1, there was
an unnecessary seek or two after every 32 MB. The disk speed was
52 MB/s (disk manufacturer's MB = 10^6 B). -e 2048 gave 50 MB/s and
-e 8192 gave 51.5 MB/s. (52 MB/s was measured on the raw disk using
dd. The raw disk actually tends to be slower than the file system
because it doesn't stream.) Seeking after every 32 MB (real MB) gives
a seek every 645 ms, so if 2 seeks take 15 ms each the wastage was 4.7%, so
it was not surprising to get a speedup of 3% using -e 8192. Since I
got to within 1% of the raw disk speed, there is little more to be
gained in speed here. (The OP's problem was not speed.) (All this
is for the benchmark "dd if=/dev/zero of=zz bs=1m count=N" where
N = 200 or 1000.)
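The 645 ms and 4.7% figures follow from the unit convention noted above (extents measured in real MB = 2^20 bytes, disk speed in the manufacturer's MB = 10^6 bytes). A minimal check, using only the numbers quoted in this post:

```python
REAL_MB = 2**20      # "real MB" (2^20 bytes)
VENDOR_MB = 10**6    # disk manufacturer's MB (10^6 bytes)

# Time to stream one 32 MB extent at 52 MB/s:
stream_ms = 32 * REAL_MB / (52 * VENDOR_MB) * 1000.0

# Two 15 ms seeks at the end of each extent:
waste = 2 * 15 / stream_ms

print(f"seek every {stream_ms:.0f} ms, wastage {waste:.1%}")
```

Note the mixed units matter: using 10^6-byte MB for both would give 615 ms rather than 645 ms.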
>> more to be gained by tuning -b and -f (several doublings are reasonable).
> I completely agree with this. It's unfortunate that newfs doesn't scale
> the defaults here based on the device size. Before someone dives in and
> commits any adjustments, I hope they do sufficient testing and post their
> results on this mailing list.
Testing shows that only one doubling of -b and -f is reasonable for
/usr/src but it makes little difference, so nothing should be changed.
I'm still trying to make halving -b and -f back to 512/512 work right,
so that it has the same disk speed as any/any, using contiguous layout
and clustering so that physical disk i/o sizes are independent of the
fs block sizes unless small i/o sizes are sufficient. Clustering
already almost does this for data blocks provided the allocator manages
to do a contiguous layout. Clustering already wastes a lot of CPU doing
this by brute force, but CPU is relatively free.