Very low disk performance on 5.x

Poul-Henning Kamp phk at phk.freebsd.dk
Mon May 2 12:20:53 PDT 2005


In message <005401c54f4a$7f271890$b3db87d4 at multiplay.co.uk>, "Steven Hartland" 
writes:


>> As such that is a fair end-user benchmark, but unfortunately it
>> doesn't really tell us anything useful for the purpose of this
>> discussion.
>
>Yes but the end-user performance is really the only thing that matters.
>There are two killer issues here:

No, there are three issues here; you correctly identify the
secondary two, but forget the first:

0. Does the user know enough about what he is doing?

>1. Write performance being nearly 3x that of read performance
>2. Read performance only equalling that of single disk.

If the user expects an out-of-the-box configuration with
default parameters to give him maximal performance, the
answer to issue number zero is:  Obviously not.


>I'm quite willing to test and optimise things but so far no one has
>had any concrete suggestions on what to try.

The first I heard about this was a few hours ago.  (Admittedly,
my email has been in a sucky state for the last week, so that is
probably my own fault.)

>This is just me, though; I think we do need to strive for good
>out-of-the-box performance in these types of scenarios.

We strive for a sensibly balanced system, no matter what use
people put an out-of-the-box configuration to.

>> Testing end-to-end means that we have very little to go on to
>> find out where things went wrong in any one instance.
>
>To eliminate various parts of the subsystems I've just tested:
>dd if=/dev/da0 of=/dev/null bs=64k count=100000
>Read: 220Mb/s

This is a very interesting number to measure: you'll never
see anything else going faster than that.  Presumably
this is -current?
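The same style of comparison can be reproduced against a plain file on any system.  This is a sketch only: the file name, location, and size below are placeholders I have chosen for illustration, not values from the tests above, and reading a freshly written file mostly measures the buffer cache rather than the disk.

```shell
# Hypothetical re-run of the measurement style above, using a regular
# file instead of /dev/da0 so it runs anywhere.
FILE=/tmp/ddtest.bin
dd if=/dev/zero of="$FILE" bs=64k count=256 2>/dev/null   # create a 16 MB test file
dd if="$FILE" of=/dev/null bs=64k 2>/dev/null             # sequential read back
ls -l "$FILE" | awk '{print $5}'                          # prints the size in bytes: 16777216
rm -f "$FILE"
```

For a number comparable to the raw-device figure you would want a file much larger than RAM, so the cache cannot hide the disk.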

>Compared with:
>dd if=/usr/testfile of=/dev/null bs=64k count=100000
>Read: 152Mb/s

On -current and 5.4 you don't have to make partitions if you
intend to use the entire disk (and provided you don't want
to boot from it).  You can simply:

	newfs /dev/da0
	mount /dev/da0 /where_ever

This should have the side effect of aligning your filesystem
correctly to the RAID volume.

>So it looks like the FS is adding quite an overhead, ~70Mb/s (60%),
>although from the linux tests we know the disks are capable
>of at least another 40Mb/s.

Yes, filesystems add overhead.  That's just the way things are.

One thing you could try is to use a larger block/fragment size
on your filesystem.  Try:

	newfs -b 32768 -f 4096 /dev/da0
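To confirm the new sizes took effect, the superblock can be read back with dumpfs.  A sketch (the exact output layout varies between releases, so treat the grep pattern as an assumption):

```
	# Read back the filesystem parameters; expect bsize 32768, fsize 4096
	dumpfs /dev/da0 | grep -E 'bsize|fsize'
```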

>> Did you remember to disable all the debugging in FreeBSD 6-Current ?
>> (see top of src/UPDATING)
>
>Yep all debugging was disabled on my second run on current. 

Just checking: what exactly did you disable?
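For reference, the note at the top of src/UPDATING in this era pointed at roughly the following; this is a sketch from memory, so check your own UPDATING for the authoritative list:

```
	# In the kernel config: build without the debugging options that
	# GENERIC on -current enables by default (exact set varies by date):
	#   options INVARIANTS
	#   options INVARIANT_SUPPORT
	#   options WITNESS
	#   options WITNESS_SKIPSPIN
	#
	# And disable userland malloc debugging:
	#   ln -fs aj /etc/malloc.conf
```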

>N.B. Current had at least one out-of-order lock issue while I was using
>it, but not while the tests were going on.

Yes, current is current :-)

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.


More information about the freebsd-performance mailing list