Cached file read performance

Mark Kirkwood markir at paradise.net.nz
Thu Dec 21 17:50:18 PST 2006


I recently did some testing on the performance of cached reads using two 
(almost identical) systems, one running FreeBSD 6.2-PRERELEASE and the 
other running Gentoo Linux - the latter acting as a control. I initially 
started a thread of the same name on -stable, but it was suggested I 
submit a mail here instead.

My reason for wanting to examine this is that I develop database 
software (postgres internals related), and cached read performance is 
pretty important - we typically try hard to encourage cached access 
whenever possible.

Anyway, on to the results: I used the attached program to read a cached 
781MB file sequentially and randomly with a specified block size (see 
below). The conclusion I came to was that our (i.e. FreeBSD) cached read 
performance (particularly for smaller block sizes) could perhaps be 
improved. I'm happy to help in any way - the machine I've got running 
STABLE can be upgraded to CURRENT in order to try out patches (or indeed 
to see whether CURRENT is already faster!).
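The attachment itself is not reproduced here. Purely as an illustration 
(this is a minimal Python sketch with the same shape as the test 
described above - whole-file, block-aligned reads at a given block size, 
random or sequential, timed and reported as bytes/s - and not the 
original readtest program):

```python
import os
import random
import sys
import time


def run_reads(fd, filesize, bsz, sequential):
    """Read the whole file once in bsz-byte blocks.

    sequential=True walks the file front to back; otherwise each
    read targets a random block-aligned offset.  Returns the total
    number of bytes actually read.
    """
    nblocks = filesize // bsz
    total = 0
    for i in range(nblocks):
        off = (i if sequential else random.randrange(nblocks)) * bsz
        total += len(os.pread(fd, bsz, off))
    return total


def main():
    # usage: readtest.py <file> <blocksize> <0=random | 1=sequential>
    path, bsz, seq = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
    fd = os.open(path, os.O_RDONLY)
    filesize = os.fstat(fd).st_size
    start = time.monotonic()
    total = run_reads(fd, filesize, bsz, seq != 0)
    elapsed = time.monotonic() - start
    kind = "sequential" if seq else "random"
    print("%s reads: %d of: %d bytes elapsed: %.4fs io rate: %d bytes/s"
          % (kind, filesize // bsz, bsz, elapsed, total / elapsed))
    os.close(fd)


if __name__ == "__main__":
    main()
```

Invoked the same way as the results below, e.g. 
`python readtest.py /data0/dump/file 8192 0` for the random 8K case.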

Best wishes

Mark


----------------------results-etc---------------------------------
Machines
========

FreeBSD (6.2-PRERELEASE #7: Mon Nov 27 19:32:33 NZDT 2006):
- Supermicro P3TDER
- 2xSL5QL 1.26 GHz PIII
- 2xKingston PC133 RCC Registered 1GB DIMMS
- 3Ware 7506 4x Maxtor Plus 9 ATA-133 7200 80G
- Kernel GENERIC + SMP
- /etc/malloc.conf -> >aj
- ufs2 32k blocksize, 4K fragments
- RAID0 256K stripe using twe driver

Gentoo (2.6.18-gentoo-r3):
- Supermicro P3TDER
- 2xSL5QL 1.26 GHz PIII
- 2xKingston PC133 RCC Registered 1GB DIMMS
- Promise TX4000 4x Maxtor Plus 8 ATA-133 7200 40G
- default make CFLAGS (-O2 -march=i686)
- xfs stripe width 2
- RAID0 256K stripe using md driver (software RAID)

Since the tests exercised cached I/O, the differences in RAID controller 
and in the disks themselves were judged insignificant (indeed, booting 
the FreeBSD box from the Gentoo live CD and rerunning the tests there 
confirmed this).

Results
=======

FreeBSD:
--------

$ ./readtest /data0/dump/file 8192 0
random reads: 100000 of: 8192 bytes elapsed: 4.4477s io rate: 184186327 bytes/s
$ ./readtest /data0/dump/file 8192 1
sequential reads: 100000 of: 8192 bytes elapsed: 1.9797s io rate: 413804878 bytes/s

$ ./readtest /data0/dump/file 32768 0
random reads: 25000 of: 32768 bytes elapsed: 2.0076s io rate: 408040469 bytes/s
$ ./readtest /data0/dump/file 32768 1
sequential reads: 25000 of: 32768 bytes elapsed: 1.7068s io rate: 479965034 bytes/s

$ ./readtest /data0/dump/file 65536 0
random reads: 12500 of: 65536 bytes elapsed: 1.7856s io rate: 458778279 bytes/s
$ ./readtest /data0/dump/file 65536 1
sequential reads: 12500 of: 65536 bytes elapsed: 1.6611s io rate: 493158866 bytes/s


Gentoo:
-------

$ ./readtest /data0/dump/file 8192 0
random reads: 100000 of: 8192 bytes elapsed: 1.2698s io rate: 645155193 bytes/s
$ ./readtest /data0/dump/file 8192 1
sequential reads: 100000 of: 8192 bytes elapsed: 1.1329s io rate: 723129371 bytes/s

$ ./readtest /data0/dump/file 32768 0
random reads: 25000 of: 32768 bytes elapsed: 1.1583s io rate: 707244595 bytes/s
$ ./readtest /data0/dump/file 32768 1
sequential reads: 25000 of: 32768 bytes elapsed: 1.1178s io rate: 732838631 bytes/s

$ ./readtest /data0/dump/file 65536 0
random reads: 12500 of: 65536 bytes elapsed: 1.1478s io rate: 713742417 bytes/s
$ ./readtest /data0/dump/file 65536 1
sequential reads: 12500 of: 65536 bytes elapsed: 1.1012s io rate: 743921133 bytes/s



