Cached file read performance with 6.2-PRERELEASE

Antony Mawer fbsd-stable at mawer.org
Tue Dec 19 19:27:19 PST 2006


On 20/12/2006 12:05 PM, Mark Kirkwood wrote:
> In the process of investigating performance in another area I happened 
> to be measuring sequential cached reads (in a fairly basic manner):
> 
> $ dd if=/dev/zero of=/tmp/file bs=8k count=100000      # create file
> 819200000 bytes transferred in 4.849394 secs (168928321 bytes/sec)
> 
> $ dd of=/dev/null if=/tmp/file bs=8k                   # read it
> 819200000 bytes transferred in 2.177922 secs (376138354 bytes/sec)
> $ dd of=/dev/null if=/tmp/file bs=8k                   # read again
> 819200000 bytes transferred in 2.178407 secs (376054620 bytes/sec)
> $ dd of=/dev/null if=/tmp/file bs=32k                  # read it
> 819200000 bytes transferred in 1.801944 secs (454620117 bytes/sec)
> 
> I ran vmstat to check there really was no read access to the filesystem.
> 
> Now I had no idea whether this was the sort of performance to be 
> expected or not, so checked on an *identical* cpu, memory, mobo machine 
> running Gentoo - this gets 620MB/s for 8k blocks and 700MB/s for 32k ones.
> 
> ...
> 
> The system is 2x1.26Ghz PIII, Supermicro P3TDER Serverworks HE-SL (dual 
> channel), 2x1G PC133 ECC DIMMS

What do the memory-related stats from "top" show you? Did you have any 
other memory-intensive applications running at the time? A random 
example from one of my systems (1GB RAM):

Mem: 478M Active, 317M Inact, 150M Wired, 36M Cache, 111M Buf, 16M Free

Glancing at the 'top' man page, "150M Wired" seems to be the data file 
cache, while "111M Buf" is block-level caching... and "36M Cache" is 
related to the VM. See the end of this email for the same figures after 
some testing - the "Wired" figure goes up while "Cache" disappears.

That should give you an idea as to how much RAM is being used for the 
buffer/block IO cache ("111M Buf" in the above example, as I understand 
it), and the VM disk cache ("36M Cache" in the above example).
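If you want to pull those figures out programmatically rather than 
eyeballing top, the "Mem:" line splits into value/label pairs. A quick 
sketch using the sample line above (the awk one-liner is mine, not 
anything standard):

```shell
# Parse the "Mem:" line from FreeBSD's top(1).  The sample below is the
# line quoted earlier in this email; for live data, substitute
# something like `top -b | grep '^Mem'`.
MEM_LINE='Mem: 478M Active, 317M Inact, 150M Wired, 36M Cache, 111M Buf, 16M Free'
echo "$MEM_LINE" | awk -F'[ ,]+' '{
    for (i = 2; i < NF; i += 2)      # fields arrive as value/label pairs
        v[$(i+1)] = $i
    printf "Buf=%s Cache=%s Wired=%s\n", v["Buf"], v["Cache"], v["Wired"]
}'
# prints: Buf=111M Cache=36M Wired=150M
```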

You might also want to look at:

     sysctl vfs.

and see whether there is anything there that may affect it -- for 
instance, whether there is a maximum size for files that will be 
cached...? Someone with more VFS/etc knowledge than I have may be able 
to advise you better there...
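As a first pass, filtering the vfs tree for anything buffer- or 
vnode-related seems reasonable. The tunable names below (vfs.bufspace, 
vfs.maxbufspace, vfs.numvnodes) are ones I believe exist on 6.x -- 
check with "sysctl -d" on your system -- and the values in the sample 
are invented for illustration; on a live box you would pipe 
"sysctl vfs." straight into the same grep:

```shell
# Illustrative sysctl output (names believed correct for 6.x, values
# invented); the grep is what you would apply to real "sysctl vfs."
# output to spot cache-related tunables.
sample='vfs.bufspace: 109838336
vfs.maxbufspace: 116424704
vfs.numvnodes: 25431
vfs.hirunningspace: 1048576'
printf '%s\n' "$sample" | grep -i -e buf -e vnode
```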

It might be worthwhile trying a series of different file sizes to 
determine whether there is a point where the caching performance 
drops... I just did a few quick tests on a relatively old machine 
(2x P3-933MHz, 1GB RAM)... in this case, /tmp is on a 3ware SATA RAID 
controller (8xxx?) running RAID1 on two 160GB SATA disks...
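Something like the loop below would sweep a range of sizes in one go 
(sweep_cached_reads is just my name for the helper, and the second dd 
of each pair is the cached read); the runs that follow do the same 
thing by hand:

```shell
# Write a file of N 8k blocks, warm the cache, then time a cached
# re-read.  Adjust the counts toward your RAM size to find the cliff;
# count=100000 (800MB) is the size that blew the cache in my tests.
sweep_cached_reads() {
    for count in "$@"; do
        dd if=/dev/zero of=/tmp/sweep.dat bs=8k count="$count" 2>/dev/null
        dd if=/tmp/sweep.dat of=/dev/null bs=8k 2>/dev/null     # warm the cache
        echo "== ${count} x 8k blocks =="
        dd if=/tmp/sweep.dat of=/dev/null bs=8k 2>&1 | tail -1  # cached read
    done
    rm -f /tmp/sweep.dat
}

sweep_cached_reads 1000 10000 25000      # 8MB, 80MB, 200MB
```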

First, with an 8MB file:

$ dd if=/dev/zero of=/tmp/zero bs=8k count=1000
1000+0 records in
1000+0 records out
8192000 bytes transferred in 0.238275 secs (34380470 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
1000+0 records in
1000+0 records out
8192000 bytes transferred in 0.022824 secs (358919664 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
1000+0 records in
1000+0 records out
8192000 bytes transferred in 0.022845 secs (358590033 bytes/sec)

Next, with an 80MB file:

$ dd if=/dev/zero of=/tmp/zero bs=8k count=10000
10000+0 records in
10000+0 records out
81920000 bytes transferred in 2.549876 secs (32127050 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
10000+0 records in
10000+0 records out
81920000 bytes transferred in 0.226559 secs (361583258 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
10000+0 records in
10000+0 records out
81920000 bytes transferred in 0.232528 secs (352301702 bytes/sec)

Then with an 800MB file, which based on the results (~360MB/sec down to 
~42MB/sec) presumably blows the cache:

$ dd if=/dev/zero of=/tmp/zero bs=8k count=100000
100000+0 records in
100000+0 records out
819200000 bytes transferred in 26.029121 secs (31472442 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
100000+0 records in
100000+0 records out
819200000 bytes transferred in 19.463309 secs (42089451 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
100000+0 records in
100000+0 records out
819200000 bytes transferred in 19.224657 secs (42611944 bytes/sec)

Trying something in between, I went with a 200MB file:

$ dd if=/dev/zero of=/tmp/zero bs=8k count=25000
25000+0 records in
25000+0 records out
204800000 bytes transferred in 6.517742 secs (31421925 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
25000+0 records in
25000+0 records out
204800000 bytes transferred in 0.866951 secs (236230194 bytes/sec)
$ dd if=/tmp/zero of=/dev/null bs=8k
25000+0 records in
25000+0 records out
204800000 bytes transferred in 0.849929 secs (240961277 bytes/sec)

So here we are somewhere in between -- around 240MB/sec... Looking at 
"top" now, I am seeing:

Mem: 479M Active, 282M Inact, 199M Wired, 111M Buf, 36M Free

compared with the earlier figures:

Mem: 478M Active, 317M Inact, 150M Wired, 36M Cache, 111M Buf, 16M Free

Hopefully all this means something and points you in the right 
direction...!!!

--Antony


More information about the freebsd-stable mailing list