Swapping caused by very large (regular) file size

Ian West ian at niw.com.au
Sun Dec 2 22:03:02 PST 2007


Hello, I have noticed while benchmarking a system with a fair bit of
RAM (3G usable of 4G installed) that using a very large file (3G and
upwards) in a simple benchmark causes the system to swap, even though
the process itself does not appear in top to be using much memory. As
soon as the swapping starts, throughput degrades dramatically. The
'inactive' RAM shown in top increases rapidly and 'free' RAM shrinks,
which seems fair and sensible, but allowing the system to then page
out, possibly to the same spindle/array, seems like a bad idea?
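
Something like the loop below (assuming the vm.stats.vm counters on
your release carry the same names as on mine) shows the inactive queue
growing and free pages shrinking while the file is being written:

    # watch the VM page queues once a second during the benchmark
    while :; do
        sysctl -n vm.stats.vm.v_free_count \
                  vm.stats.vm.v_inactive_count \
                  vm.stats.vm.v_active_count
        sleep 1
    done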

I have tested this on a 4.11 system with 512M of RAM as well as a
RELENG-6 system with an Areca RAID controller, and both behave in the
same way: once the file gets to a certain size the system starts
paging. Is there any way to tune this behaviour?
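
The only sysctls I could find that look even vaguely related are the
write clustering ones below; these are guesses on my part rather than
a known fix, but perhaps worth experimenting with:

    # possibly relevant knobs (assumptions, not a confirmed tuning)
    sysctl vfs.write_behind      # cluster write-behind for sequential writes
    sysctl vfs.hirunningspace    # upper bound on in-flight write I/O
    sysctl vfs.lorunningspace    # lower bound on in-flight write I/O

    # e.g. to try turning write-behind off:
    sysctl vfs.write_behind=0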

The test I have been doing is just generating a big file full of nulls,
but bonnie++ causes the same behaviour with very large file sizes.

dd if=/dev/zero bs=32768 of=junkfile count=100000 seems to trigger it
quite reliably on all the boxes I have tested.
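
Watching vmstat in a second terminal while the dd runs makes the
transition fairly obvious; the 'po' (page-out) column should go
non-zero and swapinfo should show swap in use once the file approaches
the size of RAM:

    # second terminal, while the dd above is running
    vmstat 5      # watch the 'po' column for page-out activity
    swapinfo      # shows swap space actually in use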

Using cp to copy the file doesn't appear to cause the problem.

Any thoughts or suggestions would be much appreciated.



