Swapping caused by very large (regular) file size

Kris Kennaway kris at FreeBSD.org
Fri Jan 11 09:44:23 PST 2008


Ian West wrote:
> Hello, I have noticed while benchmarking a system with a fair bit of RAM
> (3G usable of 4G installed) that using a very large file (3G and
> upwards) in a simple benchmark will cause the system to swap, even
> though top does not show the process itself using much memory. As soon
> as the swapping starts, throughput degrades dramatically. The 'inactive'
> RAM shown in top increases rapidly and 'free' RAM shrinks, which seems
> fair and sensible, but letting the system then page out, possibly to the
> same spindle/array, seems like a bad idea?
> 
> I have tested this on a 4.11 system with 512M of RAM as well as a
> RELENG_6 system with an Areca RAID controller; both behave in the same
> way: once the file reaches a certain size, the system starts paging. Is
> there any way to tune this behaviour?
> 
> The test I have been doing is just generating a big file full of nulls,
> but bonnie++ causes the same behaviour with very large file sizes.
> 
> dd if=/dev/zero bs=32768 of=junkfile count=100000 (a file of roughly
> 3 GB) seems to do it quite reliably on all the boxes I have tested.
> 
> Using cp to copy the file doesn't appear to cause the problem.
> 
> Any thoughts or suggestions would be much appreciated.

I am unable to reproduce this on 7.0.  On a system with 4GB of RAM,
creating an 8GB file causes almost all of memory to be allocated to
'inactive' (as expected; these are cached pages kept in case something
accesses the file again).  However, when free memory gets down to about
110M, the inactive pages are recycled and free memory does not drop
below that level.  Can you confirm whether 7.0 continues to manifest
the problem for you?
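
If you want to watch this while the test runs, the relevant VM counters
are exported via sysctl.  Below is a rough monitoring sketch, assuming a
stock FreeBSD 6.x/7.x system; the counters are in pages (multiply by
hw.pagesize for bytes) and the 5-second interval is arbitrary:

    #!/bin/sh
    # Pagedaemon thresholds (in pages): the system begins reclaiming
    # inactive pages as free memory approaches vm.v_free_target.
    sysctl vm.v_free_target vm.v_free_min
    # Poll free/inactive page counts and swap usage during the dd run.
    while :; do
        free=$(sysctl -n vm.stats.vm.v_free_count)
        inact=$(sysctl -n vm.stats.vm.v_inactive_count)
        echo "free: ${free} pages  inactive: ${inact} pages"
        swapinfo -k | tail -1   # a growing 'Used' column means real swapping
        sleep 5
    done

If the 'Used' column from swapinfo stays at zero while free memory
levels off, the machine is just recycling cache pages rather than
actually swapping.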

kris

