Swapping caused by very large (regular) file size
Adam McDougall
mcdouga9 at egr.msu.edu
Wed Jan 16 13:29:00 PST 2008
On Sat, Jan 12, 2008 at 12:05:38AM -0500, John Baldwin wrote:
On Friday 11 January 2008 10:31:47 pm Peter Jeremy wrote:
> On Fri, Jan 11, 2008 at 06:44:20PM +0100, Kris Kennaway wrote:
> >Ian West wrote:
> >> dd if=/dev/zero bs=32768 of=junkfile count=100000 seems to do it quite
> >> reliably on all the boxes I have tested?
> >
> >I am unable to reproduce this on 7.0.
>
> I can't reproduce it on 6.3-PRERELEASE/amd64 with 1GB RAM.
>
> vmstat -s; dd if=/dev/zero bs=32768 of=junkfile count=100000; vmstat -s
> shows the following changes:
> 2 swap pager pageins
> 2 swap pager pages paged in
> 4 swap pager pageouts
> 5 swap pager pages paged out
> 24 vnode pager pageins
> 78 vnode pager pages paged in
> 0 vnode pager pageouts
> 0 vnode pager pages paged out
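The before/after measurement described above can be scripted so only the changed counters are shown. This is a sketch, not from the thread: the snapshot paths are illustrative, and the dd count is reduced from the original report's 100000 (about 3.2 GB) so it runs quickly; vmstat -s output format also varies between systems.

```shell
# Snapshot pager statistics, run the sequential-write test from the
# thread, snapshot again, and show only the counters that changed.
# The original report used count=100000 (~3.2 GB written); count is
# reduced here so the sketch runs quickly.  Paths are illustrative.
vmstat -s > /tmp/vm.before 2>/dev/null || true
dd if=/dev/zero bs=32768 of=/tmp/junkfile count=1000 2>/dev/null
vmstat -s > /tmp/vm.after 2>/dev/null || true
# Lines present only in the "after" snapshot are counters that moved.
diff /tmp/vm.before /tmp/vm.after | grep -i pag || true
rm -f /tmp/junkfile
```

To reproduce the original workload, restore count=100000 and write to a filesystem on the controller under test rather than /tmp.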
You may not have a fast enough disk. We have noticed an issue at work,
but only on faster controllers (e.g. certain mfi(4) drive configurations),
where doing I/O to a single file, as with the dd command mentioned, causes
the buffer cache to fill up. The problem is that we can't lock the vm
object to recycle pages when we hit the limit that is supposed to prevent
this, because all the pages in the cache belong to the file (vm object) we
are working on. Stephan (ups@) says this is fixed in 7. The tell-tale
sign we see is that pagedaemon starts chewing up lots of CPU as the kernel
tries to realign the page queues, while I/O throughput drops sharply and
becomes very erratic.
--
John Baldwin
These are the same symptoms a friend saw on his system a little while back
with 6.x. I forwarded him a message from this thread; he agreed, and
confirmed that he has an mfi(4) controller.
More information about the freebsd-stable mailing list