Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

Matthew Dillon dillon at
Fri Mar 24 18:00:38 UTC 2006

:On an amd64 system running about 6-week old -stable, both behave
:pretty much identically.  In both cases, systat reports that the disk
:is about 96% busy whilst loading the cache.  In the cache case, mmap
:is significantly faster.
:turion% ls -l /6_i386/var/tmp/test
:-rw-r--r--  1 peter  wheel  586333684 Mar 24 19:24 /6_i386/var/tmp/test
:turion% /usr/bin/time -l grep dfhfhdsfhjdsfl /6_i386/var/tmp/test
:       21.69 real         0.16 user         0.68 sys
:[umount/remount /6_i386/var]
:turion% /usr/bin/time -l grep --mmap dfhfhdsfhjdsfl /6_i386/var/tmp/test
:       21.68 real         0.41 user         0.51 sys
:The speed gain with mmap is clearly evident when the data is cached and
:the CPU clock wound right down (99MHz ISO 2200MHz):
:Peter Jeremy

    That pretty much means that the read-ahead algorithm is working.
    If it weren't, the disk would not be running at near 100%.

    Ok.  The next test is to NOT do umount/remount and then use a data set
    that is ~2x system memory (but can still be mmap'd by grep).  Rerun
    the data set multiple times using grep and grep --mmap.
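    The procedure above can be sketched as a script.  Everything here is
    illustrative: the file name and log path are made up, SIZE_MB defaults
    small only so the sketch runs at all (for the real test it must be
    scaled to ~2x physical RAM so the data set cannot stay cached), and
    --mmap is GNU grep's flag for the mmap read path.

```shell
#!/bin/sh
# Sketch of the proposed test; paths and sizes are illustrative.
# SIZE_MB should really be ~2x physical RAM so the data set cannot
# stay fully cached; a small default is used only so the sketch runs.
SIZE_MB=${SIZE_MB:-16}
F=$(mktemp /tmp/mmaptest.XXXXXX)
LOG=/tmp/mmaptest.log
: > "$LOG"

# Build the test file once; do NOT umount/remount between passes,
# since the point is to rerun over the same (partially cached) data.
dd if=/dev/zero of="$F" bs=1048576 count="$SIZE_MB" 2>/dev/null

for pass in 1 2 3; do
    for flag in "" "--mmap"; do          # read(2) path vs mmap path
        t0=$(date +%s)
        grep $flag nosuchpattern "$F"    # no match expected; exit 1 is fine
        t1=$(date +%s)
        echo "pass $pass grep $flag: $((t1 - t0))s" | tee -a "$LOG"
    done
done
rm -f "$F"
```

    What to look for is the trend across passes: if the --mmap times grow
    relative to the plain grep times as the runs repeat, the mmap/vm_fault
    path is losing read-ahead.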

    If the times for the mmap case blow up relative to the non-mmap case,
    then the vm_page_alloc() calls and/or vm_page_count_severe() (and other
    tests) in the vm_fault case are causing the read-ahead to drop out.
    If this is the case, the problem is not in the read-ahead path, but 
    probably in the pageout code not maintaining a sufficient number of
    free and cache pages.  The system would only be allocating ~60MB/s
    (or whatever your disk can do), so the pageout thread ought to be able
    to keep up.
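    One way to check the free/cache-page theory while the test runs is to
    watch the pageout daemon's counters.  A hedged sketch, assuming the
    FreeBSD sysctl names of that era (vm.stats.vm.v_free_count and
    vm.stats.vm.v_cache_count); on other systems the fallback branch fires:

```shell
#!/bin/sh
# Illustrative monitoring loop for the free/cache page theory above.
# The sysctl names are FreeBSD's; elsewhere the fallback message prints.
for i in 1 2 3; do
    sysctl vm.stats.vm.v_free_count vm.stats.vm.v_cache_count 2>/dev/null \
        || echo "vm.stats sysctls unavailable on this system"
    sleep 1
done
```

    If the free+cache page counts collapse toward zero while the mmap grep
    runs, that points at pageout not keeping up rather than the read-ahead
    code itself.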

    If the times for the mmap case do not blow up, we are back to square
    one and I would start investigating the disk driver that Mikhail is
    using.
					Matthew Dillon 
					<dillon at>

More information about the freebsd-stable mailing list