How does disk caching work?

Uwe Doering gemini at geminix.org
Sat Apr 17 00:41:24 PDT 2004


Jim C. Nasby wrote:
> On Sat, Apr 17, 2004 at 01:56:55AM +0400, "Igor Shmukler"  wrote:
> 
>>>Is there a document anywhere that describes in detail how FreeBSD
>>>handles disk caching? I've read Matt Dillon's description of the VM
>>>system, but it deals mostly with programs, other than vague statements
>>>such as 'FreeBSD uses all available memory for disk caching'.
>>
>>Well, the statement is not vague. FreeBSD has a unified buffer cache. This means that ALL AVAILABLE 
>>MEMORY IS A BUFFER CACHE for all device IO.
>>
>>>I think I know how caching memory mapped IO works for the most part,
>>>since it should be treated just like program data, but what about files
>>>that aren't memory mapped? What impact is there as pages move from
>>>active to inactive to cache to free? What role do wired and buffer pages
>>>play?
>>
>>If a file is not memory mapped, it is not in memory, is it? Where do you cache it? Maybe I am missing 
>>something? Do you maybe want to know about node caching?
> 
> What if the file isn't memory mapped? You can access a file without
> mapping it into memory, right?

In FreeBSD, file and directory data always exists as VM objects, that 
is, collections of virtual memory pages.  The pages that have been 
accessed exist in physical memory (unless they have been recycled due to 
inactivity); the rest are just reservations.  That's why it is called 
"virtual memory". 
Whether these objects get accessed by read()/write() or mmap() depends 
on your application.  These system calls are just different userland 
interfaces to the same kernel resource.
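
To make this concrete, here is a minimal userland sketch (my own, with 
/etc/motd as nothing more than an example path): the read() path and the 
mmap() path below end up touching the same VM pages of that file.

    /*
     * Sketch: access the same file once via read() and once via mmap().
     * Both paths go through the unified buffer cache.
     */
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct stat sb;
        char buf[64], *p;
        ssize_t n;
        int fd;

        if ((fd = open("/etc/motd", O_RDONLY)) == -1)
            err(1, "open");
        if (fstat(fd, &sb) == -1)
            err(1, "fstat");

        /* Path 1: read() copies data from the cached pages into buf. */
        if ((n = read(fd, buf, sizeof(buf))) == -1)
            err(1, "read");
        printf("read() returned %zd bytes\n", n);

        /* Path 2: mmap() maps those same pages into our address space. */
        p = mmap(NULL, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        printf("first byte via mmap(): 0x%02x\n", (unsigned char)p[0]);

        munmap(p, sb.st_size);
        close(fd);
        return (0);
    }

In both cases the kernel works with the same set of pages; the 
difference is only in how the data reaches the application.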

>>When pages are rotated from active to inactive and then to the cache buckets, they still retain their vnode 
>>references. Once a page is in the free queue, there is no way to put it back into the cache; the association is lost.
>>
>>Wired pages are used to pin memory, so that we do not get a situation where fault handling code is paged out.
>>
>>I am not a FreeBSD guru, so I have never heard of BUFFER pages. Is there such a concept?
> 
> I'm referring to the 'Buf' column at the top of top. I remember reading
> something about that being used to cache file descriptors before the
> files are mapped into memory, but I'm not very clear on what is actually
> happening.

The disk i/o buffers you refer to (the 'Buf' column in 'top') are the 
actual interface between the VM system and the disk device drivers.  For 
file and directory data, sets of VM pages get referenced by and assigned 
to disk i/o buffers.  There they are handled by a kernel daemon process 
that does the actual synchronization between the VM system and the disks. 
That's where the soft updates algorithm is implemented, for instance.
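
By the way, if I remember correctly the number top(1) shows there is 
taken from the vfs.bufspace sysctl (name and width from memory, so 
treat this as a sketch), which you can also read directly:

    /*
     * Sketch: query the amount of memory currently tied up in disk i/o
     * buffers.  The sysctl's integer width has varied between releases,
     * so fetch it generically and look at the reported size.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char raw[sizeof(int64_t)];
        size_t len = sizeof(raw);
        int64_t wide;
        int narrow;

        if (sysctlbyname("vfs.bufspace", raw, &len, NULL, 0) == -1)
            err(1, "sysctlbyname(vfs.bufspace)");

        if (len == sizeof(narrow)) {
            memcpy(&narrow, raw, sizeof(narrow));
            printf("vfs.bufspace: %d bytes\n", narrow);
        } else {
            memcpy(&wide, raw, sizeof(wide));
            printf("vfs.bufspace: %jd bytes\n", (intmax_t)wide);
        }
        return (0);
    }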

In the case of file and directory data, once the data has been written 
out to disk (if the memory pages were "dirty"), the respective disk i/o 
buffer gets released immediately and can be recycled for other purposes, 
since it just referred to memory pages that continue to exist within the 
VM system.
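
From userland you can force that write-out yourself with fsync().  A 
small sketch (/tmp/scratch is just an example path):

    /*
     * Sketch: dirty a few file pages and push them out to disk.
     */
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd;

        if ((fd = open("/tmp/scratch", O_WRONLY | O_CREAT | O_TRUNC,
            0644)) == -1)
            err(1, "open");

        /* write() only dirties pages within the VM system ... */
        if (write(fd, "hello\n", 6) != 6)
            err(1, "write");

        /* ... fsync() makes the kernel push them through the disk i/o
           buffers to the device before it returns. */
        if (fsync(fd) == -1)
            err(1, "fsync");

        close(fd);
        return (0);
    }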

Metadata (inodes etc.) is a different matter, though.  There is no VM 
representation for it, so for disk i/o it has to be cached in extra 
memory allocated for this purpose.  A disk i/o buffer then refers to 
this memory range and tries to keep it around for as long as possible. 
A classical cache algorithm like LRU recycles these buffers and memory 
allocations eventually.
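
Just to illustrate the LRU idea itself (a toy sketch of mine, nothing 
like the real buffer code): keep a small table of entries, refresh a 
use counter on every hit, and recycle the least recently used slot on 
a miss.

    /*
     * Toy LRU cache: NSLOTS entries, victim = smallest use counter.
     */
    #include <stdio.h>
    #include <string.h>

    #define NSLOTS 4

    struct slot {
        char key[32];
        long last_use;          /* 0 means the slot is empty */
    };

    static struct slot table[NSLOTS];
    static long tick;

    static void
    lookup(const char *key)
    {
        int i, victim = 0;

        tick++;
        for (i = 0; i < NSLOTS; i++) {
            if (table[i].last_use != 0 &&
                strcmp(table[i].key, key) == 0) {
                table[i].last_use = tick;       /* cache hit */
                printf("hit:  %s\n", key);
                return;
            }
            /* Track the least recently used (or empty) slot. */
            if (table[i].last_use < table[victim].last_use)
                victim = i;
        }
        printf("miss: %s (recycling slot %d)\n", key, victim);
        strlcpy(table[victim].key, key, sizeof(table[victim].key));
        table[victim].last_use = tick;
    }

    int
    main(void)
    {
        const char *names[] = { "inode 5", "inode 7", "inode 5",
            "inode 9", "inode 11", "inode 13", "inode 5" };
        size_t i;

        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
            lookup(names[i]);
        return (0);
    }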

As usual, the actual implementation is even more complex, but I think 
this gives you a picture of how it works.

    Uwe
-- 
Uwe Doering         |  EscapeBox - Managed On-Demand UNIX Servers
gemini at geminix.org  |  http://www.escapebox.net

