Tuning for PostgreSQL Database
Michael E. Conlen
meconlen at obfuscated.net
Fri Jul 25 11:26:00 PDT 2003
Usually a database server will have better algorithms for caching information
than the OS, since the OS handles the general case and the server can handle
the specific case. Here I'd look to the PostgreSQL people to see if
they know which is better, but there's also this tidbit found on the page
under the first link:
"There is one way to decide what is best for you. Set a high value of this
parameter and run the database for typical usage. Watch usage of shared
memory using ipcs or similar tools. A recommended figure would be between
1.2 to 2 times peak shared memory usage. "
Give it a run and see.
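To make the quoted rule of thumb concrete, here is a small sketch of the
arithmetic, assuming a hypothetical peak shared memory figure of 100 MB
observed with ipcs (the number is made up for illustration):

```python
# Hypothetical peak shared memory usage observed with ipcs(1), in megabytes.
peak_shared_mb = 100

# The quoted guideline: a recommended figure is 1.2x to 2x peak usage.
low_mb = peak_shared_mb * 1.2
high_mb = peak_shared_mb * 2

print(f"recommended range: {low_mb:.0f} MB to {high_mb:.0f} MB")
# prints "recommended range: 120 MB to 200 MB"
```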
I was concerned about the disk cache because of this link:
PostgreSQL counts a lot on the OS to cache data files and
hence does not bother with duplicating its file caching effort.
The shared buffers parameter assumes that OS is going to cache
a lot of files and hence it is generally very low compared
with system RAM. Even for a dataset in excess of 20GB, a
setting of 128MB may be too much, if you have only 1GB RAM
and an aggressive-at-caching OS like Linux.
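For reference, a sketch of what that advice might look like in
postgresql.conf. The values are illustrative only, not recommendations; in
the 7.x releases shared_buffers is a count of 8 kB buffers rather than a
byte size:

```
# postgresql.conf -- illustrative values only
# shared_buffers is a count of 8 kB buffers; 8192 buffers = 64 MB
shared_buffers = 8192
# effective_cache_size hints to the planner how much the OS is caching,
# also counted in 8 kB pages here (131072 pages = ~1 GB)
effective_cache_size = 131072
```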
But now that I have looked a bit more, I see that this link says:
Ideally, the POSTGRESQL shared buffer cache will be:
* Large enough to hold most commonly-accessed tables
* Small enough to avoid swap pagein activity
So I have conflicting documentation.
I have a machine with 4 GB of RAM. What is the maximum
value of SHMMAX on FreeBSD?
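I don't know the ceiling offhand, but for anyone following along, the
relevant FreeBSD knobs are sysctls. A sketch of raising them (the values
are hypothetical, and on a 32-bit kernel the segment size is still bounded
by the process address space):

```
# /etc/sysctl.conf -- hypothetical values, adjust to your workload
kern.ipc.shmmax=1073741824    # largest single shared memory segment, bytes (1 GB here)
kern.ipc.shmall=262144        # total shared memory limit, in pages
```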