What is loading my server so much?

Laszlo Nagy gandalf at shopzeus.com
Thu Dec 9 22:39:04 UTC 2010


> Step 1, get them to define "server" and "too slow":
>
> If you log in and do shell ops, is the system slow to respond?  Based on
> what you've reported, I'd be willing to bet that shell ops are pretty
> responsive.  I can't be 100% sure without more information, but I'm
> willing to bet that what your users are complaining about is your web
> application being slow.  Since you don't say what that application is,
> I can only provide general advice.
Shell response is fast. The application may be slow, but we should see 
why (is it high CPU load or high I/O?).
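
In the meantime, while watching top and gstat, it probably also helps to 
see what the database itself is running at that moment. If I read the 
statistics views right, something like this lists the active statements, 
oldest transaction first (procpid and current_query are the column names 
on our pre-9.2 server):

    SELECT procpid, usename, xact_start, current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY xact_start;
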
> I'm guessing that PostgreSQL is the bottleneck.  I'm going to first make
> a few general suggestions, then provide suggestions on how to isolate the
> problem more specifically.
>
> First off, you have 24G of RAM available and PostgreSQL only seems to
> have access to 400M of it.  Bump shared_buffers up to 2 or 3 G at least,
> and bump up work_mem to at least a few hundred meg, and
> maintenance_work_mem up to 1/2G or so.
Good point. I changed shared_buffers to 4G, work_mem to 512M, and 
maintenance_work_mem to 2G.
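
After reloading (shared_buffers needs a full restart, the other two only 
a reload), a quick check against pg_settings should confirm the values 
that are actually in effect:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');
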
> If the top and gstat outputs are typical, it looks like PostgreSQL is
> doing mostly writes, but is not significantly blocked on writes.  It looks
> like individual PostgreSQL processes are simply taking a long time to do
> their work.
>
> What's in your PostgreSQL log files?  If there's nothing, then bump up
> the logging information in your postgresql.conf.  I particularly like
> log_min_duration_statement at 500 ... any query that takes longer than
> 1/2 second to execute is suspect in the types of applications I work with
> most frequently.
Thanks for that hint also. We have some programs making huge queries 
(once a day or so), so I set this to 3000 ms (3 seconds) for now.
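
Once a slow statement shows up in the log, I plan to run it through 
EXPLAIN ANALYZE to see whether it is scanning whole tables. Something 
like this, where the table and column names are only placeholders and 
not our real schema:

    -- placeholder names, not our real schema
    EXPLAIN ANALYZE
    SELECT o.id, o.total
      FROM orders o
      JOIN customers c ON c.id = o.customer_id
     WHERE c.country = 'HU';
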
> If your application is developed in-house, I'd be willing to bet a paycheck
> that there are LOTS of indexes missing and that PostgreSQL is doing lots
> of seq scans where it could run lots faster if it had indexes.
>
> Check also your autovacuum settings and ensure that tables are not bloating
> out of control due to insufficient vacuuming.  You may have to vacuum full/
> reindex the entire database to get things back under control, which can take
> a long time if it's badly bloated.
Well, we have tables with 5M+ rows (table size over 4G), and we have 500+ 
tables. This is probably out of scope - you cannot really help with that 
without knowing the structure of the database and how it is used, and it 
would take a long time to understand. But anyway, if there is a problem 
with the database (missing indexes), then we should see heavy I/O or CPU 
load from the postgresql processes, right? I mean, if the bottleneck is 
postgresql, then we should be able to see it at the OS level.
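
Still, if I understand the statistics collector right, it should point at 
the suspect tables without anyone having to know the schema - roughly the 
tables with the most sequential scan traffic and the most dead rows:

    SELECT relname, seq_scan, seq_tup_read, idx_scan,
           n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     ORDER BY seq_tup_read DESC
     LIMIT 20;
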
> Your application may also be suffering from lock contention if there are
> lots of table locks used.  Looking at the pg_locks table while things are
> slow can quickly identify if this is the case, and looking at
> pg_stat_activity in conjunction with that table will usually narrow down
> the problem pretty quickly.
We do not issue "lock" statements directly, but we use database 
transactions. We have a tool for checking the state of those. Most of 
the time only a few database transactions are open, and usually we 
don't have any locks that are not granted.
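
For what it is worth, the check we do is roughly equivalent to this (a 
sketch joining pg_locks to pg_stat_activity; again, procpid and 
current_query are the pre-9.2 column names):

    SELECT l.locktype, l.relation::regclass AS relation, l.mode, l.granted,
           a.procpid, a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;
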
> Finally, if you find that PostgreSQL is the bottleneck and you can't
> narrow it down enough to fix, join the PostgreSQL general questions
> mailing list and ask for help with the same level of detail you did
> here.  You'll find that they're an equally helpful community.
I'll do that, but first let me test the new settings.

Yes, thank you very much for your help.


    Laszlo


