[GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

Sven Willenberger sven at dmv.com
Thu Jun 30 03:45:49 GMT 2005



Tom Lane presumably uttered the following on 06/29/05 19:12:
> Sven Willenberger <sven at dmv.com> writes:
> 
>>I have found the answer/problem. On a hunch I increased maxdsiz to 1.5G
>>in the loader.conf file and rebooted. I ran vacuumdb and watched top as
>>the process proceeded. What I saw was SIZE sitting at 603MB (512MB plus
>>another 91MB, the latter corresponding nicely to the value of RES for
>>the process). A bit into the process I saw SIZE jump to 1115MB -- i.e.
>>another 512MB of RAM was requested and this time allocated. At one
>>point SIZE dropped back to 603MB and then back up to 1115MB. I suspect
>>the same type of issue was occurring in regular VACUUM run from the
>>psql client connecting to the backend, for some reason not as
>>frequently. I am gathering that either maintenance_work_mem is not
>>being recognized as having already been allocated and another malloc
>>is made, or the process thinks the memory was released and tries to
>>grab a chunk of memory again.
> 
> 
> Hmm.  It's probably a fragmentation issue.  VACUUM will allocate a 
> maintenance_work_mem-sized chunk during command startup, but that's
> likely not all that gets allocated, and if any stuff allocated after
> it is not freed at the same time, the process size won't go back down.
> Which wouldn't be a killer in itself, but unless the next iteration
> is able to fit that array in the same space, you'd see the above
> behavior.
> 
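For reference, the loader.conf tuning described in my quoted message
above amounts to something like the following (a sketch -- 1.5G written
out in bytes; the right value depends on how much RAM is in the box):

    # /boot/loader.conf -- raise the per-process data segment limit
    # beyond the 512MB i386 default so one backend can malloc() more
    kern.maxdsiz="1610612736"    # 1.5GB
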
So maintenance_work_mem is not a cap on the total that a maintenance 
procedure can allocate, but rather the size of each increment of memory 
requested by a maintenance process (which currently means VACUUM and 
index creation, no?), if my reading of the above is correct.
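
A quick way to test that reading, I suppose, is to override the value 
per session and watch top while vacuuming a single table (a sketch; 
"mytable" is a placeholder, and on 8.0 the setting is given in plain kB):

    -- check the server-wide setting
    SHOW maintenance_work_mem;
    -- override for this session only: 262144 kB = 256MB
    SET maintenance_work_mem = 262144;
    -- vacuum one table and watch SIZE/RES in top while it runs
    VACUUM VERBOSE mytable;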

> BTW, do you have any evidence that it's actually useful to set
> maintenance_work_mem that high for VACUUM?  A quick and dirty solution
> would be to bound the dead-tuples array size at something more sane...
> 
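Some back-of-the-envelope arithmetic on that point: if, as I gather, 
the dead-tuples array holds roughly 6 bytes per tuple pointer, then a 
512MB setting tracks

    536870912 bytes / 6 bytes per pointer ~= 89 million dead tuples

between index-cleaning passes -- far more than these tables should ever 
accumulate, so it probably is not useful to set it that high.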

I was under the assumption that on systems with RAM to spare, it was 
beneficial to set maintenance_work_mem high to make those processes 
more efficient. Again, my thinking was that the value set for that 
variable determined a *max* allocation by any given maintenance 
process, not the size of each memory allocation request. If, as my 
tests would indicate, the process can request and receive more memory 
than maintenance_work_mem specifies, then to play it safe I imagine I 
could drop that value to 256MB or so.
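
Concretely, that would be something like the following in 
postgresql.conf, followed by a reload (the data directory path and the 
kB units are my assumptions for a stock 8.0 install on FreeBSD):

    # postgresql.conf -- 262144 kB = 256MB (8.0 takes this in plain kB;
    # no unit suffixes)
    maintenance_work_mem = 262144

    # then have the postmaster re-read the config:
    $ pg_ctl reload -D /usr/local/pgsql/data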

Sven
