vfs.ufs.dirhash_maxmem is a bit low

Dennis Berger db at nipsi.de
Tue Mar 15 07:30:44 PST 2005


hi all,
I recently needed to change a line in about 2.5 million files, spread across
about 1530 subdirectories.
I started this command:

find . \( -not -name restoresym\* -and -not -name \*.log -and -not -name \*.gif \) -type f -print0 | xargs -0 sed -i '' 's/\/wurstbrot\//\/kaesebrot\//'

In another shell I watched with gstat how it was going. After several minutes
I noticed a terrible slowdown: no reads or writes anymore. Watching with
top, I saw about 25% system CPU usage (it's a 4-processor machine).
I tracked the problem down to vfs.ufs.dirhash. I guess the system
was running out of memory for the hash table, so everything became terribly
slow. I raised vfs.ufs.dirhash_maxmem to 30 MB, and after that everything was fine again.
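
For reference, this is roughly what I did (a sketch; the exact byte value is my
own choice for about 30 MB, since dirhash_maxmem is given in bytes, and using
vfs.ufs.dirhash_mem as the current-usage counter is my assumption):

# check the configured limit and how much dirhash memory is currently in use
sysctl vfs.ufs.dirhash_maxmem vfs.ufs.dirhash_mem

# raise the limit to roughly 30 MB on the running system
sysctl vfs.ufs.dirhash_maxmem=31457280

# make it persistent across reboots
echo 'vfs.ufs.dirhash_maxmem=31457280' >> /etc/sysctl.conf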

My request: can we add a kernel message for the case of running out of dirhash memory, like the one for running out of open files?
A message would appear on /dev/console indicating that dirhash_maxmem is too low.
Or point out in tuning(7) that it should be raised on servers with lots of files.
Or raise the default from 2 MB to 10 MB or something.
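
Until something like that exists, one could watch for the condition from
userland. A rough sketch (the 90% threshold and the use of vfs.ufs.dirhash_mem
as the usage counter are my assumptions, not anything the kernel provides):

#!/bin/sh
# rough watchdog: warn when dirhash memory usage nears vfs.ufs.dirhash_maxmem
maxmem=$(sysctl -n vfs.ufs.dirhash_maxmem)
mem=$(sysctl -n vfs.ufs.dirhash_mem)
# complain when more than 90% of the dirhash budget is in use
if [ "$mem" -gt $((maxmem * 9 / 10)) ]; then
    logger -p kern.warning "dirhash memory nearly exhausted: ${mem} of ${maxmem} bytes"
fi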

any comments?

regards,

-Dennis

