Directories with 2 million files
Eric Anderson
anderson at centtech.com
Fri Apr 23 06:27:35 PDT 2004
Tim Kientzle wrote:
> Eric Anderson wrote:
>
>> First, let me say that I am impressed (but not shocked) - FreeBSD
>> quietly handled my building of a directory with 2055476 files in it.
>> However, several tools seem to choke on that many files ...
>>
>> $ ls -al | wc -l
>> ls: fts_read: Cannot allocate memory
>> 0
>>
>> Watching memory usage, it goes up to about 515Mb, and runs out of
>> memory (can't swap it), and then dies. (I only have 768Mb in this
>> machine).
>
>
> Not "can't swap", but "doesn't need to swap." Your 'top' output
> shows you've got plenty of free swap, so that's not the issue.
> I suspect you have got a per-process data limit of 512MB, so the
> kernel is killing the process when it gets too big. Up that
> limit, and it should succeed.
First, what I was referring to was the fact that after using 512MB of
RAM, it dies with a memory allocation error. So it looked like maybe it
*did* run out of memory (I have 768MB of RAM, but at the time it died I
had many other things running, eating up memory - so it was possible).
You're right - there is a 512MB per-process limit (I didn't realize it
was set so low), which I just confirmed with your commands below:
>
> What does "limit -d" say?
Resource limits (current):
datasize 524288 kb
Ouch! That seems pretty low to me. 1GB would be closer to reasonable
if you ask me, but I'm nobody, so take it with a grain of salt.
> What is the 'datasize' set to in /etc/login.conf?
:datasize=unlimited:
> What are you using for DFLDSIZ in your kernel config file?
> (See /usr/src/sys/conf/NOTES for more information on DFLDSIZ,
> which I believe defaults to 512MB.)
Defaults - I'm running the GENERIC kernel on this machine. 512MB
appears to be the default - I also think this should be 1GB by default.
At the least, MAXDSIZ should be higher than 512MB, and it's not clear
whether the 512MB default that NOTES mentions applies to DFLDSIZ,
MAXSSIZ, MAXDSIZ, or all three.
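For reference, the knobs can be raised in a custom kernel config - a sketch assuming the syntax shown in /usr/src/sys/conf/NOTES (the 1GB/1.5GB values here are just illustrations, not recommendations):

```
# In a custom kernel config file:
options DFLDSIZ=(1024UL*1024*1024)   # default per-process data size (1GB)
options MAXDSIZ=(1536UL*1024*1024)   # maximum a process may raise it to
```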
> If you're using directories with over 2million files,
> you probably have other processes that could use
> more memory as well, so upping this limit is advisable.
Thanks - I'll do that.
> The Real Fix
>
> Of course, 'ls' should probably not be using this
> much memory.
[..snip..]
Thanks, Tim, for the thoughts. Again, the offer is open - I'll test any
code anyone has for me, and even give an account to an interested hacker.
Eric
--
------------------------------------------------------------------
Eric Anderson Sr. Systems Administrator Centaur Technology
Today is the tomorrow you worried about yesterday.
------------------------------------------------------------------
More information about the freebsd-current mailing list