ZFS exhausts kernel memory just by importing zpools
Nagy, Attila
bra at fsn.hu
Wed Jul 3 14:37:50 UTC 2019
On 2019. 07. 02. 18:13, Mike Tancsa wrote:
> On 7/2/2019 10:58 AM, Nagy, Attila wrote:
>> Hi,
>>
>> Running latest stable/12 on amd64 with 64 GiB memory on a machine with
>> 44 4T disks. Each disk has its own zpool (because I handle the
>> redundancy between machines rather than locally with ZFS).
>>
>> One example zpool holds 2.2 TiB of data (according to df) and has
>> around 75 million files in hashed directories; this is the typical
>> usage pattern on them.
>>
>> When I import these zpools, top shows around 50 GiB of wired memory
>> (the ARC is minimal, no files have been touched yet), and after I
>> start to use the pools (heavy reads/writes), free memory quickly
>> disappears (the ARC grows) until all memory is gone and the machine
>> starts to kill processes, ending up in a deadlock where nothing helps.
>>
>> If I import the pools one by one, each of them adds around 1-1.5 GiB
>> of wired memory.
> Hi,
>
> You mean you have 44 different zpools? 75 million files per pool
> sounds like a lot. I wonder, for testing purposes, if you made one or
> two zpools with 44 (or 22) different datasets and had 3.3 billion
> files, would you run into the same memory exhaustion?
>
Yes, 44 different pools.
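
For reference, the layout Mike suggests could be set up roughly like
this (a sketch only; the pool name "tank", the dataset names and the
da0..da43 device names are placeholders for this machine's disks):

    # One pool spanning the disks instead of 44 pools, with one
    # dataset per former pool.
    zpool create tank da0 da1 da2 da3    # ...continue through da43
    i=0
    while [ "$i" -le 43 ]; do
        zfs create "tank/ds$i"
        i=$((i + 1))
    done
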
I think this is related to how ZFS keeps pool metadata in memory. I
don't think it scales with the number of files, but perhaps with the
number of stored blocks.
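
One way to check where the wired memory actually sits is to look at
the kernel's UMA zones and the ARC counters. A rough sketch (the
arcstats sysctls below are the ones I believe stable/12 exposes):

    # Top UMA zones by approximate bytes in use (SIZE * USED); on a
    # ZFS-heavy box the big consumers tend to be zones like zio_buf_*,
    # dnode_t, dmu_buf_impl_t and arc_buf_hdr_t
    vmstat -z | awk -F '[:,]' 'NR > 2 { print $2 * $4, $1 }' | sort -rn | head -20

    # ARC metadata usage vs. limit (values in bytes)
    sysctl kstat.zfs.misc.arcstats.arc_meta_used \
           kstat.zfs.misc.arcstats.arc_meta_limit
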
Sadly, I can't put the same amount of data onto a machine with a
different setup ATM.
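
To quantify the per-import growth, something like this could record
how much wired memory each import alone adds (a sketch; the p00..p43
pool names are placeholders for the real ones):

    pagesize=$(sysctl -n hw.pagesize)
    for pool in p00 p01 p02; do          # ...continue through p43
        before=$(sysctl -n vm.stats.vm.v_wire_count)
        zpool import "$pool"
        after=$(sysctl -n vm.stats.vm.v_wire_count)
        # wired-memory delta in MiB for this pool's import
        echo "$pool: $(( (after - before) * pagesize / 1048576 )) MiB"
    done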