ZFS exhausts kernel memory just by importing zpools
mike at sentex.net
Tue Jul 2 16:14:04 UTC 2019
On 7/2/2019 10:58 AM, Nagy, Attila wrote:
> Running latest stable/12 on amd64 with 64 GiB of memory on a machine with
> 44 4 TB disks. Each disk has its own zpool on it (because I handle the
> redundancy between machines, not locally with ZFS).
> One example zpool holds 2.2 TiB of data (according to df) and has
> around 75 million files in hashed directories; this is the typical
> usage on them.
> When I import these zpools, top shows around 50 GiB of wired memory
> (ARC is minimal, the files haven't been touched yet), and after I start
> using the pools (heavy reads/writes), the free memory quickly
> disappears (ARC grows) until all memory is gone and the machine starts
> to kill processes, ending up in a deadlock where nothing helps.
> If I import the pools one by one, each of them adds around 1-1.5 GiB
> of wired memory.
You mean you have 44 different zpools? 75 million files per pool sounds
like a lot. I wonder, for testing purposes, if you made one or two zpools
with 44 (or 22) different datasets and had 3.3 billion files, would you
run into the same memory exhaustion?
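To pin down the per-pool overhead the original report describes (1-1.5 GiB of wired memory per import), something like the sketch below could record the wired-page delta around each import. This is only a sketch: the pool names (pool01, pool02) are placeholders, and it reads the standard FreeBSD sysctls vm.stats.vm.v_wire_count and hw.pagesize; the import loop is skipped entirely when zpool isn't present.

```shell
#!/bin/sh
# Sketch: measure how much wired memory each zpool import adds.
# Pool names are hypothetical placeholders; extend the list to all 44.

# Convert a wired-page-count delta to GiB (pure awk arithmetic).
delta_gib() {  # usage: delta_gib pages_before pages_after pagesize
    awk -v b="$1" -v a="$2" -v p="$3" \
        'BEGIN { printf "%.2f", (a - b) * p / (1024 * 1024 * 1024) }'
}

# Only attempt imports where the ZFS tools actually exist.
if command -v zpool >/dev/null 2>&1; then
    pagesize=$(sysctl -n hw.pagesize)
    for pool in pool01 pool02; do
        before=$(sysctl -n vm.stats.vm.v_wire_count)
        zpool import "$pool"
        after=$(sysctl -n vm.stats.vm.v_wire_count)
        echo "$pool: $(delta_gib "$before" "$after" "$pagesize") GiB wired"
    done
fi
```

Comparing that per-import delta between the 44-pool layout and a 1-2-pool, many-dataset layout would show directly whether the overhead scales with pool count or with file count.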
More information about the freebsd-fs mailing list