ZFS ARC Metadata "sizing" for datasets & snapshots
Peter Eriksson
pen at lysator.liu.se
Sun Mar 29 21:26:30 UTC 2020
Hmm.. I wonder if someone knows how much ZFS ARC metadata one should expect for a certain server.
Just for fun, I did some tests.
On a test server with 512 GB RAM, arc_max set to 384 GB, arc_meta_limit set to 256 GB, 12866 filesystems (datasets) and 430610 snapshots (spread out over those filesystems), and an uptime of one day doing basically nothing (apart from taking some snapshots), I get these numbers:
anon_size: 1.0 M
arc_max: 412.3 G
arc_meta_limit: 274.9 G
arc_meta_max: 33.5 G
arc_meta_used: 33.5 G
compressed_size: 9.5 G
data_size: 7.4 G
hdr_size: 462.3 M
metadata_size: 30.2 G
mru_size: 16.7 G
other_size: 2.8 G
overhead_size: 28.2 G
size: 40.9 G
uncompressed_size: 45.9 G
I.e. ARC metadata_size at 30.2 GB / arc_meta_used at 33.5 GB.
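For reference, on FreeBSD these counters can be read straight from sysctl under kstat.zfs.misc.arcstats; a quick sketch (the selection of counters here is just the ones quoted above — values come back in bytes, and note that the arc_max/arc_meta_limit tunables themselves live under vfs.zfs):

```shell
# Dump the ARC counters quoted above (FreeBSD, values in bytes).
for s in size arc_meta_used arc_meta_max arc_meta_limit \
         metadata_size data_size hdr_size other_size \
         anon_size mru_size compressed_size uncompressed_size overhead_size; do
    sysctl kstat.zfs.misc.arcstats.$s
done
```

This needs to run on a FreeBSD box with the ZFS module loaded, of course.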
Doing a “zfs list -t all” takes ~100s and “zfs list” takes ~3s.
On a production server (just booted) with only 256 GB RAM, “zfs list” takes 0.4 s but “zfs list -t all” is taking a long time…
anon_size: 3.3 M
arc_max: 103.1 G
arc_meta_limit: 51.5 G
arc_meta_max: 2.2 G
arc_meta_used: 2.2 G
compressed_size: 1.3 G
data_size: 2.7 G
hdr_size: 17.9 M
metadata_size: 2.0 G
mru_size: 3.3 G
other_size: 180.5 M
overhead_size: 3.4 G
size: 4.9 G
uncompressed_size: 3.4 G
The “zfs list -t all” took 2542 seconds (~42 minutes) for 131256 datasets+snapshots (1600 filesystems). That is ~50 snapshots/filesystems per second.
After that command has executed, metadata_size has increased by ~5 GB, and a new “zfs list -t all” takes just 37 seconds.
anon_size: 5.1 M
arc_max: 103.1 G
arc_meta_limit: 51.5 G
arc_meta_max: 7.9 G
arc_meta_used: 7.8 G
compressed_size: 2.5 G
data_size: 1.5 G
hdr_size: 98.0 M
metadata_size: 7.1 G
mru_size: 7.3 G
other_size: 660.9 M
overhead_size: 6.1 G
size: 9.4 G
uncompressed_size: 8.2 G
So... perhaps ~40 KB (5 GB / 131256) per dataset/snapshot on average. (Yes, oversimplified, but anyway.)
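The back-of-the-envelope arithmetic, for anyone who wants to redo it with their own numbers (figures taken from the text above):

```shell
# ARC metadata growth divided by the number of datasets+snapshots listed.
awk 'BEGIN {
    growth_bytes = 5 * 1024 * 1024 * 1024   # ~5 GB metadata_size increase
    objects      = 131256                   # datasets + snapshots
    printf "%.1f KB per object\n", growth_bytes / objects / 1024
}'
```

which comes out just shy of 40 KB per object.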
Hmm.. Perhaps one should regularly do a “zfs list -t all >/dev/null” just to prime the ARC metadata cache (and keep it primed) :-)
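If one actually wanted to do that, a crontab sketch (the 04:00 schedule is just an arbitrary example, and whether the metadata stays resident depends on arc_meta_limit and memory pressure):

```shell
# Hypothetical /etc/crontab entry: walk all datasets and snapshots
# nightly so their metadata stays warm in the ARC.
# 0 4 * * * root /sbin/zfs list -t all > /dev/null 2>&1
```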
- Peter
More information about the freebsd-fs mailing list