Poor ZFS ARC metadata hit/miss stats after recent ZFS updates

Fabian Keil <freebsd-listen@fabiankeil.de>
Mon Oct 17 12:50:04 UTC 2016


After rebasing some of my systems from r305866 to r307312
(plus local patches), I noticed that most ARC accesses are
now counted as misses.

Example:

[fk@elektrobier2 ~]$ uptime
 2:03PM  up 1 day, 18:36, 7 users, load averages: 0.29, 0.36, 0.30
[fk@elektrobier2 ~]$ zfs-stats -E

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Oct 17 14:03:58 2016
------------------------------------------------------------------------

ARC Efficiency:                                 3.38m
        Cache Hit Ratio:                12.87%  435.23k
        Cache Miss Ratio:               87.13%  2.95m
        Actual Hit Ratio:               9.55%   323.15k

        Data Demand Efficiency:         6.61%   863.01k

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           18.97%  82.54k
          Most Frequently Used:         55.28%  240.60k
          Most Recently Used Ghost:     8.88%   38.63k
          Most Frequently Used Ghost:   24.84%  108.12k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  13.10%  57.03k
          Prefetch Data:                0.00%   0
          Demand Metadata:              32.94%  143.36k
          Prefetch Metadata:            53.96%  234.85k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  27.35%  805.98k
          Prefetch Data:                0.00%   0
          Demand Metadata:              71.21%  2.10m
          Prefetch Metadata:            1.44%   42.48k

------------------------------------------------------------------------
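
For reference, zfs-stats merely aggregates the arcstats kstats that the
kernel exports via sysctl; the raw counters behind the figures above
can be inspected directly with something like:

$ sysctl kstat.zfs.misc.arcstats.hits \
    kstat.zfs.misc.arcstats.misses \
    kstat.zfs.misc.arcstats.demand_metadata_hits \
    kstat.zfs.misc.arcstats.demand_metadata_misses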

I suspect that this is caused by r307265 ("MFC r305323: MFV r302991:
6950 ARC should cache compressed data"), which removed an
ARCSTAT_CONDSTAT() call, but I haven't confirmed this yet.
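
One way to confirm (untested) would be to check the merged diff and the
remaining ARCSTAT_CONDSTAT() uses in the tree the system was built
from:

$ cd /usr/src
$ svnlite diff -c 307265 \
    sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c | \
    grep -C 3 ARCSTAT_CONDSTAT
$ grep -n ARCSTAT_CONDSTAT \
    sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c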

System performance doesn't actually seem to be negatively affected,
and repeated metadata accesses that are counted as misses are still
served from memory: in the runs below, the demand metadata hit counter
doesn't move at all between the later runs while the miss counter
grows by roughly 39k per run, yet "git status" completes in about 1.5
seconds. On my freshly booted laptop I get:

fk@t520 /usr/ports $ for i in 1 2 3; do \
 /usr/local/etc/munin/plugins/zfs-absolute-arc-hits-and-misses; \
 time git status > /dev/null; \
 done; \
 /usr/local/etc/munin/plugins/zfs-absolute-arc-hits-and-misses;
zfs_arc_hits.value 5758
zfs_arc_misses.value 275416
zfs_arc_demand_metadata_hits.value 4331
zfs_arc_demand_metadata_misses.value 270252
zfs_arc_demand_data_hits.value 304
zfs_arc_demand_data_misses.value 3345
zfs_arc_prefetch_metadata_hits.value 1103
zfs_arc_prefetch_metadata_misses.value 1489
zfs_arc_prefetch_data_hits.value 20
zfs_arc_prefetch_data_misses.value 334

real	1m23.398s
user	0m0.974s
sys	0m12.273s
zfs_arc_hits.value 11346
zfs_arc_misses.value 389748
zfs_arc_demand_metadata_hits.value 7723
zfs_arc_demand_metadata_misses.value 381018
zfs_arc_demand_data_hits.value 400
zfs_arc_demand_data_misses.value 3412
zfs_arc_prefetch_metadata_hits.value 3202
zfs_arc_prefetch_metadata_misses.value 4885
zfs_arc_prefetch_data_hits.value 21
zfs_arc_prefetch_data_misses.value 437

real	0m1.472s
user	0m0.452s
sys	0m1.820s
zfs_arc_hits.value 11348
zfs_arc_misses.value 428536
zfs_arc_demand_metadata_hits.value 7723
zfs_arc_demand_metadata_misses.value 419782
zfs_arc_demand_data_hits.value 400
zfs_arc_demand_data_misses.value 3436
zfs_arc_prefetch_metadata_hits.value 3204
zfs_arc_prefetch_metadata_misses.value 4885
zfs_arc_prefetch_data_hits.value 21
zfs_arc_prefetch_data_misses.value 437

real	0m1.537s
user	0m0.461s
sys	0m1.860s
zfs_arc_hits.value 11352
zfs_arc_misses.value 467334
zfs_arc_demand_metadata_hits.value 7723
zfs_arc_demand_metadata_misses.value 458556
zfs_arc_demand_data_hits.value 400
zfs_arc_demand_data_misses.value 3460
zfs_arc_prefetch_metadata_hits.value 3208
zfs_arc_prefetch_metadata_misses.value 4885
zfs_arc_prefetch_data_hits.value 21
zfs_arc_prefetch_data_misses.value 437
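
The munin plugin used above essentially just prints a selection of the
kstat.zfs.misc.arcstats counters; a minimal sketch that produces the
same kind of output (munin's "config" mode omitted):

#!/bin/sh
# Print selected ARC kstat counters in munin's "name.value N" format.
for stat in hits misses \
    demand_metadata_hits demand_metadata_misses \
    demand_data_hits demand_data_misses \
    prefetch_metadata_hits prefetch_metadata_misses \
    prefetch_data_hits prefetch_data_misses; do
        printf 'zfs_arc_%s.value %s\n' "${stat}" \
            "$(sysctl -n kstat.zfs.misc.arcstats.${stat})"
done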

Disabling ARC compression through vfs.zfs.compressed_arc_enabled
does not affect the accounting issue.
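
In case someone wants to retry that part, disabling compressed ARC
amounts to something like:

$ sudo sysctl vfs.zfs.compressed_arc_enabled=0

(or setting it in /boot/loader.conf and rebooting, should the sysctl
be a boot-time tunable on the build in question).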

Can anybody reproduce this?

Fabian