[Bug 216364] ZFS ARC cache is duplicating data? The cache size gets bigger than the pool.
bugzilla-noreply at freebsd.org
Wed Feb 8 10:28:13 UTC 2017
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=216364
--- Comment #7 from k_georgiev at deltanews.bg ---
Hello,
Here are the compression stats:
for i in `zfs list | awk '{print $1}' | grep -v NAME`; do
    zfs get compression $i | tail -n 1
    zfs get compressratio $i | tail -n 1
    echo '----------'
done
zroot compression lz4 local
zroot compressratio 3.74x -
----------
zroot/ROOT compression lz4 inherited from zroot
zroot/ROOT compressratio 1.89x -
----------
zroot/ROOT/default compression lz4 inherited from zroot
zroot/ROOT/default compressratio 1.89x -
----------
zroot/tmp compression lz4 inherited from zroot
zroot/tmp compressratio 4.57x -
----------
zroot/usr compression lz4 inherited from zroot
zroot/usr compressratio 2.37x -
----------
zroot/usr/home compression lz4 inherited from zroot
zroot/usr/home compressratio 1.01x -
----------
zroot/usr/ports compression lz4 inherited from zroot
zroot/usr/ports compressratio 2.23x -
----------
zroot/usr/src compression lz4 inherited from zroot
zroot/usr/src compressratio 2.46x -
----------
zroot/var compression lz4 inherited from zroot
zroot/var compressratio 6.53x -
----------
zroot/var/audit compression lz4 inherited from zroot
zroot/var/audit compressratio 1.00x -
----------
zroot/var/crash compression lz4 inherited from zroot
zroot/var/crash compressratio 1.05x -
----------
zroot/var/log compression lz4 inherited from zroot
zroot/var/log compressratio 6.64x -
----------
zroot/var/mail compression lz4 inherited from zroot
zroot/var/mail compressratio 1.00x -
----------
zroot/var/tmp compression lz4 inherited from zroot
zroot/var/tmp compressratio 1.50x -
----------
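As a side note, the same table can be produced in a single call with `zfs get -t filesystem compression,compressratio` instead of the per-dataset loop. A minimal sketch that pulls just the numeric ratios out of lines shaped like the output above (fed from captured sample lines here, not from a live `zfs` run):

```shell
# Extract "dataset ratio" pairs from zfs-get style output.
# The two sample lines below are copied from the stats above.
printf '%s\n' \
  'zroot compressratio 3.74x -' \
  'zroot/var/log compressratio 6.64x -' |
awk '$2 == "compressratio" { gsub(/x$/, "", $3); print $1, $3 }'
```

This prints one `dataset ratio` pair per line, with the trailing `x` stripped so the values can be fed into further arithmetic.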
The cache size is currently not larger than the pool size, but it is still much
larger than the allocated data, so I don't know whether this answers your
question. This is the current situation:
root at varnish:~ # zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 5.97G 2.87G 3.10G - 73% 48% 1.00x ONLINE -
ARC: 5730M Total, 75M MFU, 5242M MRU, 1060K Anon, 22M Header, 391M Other
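A rough back-of-the-envelope check (my own numbers, assuming the ARC on this version caches blocks uncompressed): the 2.87G allocated on disk, at the pool-wide 3.74x compressratio, corresponds to roughly 10.7G of logical data, so a 5730M ARC can exceed ALLOC without any actual duplication:

```shell
alloc_mb=2870     # zpool list ALLOC: 2.87G, decimal approximation in MB
ratio_x100=374    # zroot compressratio 3.74x, scaled by 100 for integer math
arc_mb=5730       # ARC total as reported by top(1)

# Approximate logical (uncompressed) size of the allocated data:
logical_mb=$(( alloc_mb * ratio_x100 / 100 ))
echo "logical ~${logical_mb}M vs ARC ${arc_mb}M"
# -> logical ~10733M vs ARC 5730M
```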
And the statistics you requested:
root at varnish:~ # sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch: 1139
kstat.zfs.misc.arcstats.sync_wait_for_async: 97
kstat.zfs.misc.arcstats.arc_meta_min: 975210240
kstat.zfs.misc.arcstats.arc_meta_max: 719384968
kstat.zfs.misc.arcstats.arc_meta_limit: 3900840960
kstat.zfs.misc.arcstats.arc_meta_used: 719067792
kstat.zfs.misc.arcstats.duplicate_reads: 2495
kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
kstat.zfs.misc.arcstats.duplicate_buffers: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_full: 0
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 1
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
kstat.zfs.misc.arcstats.l2_padding_needed: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 0
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_successes: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.l2_asize: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cdata_free_on_write: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_evict_l1cached: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_writes_lock_retry: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata: 0
kstat.zfs.misc.arcstats.mfu_ghost_evictable_data: 0
kstat.zfs.misc.arcstats.mfu_ghost_size: 0
kstat.zfs.misc.arcstats.mfu_evictable_metadata: 11820032
kstat.zfs.misc.arcstats.mfu_evictable_data: 66081280
kstat.zfs.misc.arcstats.mfu_size: 78476800
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata: 0
kstat.zfs.misc.arcstats.mru_ghost_evictable_data: 0
kstat.zfs.misc.arcstats.mru_ghost_size: 0
kstat.zfs.misc.arcstats.mru_evictable_metadata: 95819264
kstat.zfs.misc.arcstats.mru_evictable_data: 5226756608
kstat.zfs.misc.arcstats.mru_size: 5500123136
kstat.zfs.misc.arcstats.anon_evictable_metadata: 0
kstat.zfs.misc.arcstats.anon_evictable_data: 0
kstat.zfs.misc.arcstats.anon_size: 1070080
kstat.zfs.misc.arcstats.other_size: 409599856
kstat.zfs.misc.arcstats.metadata_size: 286570496
kstat.zfs.misc.arcstats.data_size: 5293100032
kstat.zfs.misc.arcstats.hdr_size: 22897440
kstat.zfs.misc.arcstats.size: 6012167824
kstat.zfs.misc.arcstats.c_max: 15603363840
kstat.zfs.misc.arcstats.c_min: 1950420480
kstat.zfs.misc.arcstats.c: 15603363840
kstat.zfs.misc.arcstats.p: 7801681920
kstat.zfs.misc.arcstats.hash_chain_max: 3
kstat.zfs.misc.arcstats.hash_chains: 1290
kstat.zfs.misc.arcstats.hash_collisions: 156194
kstat.zfs.misc.arcstats.hash_elements_max: 79495
kstat.zfs.misc.arcstats.hash_elements: 79489
kstat.zfs.misc.arcstats.evict_l2_skip: 0
kstat.zfs.misc.arcstats.evict_l2_ineligible: 2048
kstat.zfs.misc.arcstats.evict_l2_eligible: 67584
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_not_enough: 0
kstat.zfs.misc.arcstats.evict_skip: 1
kstat.zfs.misc.arcstats.mutex_miss: 0
kstat.zfs.misc.arcstats.deleted: 8
kstat.zfs.misc.arcstats.allocated: 9883033
kstat.zfs.misc.arcstats.mfu_ghost_hits: 0
kstat.zfs.misc.arcstats.mfu_hits: 6223489
kstat.zfs.misc.arcstats.mru_ghost_hits: 0
kstat.zfs.misc.arcstats.mru_hits: 1746472
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 9869
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 13974
kstat.zfs.misc.arcstats.prefetch_data_misses: 89
kstat.zfs.misc.arcstats.prefetch_data_hits: 1
kstat.zfs.misc.arcstats.demand_metadata_misses: 3949342
kstat.zfs.misc.arcstats.demand_metadata_hits: 7185451
kstat.zfs.misc.arcstats.demand_data_misses: 123344
kstat.zfs.misc.arcstats.demand_data_hits: 784508
kstat.zfs.misc.arcstats.misses: 4082644
kstat.zfs.misc.arcstats.hits: 7983934
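One derived figure that may be useful: the overall ARC hit ratio computed from the `hits` and `misses` counters above:

```shell
hits=7983934      # kstat.zfs.misc.arcstats.hits
misses=4082644    # kstat.zfs.misc.arcstats.misses

# Integer percentage of lookups served from the ARC:
echo "ARC hit ratio: $(( hits * 100 / (hits + misses) ))%"
# -> ARC hit ratio: 66%
```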
Hope this helps.
Thanks
--
You are receiving this mail because:
You are the assignee for the bug.