[Bug 216178] ZFS ARC and L2ARC are unrealistically large, maybe after r307265
bugzilla-noreply at freebsd.org
Mon Feb 20 15:37:24 UTC 2017
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=216178
Rémi Guyomarch <remi.guyomarch at ign.fr> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |remi.guyomarch at ign.fr
--- Comment #12 from Rémi Guyomarch <remi.guyomarch at ign.fr> ---
Same thing here, running 10.3-STABLE r313140.
It did NOT happen on r301989.
This is a large virtual NAS offering both NFSv3 and SMB shares. The cache
devices are also virtualized, and TRIM isn't running here.
# zpool list -v tank
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
tank 288T 174T 114T - 19% 60% 1.00x ONLINE -
raidz2 48.0T 28.9T 19.0T - 19% 60%
da9 - - - - - -
da10 - - - - - -
da11 - - - - - -
da12 - - - - - -
da13 - - - - - -
da14 - - - - - -
raidz2 48.0T 28.9T 19.0T - 19% 60%
da15 - - - - - -
da16 - - - - - -
da17 - - - - - -
da18 - - - - - -
da19 - - - - - -
da20 - - - - - -
raidz2 48.0T 28.9T 19.0T - 19% 60%
da21 - - - - - -
da22 - - - - - -
da23 - - - - - -
da24 - - - - - -
da25 - - - - - -
da26 - - - - - -
raidz2 48.0T 29.0T 19.0T - 19% 60%
da27 - - - - - -
da28 - - - - - -
da29 - - - - - -
da30 - - - - - -
da31 - - - - - -
da32 - - - - - -
raidz2 48.0T 28.9T 19.0T - 19% 60%
da33 - - - - - -
da34 - - - - - -
da35 - - - - - -
da36 - - - - - -
da37 - - - - - -
da38 - - - - - -
raidz2 48.0T 28.9T 19.0T - 19% 60%
da39 - - - - - -
da40 - - - - - -
da41 - - - - - -
da42 - - - - - -
da43 - - - - - -
da44 - - - - - -
log - - - - - -
mirror 2.98G 2.01M 2.98G - 20% 0%
da1 - - - - - -
da2 - - - - - -
cache - - - - - -
da3 256G 764G 16.0E - 0% 298%
da4 256G 757G 16.0E - 0% 295%
da5 256G 762G 16.0E - 0% 297%
da6 256G 747G 16.0E - 0% 291%
da7 256G 776G 16.0E - 0% 303%
da8 256G 743G 16.0E - 0% 290%
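The 16.0E FREE on the cache devices looks like an unsigned 64-bit underflow: with the inflated accounting, ALLOC exceeds SIZE, so FREE = SIZE - ALLOC goes negative and wraps to just under 2^64 bytes, i.e. roughly 16 EiB. A minimal sketch using da3's figures (assuming binary units, G = GiB, E = EiB):

```python
# Sketch: unsigned 64-bit wraparound that would explain the 16.0E FREE.
# Figures taken from the zpool list output above for da3.
SIZE = 256 * 2**30             # 256G cache device
ALLOC = 764 * 2**30            # 764G reported as allocated (the bug)
free = (SIZE - ALLOC) % 2**64  # negative difference wraps as unsigned 64-bit
print(round(free / 2**60, 1))  # -> 16.0 (EiB)
```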
# zfs-stats -a
------------------------------------------------------------------------
ZFS Subsystem Report Mon Feb 20 16:33:13 2017
------------------------------------------------------------------------
System Information:
Kernel Version: 1003511 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64
ZFS Storage pool Version: 5000
ZFS Filesystem Version: 5
FreeBSD 10.3-STABLE #2 r313140M: Fri Feb 3 09:38:12 CET 2017 root
16:33 up 7 days, 8:32, 1 user, load averages: 0.16 0.37 0.55
------------------------------------------------------------------------
System Memory:
0.00% 5.28 MiB Active, 0.47% 670.72 MiB Inact
89.61% 125.70 GiB Wired, 0.00% 0 Cache
9.92% 13.92 GiB Free, 0.00% 4.00 KiB Gap
Real Installed: 160.00 GiB
Real Available: 89.98% 143.97 GiB
Real Managed: 97.44% 140.28 GiB
Logical Total: 160.00 GiB
Logical Used: 90.89% 145.42 GiB
Logical Free: 9.11% 14.58 GiB
Kernel Memory: 1.30 GiB
Data: 97.94% 1.27 GiB
Text: 2.06% 27.29 MiB
Kernel Memory Map: 140.28 GiB
Size: 82.86% 116.24 GiB
Free: 17.14% 24.04 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 23.76m
Recycle Misses: 0
Mutex Misses: 36.19k
Evict Skips: 6.43k
ARC Size: 83.09% 115.73 GiB
Target Size: (Adaptive) 83.11% 115.76 GiB
Min Size (Hard Limit): 12.50% 17.41 GiB
Max Size (High Water): 8:1 139.28 GiB
ARC Size Breakdown:
Recently Used Cache Size: 62.61% 72.48 GiB
Frequently Used Cache Size: 37.39% 43.28 GiB
ARC Hash Breakdown:
Elements Max: 14.41m
Elements Current: 98.47% 14.19m
Collisions: 16.02m
Chain Max: 7
Chains: 2.27m
------------------------------------------------------------------------
ARC Efficiency: 3.28b
Cache Hit Ratio: 18.94% 620.69m
Cache Miss Ratio: 81.06% 2.66b
Actual Hit Ratio: 5.26% 172.47m
Data Demand Efficiency: 30.02% 138.53m
Data Prefetch Efficiency: 82.77% 124.81m
CACHE HITS BY CACHE LIST:
Anonymously Used: 71.34% 442.81m
Most Recently Used: 2.11% 13.07m
Most Frequently Used: 25.68% 159.41m
Most Recently Used Ghost: 0.02% 102.06k
Most Frequently Used Ghost: 0.86% 5.31m
CACHE HITS BY DATA TYPE:
Demand Data: 6.70% 41.58m
Prefetch Data: 16.64% 103.30m
Demand Metadata: 3.92% 24.33m
Prefetch Metadata: 72.74% 451.49m
CACHE MISSES BY DATA TYPE:
Demand Data: 3.65% 96.95m
Prefetch Data: 0.81% 21.51m
Demand Metadata: 95.51% 2.54b
Prefetch Metadata: 0.03% 880.35k
------------------------------------------------------------------------
L2 ARC Summary: (DEGRADED)
Passed Headroom: 975.75k
Tried Lock Failures: 121.14m
IO In Progress: 5
Low Memory Aborts: 217
Free on Write: 181.66k
Writes While Full: 46.56k
R/W Clashes: 0
Bad Checksums: 3.00m
IO Errors: 0
SPA Mismatch: 1.97b
L2 ARC Size: (Adaptive) 6.82 TiB
Header Size: 0.01% 973.67 MiB
L2 ARC Evicts:
Lock Retries: 120
Upon Reading: 0
L2 ARC Breakdown: 2.66b
Hit Ratio: 0.61% 16.16m
Miss Ratio: 99.39% 2.64b
Feeds: 688.84k
L2 ARC Buffer:
Bytes Scanned: 5.87 PiB
Buffer Iterations: 688.84k
List Iterations: 2.75m
NULL List Iterations: 2.97k
L2 ARC Writes:
Writes Sent: 100.00% 365.54k
------------------------------------------------------------------------
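Note that the reported adaptive L2 ARC size (6.82 TiB) is itself several times the total physical capacity of the six 256G cache devices, which is consistent with the over-accounting suspected since r307265. A quick sanity check of the ratio (assuming binary units):

```python
# Quick check: reported L2 ARC size vs. physical cache capacity.
# Values come from the zfs-stats report above; binary units assumed.
reported_l2_size = 6.82 * 2**40     # "L2 ARC Size: (Adaptive) 6.82 TiB"
cache_capacity = 6 * 256 * 2**30    # six 256G cache vdevs (da3-da8)
ratio = reported_l2_size / cache_capacity
print(f"{ratio:.1f}x")              # roughly 4.5x the real capacity
```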
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 16.05b
Hit Ratio: 2.21% 354.01m
Miss Ratio: 97.79% 15.69b
Colinear: 0
Hit Ratio: 100.00% 0
Miss Ratio: 100.00% 0
Stride: 0
Hit Ratio: 100.00% 0
Miss Ratio: 100.00% 0
DMU Misc:
Reclaim: 0
Successes: 100.00% 0
Failures: 100.00% 0
Streams: 0
+Resets: 100.00% 0
-Resets: 100.00% 0
Bogus: 0
------------------------------------------------------------------------
VDEV Cache Summary: 5.56m
Hit Ratio: 22.19% 1.23m
Miss Ratio: 65.26% 3.63m
Delegations: 12.55% 696.99k
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 9550
vm.kmem_size 150625865728
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1319413950874
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 0
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 1
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.version.ioctl 7
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.dva_throttle_enabled 1
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.zio.use_uma 1
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.min_auto_ashift 12
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.queue_depth_pct 1000
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 60
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 100
vfs.zfs.vdev.async_write_min_active 10
vfs.zfs.vdev.async_read_max_active 60
vfs.zfs.vdev.async_read_min_active 10
vfs.zfs.vdev.sync_write_max_active 200
vfs.zfs.vdev.sync_write_min_active 100
vfs.zfs.vdev.sync_read_max_active 100
vfs.zfs.vdev.sync_read_min_active 100
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent 60
vfs.zfs.vdev.async_write_active_min_dirty_percent 30
vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
vfs.zfs.vdev.mirror.non_rotating_inc 0
vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 4194304
vfs.zfs.vdev.cache.max 65536
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.txg.timeout 4
vfs.zfs.space_map_blksz 4096
vfs.zfs.spa_min_slop 134217728
vfs.zfs.spa_slop_shift 5
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 0
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.debug_flags 0
vfs.zfs.recover 0
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled 1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold 70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.free_bpobj_enabled 1
vfs.zfs.free_max_blocks -1
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.max_distance 8388608
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 64
vfs.zfs.prefetch_disable 0
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 4294967296
vfs.zfs.max_recordsize 1048576
vfs.zfs.send_holes_without_birth_time 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.dedup.prefetch 1
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_esize 62297529344
vfs.zfs.mfu_ghost_metadata_esize 0
vfs.zfs.mfu_ghost_size 62297529344
vfs.zfs.mfu_data_esize 66706433536
vfs.zfs.mfu_metadata_esize 2435194880
vfs.zfs.mfu_size 69802579968
vfs.zfs.mru_ghost_data_esize 60685495808
vfs.zfs.mru_ghost_metadata_esize 0
vfs.zfs.mru_ghost_size 60685495808
vfs.zfs.mru_data_esize 49709753856
vfs.zfs.mru_metadata_esize 1613580288
vfs.zfs.mru_size 51551468032
vfs.zfs.anon_data_esize 0
vfs.zfs.anon_metadata_esize 0
vfs.zfs.anon_size 6747136
vfs.zfs.l2arc_norw 0
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 0
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 32
vfs.zfs.l2arc_write_boost 268435456
vfs.zfs.l2arc_write_max 67108864
vfs.zfs.arc_meta_limit 77309411328
vfs.zfs.arc_free_target 254980
vfs.zfs.compressed_arc_enabled 1
vfs.zfs.arc_shrink_shift 7
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 18694015488
vfs.zfs.arc_max 149552123904
------------------------------------------------------------------------