[Bug 191510] New: ZFS doesn't use all available memory
bugzilla-noreply@freebsd.org
Mon Jun 30 08:37:57 UTC 2014
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510
Bug ID: 191510
Summary: ZFS doesn't use all available memory
Product: Base System
Version: 9.2-RELEASE
Hardware: amd64
OS: Any
Status: Needs Triage
Severity: Affects Some People
Priority: Normal
Component: kern
Assignee: freebsd-bugs@FreeBSD.org
Reporter: vsjcfm@gmail.com
I have a machine that serves large files (tens of tebibytes in total) over HTTP,
using AIO on ZFS. The machine has 256 GiB of RAM, but the ARC uses only 170-190 GiB.
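If the adaptive target is simply shrinking and never growing back, one possible
workaround would be to raise the ARC floor so the target cannot drop this far. A
minimal /boot/loader.conf sketch, not a confirmed fix (the 200 GiB floor is an
example value I picked; vfs.zfs.arc_min is a boot-time tunable on 9.2, so a
reboot is required):

# Keep the adaptive ARC target from shrinking below ~200 GiB (example value).
vfs.zfs.arc_min="214748364800"
# Leave the ceiling at the value the system already auto-computed (~247 GiB).
vfs.zfs.arc_max="265624707072"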
Stats below:
root@cs0:~# fgrep " memory " /var/run/dmesg.boot
real memory = 274877906944 (262144 MB)
avail memory = 265899143168 (253581 MB)
root@cs0:~# zfs-stats -a
------------------------------------------------------------------------
ZFS Subsystem Report Mon Jun 30 11:31:57 2014
------------------------------------------------------------------------
System Information:
Kernel Version: 902001 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64
ZFS Storage pool Version: 5000
ZFS Filesystem Version: 5
FreeBSD 9.2-RELEASE-p8 #0 r267147: Fri Jun 6 10:22:17 EEST 2014 root
11:31 up 15 days, 21:12, 1 user, load averages: 1,15 1,56 1,74
------------------------------------------------------------------------
System Memory:
5.42% 13.47 GiB Active, 0.12% 300.54 MiB Inact
77.71% 193.03 GiB Wired, 0.00% 0 Cache
16.74% 41.59 GiB Free, 0.00% 3.00 MiB Gap
Real Installed: 256.00 GiB
Real Available: 99.98% 255.96 GiB
Real Managed: 97.04% 248.38 GiB
Logical Total: 256.00 GiB
Logical Used: 83.64% 214.12 GiB
Logical Free: 16.36% 41.88 GiB
Kernel Memory: 183.36 GiB
Data: 99.99% 183.35 GiB
Text: 0.01% 10.88 MiB
Kernel Memory Map: 242.40 GiB
Size: 74.89% 181.54 GiB
Free: 25.11% 60.85 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 1.14b
Recycle Misses: 2.38m
Mutex Misses: 3.15m
Evict Skips: 229.53m
ARC Size: 74.90% 185.28 GiB
Target Size: (Adaptive) 74.90% 185.28 GiB
Min Size (Hard Limit): 12.50% 30.92 GiB
Max Size (High Water): 8:1 247.38 GiB
ARC Size Breakdown:
Recently Used Cache Size: 88.40% 163.78 GiB
Frequently Used Cache Size: 11.60% 21.50 GiB
ARC Hash Breakdown:
Elements Max: 18.24m
Elements Current: 99.44% 18.14m
Collisions: 783.39m
Chain Max: 22
Chains: 3.87m
------------------------------------------------------------------------
ARC Efficiency: 5.80b
Cache Hit Ratio: 78.92% 4.58b
Cache Miss Ratio: 21.08% 1.22b
Actual Hit Ratio: 58.85% 3.41b
Data Demand Efficiency: 99.60% 1.55b
Data Prefetch Efficiency: 43.26% 2.13b
CACHE HITS BY CACHE LIST:
Anonymously Used: 23.60% 1.08b
Most Recently Used: 24.06% 1.10b
Most Frequently Used: 50.52% 2.31b
Most Recently Used Ghost: 0.17% 7.61m
Most Frequently Used Ghost: 1.66% 75.89m
CACHE HITS BY DATA TYPE:
Demand Data: 33.82% 1.55b
Prefetch Data: 20.16% 922.83m
Demand Metadata: 34.87% 1.60b
Prefetch Metadata: 11.14% 510.16m
CACHE MISSES BY DATA TYPE:
Demand Data: 0.50% 6.17m
Prefetch Data: 98.95% 1.21b
Demand Metadata: 0.54% 6.61m
Prefetch Metadata: 0.00% 58.71k
------------------------------------------------------------------------
L2 ARC Summary: (DEGRADED)
Passed Headroom: 83.52m
Tried Lock Failures: 267.02m
IO In Progress: 841
Low Memory Aborts: 28
Free on Write: 3.35m
Writes While Full: 1.40m
R/W Clashes: 51.46k
Bad Checksums: 16
IO Errors: 0
SPA Mismatch: 53.09b
L2 ARC Size: (Adaptive) 1.67 TiB
Header Size: 0.18% 3.01 GiB
L2 ARC Evicts:
Lock Retries: 63.60k
Upon Reading: 173
L2 ARC Breakdown: 1.22b
Hit Ratio: 31.64% 386.94m
Miss Ratio: 68.36% 836.05m
Feeds: 2.81m
L2 ARC Buffer:
Bytes Scanned: 15.61 PiB
Buffer Iterations: 2.81m
List Iterations: 151.58m
NULL List Iterations: 17.35k
L2 ARC Writes:
Writes Sent: 100.00% 2.67m
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 4.69b
Hit Ratio: 81.41% 3.82b
Miss Ratio: 18.59% 871.72m
Colinear: 871.72m
Hit Ratio: 0.02% 164.66k
Miss Ratio: 99.98% 871.55m
Stride: 2.61b
Hit Ratio: 99.90% 2.61b
Miss Ratio: 0.10% 2.62m
DMU Misc:
Reclaim: 871.55m
Successes: 0.91% 7.97m
Failures: 99.09% 863.59m
Streams: 1.21b
+Resets: 0.07% 871.59k
-Resets: 99.93% 1.21b
Bogus: 0
------------------------------------------------------------------------
VDEV Cache Summary: 10.23m
Hit Ratio: 9.34% 955.87k
Miss Ratio: 90.47% 9.26m
Delegations: 0.19% 19.15k
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 384
vm.kmem_size 266698448896
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 329853485875
vfs.zfs.arc_max 265624707072
vfs.zfs.arc_min 33203088384
vfs.zfs.arc_meta_used 14496156952
vfs.zfs.arc_meta_limit 66406176768
vfs.zfs.l2arc_write_max 41943040
vfs.zfs.l2arc_write_boost 83886080
vfs.zfs.l2arc_headroom 4
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_noprefetch 0
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_norw 1
vfs.zfs.anon_size 180224
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_data_lsize 0
vfs.zfs.mru_size 158295406592
vfs.zfs.mru_metadata_lsize 842248704
vfs.zfs.mru_data_lsize 156589069824
vfs.zfs.mru_ghost_size 35747599360
vfs.zfs.mru_ghost_metadata_lsize 1232837120
vfs.zfs.mru_ghost_data_lsize 34514762240
vfs.zfs.mfu_size 34995384832
vfs.zfs.mfu_metadata_lsize 6317844992
vfs.zfs.mfu_data_lsize 27855011840
vfs.zfs.mfu_ghost_size 162725010432
vfs.zfs.mfu_ghost_metadata_lsize 24083810304
vfs.zfs.mfu_ghost_data_lsize 138641200128
vfs.zfs.l2c_only_size 1708927019520
vfs.zfs.dedup.prefetch 1
vfs.zfs.nopwrite_enabled 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.no_write_throttle 0
vfs.zfs.write_limit_shift 3
vfs.zfs.write_limit_min 33554432
vfs.zfs.write_limit_max 34353957888
vfs.zfs.write_limit_inflated 824494989312
vfs.zfs.write_limit_override 0
vfs.zfs.prefetch_disable 0
vfs.zfs.zfetch.max_streams 8
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.top_maxinflight 32
vfs.zfs.resilver_delay 2
vfs.zfs.scrub_delay 4
vfs.zfs.scan_idle 50
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.no_scrub_io 0
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.mg_alloc_failures 36
vfs.zfs.write_to_degraded 0
vfs.zfs.check_hostid 1
vfs.zfs.recover 0
vfs.zfs.deadman_synctime 1000
vfs.zfs.deadman_enabled 1
vfs.zfs.txg.synctime_ms 1000
vfs.zfs.txg.timeout 10
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.cache.size 20971520
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.max_pending 10
vfs.zfs.vdev.min_pending 4
vfs.zfs.vdev.time_shift 29
vfs.zfs.vdev.ramp_rate 2
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.trim_max_bytes 2147483648
vfs.zfs.vdev.trim_max_pending 64
vfs.zfs.zil_replay_disable 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zio.use_uma 0
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.snapshot_list_prefetch 0
vfs.zfs.super_owner 0
vfs.zfs.debug 0
vfs.zfs.version.ioctl 3
vfs.zfs.version.acl 1
vfs.zfs.version.spa 5000
vfs.zfs.version.zpl 5
vfs.zfs.trim.enabled 0
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.timeout 30
vfs.zfs.trim.max_interval 1
------------------------------------------------------------------------
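The "Target Size (Adaptive)" figure above corresponds to
kstat.zfs.misc.arcstats.c, which can be read directly via the standard FreeBSD
arcstats sysctls:

sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.c_max

Here c sits roughly 60 GiB below c_max, i.e. the ARC is stopping at its own
adaptive target rather than at the configured maximum.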
root@cs0:~# top -aSHz -d 1
last pid: 29833; load averages: 0.60, 1.35, 1.66 up 15+21:12:46 11:32:49
818 processes: 25 running, 733 sleeping, 60 waiting
CPU: % user, % nice, % system, % interrupt, % idle
Mem: 13G Active, 300M Inact, 196G Wired, 39G Free
ARC: 188G Total, 35G MFU, 148G MRU, 304K Anon, 4279M Header, 1512M Other
Swap: 2048M Total, 2048M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
    4 root      -8    -     0K   176K l2arc_ 15  30.8H  6.79% [zfskern{l2arc_feed_threa}]
   12 root     -92    -     0K   960K WAIT   10 499:17  2.88% [intr{irq274: ix0:que }]
   12 root     -92    -     0K   960K WAIT    7 562:34  2.29% [intr{irq271: ix0:que }]
   12 root     -92    -     0K   960K WAIT    8 519:17  2.29% [intr{irq272: ix0:que }]
   12 root     -92    -     0K   960K WAIT    1 556:54  2.10% [intr{irq265: ix0:que }]
65513 root      21    -     0K    16K aiordy  4   0:06  1.76% [aiod5]
29832 root      21    -     0K    16K aiordy 11   0:00  1.76% [aiod1]
   12 root     -92    -     0K   960K WAIT    2 553:50  1.46% [intr{irq266: ix0:que }]
   12 root     -92    -     0K   960K WAIT    0 539:07  1.37% [intr{irq264: ix0:que }]
   12 root     -92    -     0K   960K WAIT    9 527:26  1.17% [intr{irq273: ix0:que }]
79590 www       20    0 25156K  8724K kqread 12  56:58  1.17% nginx: worker process (nginx)
   12 root     -92    -     0K   960K WAIT    5 550:28  1.07% [intr{irq269: ix0:que }]
   13 root      -8    -     0K    48K -      18 491:43  1.07% [geom{g_down}]
79585 www       20    0 25156K  8728K kqread 19  61:40  1.07% nginx: worker process (nginx)
65507 root      20    -     0K    16K aiordy  6   0:11  1.07% [aiod2]
79574 www       20    0 25156K  9260K kqread 17  63:53  0.98% nginx: worker process (nginx)
   12 root     -92    -     0K   960K WAIT    4 565:24  0.88% [intr{irq268: ix0:que }]
   12 root     -92    -     0K   960K WAIT    3 550:34  0.88% [intr{irq267: ix0:que }]
   13 root      -8    -     0K    48K -      18 442:35  0.88% [geom{g_up}]
79583 www       20    0 25156K  9300K kqread 19  60:26  0.88% nginx: worker process (nginx)
   12 root     -68    -     0K   960K WAIT   11 410:05  0.78% [intr{swi2: cambio}]
   12 root     -92    -     0K   960K WAIT    6 550:33  0.68% [intr{irq270: ix0:que }]
79578 www       20    0 25156K  8468K kqread 23  60:54  0.68% nginx: worker process (nginx)
79576 www       20    0 25156K  8792K kqread 11  63:21  0.49% nginx: worker process (nginx)
79572 www       20    0 25156K  8464K kqread  8  62:23  0.49% nginx: worker process (nginx)
   12 root     -92    -     0K   960K WAIT   11 512:14  0.39% [intr{irq275: ix0:que }]
26851 root      20    0 71240K 13868K select  1  64:08  0.39% /usr/local/sbin/snmpd -p /var/run/net_snmpd.pid -c /usr/local/e
79584 www       20    0 25156K  8204K kqread 17  56:42  0.39% nginx: worker process (nginx)
    0 root     -16    0     0K  9904K -      14 212:49  0.29% [kernel{zio_read_intr_12}]
    0 root     -16    0     0K  9904K -       9 212:47  0.29% [kernel{zio_read_intr_5}]
    0 root     -16    0     0K  9904K -       2 212:39  0.29% [kernel{zio_read_intr_1}]
79571 www       20    0 25156K  8460K kqread  1  60:31  0.29% nginx: worker process (nginx)
    0 root     -16    0     0K  9904K -      10 212:45  0.20% [kernel{zio_read_intr_14}]
    0 root     -16    0     0K  9904K -       6 212:45  0.20% [kernel{zio_read_intr_7}]
root@cs0:~# zpool list zdata
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
zdata 162T 136T 26,4T 83% 1.00x ONLINE -
root@cs0:~# zpool status zdata
pool: zdata
state: ONLINE
scan: resilvered 1,71T in 8h57m with 0 errors on Tue May 27 02:08:16 2014
config:
NAME STATE READ WRITE CKSUM
zdata ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
label/zdr00 ONLINE 0 0 0
label/zdr01 ONLINE 0 0 0
label/zdr02 ONLINE 0 0 0
label/zdr03 ONLINE 0 0 0
label/zdr04 ONLINE 0 0 0
label/zdr05 ONLINE 0 0 0
label/zdr06 ONLINE 0 0 0
label/zdr07 ONLINE 0 0 0
label/zdr08 ONLINE 0 0 0
raidz3-1 ONLINE 0 0 0
label/zdr10 ONLINE 0 0 0
label/zdr11 ONLINE 0 0 0
label/zdr12 ONLINE 0 0 0
label/zdr13 ONLINE 0 0 0
label/zdr14 ONLINE 0 0 0
label/zdr15 ONLINE 0 0 0
label/zdr16 ONLINE 0 0 0
label/zdr17 ONLINE 0 0 0
label/zdr18 ONLINE 0 0 0
raidz3-2 ONLINE 0 0 0
label/zdr20 ONLINE 0 0 0
label/zdr21 ONLINE 0 0 0
label/zdr22 ONLINE 0 0 0
label/zdr23 ONLINE 0 0 0
label/zdr24 ONLINE 0 0 0
label/zdr25 ONLINE 0 0 0
label/zdr26 ONLINE 0 0 0
label/zdr27 ONLINE 0 0 0
label/zdr28 ONLINE 0 0 0
raidz3-3 ONLINE 0 0 0
label/zdr30 ONLINE 0 0 0
label/zdr31 ONLINE 0 0 0
label/zdr32 ONLINE 0 0 0
label/zdr33 ONLINE 0 0 0
label/zdr34 ONLINE 0 0 0
label/zdr35 ONLINE 0 0 0
label/zdr36 ONLINE 0 0 0
label/zdr37 ONLINE 0 0 0
label/zdr38 ONLINE 0 0 0
raidz3-4 ONLINE 0 0 0
label/zdr40 ONLINE 0 0 0
label/zdr41 ONLINE 0 0 0
label/zdr42 ONLINE 0 0 0
label/zdr43 ONLINE 0 0 0
label/zdr44 ONLINE 0 0 0
label/zdr45 ONLINE 0 0 0
label/zdr46 ONLINE 0 0 0
label/zdr47 ONLINE 0 0 0
label/zdr48 ONLINE 0 0 0
cache
gpt/l2arc0 ONLINE 0 0 0
gpt/l2arc1 ONLINE 0 0 0
gpt/l2arc2 ONLINE 0 0 0
gpt/l2arc3 ONLINE 0 0 0
spares
label/spare0 AVAIL
errors: No known data errors
root@cs0:~#
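To see whether the target ever climbs back toward c_max under load, the target
can be logged over time with a small sh loop (a sketch, assuming the stock
arcstats sysctl names):

#!/bin/sh
# Append epoch time, ARC size, adaptive target and hard max (all in bytes)
# once a minute, to correlate ARC shrinkage with traffic.
while :; do
    printf '%s %s %s %s\n' "$(date +%s)" \
        "$(sysctl -n kstat.zfs.misc.arcstats.size)" \
        "$(sysctl -n kstat.zfs.misc.arcstats.c)" \
        "$(sysctl -n kstat.zfs.misc.arcstats.c_max)"
    sleep 60
done >> /var/log/arcsize.log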