vmstat -z: zfs related failures on r255173
Dmitriy Makarov
supportme at ukr.net
Tue Oct 15 13:52:22 UTC 2013
[:~]# zfs-stats -a
------------------------------------------------------------------------
ZFS Subsystem Report Tue Oct 15 16:48:43 2013
------------------------------------------------------------------------
System Information:
Kernel Version: 1000051 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64
ZFS Storage pool Version: 5000
ZFS Filesystem Version: 5
FreeBSD 10.0-CURRENT #3 r255173: Fri Oct 11 17:15:50 EEST 2013 root
16:48 up 16:27, 1 user, load averages: 12.58 12.51 14.44
------------------------------------------------------------------------
System Memory:
15.05% 18.76 GiB Active, 0.05% 61.38 MiB Inact
83.42% 103.98 GiB Wired, 0.55% 702.44 MiB Cache
0.92% 1.14 GiB Free, 0.01% 16.93 MiB Gap
Real Installed: 128.00 GiB
Real Available: 99.96% 127.95 GiB
Real Managed: 97.41% 124.65 GiB
Logical Total: 128.00 GiB
Logical Used: 98.52% 126.11 GiB
Logical Free: 1.48% 1.89 GiB
Kernel Memory: 91.00 GiB
Data: 99.99% 90.99 GiB
Text: 0.01% 13.06 MiB
Kernel Memory Map: 124.65 GiB
Size: 69.88% 87.11 GiB
Free: 30.12% 37.54 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 30.38m
Recycle Misses: 25.16m
Mutex Misses: 7.45m
Evict Skips: 444.42m
ARC Size: 100.00% 90.00 GiB
Target Size: (Adaptive) 100.00% 90.00 GiB
Min Size (Hard Limit): 44.44% 40.00 GiB
Max Size (High Water): 2:1 90.00 GiB
ARC Size Breakdown:
Recently Used Cache Size: 92.69% 83.42 GiB
Frequently Used Cache Size: 7.31% 6.58 GiB
ARC Hash Breakdown:
Elements Max: 14.59m
Elements Current: 99.70% 14.54m
Collisions: 71.31m
Chain Max: 25
Chains: 2.08m
------------------------------------------------------------------------
ARC Efficiency: 1.11b
Cache Hit Ratio: 93.89% 1.04b
Cache Miss Ratio: 6.11% 67.70m
Actual Hit Ratio: 91.73% 1.02b
Data Demand Efficiency: 90.56% 294.97m
Data Prefetch Efficiency: 9.64% 7.07m
CACHE HITS BY CACHE LIST:
Most Recently Used: 8.80% 91.66m
Most Frequently Used: 88.89% 925.41m
Most Recently Used Ghost: 0.50% 5.16m
Most Frequently Used Ghost: 2.97% 30.95m
CACHE HITS BY DATA TYPE:
Demand Data: 25.66% 267.11m
Prefetch Data: 0.07% 681.36k
Demand Metadata: 72.04% 749.94m
Prefetch Metadata: 2.24% 23.31m
CACHE MISSES BY DATA TYPE:
Demand Data: 41.15% 27.86m
Prefetch Data: 9.43% 6.38m
Demand Metadata: 48.71% 32.98m
Prefetch Metadata: 0.71% 478.11k
------------------------------------------------------------------------
L2 ARC Summary: (HEALTHY)
Passed Headroom: 1.38m
Tried Lock Failures: 403.24m
IO In Progress: 1.19k
Low Memory Aborts: 6
Free on Write: 1.69m
Writes While Full: 3.48k
R/W Clashes: 608.58k
Bad Checksums: 0
IO Errors: 0
SPA Mismatch: 321.48m
L2 ARC Size: (Adaptive) 268.26 GiB
Header Size: 0.85% 2.27 GiB
L2 ARC Breakdown: 67.70m
Hit Ratio: 54.97% 37.21m
Miss Ratio: 45.03% 30.48m
Feeds: 62.45k
L2 ARC Buffer:
Bytes Scanned: 531.83 TiB
Buffer Iterations: 62.45k
List Iterations: 3.96m
NULL List Iterations: 334.83k
L2 ARC Writes:
Writes Sent: 100.00% 61.84k
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 1.66b
Hit Ratio: 52.82% 874.41m
Miss Ratio: 47.18% 780.96m
Colinear: 780.96m
Hit Ratio: 0.00% 9.21k
Miss Ratio: 100.00% 780.95m
Stride: 871.48m
Hit Ratio: 99.63% 868.25m
Miss Ratio: 0.37% 3.22m
DMU Misc:
Reclaim: 780.95m
Successes: 0.42% 3.27m
Failures: 99.58% 777.68m
Streams: 6.12m
+Resets: 0.87% 53.59k
-Resets: 99.13% 6.07m
Bogus: 0
------------------------------------------------------------------------
VDEV Cache Summary: 8.17m
Hit Ratio: 25.98% 2.12m
Miss Ratio: 73.37% 6.00m
Delegations: 0.66% 53.76k
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 8525
vm.kmem_size 133836881920
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1319413950874
vfs.zfs.arc_max 96636764160
vfs.zfs.arc_min 42949672960
vfs.zfs.arc_meta_used 17673106536
vfs.zfs.arc_meta_limit 5368709120
vfs.zfs.l2arc_write_max 25000000
vfs.zfs.l2arc_write_boost 50000000
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_norw 1
vfs.zfs.anon_size 99774976
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_data_lsize 0
vfs.zfs.mru_size 87902746112
vfs.zfs.mru_metadata_lsize 1672704
vfs.zfs.mru_data_lsize 78890405888
vfs.zfs.mru_ghost_size 8778126848
vfs.zfs.mru_ghost_metadata_lsize 8681146368
vfs.zfs.mru_ghost_data_lsize 96980480
vfs.zfs.mfu_size 1736881152
vfs.zfs.mfu_metadata_lsize 10414592
vfs.zfs.mfu_data_lsize 2311168
vfs.zfs.mfu_ghost_size 87868106240
vfs.zfs.mfu_ghost_metadata_lsize 11637033472
vfs.zfs.mfu_ghost_data_lsize 76230990848
vfs.zfs.l2c_only_size 254670908416
vfs.zfs.dedup.prefetch 1
vfs.zfs.nopwrite_enabled 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.no_write_throttle 0
vfs.zfs.write_limit_shift 3
vfs.zfs.write_limit_min 134217728
vfs.zfs.write_limit_max 17173743104
vfs.zfs.write_limit_inflated 412169834496
vfs.zfs.write_limit_override 8589934592
vfs.zfs.prefetch_disable 1
vfs.zfs.zfetch.max_streams 8
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.top_maxinflight 32
vfs.zfs.resilver_delay 2
vfs.zfs.scrub_delay 4
vfs.zfs.scan_idle 50
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.no_scrub_io 0
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.mg_alloc_failures 18
vfs.zfs.write_to_degraded 0
vfs.zfs.check_hostid 1
vfs.zfs.recover 0
vfs.zfs.deadman_synctime 1000
vfs.zfs.deadman_enabled 1
vfs.zfs.space_map_last_hope 0
vfs.zfs.txg.synctime_ms 1000
vfs.zfs.txg.timeout 5
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.cache.size 16777216
vfs.zfs.vdev.cache.bshift 14
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.max_pending 200
vfs.zfs.vdev.min_pending 4
vfs.zfs.vdev.time_shift 29
vfs.zfs.vdev.ramp_rate 2
vfs.zfs.vdev.aggregation_limit 268435456
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.trim_max_bytes 2147483648
vfs.zfs.vdev.trim_max_pending 64
vfs.zfs.max_auto_ashift 13
vfs.zfs.zil_replay_disable 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zio.use_uma 0
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.snapshot_list_prefetch 0
vfs.zfs.super_owner 0
vfs.zfs.debug 0
vfs.zfs.version.ioctl 3
vfs.zfs.version.acl 1
vfs.zfs.version.spa 5000
vfs.zfs.version.zpl 5
vfs.zfs.trim.enabled 1
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.timeout 30
vfs.zfs.trim.max_interval 1
------------------------------------------------------------------------
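One thing worth noting in the tunables above: vfs.zfs.arc_meta_used (17673106536, roughly 16.5 GiB) is more than three times vfs.zfs.arc_meta_limit (5368709120, 5 GiB). A quick way to keep an eye on that pair is something like the following (a minimal sketch built only from the sysctls shown above; the warning message is my own addition):

#!/bin/sh
# Compare ARC metadata usage against its limit (both reported in bytes).
used=$(sysctl -n vfs.zfs.arc_meta_used)
limit=$(sysctl -n vfs.zfs.arc_meta_limit)
echo "arc_meta_used:  ${used}"
echo "arc_meta_limit: ${limit}"
if [ "${used}" -gt "${limit}" ]; then
        echo "WARNING: ARC metadata usage exceeds arc_meta_limit"
fi
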
> On 2013-10-15 07:53, Dmitriy Makarov wrote:
> > Please, any idea, thought, or help is welcome!
> > Any hint about what information could be useful for digging into this - anything...
> >
> > The system I'm talking about has a serious problem: performance degrades over a short time period (a day or two). I don't know whether these vmstat failures can somehow be related to the degradation.
> >
> >
> >
> >> Hi all
> >>
> >> On CURRENT r255173 we see some interesting values from vmstat -z: REQ = FAIL
> >>
> >> [server]# vmstat -z
> >> ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP
> >> ....... skipped....
> >> NCLNODE: 528, 0, 0, 0, 0, 0, 0
> >> space_seg_cache: 64, 0, 289198, 299554,25932081,25932081, 0
> >> zio_cache: 944, 0, 37512, 50124,1638254119,1638254119, 0
> >> zio_link_cache: 48, 0, 50955, 38104,1306418638,1306418638, 0
> >> sa_cache: 80, 0, 63694, 56, 198643,198643, 0
> >> dnode_t: 864, 0, 128813, 3, 184863,184863, 0
> >> dmu_buf_impl_t: 224, 0, 1610024, 314631,157119686,157119686, 0
> >> arc_buf_hdr_t: 216, 0,82949975, 56107,156352659,156352659, 0
> >> arc_buf_t: 72, 0, 1586866, 314374,158076670,158076670, 0
> >> zil_lwb_cache: 192, 0, 6354, 7526, 2486242,2486242, 0
> >> zfs_znode_cache: 368, 0, 63694, 16, 198643,198643, 0
> >> ..... skipped ......
> >>
> >> Can anybody explain these strange failures in the ZFS-related zones in vmstat? Can we do something about this, and is it really a bad signal?
> >>
> >> Thanks!
> >
> I am guessing those 'failures' are failures to allocate memory. I'd
> recommend you install sysutils/zfs-stats and send the list the output of
> 'zfs-stats -a'
>
> --
> Allan Jude
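
To watch just the ZFS-related zones whose FAIL counter is non-zero, something like this should do (a minimal sketch assuming the colon/comma column layout vmstat -z prints above, with FAIL as the 7th field):

#!/bin/sh
# List ZFS-related UMA zones that have recorded allocation failures.
# Column layout: ITEM: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP
vmstat -z | awk -F'[:,]' '
        NR == 1 { print; next }        # keep the header line
        $7 + 0 > 0 && $1 ~ /zio|arc|dnode|dmu|zil|zfs|sa_cache|space_seg/
'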