[Bug 198242] [zfs] L2ARC degraded. Checksum errors, I/O errors
bugzilla-noreply at freebsd.org
Tue Mar 3 19:03:35 UTC 2015
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=198242
Bug ID: 198242
Summary: [zfs] L2ARC degraded. Checksum errors, I/O errors
Product: Base System
Version: 10.1-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Some People
Priority: ---
Component: kern
Assignee: freebsd-bugs at FreeBSD.org
Reporter: kenneth at tornet10.dyndns.org
Once the L2ARC has filled up, I/O errors and bad checksums appear, while zpool status still reports the pool as healthy.
The device is an Intel S3700 100 GB SSD, partitioned and attached as an 8 G log and an 80 G cache device.
The L2ARC size is reported as 161 GiB; compression is enabled, and the problem
appears once the size reaches the 80 G range.
Others have reported similar problems, for example:
https://bugs.freenas.org/issues/5347
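For reference, the mismatch between the reported L2ARC size and the physical cache partition can be checked with plain shell arithmetic. This is only a sketch using the kstat values quoted later in this report; the exact 80 GiB partition size is an assumption based on the description above:

```shell
#!/bin/sh
# Values copied from the kstat dump in this report.
l2_size=173169375232                    # kstat.zfs.misc.arcstats.l2_size (uncompressed accounting)
l2_asize=168640780800                   # kstat.zfs.misc.arcstats.l2_asize (allocated on device)
dev_size=$((80 * 1024 * 1024 * 1024))   # assumed 80 GiB cache partition

# Both counters are roughly double the device size: the L2ARC accounting
# has run well past the physical capacity of the cache vdev.
echo "l2_size  / dev = $((l2_size  * 100 / dev_size))%"   # ~201%
echo "l2_asize / dev = $((l2_asize * 100 / dev_size))%"   # ~196%
```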
[root@tornet10 /home/kenneth]# zfs-stats -L
------------------------------------------------------------------------
ZFS Subsystem Report Tue Mar 3 19:54:46 2015
------------------------------------------------------------------------
L2 ARC Summary: (DEGRADED)
Passed Headroom: 59.87m
Tried Lock Failures: 7.87m
IO In Progress: 14.35k
Low Memory Aborts: 22
Free on Write: 27.64k
Writes While Full: 8.35k
R/W Clashes: 109
Bad Checksums: 14.49k
IO Errors: 12.88k
SPA Mismatch: 990.88k
L2 ARC Size: (Adaptive) 161.27 GiB
Header Size: 0.28% 456.38 MiB
L2 ARC Evicts:
Lock Retries: 145
Upon Reading: 0
L2 ARC Breakdown: 9.21m
Hit Ratio: 11.06% 1.02m
Miss Ratio: 88.94% 8.19m
Feeds: 2.54m
L2 ARC Buffer:
Bytes Scanned: 6.54 PiB
Buffer Iterations: 2.54m
List Iterations: 162.35m
NULL List Iterations: 344.58k
L2 ARC Writes:
Writes Sent: 100.00% 94.06k
------------------------------------------------------------------------
[root@tornet10 /home/kenneth]# sysctl -a | grep l2
kern.cam.ctl2cam.max_sense: 252
kern.features.linuxulator_v4l2: 1
vfs.zfs.l2arc_write_max: 33554432
vfs.zfs.l2arc_write_boost: 67108864
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_noprefetch: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2c_only_size: 163026834432
vfs.cache.numfullpathfail2: 0
kstat.zfs.misc.arcstats.evict_l2_cached: 693505577984
kstat.zfs.misc.arcstats.evict_l2_eligible: 98812823040
kstat.zfs.misc.arcstats.evict_l2_ineligible: 31591627776
kstat.zfs.misc.arcstats.l2_hits: 1019407
kstat.zfs.misc.arcstats.l2_misses: 8195111
kstat.zfs.misc.arcstats.l2_feeds: 2538329
kstat.zfs.misc.arcstats.l2_rw_clash: 109
kstat.zfs.misc.arcstats.l2_read_bytes: 15623280128
kstat.zfs.misc.arcstats.l2_write_bytes: 357188632064
kstat.zfs.misc.arcstats.l2_writes_sent: 94115
kstat.zfs.misc.arcstats.l2_writes_done: 94115
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 321
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 145
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 27644
kstat.zfs.misc.arcstats.l2_abort_lowmem: 22
kstat.zfs.misc.arcstats.l2_cksum_bad: 14495
kstat.zfs.misc.arcstats.l2_io_error: 12877
kstat.zfs.misc.arcstats.l2_size: 173169375232
kstat.zfs.misc.arcstats.l2_asize: 168640780800
kstat.zfs.misc.arcstats.l2_hdr_size: 478502856
kstat.zfs.misc.arcstats.l2_compress_successes: 1057448
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 892255
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 7866025
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 59875065
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 990877
kstat.zfs.misc.arcstats.l2_write_in_l2: 142755184565
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 14352
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 117559529322
kstat.zfs.misc.arcstats.l2_write_full: 8352
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 2538329
kstat.zfs.misc.arcstats.l2_write_pios: 94115
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 7359951087932928
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 162354237
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 344583
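The error counters above can be turned into a rough failure rate. A sketch using only the counter values quoted above, under the assumption that l2_hits counts only successful L2 reads (so a failed checksum or I/O error is an additional attempted read that falls through to the pool):

```shell
#!/bin/sh
# Rough L2ARC read-failure rate from the kstat counters above.
awk 'BEGIN {
    cksum_bad = 14495      # kstat.zfs.misc.arcstats.l2_cksum_bad
    io_error  = 12877      # kstat.zfs.misc.arcstats.l2_io_error
    hits      = 1019407    # kstat.zfs.misc.arcstats.l2_hits
    printf "errors per 1000 L2 reads: %.1f\n",
           (cksum_bad + io_error) * 1000 / (hits + cksum_bad + io_error)
}'
```

Around 26 of every 1000 L2ARC reads failing is far beyond anything attributable to media errors on a healthy SSD, which points at the accounting overrun rather than the device itself.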
[root@tornet10 /home/kenneth]# zpool status
pool: tank
state: ONLINE
scan: scrub repaired 0 in 4h45m with 0 errors on Mon Feb 23 07:47:02 2015
config:
	NAME                            STATE     READ WRITE CKSUM
	tank                            ONLINE       0     0     0
	  raidz2-0                      ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03819  ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03822  ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03825  ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03831  ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03833  ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03834  ONLINE       0     0     0
	    diskid/DISK-S2H7J9BZC03656  ONLINE       0     0     0
	    diskid/DISK-S2H7J9DZC00545  ONLINE       0     0     0
	logs
	  gpt/log                       ONLINE       0     0     0
	cache
	  gpt/cache                     ONLINE       0     0     0
errors: No known data errors
[root@tornet10 /home/kenneth]# uname -v
FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014
root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC
--
You are receiving this mail because:
You are the assignee for the bug.