[Bug 195746] New: zfs L2ARC wrong alloc/free size

bugzilla-noreply at freebsd.org
Sat Dec 6 14:45:20 UTC 2014


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195746

            Bug ID: 195746
           Summary: zfs L2ARC wrong alloc/free size
           Product: Base System
           Version: 10.1-RELEASE
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs at FreeBSD.org
          Reporter: danmer at danmer.net

This is a server running FreeBSD 10.1-RELEASE (GENERIC kernel, amd64) with 2 ZFS pools.
The server has two Intel 480 GB SSDs, used as ZIL (a 4 GB mirror per pool) and
L2ARC (a 75+75 GB stripe per pool). Compression is enabled on some ZFS datasets.
After some days of operation I noticed wrong L2ARC alloc and free sizes for
pool1 in zpool iostat -v, and later I saw the same wrong sizes in pool2. It looks like this:

                                        capacity     operations    bandwidth
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
pool1                                 13,0T  34,3T     45  3,56K  3,93M  51,9M
  raidz3                              13,0T  34,3T     45  3,51K  3,93M  47,1M
    multipath/pd01                        -      -     31     97   311K  4,96M
    multipath/pd02                        -      -     31     97   311K  4,96M
    multipath/pd03                        -      -     31     97   311K  4,96M
    multipath/pd04                        -      -     31     97   311K  4,96M
    multipath/pd05                        -      -     31     97   311K  4,96M
    multipath/pd06                        -      -     31     97   311K  4,96M
    multipath/pd07                        -      -     31     97   311K  4,96M
    multipath/pd08                        -      -     31     97   311K  4,96M
    multipath/pd09                        -      -     31     97   311K  4,96M
    multipath/pd10                        -      -     31     97   311K  4,96M
    multipath/pd11                        -      -     31     97   311K  4,96M
    multipath/pd12                        -      -     31     97   311K  4,96M
    multipath/pd13                        -      -     31     97   311K  4,96M
logs                                      -      -      -      -      -      -
  mirror                               812K  3,97G      0     45      0  4,83M
    diskid/DISK-CVWL435200Y1480QGNp1      -      -      0     45      4  4,83M
    diskid/DISK-CVWL4353000F480QGNp1      -      -      0     45      4  4,83M
cache                                     -      -      -      -      -      -
  diskid/DISK-CVWL435200Y1480QGNp4     371G  16,0E      4     27   163K  3,16M
  diskid/DISK-CVWL4353000F480QGNp4     441G  16,0E      8     25   145K  2,94M
------------------------------------  -----  -----  -----  -----  -----  -----
pool2                                 10,2T  37,0T     81  1,36K  9,82M  80,2M
  raidz3                              10,2T  37,0T     81    870  9,82M  45,9M
    multipath/pd14                        -      -     21     82   903K  4,67M
    multipath/pd15                        -      -     21     82   903K  4,67M
    multipath/pd16                        -      -     21     82   903K  4,67M
    multipath/pd17                        -      -     21     82   903K  4,67M
    multipath/pd18                        -      -     21     82   904K  4,67M
    multipath/pd19                        -      -     21     82   903K  4,67M
    multipath/pd20                        -      -     21     82   903K  4,67M
    multipath/pd21                        -      -     21     82   903K  4,67M
    multipath/pd22                        -      -     21     82   903K  4,67M
    multipath/pd23                        -      -     21     82   903K  4,67M
    multipath/pd24                        -      -     21     82   903K  4,67M
    multipath/pd25                        -      -     21     82   903K  4,67M
    multipath/pd26                        -      -     21     82   903K  4,67M
logs                                      -      -      -      -      -      -
  mirror                               238M  3,74G      0    525      0  34,3M
    diskid/DISK-CVWL435200Y1480QGNp2      -      -      0    525      4  34,3M
    diskid/DISK-CVWL4353000F480QGNp2      -      -      0    525      4  34,3M
cache                                     -      -      -      -      -      -
  diskid/DISK-CVWL435200Y1480QGNp5     207G  16,0E      1     21  45,1K  2,56M
  diskid/DISK-CVWL4353000F480QGNp5     203G  16,0E      2     21  94,6K  2,60M


The cache values “371G  16,0E” are abnormal: each cache partition is only 75G, so
an alloc of 371G is impossible, and 16,0E of “free” space looks like an unsigned
64-bit counter that has wrapped below zero (see the quick arithmetic check after
the zfs-stats output below). After that I ran zfs-stats -L and saw a DEGRADED
L2ARC and a far too large L2ARC size:

L2 ARC Summary: (DEGRADED)
        Passed Headroom:                        6.05m
        Tried Lock Failures:                    22.36m
        IO In Progress:                         2.75k
        Low Memory Aborts:                      2.86k
        Free on Write:                          5.48m
        Writes While Full:                      339.48k
        R/W Clashes:                            2.07k
        Bad Checksums:                          211.52k
        IO Errors:                              101.41k
        SPA Mismatch:                           3.16b

L2 ARC Size: (Adaptive)                         1.27    TiB
        Header Size:                    1.42%   18.56   GiB
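
As a sanity check on the underflow theory (my own arithmetic, using the sizes
from the iostat output above): zpool reports alloc = 371G on a 75G partition, so
free = 75G - 371G = -296G, which in an unsigned 64-bit counter wraps to just
under 2^64 bytes:

    # -296 GiB modulo 2^64, expressed in EiB
    echo "scale=6; (2^64 - 296*2^30) / 2^60" | bc
    15.999999

zpool iostat rounds that to exactly the 16,0E shown above.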

kstat.zfs.misc.arcstats.l2_io_error: 101531
kstat.zfs.misc.arcstats.l2_cksum_bad: 211782
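
If it really is the accounting that drifts, it should also show up in the raw
byte counters, which can be compared directly against the real partition size.
A minimal check (sysctl OIDs as present on 10.1; the diskid path is one of the
cache partitions from the iostat output above):

    # bytes the L2ARC believes it holds: logical, allocated, header
    sysctl kstat.zfs.misc.arcstats.l2_size \
           kstat.zfs.misc.arcstats.l2_asize \
           kstat.zfs.misc.arcstats.l2_hdr_size

    # real size of one cache partition, for comparison
    diskinfo -v /dev/diskid/DISK-CVWL435200Y1480QGNp4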

smartctl shows that both SSDs are fine, without any I/O errors. After a reboot
there are no problems for some time.
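
For reference, the kind of SMART check meant here (the device node is a
placeholder; use whatever node backs the diskid labels):

    smartctl -a /dev/ada1    # repeat for the second SSD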

I found the same kind of issue described in connection with L2ARC compression:
http://forums.freebsd.org/threads/l2arc-degraded.47540/
http://lists.freebsd.org/pipermail/freebsd-current/2013-October/045088.html
My problem looks like the same bug.
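
A possible temporary workaround (my assumption, not something verified in those
threads): drop and re-add the cache devices online, which resets the bogus space
accounting without a reboot. The L2ARC contents are lost, but it is only a cache:

    zpool remove pool1 diskid/DISK-CVWL435200Y1480QGNp4
    zpool add pool1 cache diskid/DISK-CVWL435200Y1480QGNp4
    # likewise for pool2 with the p5 partitions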

-- 
You are receiving this mail because:
You are the assignee for the bug.

