New ZFSv28 patchset for 8-STABLE

J. Hellenthal jhell at
Sun Jan 2 22:38:58 UTC 2011

On 01/02/2011 03:45, Attila Nagy wrote:
>  On 01/02/2011 05:06 AM, J. Hellenthal wrote:
>> On 01/01/2011 13:18, Attila Nagy wrote:
>>>   On 12/16/2010 01:44 PM, Martin Matuska wrote:
>>>> Link to the patch:
>>> I've used this:
>>> on a server with amd64, 8 GB RAM, acting as a file server for
>>> ftp/http/rsync; the content is mounted read-only with nullfs in
>>> jails, and the daemons (ftp and http) use sendfile.
>>> The effects can be seen here:
>>> the exact moment of the switch is visible on zfs_mem-week.png, where
>>> the L2 ARC was discarded.
>>> What I see:
>>> - increased CPU load
>>> - decreased L2 ARC hit rate and decreased SSD (ad[46]) traffic, and
>>> therefore increased hard disk load (see the IOPS graph)
>>> Maybe I could accept the higher system load as normal, because a lot
>>> of things changed between v15 and v28 (although I was hoping that with
>>> the same feature set it would require less CPU), but the L2ARC hit
>>> rate dropping so radically seems to be a major issue somewhere.
>>> As you can see from the memory stats, I have enough kernel memory to
>>> hold the L2 headers, so the L2 devices got filled up to their maximum
>>> capacity.
>>> Any ideas on what could cause these? I haven't upgraded the pool version
>>> and nothing was changed in the pool or in the file system.
>> Running [1] with -p4 should print a summary about your L2 ARC, and in
>> that section you should also notice a high number of "SPA Mismatch"
>> events; mine usually grew to around 172k before I would notice a crash,
>> and I could reliably trigger this while in a scrub.
>> Whatever is causing this needs desperate attention!
>> I emailed mm@ privately off-list when I noticed this going on but
>> have not received any feedback yet.
> It's at zero currently (2 days of uptime):
> kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0

Right, but do you have a 'cache' (L2ARC) vdev attached to any pool in
the system? That zero suggests to me that you do not at this time.
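For reference, any L2ARC device shows up under a "cache" heading in the
config section of 'zpool status'; something like this (the pool and
device names here are only examples):

  # zpool status tank
    ...
    cache
      ada2      ONLINE       0     0     0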

If not, can you attach a cache vdev, run a scrub on the pool, and
monitor the value of that MIB?
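Roughly along these lines ('tank' and 'ada2' again just stand in for
your actual pool and SSD):

  # zpool add tank cache ada2   # attach the SSD as an L2ARC (cache) vdev
  # zpool scrub tank            # start a scrub on the pool
  # while :; do sysctl kstat.zfs.misc.arcstats.l2_write_spa_mismatch; sleep 10; done

If that counter starts climbing while the scrub runs, you are hitting
the same thing I was seeing here.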



