New ZFSv28 patchset for 8-STABLE

Attila Nagy bra at
Sun Jan 9 11:52:59 UTC 2011

  On 01/01/2011 08:09 PM, Artem Belevich wrote:
> On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy<bra at>  wrote:
>> What I see:
>> - increased CPU load
>> - decreased L2ARC hit rate and decreased SSD (ad[46]) traffic, and therefore
>> increased hard disk load (IOPS graph)
> ...
>> Any ideas on what could cause these? I haven't upgraded the pool version and
>> nothing was changed in the pool or in the file system.
> The fact that L2 ARC is full does not mean that it contains the right
> data.  Initial L2ARC warm up happens at a much higher rate than the
> rate L2ARC is updated after it's been filled initially. Even
> accelerated warm-up took almost a day in your case. In order for L2ARC
> to warm up properly you may have to wait quite a bit longer. My guess
> is that it should slowly improve over the next few days as data goes
> through L2ARC and those bits that are hit more often take residence
> there. The larger your data set, the longer it will take for L2ARC to
> catch the right data.
> Do you have similar graphs from pre-patch system just after reboot? I
> suspect that it may show similarly abysmal L2ARC hit rates initially,
> too.
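(The accelerated warm-up Artem describes is bounded by the L2ARC feed-rate
tunables, vfs.zfs.l2arc_write_max and vfs.zfs.l2arc_write_boost. A
back-of-envelope sketch, using the stock 8 MB/s values and a made-up 60 GB
cache device; verify the actual values on your system with sysctl:

```shell
# Rough lower bound on L2ARC initial fill time.
# 8 MB/s defaults and the 60 GB device size are example assumptions;
# check yours with: sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost
write_max=$((8 * 1024 * 1024))           # steady-state feed rate, bytes/s
write_boost=$((8 * 1024 * 1024))         # extra rate until first fill
l2arc_size=$((60 * 1024 * 1024 * 1024))  # hypothetical 60 GB cache device

warmup=$((l2arc_size / (write_max + write_boost)))
echo "initial fill: at least $((warmup / 60)) minutes"
```

This is only a lower bound: the feed thread copies eligible ARC buffers once
per interval, so in practice the warm-up stretches out much further, which is
consistent with the "almost a day" observed above.)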
I've finally found the time to read the v28 patch and figured out the 
problem: vfs.zfs.l2arc_noprefetch was changed to 1, so prefetched data 
on the L2ARC devices is no longer used.
This is a major hit in my case. Setting it back to 0 restored the 
previous hit rates and lowered the load on the hard disks significantly.
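(For reference, a sketch of how the pre-patch behavior can be restored on
FreeBSD; the sysctl name is from the post, while putting the setting in
/etc/sysctl.conf for persistence is an assumption about a typical setup:

```shell
# Check the current value (1 = prefetched buffers bypass the L2ARC)
sysctl vfs.zfs.l2arc_noprefetch

# Restore the old behavior at runtime (requires root)
sysctl vfs.zfs.l2arc_noprefetch=0

# Keep it across reboots (assumed typical placement)
echo 'vfs.zfs.l2arc_noprefetch=0' >> /etc/sysctl.conf
```
)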

More information about the freebsd-stable mailing list