New ZFSv28 patchset for 8-STABLE

Attila Nagy bra at
Mon Jan 3 20:03:12 UTC 2011

  On 01/01/2011 08:09 PM, Artem Belevich wrote:
> On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy <bra at> wrote:
>> What I see:
>> - increased CPU load
>> - decreased L2ARC hit rate and decreased SSD (ad[46]) load, therefore
>> increased hard disk load (IOPS graph)
> ...
>> Any ideas on what could cause these? I haven't upgraded the pool version and
>> nothing was changed in the pool or in the file system.
> The fact that the L2ARC is full does not mean that it contains the
> right data.  The initial L2ARC warm-up happens at a much higher rate
> than the rate at which the L2ARC is updated once it has been filled.
> Even the accelerated warm-up took almost a day in your case. For the
> L2ARC to warm up properly you may have to wait quite a bit longer. My
> guess is that it should slowly improve over the next few days as data
> goes through the L2ARC and the bits that are hit more often take up
> residence there. The larger your data set, the longer it will take for
> the L2ARC to catch the right data.
> Do you have similar graphs from the pre-patch system just after a
> reboot? I suspect that it may show similarly abysmal L2ARC hit rates
> initially, too.
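(Side note: the warm-up throttle Artem describes is tunable. Assuming
the v28 port keeps the usual OpenSolaris names, the relevant knobs can
be inspected with:

    sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost \
           vfs.zfs.l2arc_feed_secs

l2arc_write_boost is an extra write allowance applied on top of
l2arc_write_max during the initial warm-up; once the cache device has
filled, each feed cycle (every l2arc_feed_secs seconds) is capped at
l2arc_write_max bytes, which is why steady-state turnover is so much
slower.)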
After four days, the L2ARC hit rate is still hovering around 10-20
percent (it was between 60 and 90 before), so I think it's clearly a
regression in the ZFSv28 patchset.
The massive growth in CPU usage is also clearly visible...
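(The hit rate numbers come from munin, but a rough way to sample the
same ratio by hand, assuming the stock kstat names, is:

    sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses

taking two readings a few minutes apart and dividing the delta of
l2_hits by the sum of both deltas.)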

I've updated the graphs at the same place (the switch time can be seen
on the zfs-mem graph).

There is a new phenomenon: large IOPS peaks. I use this munin script
on a lot of machines and have never seen anything like this... I'm not
sure whether it's related or not.
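(If anyone wants to check whether the peaks are real I/O rather than a
graphing artifact, something like the following, run during a peak,
should show comparable per-device operation rates; this is just a
manual cross-check, not the munin plugin itself:

    iostat -x -w 60 -c 2 ad4 ad6

The second report covers the 60-second interval; compare its r/s and
w/s columns with the graphed values.)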
