ZFS L2arc 16.0E size
Steven Hartland
killing at multiplay.co.uk
Mon Feb 16 00:52:34 UTC 2015
IIRC this was fixed by r273060; if you remove your cache device and
then add it back, I think you should be good.
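For the record, the remove/re-add cycle can be done online and is safe, since L2ARC only holds redundant copies of pool data. A minimal sketch, assuming a pool named "tank" (hypothetical; substitute your pool name) and the GPT labels from your output:

```shell
# Drop the suspect cache devices from the pool; this does not
# touch any pool data, only the (redundant) L2ARC contents.
zpool remove tank gpt/l2arc1 gpt/l2arc2

# Re-add them as cache vdevs; they come back empty and warm up
# again with corrected space accounting.
zpool add tank cache gpt/l2arc1 gpt/l2arc2

# Verify the free-space column now shows a sane value.
zpool iostat -v tank
```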
On 16/02/2015 00:23, Frank de Bot (lists) wrote:
> Hello,
>
> I have a FreeBSD 10.1 system with a raidz2 ZFS configuration and two
> SSDs for L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0 r278805'.
> Currently I'm running tests before it goes to production, but I have
> the following issue: after a while, the L2ARC devices report 16.0E of free
> space and start 'consuming' more than they can hold:
>
> cache             -      -      -      -      -      -
>   gpt/l2arc1   107G  16.0E      0      2      0  92.7K
>   gpt/l2arc2  68.3G  16.0E      0      1      0  60.8K
>
> It ran well for a while, with data being evicted from the cache so it
> could be filled with newer data (free space was always around 200-300 MB).
>
> I've read about similar issues, which were supposedly fixed in various
> commits, but I'm running the latest stable 10.1 kernel right now. (One
> of the most recent similar issues is:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197164 )
> Another similar issue reported at FreeNAS,
> https://bugs.freenas.org/issues/5347, suggested it would be a hardware
> issue, but I have two servers that experience the same problem. One has a
> Crucial M500 drive and the other an M550. Both have a 64G partition for
> L2ARC.
>
> What is really going on here?
>
>
> Regards,
>
>
> Frank de Bot
> _______________________________________________
> freebsd-stable at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe at freebsd.org"