ZFS L2arc 16.0E size
Frank de Bot (lists)
lists at searchy.net
Sat Feb 21 19:58:05 UTC 2015
I've done some extra tests:
I used a different SSD (a Samsung 840 Pro), with the same result. I also
checked out revision r273060 and built another kernel; uname now says:
'FreeBSD nas 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #1 r273060: Sat Feb
21 18:42:08 UTC 2015 user at nas:/usr/obj/usr/src/sys/GENERIC amd64'
Still the same result:
                 capacity     operations    bandwidth
               alloc   free   read  write   read  write
cache              -      -      -      -      -      -
  gpt/l2arc1   68.0G  16.0E      4    165  23.2K  19.3M
  gpt/l2arc2   68.0G  16.0E      5    162  24.8K  18.6M
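For reference, the r273060 checkout and rebuild were along these lines (a
sketch; the svn URL is the project's usual stable/10 mirror and the paths
are the defaults, so details may differ):

  # fetch the stable/10 tree at r273060 (assumes the svn.freebsd.org mirror)
  svnlite checkout -r 273060 https://svn.freebsd.org/base/stable/10 /usr/src
  # build and install a GENERIC kernel from that revision, then reboot
  cd /usr/src
  make buildkernel KERNCONF=GENERIC
  make installkernel KERNCONF=GENERIC
  shutdown -r now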
This already occurs after 1 to 2 hours of transferring a lot of data
via rsync to the dataset.
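One way to catch the moment the accounting goes wrong (16.0E is what zpool
prints once the free-space counter wraps below zero in a 64-bit field) is to
poll the L2ARC kstats and the cache vdev columns while the rsync load runs,
for example:

  # poll the L2ARC size kstats and the cache vdevs every minute
  while true; do
      date
      sysctl kstat.zfs.misc.arcstats | grep l2_size
      zpool iostat -v tank | grep l2arc
      sleep 60
  done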
Is it worth trying to hook up the SSD to another controller (it is
currently on an LSI MegaRAID SAS 9211-4i)?
Regards,
Frank de Bot
Frank de Bot wrote:
> I removed and re-added the devices, with:
>
> 'zpool remove tank gpt/l2arc1 gpt/l2arc2'
> and then
> 'zpool add tank cache gpt/l2arc1 gpt/l2arc2'
>
> I left it running overnight and the same situation occurred.
>
> cache - - - - - -
> gpt/l2arc1 175G 16.0E 11 106 55.5K 9.75M
> gpt/l2arc2 167G 16.0E 14 107 68.8K 9.81M
>
> For faster filling of the L2ARC I also had two sysctls set:
>
> vfs.zfs.l2arc_write_max: 33554432
> vfs.zfs.l2arc_write_boost: 33554432
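>
> For reference, both are runtime tunables on FreeBSD, so they can be raised
> on the fly with sysctl(8) or persisted across reboots (a sketch using the
> 32 MB values quoted above):
>
>   # raise the per-interval L2ARC write limits at runtime
>   sysctl vfs.zfs.l2arc_write_max=33554432
>   sysctl vfs.zfs.l2arc_write_boost=33554432
>
>   # or make them persistent via /etc/sysctl.conf
>   echo 'vfs.zfs.l2arc_write_max=33554432' >> /etc/sysctl.conf
>   echo 'vfs.zfs.l2arc_write_boost=33554432' >> /etc/sysctl.conf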
>
> I do not plan to use these settings in production, only for testing.
>
>
> Regards,
>
> Frank de Bot
> Steven Hartland wrote:
>> IIRC this was fixed by r273060; if you remove your cache device and
>> then add it back, I think you should be good.
>>
>> On 16/02/2015 00:23, Frank de Bot (lists) wrote:
>>> Hello,
>>>
>>> I have a FreeBSD 10.1 system with a raidz2 ZFS configuration and two
>>> SSDs for L2ARC. It is running '10.1-STABLE FreeBSD 10.1-STABLE #0
>>> r278805'. Currently I'm running tests before it can go into production,
>>> but I have the following issue: after a while the L2ARC devices indicate
>>> 16.0E of free space and start 'consuming' more than they can hold.
>>>
>>> cache - - - - - -
>>> gpt/l2arc1 107G 16.0E 0 2 0 92.7K
>>> gpt/l2arc2 68.3G 16.0E 0 1 0 60.8K
>>>
>>> It ran well for a while, with data being evicted from the cache so it
>>> could be filled with newer data (free space was always around 200-300 MB).
>>>
>>> I've read about similar issues that should have been fixed by various
>>> commits, but I'm running the latest stable 10.1 kernel right now. (One of
>>> the most recent similar issues is:
>>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197164 )
>>> A similar issue reported at FreeNAS,
>>> https://bugs.freenas.org/issues/5347 , suggested it would be a hardware
>>> issue, but I have two servers that experience the same problem. One has
>>> a Crucial M500 drive and the other an M550. Both have a 64G partition
>>> for L2ARC.
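>>>
>>> For context, a 64G cache partition like the ones described above would
>>> typically be created and attached roughly like this (the disk device
>>> name below is only a placeholder; the pool and label names match those
>>> used elsewhere in the thread):
>>>
>>>   # label a 64G freebsd-zfs partition on the SSD (assumes a GPT scheme)
>>>   gpart add -t freebsd-zfs -s 64G -l l2arc1 ada1
>>>   # attach it to the pool as an L2ARC (cache) device
>>>   zpool add tank cache gpt/l2arc1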
>>>
>>> What is really going on here?
>>>
>>>
>>> Regards,
>>>
>>>
>>> Frank de Bot