ZFS L2ARC checksum errors after compression
Ben RUBSON
ben.rubson at gmail.com
Fri Nov 4 18:05:54 UTC 2016
> On 04 Nov 2016, at 15:20, Andriy Gapon <avg at FreeBSD.org> wrote:
>
> On 03/11/2016 21:43, Lev Serebryakov wrote:
>> On 29.10.2016 16:32, Andriy Gapon wrote:
>>
>> Looks like L2ARC is unusable now if you have compression enabled on
>> FSes. It shows 2x compression (ALLOC = 2xSIZE), and tons of checksum
>> errors. I simply don't have compressible enough data on my FSes! It is
>> mostly media files! Looks like this data is bogus.
>>
>>> I think that a recent upstream change, compressed ARC support, reintroduced an
>>> old problem that was fixed a while ago.
>>
>
> Lev,
>
> because of the confusing variable names, I made a mistake in the patch that I
> offered you. Could you please try a new, slightly different patch?
> (...)
> + if ((write_psize + asize) > target_sz) {
Do you think the issue comes from this test?
target_sz is only the threshold below which we may still write buffers to L2ARC.
Note that I also proposed a modification around this test in the following PR:
https://github.com/openzfs/openzfs/pull/189
Here is an extract of the relevant code:
	uint64_t size = arc_hdr_size(hdr);

	passed_sz += size;
	if (passed_sz > headroom) {
		/*
		 * Searched too far.
		 */
		mutex_exit(hash_lock);
		break;
	}

	if (!l2arc_write_eligible(guid, hdr)) {
		mutex_exit(hash_lock);
		continue;
	}

	if ((write_asize + size) > target_sz) {
		full = B_TRUE;
		mutex_exit(hash_lock);
		break;
	}
Note that I also faced the 16.0E bug in 11.0-RC3 (which does not have compressed ARC support):
https://www.illumos.org/issues/7410
Ben
More information about the freebsd-fs mailing list