Re: ASC/ASCQ Review

From: Alan Somers <asomers_at_freebsd.org>
Date: Fri, 14 Jul 2023 18:30:51 UTC
On Fri, Jul 14, 2023 at 11:05 AM Warner Losh <imp@bsdimp.com> wrote:
>
>
>
> On Fri, Jul 14, 2023, 11:12 AM Alan Somers <asomers@freebsd.org> wrote:
>>
>> On Thu, Jul 13, 2023 at 12:14 PM Warner Losh <imp@bsdimp.com> wrote:
>> >
>> > Greetings,
>> >
>> > I've been looking closely at failed drives for $WORK lately. I've noticed that a lot of errors that kinda sound like fatal errors have SS_RDEF set on them.
>> >
>> > What's the process for evaluating whether those error codes are worth retrying? There are several errors that we seem to be seeing (preliminary read of the data) before the drive gives up the ghost altogether. For those cases, I'd like to post more specific lists. Should I do that here?
>> >
>> > Independent of that, I may want a more aggressive 'fail fast' policy than is appropriate for the general case, given my workload (we have a lot of data that's a copy of a copy of a copy, so if we lose it, we don't care: we'll just delete any files we can't read and get on with life, though I know others will have a more conservative attitude towards data that might be precious and unique). I can set the number of retries lower and do some other hacks that tell the disk to fail faster, but I think part of the solution is going to have to be failing fast on some sense-code/ASC/ASCQ tuples that we don't want to fail in upstream or in the general case. I was thinking of identifying those and creating a 'global quirk table' that gets applied after the drive-specific quirk table, which would let $WORK override the defaults while letting others keep the current behavior. IMHO, it would be better to have these separate rather than in the global data for tracking upstream...
>> >
>> > Is that clear, or should I give concrete examples?
>> >
>> > Comments?
>> >
>> > Warner
>>
>> Basically, you want to change the retry counts for certain ASC/ASCQ
>> codes only, on a site-by-site basis?  That sounds reasonable.  Would
>> it be configurable at runtime or only at build time?
>
>
> I'd like to change the default actions. But maybe we just do that for everyone and assume modern drives...
>
>> Also, I've been thinking lately that it would be real nice if READ
>> UNRECOVERABLE could be translated to EINTEGRITY instead of EIO.  That
>> would let consumers know that retries are pointless, but that the data
>> is probably healable.
>
>
> Unlikely, unless you've tuned things to not try for long at recovery...
>
> But regardless... do you have a concrete example of a use case? There are a number of places that map any error to EIO. And I'd like a use case before we expand the errors the lower layers return...
>
> Warner

My first use case is a user-space FUSE file system.  It only has
access to errnos, not ASC/ASCQ codes.  If we do as I suggest, then it
could heal a READ UNRECOVERABLE by rewriting the sector, whereas other
EIO errors aren't likely to be healed that way.
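
To make that concrete, here is a rough sketch of what the FUSE
daemon's read path could do.  reconstruct_block() is a made-up name
for whatever redundancy the file system has (a mirror copy, parity,
etc.); none of this is a real fusefs API, it's just the shape of the
idea:

    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical: rebuild the block from the FS's own redundancy. */
    static int reconstruct_block(void *buf, size_t len, off_t off);

    static int
    read_and_heal(int fd, void *buf, size_t len, off_t off)
    {
            ssize_t n = pread(fd, buf, len, off);

            if (n == (ssize_t)len)
                    return (0);
            if (n < 0 && errno == EINTEGRITY) {
                    /*
                     * The data is bad but the sector is probably
                     * healable: rebuild it from redundancy and rewrite
                     * it in place so the drive remaps the sector.
                     */
                    if (reconstruct_block(buf, len, off) == 0 &&
                        pwrite(fd, buf, len, off) == (ssize_t)len)
                            return (0);
            }
            /* EIO (or anything else): retrying won't help, pass it up. */
            return (-1);
    }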

My second use case is ZFS.  zfsd treats checksum errors differently
from I/O errors.  A checksum error normally means that a read returned
wrong data.  But I think that READ UNRECOVERABLE should also count.
After all, it means that the disk's media returned wrong data, which
was detected by the disk's own EDC/ECC.  I've noticed that zfsd seems
to fault disks too eagerly when their only problem is READ
UNRECOVERABLE errors.  Mapping it to EINTEGRITY, or even a new error
code, would let zfsd be tuned better.
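
If it helps to make that concrete, the mapping I have in mind is
roughly the following.  This is hand-waving, not the actual
cam_periph_error()/dadone() code path, and sense_to_errno() is a
made-up name; the point is just that MEDIUM ERROR with ASC 0x11
(UNRECOVERED READ ERROR) would surface as EINTEGRITY rather than the
catch-all EIO:

    #include <sys/errno.h>              /* EIO, EINTEGRITY */
    #include <cam/scsi/scsi_all.h>      /* SSD_KEY_MEDIUM_ERROR */

    static int
    sense_to_errno(int sense_key, int asc, int ascq __unused)
    {
            /* MEDIUM ERROR / ASC 0x11 (any ASCQ): unrecovered read. */
            if (sense_key == SSD_KEY_MEDIUM_ERROR && asc == 0x11)
                    return (EINTEGRITY);    /* wrong data, but healable */
            return (EIO);                   /* everything else as today */
    }

zfsd (or ZFS itself) could then count EINTEGRITY against the checksum
threshold instead of the I/O threshold.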