Reading a corrupted file on ZFS

Alan Somers asomers at freebsd.org
Fri Feb 12 18:52:10 UTC 2021


On Fri, Feb 12, 2021 at 11:26 AM Artem Kuchin <artem at artem.ru> wrote:

> 12.02.2021 19:37, Karl Denninger wrote:
> > On 2/12/2021 11:22, Artem Kuchin wrote:
> >>
> >> This is frustrating. why..why..
> >
> > You created a synthetic situation that in the real world almost-never
> > exists (ONE byte modified in all copies in the same allocation block
> > but all other data in that block is intact and recoverable.)
> >
> It could be a 1 GB file on ZFS with a block size of 1 MB, with rotten
> bits within the same 1 MB block on different disks. How I did it is
> not important; life is unpredictable, and I'm not trying to avoid
> everything. The question is what to do when it happens. And currently
> the answer is: nothing.
>
>
> > In almost-all actual cases of "bit rot" it's exactly that; random and
> > by statistics extraordinarily unlikely to hit all copies at once in
> > the same allocation block.  Therefore, ZFS can and does fix it; UFS or
> > FAT silently returns the corrupted data, propagates it, and eventually
> > screws you down the road.
>
> For an active fs you are right. But if this is a storage disk with
> movies and photos, then I can just checksum all files with a little
> script and recheck once in a while. So, for storage purposes I get all
> the ZFS positives and can also read as much data as possible, because
> for long-term storage it is more important to have the ability to read
> the data in any case.
>
>
> >
> > The nearly-every-case situation in the real world where a disk goes
> > physically bad (I've had this happen *dozens* of times over my IT
> > career) results in the drive being unable to
>
>
> *NEARLY* is not good enough for me.
>
>
> > return the block at all;
>
>
> You are mixing up device blocks and ZFS blocks. As far as I remember,
> the default ZFS block size for checksumming is 16K, and for storing
> big files it is better to have it around 128K.
>
>
> > In short there are very, very few actual "in the wild" failures where
> > one byte is damaged and the rest surrounding that one byte is intact
> > and retrievable.  In most cases where an actual failure occurs the
> > unreadable data constitutes *at least* a physical sector.
> >
> "very very few" is enough for me to think about.
>
> One more thing. If you have one bad byte in a 16K block and you have a
> checksum and recalculate it, then it is quite possible to just brute
> force every byte to match the checksum, thus restoring the data.
>
> If you have a mirror with two differing bytes, then brute forcing is
> even easier,
>
> Somehow, ZFS slaps my hands and does not let me make sure that I can
> restore the data when I need it, or decide for myself whether it is
> okay or not.
>
> For long-term storage of big files it now seems better to store them
> on a UFS mirror, checksum each 512-byte block of the files, store the
> checksums separately, and run a weekly/monthly "scrub". This way I
> would sleep better.
>
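[The per-block checksum and single-byte brute-force repair described
above can be sketched quickly. This is a hypothetical illustration, not
anyone's actual tooling: SHA-256 stands in for whatever checksum the
"little script" would use, and all names are invented.]

```python
import hashlib

BLOCK = 16 * 1024  # 16K blocks, as in the example above


def block_checksums(data):
    """Split data into fixed-size blocks and return one SHA-256 digest per block."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]


def repair_block(block, want):
    """Try to recover a block with exactly one corrupted byte by brute-forcing
    every (position, value) pair until the checksum matches.
    Returns the repaired block, or None if a one-byte fix is not enough."""
    if hashlib.sha256(block).digest() == want:
        return block  # not corrupted at all
    buf = bytearray(block)
    for pos in range(len(buf)):
        orig = buf[pos]
        for val in range(256):
            if val == orig:
                continue
            buf[pos] = val
            if hashlib.sha256(bytes(buf)).digest() == want:
                return bytes(buf)
        buf[pos] = orig  # restore before trying the next position
    return None
```

[For a 16K block this is at most 16384 × 255 ≈ 4.2 million hash
attempts, slow but feasible. With a two-way mirror holding two differing
copies, only the byte positions where the copies disagree need to be
tried, which is the "even easier" case above. Note that with a weaker
checksum than SHA-256 the first match found is not guaranteed to be the
original data.]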

GOD NO.   ZFS is really quite good at preserving your data integrity.  For
example, with your suggested scheme what would protect you from a corrupted
checksum file?  Nothing.  In ZFS, the Merkle hash tree would detect such a
thing.  Karl is correct: the type of corruption you're worried about is
almost non-existent in the real world.  Why?  LDPC coding, for one reason.
For the last 10+ years, hard disks have encoded data using LDPC.  Older
hard disk encoding schemes, like Reed-Solomon encoding, stored the data in
a format similar to RAID: as data + parity.  That's why older ATA standards
had a "READ LONG" command.  But with LDPC, the "original" data does not
exist anywhere on the platter.  It gets transformed into a large codeword
with data and parity intermingled.  Physical damage will either be
correctable (most likely), render the entire codeword illegible (less
likely), or cause it to decode into completely wrong data (least likely).
There simply isn't any way to randomly flip a single bit, once it's been
written to the media.
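[The Merkle-tree point can be illustrated with a toy sketch. This is
not ZFS's on-disk format -- ZFS stores checksums in block pointers, not
in a simple binary tree -- but it shows why a hash tree, unlike a flat
checksum file, detects corruption of the checksums themselves: every
hash is covered by its parent, all the way up to a root that is stored
redundantly (in ZFS, the uberblock copies).]

```python
import hashlib


def sha(b):
    return hashlib.sha256(b).digest()


def merkle_root(blocks):
    """Root hash of a toy binary Merkle tree built over data blocks."""
    level = [sha(block) for block in blocks]  # leaf hashes
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

[Corrupt any data block, or any stored hash on the path, and the
recomputed root no longer matches the redundantly stored one -- so the
corruption is detected. A flat checksum file has no such protection for
its own contents.]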

But if you really, really REALLY want to read blocks that have been
deliberately corrupted, you can do it laboriously with zdb.  Use zdb to
show the dnode, which will include the record pointers for each block.  You
can decode those and extract the data from the disks with dd.  The exact
procedure is left as an exercise to the reader.

-Alan


More information about the freebsd-fs mailing list