ZFS scrub/selfheal not really working

Andrew Snow andrew at modulus.org
Wed May 27 21:46:23 UTC 2009


Dmitry Marakasov wrote:
> I've recently moved my ZFS pool to 6x1TB Hitachi HDDs. However,
> those turned out to be quite crappy, and tend to grow unreadable
> sectors.  Those sectors are really nasty, because although they are
> not readable, they won't be marked as bad and relocated until there's
> a write failure.  And a write failure never actually happens: if the
> sector is rewritten, it's perfectly readable again.

It seems like it's a good idea to chuck out the whole lot, after first
double-checking or replacing your controller, cabling, and power supply.
ZFS can't help you :-)

> So, my question is why doesn't ZFS rewrite those sectors with READ
> errors during scrub?

Because of the transactional (copy-on-write) nature of ZFS, it writes
the fresh data to a different part of the disk and then marks the old,
bad sectors as free.
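As a sketch of how you'd exercise this (the pool name "tank" is an
assumption; substitute your own):

```shell
# Kick off a scrub: ZFS reads every allocated block and verifies its
# checksum.  On a redundant vdev (mirror/raidz), blocks that fail the
# checksum are rewritten from a good copy -- and, per the copy-on-write
# design, the repaired data lands in a freshly allocated location.
zpool scrub tank

# Watch progress and the per-device read/write/checksum error counts.
zpool status -v tank
```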

> And in a situation where
> there's no parity available, will it narrow down read block size to
> read the data and not the unused sectors with corruption?

Correct.  If no parity is available, it will try its best to read as
much data as possible, and on sector failure it returns read errors up
to the application layer.
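To see which files those unrecoverable blocks belong to, "zpool status
-v" lists them after the errors have been hit (the pool name and file
path below are illustrative, not from the original report):

```shell
# On a non-redundant pool with unreadable sectors, a scrub (or a normal
# read) leaves permanent-error records that status -v enumerates.
zpool status -v tank
# Abridged, illustrative output:
#   errors: Permanent errors have been detected in the following files:
#           /tank/data/somefile
```

Restoring those files from backup and deleting the damaged copies
clears the error records on the next couple of scrubs.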


- Andrew




More information about the freebsd-fs mailing list