zfs i/o error, no driver error

Jeremy Chadwick freebsd at jdc.parodius.com
Mon Jun 7 10:38:32 UTC 2010


On Mon, Jun 07, 2010 at 12:28:42PM +0300, Andriy Gapon wrote:
> on 07/06/2010 12:08 Jeremy Chadwick said the following:
> > On Mon, Jun 07, 2010 at 11:55:24AM +0300, Andriy Gapon wrote:
> >> on 07/06/2010 11:34 Jeremy Chadwick said the following:
> >>> On Mon, Jun 07, 2010 at 11:15:54AM +0300, Andriy Gapon wrote:
> >>>> During recent zpool scrub one read error was detected and "128K repaired".
> >>>>
> >>>> In system log I see the following message:
> >>>> ZFS: vdev I/O failure, zpool=tank
> >>>> path=/dev/gptid/536c6f78-e4f3-11de-b9f8-001cc08221ff offset=284456910848
> >>>> size=131072 error=5
> >>>>
> >>>> On the other hand, there are no other errors, nothing from geom, ahci, etc.
> >>>> Why would that happen? What kind of error could this be?
> >>> I believe this indicates silent data corruption[1], which ZFS can
> >>> auto-correct if the pool is a mirror or raidz (otherwise it can detect
> >>> the problem but not fix it).
> >> This pool is a mirror.
> >>
> >>> This can happen for a lot of reasons, but
> >>> tracking down the source is often difficult.  Usually it indicates the
> >>> disk itself has some kind of problem (cache going bad, some sector
> >>> remaps which didn't happen or failed, etc.).
> >> Please note that this is not a CKSUM error, but READ error.
> > 
> > Okay, then it indicates reading some data off the disk failed.  ZFS
> > auto-corrected it by reading the data from the other member in the pool
> > (ada0p4).  That's confirmed here:
> 
> Yes, right, of course.
> If you read my original post you'll see that my question was: why did
> ZFS see an I/O error when the disk/controller/geom/etc. driver didn't?
> I do not see us moving towards an answer to that.

My understanding is that a "vdev I/O error" indicates some sort of
communication failure with a member of the pool, or with some other
layer within FreeBSD (GEOM, I think, as you said).  I don't think there
has to be a 1:1 correspondence between vdev I/O errors and
controller/disk errors.
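
If it helps, you can confirm whether ZFS itself counted the error by
looking at the per-vdev counters after the scrub.  A rough check
(assuming the pool really is named "tank"):

  # show per-vdev READ/WRITE/CKSUM counters and any affected files
  zpool status -v tank

If the READ counter on the gptid member went up while the driver logged
nothing, that at least tells you the error was counted at the vdev
layer rather than below it.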

With AHCI and other storage controllers, I/O errors are messages
returned by the controller to the OS, or passed from the disk through
the controller to the OS.  I suppose it's possible ZFS is throwing an
error for something that isn't actually at the block/disk level.
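
One way to double-check that nothing was reported below ZFS is to look
at the kernel's and the drive's own logs.  A rough sketch, assuming the
member in question is ada1:

  # any ATA/AHCI/CAM-level complaints about the disk?
  dmesg | egrep -i 'ahci|ada1'
  egrep -i 'ahci|ada1' /var/log/messages

  # error log kept by the drive itself
  smartctl -l error /dev/ada1

If all of those are clean, it suggests the error=5 (EIO) was generated
somewhere above the ahci/CAM layer.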

I'm interested to see what this turns out to be!

I agree that your SMART statistics look fine -- the only thing that
isn't completing is the manual/automatic offline data collection, and
that one gets aborted pretty often when the system is in use.  You can
see that here:

> Offline data collection status:  (0x84) Offline data collection activity
>                                         was suspended by an interrupting command from host.
>                                         Auto Offline Data Collection: Enabled.

This is the test that "-t offline" initiates (not "-t short" or
"-t long").  It takes a very long time to run, which is why it often
gets aborted:

> Total time to complete Offline
> data collection:                 (11160) seconds.
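
(11160 seconds works out to 186 minutes, or just over 3 hours, so it's
easy for a command from the host to interrupt it before it finishes.)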

That's the only thing that looks even remotely concerning with ada1,
and even that isn't worth focusing on.
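
If you ever want to see it finish, you could kick off the offline
collection by hand while the box is idle and check on it afterwards.
A minimal sketch, again assuming the drive is ada1:

  # start immediate offline data collection
  smartctl -t offline /dev/ada1

  # afterwards, check the "Offline data collection status" field
  smartctl -c /dev/ada1

But as I said, it's not worth losing sleep over.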

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |


