ZFS I/O error, no driver error

Jeremy Chadwick freebsd at jdc.parodius.com
Mon Jun 7 08:34:31 UTC 2010


On Mon, Jun 07, 2010 at 11:15:54AM +0300, Andriy Gapon wrote:
> During a recent zpool scrub, one read error was detected and "128K repaired".
>
> In system log I see the following message:
> ZFS: vdev I/O failure, zpool=tank
> path=/dev/gptid/536c6f78-e4f3-11de-b9f8-001cc08221ff offset=284456910848
> size=131072 error=5
> 
> On the other hand, there are no other errors, nothing from geom, ahci, etc.
> Why would that happen? What kind of error could this be?

I believe this indicates silent data corruption[1], which ZFS can
auto-correct if the pool is a mirror or raidz (otherwise it can detect
the problem but not fix it).  This can happen for many reasons, and
tracking down the source is often difficult.  Usually it indicates that
the disk itself has some kind of problem (its cache going bad, sector
remaps that did not happen or that failed, etc.).
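If you want to see exactly where the error was counted, the per-vdev
READ/WRITE/CKSUM counters in "zpool status -v" are the first place to
look.  A rough sketch (assuming the pool really is named "tank"; the
clear/scrub steps are optional and only reset the counters and re-check
the data):

  # show per-vdev error counters and any files hit by
  # unrecoverable errors
  zpool status -v tank

  # optionally reset the counters once you've noted them,
  # then scrub again to see whether the error comes back
  zpool clear tank
  zpool scrub tank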

What I'd need to determine the cause:

- Full "zpool status tank" output before the scrub
- Full "zpool status tank" output after the scrub
- Full "smartctl -a /dev/XXX" for all disk members of zpool "tank"

Furthermore, what made you decide to scrub the pool in the first place?

[1]: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta
     http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data
     http://blogs.sun.com/bonwick/entry/raid_z

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |


