error destroying zfs filesystem

Alexandr Kovalenko alexandr.kovalenko at gmail.com
Fri Feb 15 12:44:54 UTC 2013


On Fri, Feb 15, 2013 at 11:30 AM, Alexandr Krivulya
<shuriku at shurik.kiev.ua> wrote:
> Hello everyone!
>
> After upgrading my ZFS-only system from 8.2 to 9.1, I see many
> ZFS-related errors in /var/log/messages:
>
> Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:264842321920
> Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:338480095232
> Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:277633901056
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:277263710208
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:277633606144
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278349642240
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278429099008
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278349926400
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278245378560
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:256838777344
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:327364684800
> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:312373604864
>
> root@gw:/ # zpool status -v
>   pool: zmirror
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>         entire pool from backup.
>    see: http://illumos.org/msg/ZFS-8000-8A
>   scan: scrub repaired 0 in 1h39m with 1 errors on Thu Feb 14 17:48:53 2013
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         zmirror         ONLINE       0     0     2
>           mirror-0      ONLINE       0     0     8
>             gpt/disk01  ONLINE       0     0     8
>             gpt/disk02  ONLINE       0     0     8
>
> errors: Permanent errors have been detected in the following files:
>
>         zmirror/usr:<0x0>
>         <0xc8>:<0x0>
[...]
> How can I solve this issue?

Run smartctl -t long /dev/<your_physical_drive_here>, then check the
output of smartctl -a /dev/<your_physical_drive_here> for pending
sectors or logged errors (do this for both drives in the mirror).
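
For example, something along these lines (a sketch only; ada0 and ada1
are placeholder device names, substitute whatever physical disks sit
behind gpt/disk01 and gpt/disk02, which glabel status will tell you):

    # map the GPT labels to the underlying physical devices
    glabel status

    # start a long SMART self-test on each mirror member
    smartctl -t long /dev/ada0
    smartctl -t long /dev/ada1

    # after the test finishes (smartctl prints its expected duration),
    # check the attributes and self-test log for signs of trouble
    smartctl -a /dev/ada0 | egrep -i 'pending|reallocated|uncorrectable'
    smartctl -a /dev/ada1 | egrep -i 'pending|reallocated|uncorrectable'

Non-zero Current_Pending_Sector or Reallocated_Sector_Ct counts there
would point at a failing disk rather than at ZFS itself.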

-- 
Alexandr Kovalenko
http://uafug.org.ua/
