zfs problem vdev I/O failure
Konstantin Kuklin
konstantin.kuklin at gmail.com
Sun Apr 24 05:30:43 UTC 2011
Good morning, I have a problem with ZFS:
ZFS filesystem version 4
ZFS storage pool version 15
Yesterday my machine running FreeBSD 8.2-RELEASE shut down with an "ad4 detached"
error while I was copying a big file...
and after the reboot both of my WD Green 1 TB drives said goodbye. One of them
died, and the other shows ZFS errors:
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768960 size=512 error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=311738368 size=21504 error=6
Apr 24 04:53:41 Flash root: ZFS: zpool I/O failure, zpool=zroot error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset= size= error=
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=635155456 size=3072 error=6
Apr 24 04:53:41 Flash root: ZFS: zpool I/O failure, zpool=zroot error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset= size= error=
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=635158528 size=12288 error=6
Apr 24 04:53:41 Flash root: ZFS: zpool I/O failure, zpool=zroot error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset= size= error=
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=635170816 size=512 error=6
Apr 24 04:53:41 Flash root: ZFS: zpool I/O failure, zpool=zroot error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset= size= error=
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=635171328 size=512 error=6
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=635171840 size=512 error=6
Apr 24 04:53:41 Flash root: ZFS: zpool I/O failure, zpool=zroot error=6
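Side note: the error=6 in those messages is a raw errno value; decoding it (a quick check in Python, shown here for reference) gives ENXIO, i.e. the kernel thinks the device is no longer there:

```python
import errno
import os

# "error=6" in the ZFS log lines is a raw errno value.
code = 6
print(errno.errorcode[code])  # -> ENXIO
print(os.strerror(code))      # wording is OS-dependent, e.g.
                              # "Device not configured" on FreeBSD
```

So the vdev failures are the detached disk itself, not on-disk corruption ZFS could route around.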
zpool status:

Flash# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: resilver in progress for 0h6m, 0.00% done, 1582566h29m to go
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED      12     0     1
          mirror                 DEGRADED      36     0     4
            7159451150335751026  UNAVAIL        0     0     0  was /dev/gpt/disk0
            gpt/disk1            ONLINE         0     0    40

errors: 12 data errors, use '-v' for a list
The scrub freezes, and the estimated resilver time keeps growing...
How can I repair this, if 'zpool scrub -s zroot' and detach don't work... and
in fact none of the zfs commands work? =\
Thanks
More information about the freebsd-fs mailing list