Pool I/O failure, zpool=$pool error=$6

Budi Janto budijanto at studiokaraoke.co.id
Mon Feb 1 14:37:27 UTC 2021


Hi,

I need help fixing this ZFS disk failure after "zpool scrub pool", 
which is run once a week.
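
For reference, a weekly scrub like this is usually scheduled from 
root's crontab (or via periodic(8)); a minimal sketch of such a 
crontab entry, assuming the pool is literally named "pool":

# Root's crontab: scrub the pool every Monday at 03:00
0 3 * * 1 /sbin/zpool scrub pool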

# uname -mv
FreeBSD 12.2-STABLE r368820 GENERIC  amd64

# zcat /var/log/messages.1.bz2 | grep ZFS | more
Feb  1 10:17:51 SMD-DB-P1 ZFS[9243]: pool I/O failure, zpool=$pool error=$6
Feb  1 10:17:51 SMD-DB-P1 ZFS[9244]: catastrophic pool I/O failure, zpool=$pool
Feb  1 10:21:58 SMD-DB-P1 ZFS[9278]: pool I/O failure, zpool=$pool error=$6
Feb  1 10:21:58 SMD-DB-P1 ZFS[9279]: catastrophic pool I/O failure, zpool=$pool
Feb  1 11:08:28 SMD-DB-P1 kernel: ZFS filesystem version: 5
Feb  1 11:08:28 SMD-DB-P1 kernel: ZFS storage pool version: features support (5000)
Feb  1 11:08:28 SMD-DB-P1 ZFS[818]: vdev state changed, pool_guid=$1316963245586799881 vdev_guid=$5993430306208938633
Feb  1 11:08:28 SMD-DB-P1 ZFS[820]: vdev state changed, pool_guid=$1316963245586799881 vdev_guid=$3420027568210384620
[... the same pair of "vdev state changed" messages, alternating 
between these two vdev GUIDs, repeats dozens of times within the 
same second ...]
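
Those two vdev GUIDs should map back to the physical disks in the 
pool. A rough sketch of the commands I would use to check (the 
device name ada0 is a placeholder for whatever device the pool 
actually contains):

# zpool status -v pool
# zdb -l /dev/ada0

"zpool status -v" shows pool health and per-device read/write/checksum 
error counts; "zdb -l" dumps the on-disk vdev label, including the 
pool_guid and guid values that appear in the log above.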

After restarting the machine, the HDD was gone from the BIOS 
(undetected). I tried changing the SATA port on the motherboard and 
replacing the SATA cable, but the problem still persists (there is a 
delay in the boot process). My question is: does ZFS scrubbing cause 
this kind of problem, or is it simply a bad hard drive?

FYI, I use two 4 TB Seagate IronWolf drives for my pool, in striped 
mode (no redundancy).
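
In case it is relevant for diagnosis: once the drive is detected 
again, its SMART data should show whether the disk itself is failing. 
A rough sketch using sysutils/smartmontools (the device name ada1 is 
a placeholder for whatever the disk attaches as):

# pkg install smartmontools
# smartctl -a /dev/ada1

Since the pool is a two-disk stripe, there is no redundancy to repair 
from if one disk really has died.

Thanks.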



-- 
Regards,


Budi Janto
