vdev state changed & zfs scrub
Dan Langille
dan at langille.org
Wed Apr 19 15:02:46 UTC 2017
I see this on more than one system:
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
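For what it's worth, all eight messages carry the same pool_guid and eight distinct vdev_guids, i.e. one message per vdev in the pool. A quick sketch for pulling the GUIDs out of the log (Python used here only for illustration; only the first lines are reproduced):

```python
# Group the distinct vdev GUIDs in the syslog messages by pool GUID.
import re

log = """\
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
"""

pattern = re.compile(r"pool_guid=(\d+) vdev_guid=(\d+)")
pools = {}
for pool_guid, vdev_guid in pattern.findall(log):
    pools.setdefault(pool_guid, set()).add(vdev_guid)

for pool_guid, vdevs in sorted(pools.items()):
    print(f"pool {pool_guid}: {len(vdevs)} vdev(s) reported a state change")
```

The GUIDs can then be matched against the output of `zpool status -g <pool>`, which prints vdev GUIDs in place of device names.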
zpool status output includes:

$ zpool status
  pool: system
 state: ONLINE
  scan: scrub in progress since Wed Apr 19 03:12:22 2017
        2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
        0 repaired, 41.94% done
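As a sanity check, the reported time-to-go is consistent with the scanned/total/rate figures (a sketch, assuming the T and M suffixes mean TiB and MiB/s):

```python
# Recompute the scrub ETA from the zpool status figures above.
scanned_tib = 2.59
total_tib = 6.17
rate_mib_s = 64.6

remaining_mib = (total_tib - scanned_tib) * 1024**2  # TiB -> MiB
remaining_s = remaining_mib / rate_mib_s
hours = remaining_s / 3600

# Prints a figure close to the reported "16h9m to go".
print(f"{int(hours)}h{round((hours % 1) * 60)}m")
```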
The timestamps on those messages match the start of the scrub exactly, so the timing is not coincidental.

Why is the vdev state changing when a scrub starts?
Thank you.
--
Dan Langille - BSDCan / PGCon
dan at langille.org