vdev state changed & zfs scrub
avg at FreeBSD.org
Thu Apr 20 11:19:11 UTC 2017
On 20/04/2017 12:39, Johan Hendriks wrote:
> Op 19/04/2017 om 16:56 schreef Dan Langille:
>> I see this on more than one system:
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
>> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
>> zpool status output includes:
>> $ zpool status
>> pool: system
>> state: ONLINE
>> scan: scrub in progress since Wed Apr 19 03:12:22 2017
>> 2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
>> 0 repaired, 41.94% done
>> The timing of the scrub is not coincidental.
>> Why is vdev status changing?
>> Thank you.
> I have the same "issue"; I asked about this on the stable list but did not
> get any reaction.
> In my initial mail it was only one machine running 11.0, the rest was
> running 10.x.
> Now I have upgraded other machines to 11.0 and I see it there also.
Previously, none of the ZFS events were logged at all; that's why you never saw them.
As to those particular events, unfortunately the two GUIDs are all that the event
contains. So, to get the state you have to check it explicitly, for example
with zpool status. It could be that the scrub is simply re-opening the devices,
so the state "changes" from VDEV_STATE_HEALTHY to VDEV_STATE_CLOSED and back to
VDEV_STATE_HEALTHY. You can simply ignore those reports if you don't see any
other signs of trouble.
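If you do want to see which devices the GUIDs refer to, something like the
sketch below should work (the log path and the use of plain zdb output are
assumptions; zdb with no arguments dumps the cached pool configuration, which
lists each vdev's guid next to its path):

```shell
# Pull the vdev GUIDs out of the logged events (log path is an assumption):
grep 'vdev state changed' /var/log/messages | sed 's/.*vdev_guid=//' | sort -u

# zdb without arguments dumps the cached pool config from zpool.cache;
# each vdev entry carries both a guid: and a path: line, so the two can
# be matched up by eye:
zdb | grep -E 'guid:|path:'
```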
Maybe lower the priority of those messages in /etc/devd/zfs.conf...
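For illustration, the statechange rule in /etc/devd/zfs.conf looks roughly like
the sketch below (the exact match keys and action string may differ between
FreeBSD versions, so treat this as an assumption and check your own file);
dropping the logger priority from kern.notice to something like kern.debug
keeps the events available but out of the default syslog view:

```
notify 10 {
	match "system"	"ZFS";
	match "type"	"resource.fs.zfs.statechange";
	# Assumed stock action, with the priority lowered from kern.notice
	# to kern.debug so the default /etc/syslog.conf rules ignore it:
	action "logger -p kern.debug -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};
```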
More information about the freebsd-fs mailing list