vdev state changed & zfs scrub

Johan Hendriks joh.hendriks at gmail.com
Thu Apr 20 09:39:52 UTC 2017


On 19/04/2017 at 16:56, Dan Langille wrote:
> I see this on more than one system:
>
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3558867368789024889
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=3597532040953426928
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8095897341669412185
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=15391662935041273970
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=8194939911233312160
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=4885020496131451443
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=14289732009384117747
> Apr 19 03:12:22 slocum ZFS: vdev state changed, pool_guid=15387115135938424988 vdev_guid=7564561573692839552
>
> zpool status output includes:
>
> $ zpool status
>   pool: system
>  state: ONLINE
>   scan: scrub in progress since Wed Apr 19 03:12:22 2017
>         2.59T scanned out of 6.17T at 64.6M/s, 16h9m to go
>         0 repaired, 41.94% done
>
> The timing of the scrub is not coincidental.
>
> Why is vdev status changing?
>
> Thank you.
>
I have the same "issue"; I asked about this on the stable list but did not
get any reaction.
https://lists.freebsd.org/pipermail/freebsd-stable/2017-March/086883.html
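
For reference, on FreeBSD these log lines come from devd(8), which reacts to
the kernel's ZFS statechange events. The stock /etc/devd/zfs.conf ships a
rule roughly like the following (paraphrased from memory, so treat the exact
priority and wording as an approximation rather than a verbatim copy):

    notify 10 {
            match "system"  "ZFS";
            match "type"    "resource.fs.zfs.statechange";
            action "logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
    };

So each statechange event posted by the kernel while the scrub runs shows up
as one of those "ZFS: vdev state changed" syslog entries.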

In my initial mail it was only one machine running 11.0; the rest were
running 10.x.
Now that I have upgraded other machines to 11.0, I see it there as well.
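
If anyone wants to map those vdev_guid values back to actual devices, a
quick sketch (the pool name "system" is taken from the quoted output; the
device path below is only an example):

    zdb -C system        # dump the cached pool config, which lists each vdev with its guid and path
    zdb -l /dev/da0p3    # print the on-disk label of one provider, including its vdev guid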

regards
Johan Hendriks




