Prioritize resilvering priority
Paul Kraus
paul at kraus-haus.org
Wed Jul 22 20:06:03 UTC 2015
On Jul 22, 2015, at 14:52, javocado <javocado at gmail.com> wrote:
> But I do have:
> vfs.zfs.vdev.max_pending: 10 (dynamic)
> vfs.zfs.scrub_limit: 10 (loader)
>
> So, I think I would want to lower one or both of these to increase I/O
> responsiveness on the system. Correct? How would the 2 play together in
> terms of which to adjust to achieve the best system performance at the
> expense of a longer resilver?
vfs.zfs.vdev.max_pending is the limit on the number of disk I/O operations that can be outstanding for a drive (or, IIRC, in this case a given vdev). There was great debate years ago on the zfs list over tuning this one. The general consensus was that 10 is a good value for modern SATA drives. When I was running 4 SATA drives behind a port multiplier (not a great configuration) I tuned this down to 4 to keep from overwhelming the port multiplier. Tuning it _down_ will reduce overall throughput to a drive, and it does not differentiate between production I/O and scrub / resilver I/O.
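For reference, checking and lowering it at runtime looks roughly like this (just a sketch; the 4 is from my port multiplier example above, not a recommendation for your setup):

    # sysctl vfs.zfs.vdev.max_pending
    vfs.zfs.vdev.max_pending: 10
    # sysctl vfs.zfs.vdev.max_pending=4
    vfs.zfs.vdev.max_pending: 10 -> 4

Since it is dynamic (per your output above), no reboot is needed and you can put it back just as easily if throughput suffers.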
This post: https://forums.freebsd.org/threads/how-to-limit-scrub-bandwidth-vfs-zfs-scrub_limit.31628/
implies that the vfs.zfs.scrub_limit parameter limits the number of outstanding I/O operations, but only for scrub / resilver operations. I would start by tuning it down to 5 or so and watching carefully with iostat -x to see the effect.
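If you want to try that, something along these lines should do it (a sketch; since scrub_limit is a loader tunable per your output, it goes in /boot/loader.conf and takes effect at the next boot):

    # in /boot/loader.conf
    vfs.zfs.scrub_limit="5"

Then run something like:

    # iostat -x 5

while the resilver runs and watch the per-drive queue length and service time columns (names vary a bit by release) to see whether production I/O latency actually improves.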
Note that newer ZFS code addresses the problem of scrub operations starving the rest of the system of I/O. I have not had a problem on either my FreeBSD 9 or 10 systems.
--
Paul Kraus
paul at kraus-haus.org