Scrub incredibly slow with 13.0-RC3 (as well as RC1 & 2)

Michael Gmelin freebsd at grem.de
Fri Mar 26 12:30:23 UTC 2021



On Fri, 26 Mar 2021 10:37:47 +0100
Mathieu Chouquet-Stringer <me+freebsd at mathieu.digital> wrote:

> On Thu, Mar 25, 2021 at 08:55:12AM +0000, Matt Churchyard wrote:
> > Just as an aside, I did post a message a few weeks ago with a
> > similar problem on 13 (as well as snapshot issues). Scrub seemed ok
> > for a short while, but then ground to a halt. It would take 10+
> > minutes to advance 0.01%, with everything appearing fairly idle. I
> > finally gave up and stopped it after about 20 hours. Moving to 12.2
> > and rebuilding the pool, the system scrubbed the same data in an
> > hour, and I've just scrubbed the same system after a month of use,
> > with about 4 times the data, in 3 hours 20 minutes. As far as I'm
> > aware, both should be using effectively the same "new" scrub code.
> >
> > It will be interesting to see if you find a cause, as I didn't get
> > any response to what was, for me, a complete showstopper for moving
> > to 13.
> 
> Bear with me, I'm slowly resilvering now... But it's the same thing:
> it's not even maxing out my slow drives... Looks like it'll take 2
> days...
> 
> I did some flame graphs using dtrace. The first one is just the
> output of:
> dtrace -x stackframes=100 -n 'profile-99 /arg0/ { @[stack()] = count(); } tick-60s { exit(0); }'
> 
> Clearly my machine is not busy at all.
> And the second is the output of pretty much the same thing, except
> I'm only capturing pid 31, which is the busy one:
> dtrace -x stackframes=100 -n 'profile-99 /arg0 && pid == 31/ { @[stack()] = count(); } tick-60s { exit(0); }'
> 
> One striking thing is how many times hpet_get_timecount is present...
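
On the hpet_get_timecount observation: HPET reads are comparatively
expensive, so it may be worth checking which timecounter the kernel
selected. A quick sanity check (just a sketch of an experiment, not a
diagnosis):

  # list the available timecounters and show the one currently in use
  sysctl kern.timecounter.choice
  sysctl kern.timecounter.hardware
  # purely as a test, switch to TSC-low if it appears in the list
  sysctl kern.timecounter.hardware=TSC-low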

Does tuning of

- vfs.zfs.scrub_delay
- vfs.zfs.resilver_min_time_ms
- vfs.zfs.resilver_delay

make a difference?
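
For instance (a rough sketch; these are the legacy tunable names from
12.x, and under 13 with OpenZFS they may have been renamed or removed,
so check that they exist first):

  # see whether the tunables are present and what they are set to
  sysctl vfs.zfs.scrub_delay vfs.zfs.resilver_min_time_ms vfs.zfs.resilver_delay
  # as a test, remove the inter-I/O delays and give the resilver
  # more time per txg
  sysctl vfs.zfs.scrub_delay=0
  sysctl vfs.zfs.resilver_delay=0
  sysctl vfs.zfs.resilver_min_time_ms=5000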

Best,
Michael

-- 
Michael Gmelin

