How to speed up slow zpool scrub?
Jeremy Faulkner
gldisater at gmail.com
Tue Apr 26 15:59:42 UTC 2016
On 2016-04-26 11:08 AM, Miroslav Lachman wrote:
> Jeremy Faulkner wrote on 04/26/2016 17:01:
>> zfs get all tank0
>
> I set checksum=fletcher4 and compression=lz4 (plus atime and exec off);
> everything else is in the default state.
>
> There are 19 filesystems on tank0 and each has about 5 snapshots.
>
> I don't know how long a scrub runs on other systems, or whether it is
> limited by CPU or disk IOPS... but for me, 3 - 4 days is really long.
>
>
> # zfs get all tank0
> NAME   PROPERTY              VALUE                 SOURCE
> tank0  type                  filesystem            -
> tank0  creation              Thu Jul 23 1:37 2015  -
> tank0  used                  7.85T                 -
> tank0  available             2.26T                 -
> tank0  referenced            140K                  -
> tank0  compressratio         1.86x                 -
> tank0  mounted               no                    -
> tank0  quota                 none                  default
> tank0  reservation           none                  default
> tank0  recordsize            128K                  default
> tank0  mountpoint            none                  local
> tank0  sharenfs              off                   default
> tank0  checksum              fletcher4             local
> tank0  compression           lz4                   local
> tank0  atime                 off                   local
> tank0  devices               on                    default
> tank0  exec                  off                   local
> tank0  setuid                on                    default
> tank0  readonly              off                   default
> tank0  jailed                off                   default
> tank0  snapdir               hidden                default
> tank0  aclmode               discard               default
> tank0  aclinherit            restricted            default
> tank0  canmount              on                    default
> tank0  xattr                 on                    default
> tank0  copies                1                     default
> tank0  version               5                     -
> tank0  utf8only              off                   -
> tank0  normalization         none                  -
> tank0  casesensitivity       sensitive             -
> tank0  vscan                 off                   default
> tank0  nbmand                off                   default
> tank0  sharesmb              off                   default
> tank0  refquota              none                  default
> tank0  refreservation        none                  default
> tank0  primarycache          all                   default
> tank0  secondarycache        all                   default
> tank0  usedbysnapshots       0                     -
> tank0  usedbydataset         140K                  -
> tank0  usedbychildren        7.85T                 -
> tank0  usedbyrefreservation  0                     -
> tank0  logbias               latency               default
> tank0  dedup                 off                   default
> tank0  mlslabel                                    -
> tank0  sync                  standard              default
> tank0  refcompressratio      1.00x                 -
> tank0  written               140K                  -
> tank0  logicalused           13.3T                 -
> tank0  logicalreferenced     9.50K                 -
> tank0  volmode               default               default
> tank0  filesystem_limit      none                  default
> tank0  snapshot_limit        none                  default
> tank0  filesystem_count      none                  default
> tank0  snapshot_count        none                  default
> tank0  redundant_metadata    all                   default
>
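For a sense of scale, a back-of-the-envelope sketch: a scrub has to read every
allocated block, so pool usage divided by the sustained read rate bounds the
runtime. The 7.85T figure is from the `zfs get` output above; the 30 MiB/s
effective rate is an assumption for illustration, not a measured value.

```shell
# Rough scrub-time estimate. Scrubs on fragmented pools are often
# seek-bound well below the disks' sequential speed, so the assumed
# 30 MiB/s effective rate is plausible but hypothetical.
used_bytes=$((785 * 1024 * 1024 * 1024 * 1024 / 100))  # 7.85 TiB
rate=$((30 * 1024 * 1024))                             # assumed 30 MiB/s
secs=$((used_bytes / rate))
echo "$((secs / 86400)) days $((secs % 86400 / 3600)) hours"
```

At that assumed rate the estimate lands squarely in the reported 3 - 4 day
window, which points at the scrub being I/O-bound rather than CPU-bound;
`zpool status tank0` prints the actual scan rate while a scrub is running,
so the real number is easy to check.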
Check the drive health with smartctl (part of sysutils/smartmontools).
Are the drives desktop drives or NAS drives? In gstat, is one drive
busier than the rest? If so, replace that drive.
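A minimal sketch of those two checks (the device names ada0-ada5 are
hypothetical; substitute the pool's actual members from `zpool status tank0`):

```shell
#!/bin/sh
# Hypothetical member disks; list the real ones with: zpool status tank0
disks="ada0 ada1 ada2 ada3 ada4 ada5"

# SMART health summary plus the counters that most often explain a slow,
# single-disk bottleneck (pending/reallocated sectors force retries).
for d in $disks; do
    echo "=== $d ==="
    smartctl -a "/dev/$d" | grep -E 'overall-health|Reallocated_Sector|Current_Pending|Offline_Uncorrect|UDMA_CRC'
done

# While the scrub runs, watch per-disk load; one disk sitting near 100 %busy
# with much higher per-request latency than its peers is the likely culprit.
gstat -I 1s
```

Desktop drives without a capped error-recovery time (TLER/ERC) can stall for
tens of seconds retrying a weak sector, and one such drive drags the whole
scrub; NAS-rated drives bound that recovery time, which is why the
desktop-vs-NAS question matters here.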
More information about the freebsd-fs mailing list