Do we want a periodic script for a zfs scrub?

Jeremy Chadwick freebsd at jdc.parodius.com
Thu Jun 10 10:29:22 UTC 2010


On Thu, Jun 10, 2010 at 12:24:00PM +0200, Bernd Walter wrote:
> On Thu, Jun 10, 2010 at 11:23:45AM +0200, Alexander Leidinger wrote:
> > 
> > Quoting Bernd Walter <ticso at cicely7.cicely.de> (from Wed, 9 Jun 2010  
> > 16:43:55 +0200):
> > 
> > >On Wed, Jun 09, 2010 at 04:26:27PM +0200, Alexander Leidinger wrote:
> > >>Hi,
> > >>
> > >>I noticed that we do not have an automatism to scrub a ZFS pool
> > >>periodically. Is there interest in something like this, or shall I
> > >>keep it local?
> > >
> > >For me, scrubbing takes several days even though the pool isn't
> > >especially big, and starting another scrub restarts everything.
> > >You should at least check whether another one is still running.
> > 
> > Good point, I will have a look at this...
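As a concrete illustration of what is being discussed: below is a minimal
sketch of such a periodic-style script, assuming it would live somewhere
like /usr/local/etc/periodic/weekly/. This is not Alexander's actual
script; the "scrub in progress" test just greps for the wording that
"zpool status" prints (as seen further down in this thread).

    #!/bin/sh
    # Hypothetical weekly scrub sketch: start a scrub on every imported
    # pool, but skip pools that are still scrubbing (Bernd's concern).
    for pool in $(zpool list -H -o name); do
            if zpool status "${pool}" | grep -q "scrub in progress"; then
                    echo "Skipping ${pool}: a scrub is already running"
                    continue
            fi
            echo "Starting scrub of ${pool}"
            zpool scrub "${pool}"
    done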
> > 
> > But I'm a little bit surprised: when I scrub a pool of 3 times 250 GB
> > disks in RAIDZ configuration, it finishes quickly (a fraction of a
> > day... maybe an hour or two). Initially it displays a very long time
> > (>400 hours), but this drops drastically after a while. The pool is
> > filled to about 3/4 of its capacity.
> 
> Well - my system is not idle during the scrub and I don't have very
> fast disks either.
> The system runs with 2x 4x500G RAIDZ.
> The disks are consumer-grade SATA.
> Controllers are onboard Intel AHCI and SiI 3132.
> The OS is 8.0-RC1 (r198183), therefore I'm still using the ata driver.
> 
> That's at scrub start:
> [115]cicely14# zpool status
>   pool: data
>  state: ONLINE
>  scrub: scrub in progress for 0h0m, 0.00% done, 2275h55m to go
> config:
> 
>         NAME             STATE     READ WRITE CKSUM
>         data             ONLINE       0     0     0
>           raidz1         ONLINE       0     0     0
>             ad34         ONLINE       0     0     0
>             ad12         ONLINE       0     0     0
>             ad28         ONLINE       0     0     0
>             ad26         ONLINE       0     0     0
>           raidz1         ONLINE       0     0     0
>             ad4          ONLINE       0     0     0
>             ad6          ONLINE       0     0     0
>             ad36         ONLINE       0     0     0
>             ad10         ONLINE       0     0     0
>         cache
>           label/cache6   ONLINE       0     0     0
>           label/cache7   ONLINE       0     0     0
>           label/cache8   ONLINE       0     0     0
>           label/cache9   ONLINE       0     0     0
>           label/cache10  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> ETA first increases:
> [116]cicely14# zpool status
>   pool: data
>  state: ONLINE
>  scrub: scrub in progress for 0h0m, 0.00% done, 2539h19m to go
> 
> Then gets smaller:
> [117]cicely14# zpool status
>   pool: data
>  state: ONLINE
>  scrub: scrub in progress for 0h1m, 0.00% done, 1551h38m to go
> 
> [120]cicely14# zpool status
>   pool: data
>  state: ONLINE
>  scrub: scrub in progress for 0h2m, 0.00% done, 1182h20m to go
> 
> But it may get higher again:
> [121]cicely14# zpool status
>   pool: data
>  state: ONLINE
>  scrub: scrub in progress for 0h6m, 0.01% done, 1346h41m to go
> 
> I don't remember how long the last scrub took, but IIRC it was
> about 2-3 days, so the initial ETA is much higher than reality
> there too.

You're running an 8.0 release candidate.  There have been some changes
to scrubbing and other whatnots with ZFS between then and now.  I'd
recommend trying RELENG_8 and seeing if the behaviour remains.  You
don't have to use ahci.ko (you can stick with ataahci.ko).

By "behaviour" I'm referring to how long the scrub is taking.  The
variance you see in ETA is normal.  You can verify that things aren't
stalled blindly by using "zpool iostat" (there should be fairly
intensive I/O).
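For example, using the pool name from the output above (the 5-second
interval is arbitrary):

    # Print pool-wide I/O statistics for "data" every 5 seconds; during a
    # healthy scrub the read bandwidth should stay consistently high.
    # Add -v to break the numbers down per vdev/disk.
    zpool iostat data 5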

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |


