ZFS: drive replacement performance
Freddie Cash
fjwcash at gmail.com
Tue Jul 7 22:32:29 UTC 2009
On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith <mahlon at martini.nu> wrote:
> On Tue, Jul 07, 2009, Freddie Cash wrote:
> >
> > This is why we've started using glabel(8) to label our drives, and then
> > add the labels to the pool:
> > # zpool create store raidz1 label/disk01 label/disk02 label/disk03
> >
> > That way, it doesn't matter where the kernel detects the drives or what
> > the physical device node is called; GEOM picks up the label, and ZFS
> > uses the label.
>
> Ah, slick. I'll definitely be doing that moving forward. Wonder if I
> could do it piecemeal now via a shell game, labeling and replacing each
> individual drive? Will put that on my "try it" list.
>
Yes, this can be done piecemeal, after the fact, on an already configured
pool. That's how I did it on one of our servers. It was originally
configured using the device node names (da0, da1, etc.). Then I set up the
second server, but used labels. Then I went back to the first server,
labelled the drives, and did "zpool replace storage da0 label/disk01" for
each drive. It doesn't take long to resilver, as it knows that it's the
same device.
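
For reference, each drive went roughly like this (a sketch rather than a
verbatim transcript from that server; the pool name, device node, and label
name are just examples, so adjust them to your own setup):

# glabel label disk01 da0
# zpool replace storage da0 label/disk01

Wait for the resilver of that drive to finish before moving on to the next
one.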
>
>
> > > Once I swapped drives, I issued a 'zpool replace'.
> > >
> > See comment at the end: what's the replace command that you used?
>
>
> After the reboot that shuffled device order, the 'da2' changed to that
> ID number. To have it accept the replace command, I had to use the
> number itself -- I couldn't use 'da2' since that was now elsewhere, in
> use, on the raidz1. Surprisingly, it worked. Or at least, it appeared
> to.
>
> % zpool replace store 2025342973333799752 da8
>
Hmm, you might be able to use glabel here to label this new drive, and then
do the replace command using the label.
I think (never tried) you can use "zpool scrub -s store" to stop the
resilver. If not, you should be able to re-do the replace command.
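
In other words, something along these lines (untested here, and "disk08" is
just an example label name for the new da8 drive):

# zpool scrub -s store
# glabel label disk08 da8
# zpool replace store 2025342973333799752 label/disk08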
>
>
> > There's something wrong here. It definitely should be incrementing.
> > Even when we did the foolish thing of creating a 24-drive raidz2 vdev
> > and had to replace a drive, the progress bar did change. Never got
> > above 39% as it kept restarting, but it did increment.
>
> Strangely, the ETA is jumping all over the place, from 50 hours to 2000+
> hours. Never seen the percent complete over 0.01% done, but then it
> goes back to 0.00%.
>
Hrm, odd.
--
Freddie Cash
fjwcash at gmail.com