ZFS: drive replacement performance

Dan Naumov dan.naumov at gmail.com
Tue Jul 7 22:40:06 UTC 2009


On Wed, Jul 8, 2009 at 1:32 AM, Freddie Cash <fjwcash at gmail.com> wrote:
> On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith <mahlon at martini.nu> wrote:
>
>> On Tue, Jul 07, 2009, Freddie Cash wrote:
>> >
>> > This is why we've started using glabel(8) to label our drives, and
>> > then add the labels to the pool:
>> >   # zpool create store raidz1 label/disk01 label/disk02 label/disk03
>> >
>> > That way, it doesn't matter where the kernel detects the drives or
>> > what the physical device node is called: GEOM picks up the label,
>> > and ZFS uses the label.
>>
>> Ah, slick.  I'll definitely be doing that moving forward.  Wonder if I
>> could do it piecemeal now via a shell game, labeling and replacing each
>> individual drive?  Will put that on my "try it" list.
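
For what it's worth, I imagine the shell game would look something
like this for each drive in turn (untested sketch; the pool name
"store", device ad4, and label name disk01 are made up, and zpool
replace may refuse to replace a disk with its own labeled provider):

  # zpool offline store ad4
  # glabel label -v disk01 /dev/ad4
  # zpool replace store ad4 label/disk01
  # zpool status store

and then wait for the resilver to complete before moving on to the
next drive.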

Not to derail this discussion, but can anyone explain whether the
glabel metadata is actually protected in any way? If I use glabel to
label a disk and then create a pool using /dev/label/disklabel, won't
ZFS eventually overwrite the glabel metadata in the last sector, since
the disk in its entirety is given to the pool? Or is every filesystem
used by FreeBSD (UFS, ZFS, etc.) hardcoded to ignore the last few
sectors of any disk or partition and never write data there, to avoid
such issues?
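
I suppose one way to check would be to compare the size of the raw
device with the size of the label provider (untested sketch; the
device ad4 and label name disk01 are made up):

  # glabel label -v disk01 /dev/ad4
  # diskinfo /dev/ad4
  # diskinfo /dev/label/disk01
  # glabel status

If /dev/label/disk01 reports one sector less than /dev/ad4, the
metadata sector would sit outside what ZFS is given, and presumably
couldn't be overwritten by the pool.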


- Sincerely,
Dan Naumov

