Converting a non-HAST ZFS pool to a HAST pool

Freddie Cash fjwcash at gmail.com
Tue Nov 2 22:29:55 UTC 2010


On Tue, Nov 2, 2010 at 3:06 PM, Pawel Jakub Dawidek <pjd at freebsd.org> wrote:
> On Tue, Nov 02, 2010 at 02:52:45PM -0700, Freddie Cash wrote:
>> Okay, so converting a non-HAST ZFS setup to a HAST setup using the
>> same drives won't work.
>>
>> Any reason that it wouldn't work when replacing the drives with larger ones?
>>
>>  - zpool offline poolname label/disk01
>>  - physically replace drive
>>  - glabel drive as disk01
>>  - configure hast to use label/disk01
>>  - zpool replace poolname label/disk01 hast/disk01
>>
>> I can't think of any reason why it would fail, since the hast device
>> will be twice as large as the non-hast device it's replacing.  But
>> thought I'd double-check, just to be safe.  :)
>
> Yes, this should work.
>
>> Granted, doing it this way would require a *long* initial sync, as
>> there's currently 18 TB of data in the pool.  And more going in every
>> day.  So it might be better to start fresh.
>
> If you mean HAST initial sync, then this should be now improved in
> r214284:
>
>        Before this change on first connect between primary and
>        secondary we initialize all the data. This is huge waste of time
>        and resources if there were no writes yet, as there is no real
>        data to synchronize.
>
>        Optimize this by sending "virgin" argument to secondary, which
>        gives it a hint that synchronization is not needed.
>
>        In the common case (where both nodes are configured at the same
>        time) instead of synchronizing everything, we don't synchronize
>        at all.
>
> The change is not yet merged to stable/8, AFAIR, but this will happen
> today or tomorrow.
>
> You still need to wait for ZFS to copy the data over to the new vdev.

Ah, I see what you mean.  I was originally thinking of replacing the
drives in the "master" server, then configuring the "slave" server,
and then syncing the two, which would take aeons.

Instead, it can be done in a piecemeal fashion (rough command sketch
below):
  - configure the slave server using the new hardware (all 1 TB
    drives, for example)
  - replace one drive in the master server, and configure that drive
    as a HAST device
      - use the HAST device to replace the non-HAST device in the
        ZFS pool, which causes ZFS to resilver it
      - this also starts a HAST sync *of just that one drive*
      - wait for the resilver and the sync to complete
  - replace the next drive in the master server, following the same
    process
  - repeat until all drives in the master server have been replaced
    with larger drives, using /dev/hast/* devices
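
For each drive, the conversion step would look something along these
lines (the resource name, hostnames, and pool name are just
placeholders, and I haven't tested this yet):

  # /etc/hast.conf on both nodes, one resource per physical drive
  resource disk01 {
          on master {
                  local /dev/label/disk01
                  remote slave
          }
          on slave {
                  local /dev/label/disk01
                  remote master
          }
  }

  # on the slave (hastd running on both nodes):
  hastctl create disk01
  hastctl role secondary disk01

  # on the master, after the physical swap and glabel:
  hastctl create disk01
  hastctl role primary disk01
  zpool replace poolname label/disk01 hast/disk01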

At that point, the master server should be using HAST devices for
the entire ZFS pool, with a bunch of extra free space as a bonus.
*And*, the HAST devices should all be sync'd up with the slave
server.
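
A quick sanity check at that point (assuming the resource names
above) would be something like:

  zpool status poolname   # every vdev should now be a hast/* device
  zpool list poolname     # extra space may not appear until after an
                          # export/import of the pool
  hastctl status          # dirty counts should be 0 once synced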

During this process, I'd have to make sure that no automatic
fail-over is set up, as it needs to sync master-->slave first.  But
once the sync is done, the CARP and fail-over setup can be done.
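
Something like this is what I have in mind for the CARP side
(addresses, vhid, and password below are made up; FreeBSD 8.x-style
carp(4) interface):

  # /etc/rc.conf on each node
  cloned_interfaces="carp0"
  ifconfig_carp0="vhid 1 pass mysecret 10.0.0.100/24"

with a devd(8) rule watching the carp0 LINK_UP / LINK_DOWN events
that runs a small script to flip each HAST resource between primary
and secondary (hastctl role ...) and import/export the pool on the
appropriate node.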

-- 
Freddie Cash
fjwcash at gmail.com

