Converting a non-HAST ZFS pool to a HAST pool

Freddie Cash fjwcash at gmail.com
Tue Nov 2 21:52:46 UTC 2010


On Sat, Oct 16, 2010 at 3:28 PM, Pawel Jakub Dawidek <pjd at freebsd.org> wrote:
> On Fri, Oct 15, 2010 at 11:37:34AM -0700, Freddie Cash wrote:
>> Has anyone looked into, attempted, or considered converting a non-HAST
>> ZFS pool configuration into a HAST one?  While the pool is live and
>> the server is in use.  Would it even be possible?
>>
>> For example, would the following work (in a pool with a single raidz2
>> vdev, where the underlying GEOM providers are glabel labels)?
>>   - zpool offline one drive  (pool is now running degraded)
>>   - configure hastd in primary mode with a single resource using the
>> "offline" disk (HAST metadata takes the place of the glabel metadata)
>
> HAST metadata takes much more space than glabel metadata.  The latter
> takes only one sector, while the former depends on the provider size: we
> have to keep the entire extent bitmap there, so it is definitely more
> than one sector.

Okay, so converting a non-HAST ZFS setup to a HAST setup using the
same drives won't work.
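
Just to put a rough number on it: if I have the defaults right (2 MB
extents), then for, say, a 2 TB provider the dirty-extent bitmap alone
works out to roughly

  2 TB / 2 MB per extent  =  ~1 million extents
  ~1 million bits / 8     =  ~128 KiB of bitmap

plus the rest of the on-disk metadata, versus the single sector glabel
uses.  So there's simply no room to slide HAST in underneath an
existing, fully-used label.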

Is there any reason it wouldn't work when replacing the drives with
larger ones, one at a time?  Something like this (a fuller sketch
follows the list):

 - zpool offline poolname label/disk01
 - physically replace drive
 - glabel drive as disk01
 - configure hast to use label/disk01
 - zpool replace poolname label/disk01 hast/disk01
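
Spelled out for one drive, this is roughly what I have in mind.  The
hostnames (nodeA/nodeB) and the device name (ada1) are just
placeholders, and I haven't actually run this exact sequence yet:

  # take the old drive out of the pool (pool runs degraded)
  zpool offline poolname label/disk01

  # physically swap in the larger drive; say it shows up as ada1
  glabel label disk01 ada1

  # /etc/hast.conf on both nodes, something along these lines:
  resource disk01 {
          on nodeA {
                  local /dev/label/disk01
                  remote nodeB
          }
          on nodeB {
                  local /dev/label/disk01
                  remote nodeA
          }
  }

  # on both nodes: initialize the resource metadata, then start hastd
  hastctl create disk01
  /etc/rc.d/hastd onestart         # or hastd_enable="YES" + start

  # pick roles
  hastctl role primary disk01      # on this node
  hastctl role secondary disk01    # on the other node

  # resilver onto the HAST provider
  zpool replace poolname label/disk01 hast/disk01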

I can't think of any reason why it would fail: even after HAST carves
out its metadata, the hast device will still be roughly twice as large
as the non-HAST device it's replacing.  But I thought I'd double-check,
just to be safe.  :)
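
For the actual double-check: zpool replace will refuse outright if the
new device is too small, and the size of the HAST provider can be
checked beforehand with something like

  diskinfo -v /dev/hast/disk01

comparing the reported mediasize against the old drive's size.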

Granted, doing it this way would require a *long* initial sync, as
there's currently 18 TB of data in the pool, and more going in every
day.  So it might be better to start fresh.
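
On the bright side, the pool should stay usable while the sync runs,
and (if I'm remembering the hastctl output correctly) progress can be
watched on the primary with

  hastctl status disk01

where the "dirty" byte count should shrink toward zero as the
secondary catches up.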

-- 
Freddie Cash
fjwcash at gmail.com

