ZFS on HAST and reboot.

Johan Hendriks joh.hendriks at gmail.com
Wed Sep 7 20:35:07 UTC 2011


On Monday, 5 September 2011 10:49:37, Pawel Jakub Dawidek wrote:
> On Fri, Sep 02, 2011 at 03:26:42PM +0200, Johan Hendriks wrote:
>> Hello all.
>>
>> I just started using ZFS on top of HAST.
>>
>> What I did was first glabel my disks as disk1 through disk3 (the steps
>> are sketched after the config below).
>> Then I created my HAST devices in /etc/hast.conf.
>>
>> /etc/hast.conf looks like this:
>> resource disk1 {
>>     on srv1 {
>>         local /dev/label/disk1
>>         remote 192.168.5.41
>>     }
>>     on srv2 {
>>         local /dev/label/disk1
>>         remote 192.168.5.40
>>     }
>> }
>> resource disk2 {
>>     on srv1 {
>>         local /dev/label/disk2
>>         remote 192.168.5.41
>>     }
>>     on srv2 {
>>         local /dev/label/disk2
>>         remote 192.168.5.40
>>     }
>> }
>> resource disk3 {
>>     on srv1 {
>>         local /dev/label/disk3
>>         remote 192.168.5.41
>>     }
>>     on srv2 {
>>         local /dev/label/disk3
>>         remote 192.168.5.40
>>     }
>> }
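(The labeling and resource-creation steps mentioned above are not shown in
the post; on each host they would look roughly like the following, where
ada1 through ada3 are assumed device names, not taken from the original
message:)

# glabel label disk1 ada1
# glabel label disk2 ada2
# glabel label disk3 ada3
# hastctl create disk1
# hastctl create disk2
# hastctl create disk3
# /etc/rc.d/hastd onestart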
>>
>> This works.
>> I can set srv1 to primary and srv2 to secondary and vice versa with
>> hastctl role primary all and hastctl role secondary all.
>>
>> Then I created the raidz pool on the master, srv1:
>> zpool create storage raidz1 hast/disk1 hast/disk2 hast/disk3
>>
>> All looks good:
>> zpool status
>>     pool: storage
>>    state: ONLINE
>>    scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 31 20:49:19 2011
>> config:
>>
>>           NAME            STATE     READ WRITE CKSUM
>>           storage         ONLINE       0     0     0
>>             raidz1-0      ONLINE       0     0     0
>>               hast/disk1  ONLINE       0     0     0
>>               hast/disk2  ONLINE       0     0     0
>>               hast/disk3  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> Then I created the mountpoint and a ZFS filesystem on it:
>> # mkdir /usr/local/virtual
>> # zfs create storage/virtual
>> # zfs list
>> # zfs set mountpoint=/usr/local/virtual storage/virtual
>>
>> # /etc/rc.d/zfs start, and whooop, there is my /usr/local/virtual ZFS
>> filesystem:
>> # mount
>> /dev/ada0p2 on / (ufs, local, journaled soft-updates)
>> devfs on /dev (devfs, local, multilabel)
>> storage on /storage (zfs, local, nfsv4acls)
>> storage/virtual on /usr/local/virtual (zfs, local, nfsv4acls)
>>
>> If I do a zpool export -f storage on srv1, change the HAST role to
>> secondary, then set the HAST role on srv2 to primary and do a zpool
>> import -f storage, I can see the files on srv2.
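(Laid out per host, that manual switch-over amounts to roughly the
following, using the same commands as in the text, just grouped by
machine:)

On srv1 (current primary):
# zpool export -f storage
# hastctl role secondary all

On srv2 (new primary):
# hastctl role primary all
# zpool import -f storage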
>>
>> I am a happy camper :D
>>
>> So it works as advertised.
>> Then I rebooted both machines, and all was still working fine.
>>
>> But if I reboot the server srv1 again, I cannot import the pool
>> anymore; it tells me the pool is already imported.
>> I do run the carp-hast-switch master script via ifstated, and it does
>> set the HAST role to primary, but it cannot import the pool.
>> That may well be correct, because I did not export the pool first.
>> If I then do a /etc/rc.d/zfs start, the filesystem gets mounted and the
>> pool is available again.
>>
>> Is there a way I can do this automatically?
>> My understanding is that after a reboot ZFS tries to start, but fails
>> because my HAST providers are not yet ready.
>> Or am I doing something wrong; should I not do it this way?
>> Can I tell ZFS to start only after the HAST providers have become
>> primary following a reboot?
>
> You see the message that the pool is already imported because, when you
> reboot the primary, there is still info about the pool in
> /boot/zfs/zpool.cache. Pools mentioned in this file are automatically
> imported on boot (by the kernel), so importing such a pool will fail.
> You should still be able to mount the file systems (zfs mount -a).
>
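(In the reboot scenario described above, that would amount to something
along these lines on the rebooted primary, instead of a zpool import; the
role command is taken from the carp-hast-switch setup mentioned earlier:)

# hastctl role primary all
# zfs mount -a
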
> What I'd recommend is not to use /etc/rc.d/zfs to mount file systems
> from pools managed by HAST. Instead, such pools should be imported by a
> script executed from the HA software when it decides the node should be
> primary.
>
> Also, I'd recommend avoiding adding info about HAST pools to the
> /boot/zfs/zpool.cache file. You can do that by adding the '-c none'
> option to 'zpool import'. This will tell ZFS not to cache info about the
> pool in zpool.cache.
>
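A minimal sketch of the kind of switch-over hook Pawel describes, using
the pool and resource names from this thread; the argument convention and
the way it is wired into carp-hast-switch/ifstated are assumptions, not
taken from the post:

#!/bin/sh
# Failover hook run by the HA software on a role change.
case "$1" in
master)
        # Become primary: promote the HAST resources first, then
        # import the pool by hand instead of relying on /etc/rc.d/zfs.
        # (Keeping the pool out of zpool.cache is the subject of the
        # follow-up below.)
        hastctl role primary all
        zpool import -f storage
        ;;
slave)
        # Become secondary: release the pool before demoting HAST.
        zpool export -f storage
        hastctl role secondary all
        ;;
esac
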

Thanks for your answer.

One thing I cannot seem to get working is the -c none option:
# zpool import -c none storage
failed to open cache file: No such file or directory

It looks like it treats 'none' as a literal cache file name, rather than
not searching for or caching the pool as advertised.
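
An alternative form that might be worth trying, assuming the cachefile
pool property is supported by this ZFS version, is to pass it at import
time instead of via -c:

# zpool import -o cachefile=none storage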

Gr
johan

