ZFS on HAST and reboot.

Johan Hendriks joh.hendriks at gmail.com
Fri Sep 2 13:48:33 UTC 2011


Hello all.

I just started using ZFS on top of HAST.

What I did first was glabel my disks as disk1 through disk3.
Then I created my hast devices in /etc/hast.conf

My /etc/hast.conf looks like this:
resource disk1 {
        on srv1 {
                local /dev/label/disk1
                remote 192.168.5.41
        }
        on srv2 {
                local /dev/label/disk1
                remote 192.168.5.40
        }
}
resource disk2 {
        on srv1 {
                local /dev/label/disk2
                remote 192.168.5.41
        }
        on srv2 {
                local /dev/label/disk2
                remote 192.168.5.40
        }
}
resource disk3 {
        on srv1 {
                local /dev/label/disk3
                remote 192.168.5.41
        }
        on srv2 {
                local /dev/label/disk3
                remote 192.168.5.40
        }
}
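For completeness, here is a hypothetical sketch of how the resources above would be initialized before use (the resource names come from the hast.conf above; this is an assumption about my setup steps, not a verbatim log):

```shell
# On both nodes: write HAST metadata onto each local provider
# and start the hastd daemon, then set the role per node.
for r in disk1 disk2 disk3; do
        hastctl create $r
done
/etc/rc.d/hastd onestart
hastctl role primary all     # on srv1; use "secondary" on srv2
```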

This works.
I can set srv1 to primary and srv2 to secondary and vice versa, with
hastctl role primary all and hastctl role secondary all.

Then I created the raidz pool on the master, srv1:
zpool create storage raidz1 hast/disk1 hast/disk2 hast/disk3

All looks good:
zpool status
   pool: storage
  state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 31 20:49:19 2011
config:

         NAME            STATE     READ WRITE CKSUM
         storage         ONLINE       0     0     0
           raidz1-0      ONLINE       0     0     0
             hast/disk1  ONLINE       0     0     0
             hast/disk2  ONLINE       0     0     0
             hast/disk3  ONLINE       0     0     0

errors: No known data errors

Then I created the mountpoint and created a ZFS filesystem on it:
# mkdir /usr/local/virtual
# zfs create storage/virtual
# zfs list
# zfs set mountpoint=/usr/local/virtual storage/virtual

# /etc/rc.d/zfs start — and whoop, there is my /usr/local/virtual ZFS
filesystem.
# mount
/dev/ada0p2 on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
storage on /storage (zfs, local, nfsv4acls)
storage/virtual on /usr/local/virtual (zfs, local, nfsv4acls)

If I do a zpool export -f storage on srv1, change the HAST role to
secondary, then set the HAST role on srv2 to primary and do zpool
import -f storage, I can see the files on srv2.
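The manual failover sequence above can be sketched as follows (a sketch assuming the pool and resource names used in this mail):

```shell
# On srv1 (current primary): release the pool and demote HAST
zpool export -f storage
hastctl role secondary all

# On srv2: promote HAST, then take over the pool
hastctl role primary all
zpool import -f storage
```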

I am a happy camper :D

So it works as advertised.
Then I rebooted both machines, and all was still working fine.

But if I reboot server srv1 again, I cannot import the pool anymore;
it tells me the pool is already imported.
I do load the carp-hast-switch master script with ifstated, which sets
the HAST role to primary, but it cannot import the pool.
That may well be expected, because I did not export it.
If I then run /etc/rc.d/zfs start, the pool gets mounted and is
available again.

Is there a way I can do this automatically?
My understanding is that after a reboot ZFS tries to start but fails
because my HAST providers are not yet ready.
Or am I doing something wrong and should I not do it this way?
Can I tell ZFS to start only after the HAST providers have become
primary at boot?
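One possible approach (an untested sketch, not confirmed for my setup): instead of letting /etc/rc.d/zfs race the HAST providers at boot, have the carp-hast-switch "master" branch import the pool itself, after the hast devices have appeared:

```shell
# In the "become master" branch of the failover script:
hastctl role primary all

# Wait until the HAST character devices exist before touching the pool
while [ ! -c /dev/hast/disk1 ]; do
        sleep 1
done

# -f because the pool was not cleanly exported before the reboot
zpool import -f storage
```

The corresponding idea would be to not auto-mount the pool from rc at boot at all, so only the node that wins the CARP election ever imports it.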

I hope I explained it correctly.
Thanks for your time.

regards
Johan Hendriks