zpool export/import on failover - The pool metadata is corrupted
Ronald Klop
ronald-freebsd8 at klop.yi.org
Fri Jun 7 08:03:51 UTC 2013
On Thu, 06 Jun 2013 21:24:34 +0200, mxb <mxb at alumni.chalmers.se> wrote:
>
> Hello list,
>
> I have two-head ZFS setup with external disk enclosure over SAS expander.
> This is a failover setup with CARP and devd triggering spool
> export/import.
> One of two nodes is preferred master.
>
> When the master is rebooted, devd kicks in as CARP fails over, and the
> second node picks up the ZFS disks from the external enclosure.
> When the master comes back, CARP makes it master again, devd kicks in,
> and the pool is exported from the second node and imported on the first
> one.
>
> However, I have experienced metadata corruption several times with this
> setup.
> Note that the ZIL (mirrored) resides on the external enclosure. Only the
> L2ARC is both local and external: da1, da2, da13s2, da14s2.
>
> root at nfs2:/root # zpool import
>    pool: jbod
>      id: 17635654860276652744
>   state: FAULTED
>  status: The pool metadata is corrupted.
>  action: The pool cannot be imported due to damaged devices or data.
>    see: http://illumos.org/msg/ZFS-8000-72
>  config:
>
>         jbod          FAULTED  corrupted data
>           raidz3-0    ONLINE
>             da3       ONLINE
>             da4       ONLINE
>             da5       ONLINE
>             da6       ONLINE
>             da7       ONLINE
>             da8       ONLINE
>             da9       ONLINE
>             da10      ONLINE
>             da11      ONLINE
>             da12      ONLINE
>         cache
>           da1
>           da2
>           da13s2
>           da14s2
>         logs
>           mirror-1    ONLINE
>             da13s1    ONLINE
>             da14s1    ONLINE
>
> Any ideas what is going on?
>
> //mxb
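For reference, a failover of this kind is usually wired up with a devd rule
that matches CARP state transitions and runs a script. A minimal sketch is
below; the script path, interface, vhid, and devd match strings are
illustrative guesses, not taken from mxb's actual configuration, so verify
them against your FreeBSD version:

```shell
#!/bin/sh
# /usr/local/sbin/zfs-failover.sh -- sketch of a script devd could invoke
# on a CARP transition (hypothetical path; $1 is MASTER or BACKUP).
#
# Matching /etc/devd.conf rule (hypothetical, adjust to your system):
#   notify 30 {
#       match "system"    "CARP";
#       match "subsystem" "1@em0";
#       action "/usr/local/sbin/zfs-failover.sh $type";
#   };

POOL="jbod"   # pool name from the thread

case "$1" in
MASTER)
    # Become active: import the pool unless it is already imported here.
    zpool list "$POOL" >/dev/null 2>&1 || zpool import "$POOL"
    ;;
BACKUP)
    # Stand down: export cleanly so the peer can import without -f.
    zpool list "$POOL" >/dev/null 2>&1 && zpool export "$POOL"
    ;;
esac
```

Note that nothing in such a script guarantees the other node has finished
its export before the import runs; if both nodes ever have the pool
imported at the same time, metadata corruption like the output above is
the expected outcome.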
As far as I know, the clustered Oracle ZFS Appliance handles this by
rebooting the other node, which forcibly releases the pool.
The mechanism is called STONITH: http://en.wikipedia.org/wiki/STONITH
Ronald.
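A STONITH-style fence for a two-node setup like this could, for example,
power-cycle the peer out-of-band before forcing the import. A rough sketch
follows; the BMC address, credentials, and the assumption that the peer
has an IPMI controller are all hypothetical and must be adapted to the
actual hardware:

```shell
#!/bin/sh
# Sketch: fence the peer before a forced takeover (hypothetical BMC
# address and credentials; assumes an out-of-band IPMI interface).
PEER_BMC="192.0.2.10"
POOL="jbod"

# Power-cycle the peer so it cannot keep writing to the shared disks.
ipmitool -I lanplus -H "$PEER_BMC" -U admin -P secret chassis power cycle \
    || exit 1   # if we cannot fence the peer, do NOT touch the pool

# Only once the peer is provably down is a forced import reasonably safe.
zpool import -f "$POOL"
```

The point of the fence is to replace the optimistic "peer has surely
exported by now" assumption with a hard guarantee that the peer cannot
write.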
More information about the freebsd-fs mailing list