Best practice for high availability ZFS pool

InterNetX - Juergen Gotteswinter jg at internetx.com
Tue May 17 07:47:17 UTC 2016


Hi,

Am 5/16/2016 um 12:08 PM schrieb Palle Girgensohn:
> Hi,
> 
> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
> 
> I can see a few paths to follow.
> 
> 1. HAST + ZFS

Don't do this; it has already been discussed some time ago. AFAIK
nothing has changed since then:

https://lists.freebsd.org/pipermail/freebsd-fs/2014-October/020084.html

> 
> 2. Some sort of shared storage, two machines sharing a JBOD box.

Take care when choosing the SAS HBA and expander, and avoid putting
SATA disks behind SAS expanders.

With dual-expander JBODs you will be able to build an HA setup, but I
highly recommend avoiding any home-brew solutions. Go for RSF-1.
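
Roughly what the disk-level plumbing could look like on the FreeBSD
side, as a minimal sketch assuming GEOM multipath and placeholder
device/label/pool names (da2/da3, disk01, tank); RSF-1 itself handles
the pool import/export on failover:

    # On each head, check that the JBOD disks are visible over both
    # expander paths (each physical disk shows up twice, e.g. da2/da3):
    camcontrol devlist

    # Label each disk once with GEOM multipath so both paths get used
    # (disk02 etc. labeled the same way):
    gmultipath label -v disk01 /dev/da2 /dev/da3

    # Build the pool on the multipath providers and import it on one
    # head at a time only; the cluster software enforces that:
    zpool create tank mirror multipath/disk01 multipath/disk02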


> 
> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
> 
> 4. using something else than ZFS, even a different OS if required.
> 
> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
> 

Performance isn't the real problem; check the older discussion
mentioned above.

> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even any support for the kind of storage PCI cards that allow dual heads on a storage box? I cannot find any.
> 

The JBODs are just dumb pieces of metal with an expander mounted; so
far, I have never had a broken one.

> We are running with ZFS replication today, but it is just too slow for the amount of data.
> 

Replicate more often to keep the delta between snapshots as small as
possible? Maybe even use a 10G crosslink between the boxes, if possible?
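
A minimal sketch of what that could look like, assuming placeholder
dataset, host and snapshot names (tank/data, standby, repl-*), run from
cron at whatever interval keeps the deltas small:

    # Take a new recursive snapshot (names here are placeholders):
    NOW=$(date +%Y%m%d%H%M)
    zfs snapshot -r tank/data@repl-$NOW

    # Send only the delta since the previous snapshot; the shorter the
    # interval, the smaller each incremental stream:
    zfs send -R -i tank/data@repl-prev tank/data@repl-$NOW | \
        ssh standby zfs receive -F tank/data

    # After a successful receive, the new snapshot becomes the base for
    # the next run (snapshot rotation/bookkeeping omitted here).

If ssh rather than the link turns out to be the bottleneck, a faster
cipher or piping the stream through mbuffer usually helps.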


> We prefer to keep ZFS as we already have a rather big (~30 TB) pool, and our tools, scripts and backups all use ZFS, but if there is no solution using ZFS, we're open to alternatives. Nexenta springs to mind, but I believe it is using shared storage for redundancy, so it does have single points of failure?
> 
> Any other suggestions? Please share your experience. :)
> 
> Palle
> 

