Best practice for high availability ZFS pool

Palle Girgensohn girgen at FreeBSD.org
Tue May 17 16:20:04 UTC 2016


> On 17 May 2016, at 18:13, Joe Love <joe at getsomewhere.net> wrote:
> 
> 
>> On May 16, 2016, at 5:08 AM, Palle Girgensohn <girgen at FreeBSD.org> wrote:
>> 
>> Hi,
>> 
>> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
>> 
>> I can see a few paths to follow.
>> 
>> 1. HAST + ZFS
>> 
>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>> 
>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>> 
>> 4. Using something other than ZFS, even a different OS if required.
>> 
>> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
>> 
>> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even any support for the kind of PCI storage cards that provide dual-head access to a storage box? I cannot find any.
>> 
>> We are running with ZFS replication today, but it is just too slow for the amount of data.
>> 
>> We would prefer to keep ZFS, as we already have a rather big (~30 TB) pool, and our tools, scripts and backups all use ZFS; but if there is no solution using ZFS, we're open to alternatives. Nexenta springs to mind, but I believe it uses shared storage for redundancy, so it does have a single point of failure?
>> 
>> Any other suggestions? Please share your experience. :)
>> 
>> Palle
>> 
> 
> I don’t know if this falls into the realm of what you want, but BSDMag just released an issue with an article entitled “Adding ZFS to the FreeBSD dual-controller storage concept.”
> https://bsdmag.org/download/reusing_openbsd/
> 
> My understanding of this setup is that the only single point of failure in this model is the backplanes the drives connect to.  Depending on your controller cards, this could be alleviated by simply using multiple drive shelves and putting only one drive per shelf into each vdev (then striping or otherwise combining your vdevs).
> 
> It might not be what you’re after, as it’s basically two systems, each with its own controller, sharing a set of drives.  Expanding from the virtual world to real physical systems will probably require some additional variations.
> I think the TrueNAS system (with HA) is set up similarly to this, only without the drives being split primarily between separate controllers, but someone with more in-depth knowledge would need to confirm or deny this.
> 
> -Joe
> 


This is actually very interesting, IMO.

It is simple and easy to understand. The problem is that I haven't found any suitable controller cards for it. I think this is what Nexenta does, as well as TrueNAS with their HA versions.
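
If I understand the drive-shelf idea correctly, the key point is that no
vdev should contain more than one disk from any given shelf, so losing a
backplane only degrades the pool. With two shelves the layout would look
roughly like the sketch below (device names are purely illustrative):

    # shelf A exposes da0-da3, shelf B exposes da4-da7 (hypothetical)
    # each mirror vdev pairs one disk from each shelf
    zpool create tank \
        mirror da0 da4 \
        mirror da1 da5 \
        mirror da2 da6 \
        mirror da3 da7

Losing an entire shelf then leaves every mirror with one healthy side.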

I'll check out the article, thanks!
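
For context, the replication we run today (option 3 in my first mail) is
just the usual snapshot + incremental send pipeline, along these lines
(pool, dataset and host names are illustrative):

    # take a new snapshot and ship only the delta since the previous one
    zfs snapshot tank/data@2016-05-17
    zfs send -i tank/data@2016-05-16 tank/data@2016-05-17 | \
        ssh standby zfs receive -F tank/data

It works, but as I wrote above, it is simply too slow for the amount of
data we have.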

Palle
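
PS. For anyone comparing with option 1 (HAST + ZFS): as far as I
understand it, the layering would be roughly as sketched below, with the
zpool built on top of the /dev/hast/* providers. Resource, host and
device names are only illustrative.

    # /etc/hast.conf, identical on both nodes (illustrative)
    resource disk0 {
            on nodeA {
                    local /dev/da0
                    remote nodeB
            }
            on nodeB {
                    local /dev/da0
                    remote nodeA
            }
    }

    # on both nodes
    hastctl create disk0
    sysrc hastd_enable=YES
    service hastd start

    # on the current primary only
    hastctl role primary disk0
    zpool create hapool /dev/hast/disk0

My performance worry remains that, depending on the replication mode,
every write has to be acknowledged by the secondary node before it
completes.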


