HAST + ZFS + NFS + CARP

Borja Marcos borjam at sarenet.es
Wed Aug 17 09:15:53 UTC 2016


> On 17 Aug 2016, at 11:11, krad <kraduk at gmail.com> wrote:
> 
> I totally agree here, I would use some batch replication in general. Yes, it doesn't provide the HA you require, but if you need that, maybe a different approach like a distributed file system is a better solution. Even then, though, I would still have my standard replication to a box not part of the distributed filesystem, via rsync or something, just for ass covering. Admittedly this gets problematic when the datasets have large deltas and/or objects.

If your deltas are large you need a network with enough bandwidth to support them anyway. And rsync can be a nightmare depending on
the number of files you keep and their sizes. That’s an advantage of ZFS: in simple terms, an incremental send just copies a portion
of a transaction log together with its associated data blocks, so the number of files does not hurt performance nearly as much as it
does with rsync, which can become unusable.
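
To make that concrete, here is a minimal sketch of snapshot-based incremental replication (the pool, dataset and host names are
invented for the example):

  # one-time full replication of an initial snapshot
  zfs snapshot tank/data@base
  zfs send tank/data@base | ssh backuphost zfs receive -F backup/data

  # later cycles only ship blocks changed since the previous snapshot
  zfs snapshot tank/data@today
  zfs send -i tank/data@base tank/data@today | ssh backuphost zfs receive backup/data

The cost of the incremental send depends on the size of the delta between the two snapshots, not on how many files live in the
dataset.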

And if you have real-time replication requirements (databases, for instance), using the built-in mechanisms of your DBMS
will generally be more robust.
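
As an illustration only (no particular DBMS is being recommended here), PostgreSQL's streaming replication can seed a standby
with a single command, after which the standby follows the primary's WAL in near real time:

  # run on the standby; assumes a replication role already exists on the primary
  pg_basebackup -h primary.example.org -U replicator -D /var/db/postgres/data -R -X stream

Most other DBMSs ship an equivalent built-in mechanism.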




Borja.



