Best practice for high availability ZFS pool

Ben RUBSON ben.rubson at gmail.com
Wed May 18 07:27:52 UTC 2016


> On 17 May 2016 at 19:06, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> 
> On Tue, 17 May 2016, Ben RUBSON wrote:
> 
>>> On 17 May 2016 at 15:24, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
>>> 
>>> There is at least one case of zfs send propagating a problem into the receiving pool. I don't know if it broke the pool.  Corrupt data may be sent from one pool to another if it passes checksums.
>> 
>> Do you have any link to this problem? It would be interesting to know whether it was possible to revert to a previous snapshot / consistent pool.
> 
> I don't have a link but I recall that it had something to do with the ability to send file 'holes' in the stream.

OK, just for reference: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=207714
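
For what it's worth, one can at least check whether the hole_birth feature implicated in that report is active on a pool, and spot-check files after a receive. A minimal sketch, assuming a pool named "tank" replicated to "backup" (both names, and the paths, are placeholders):

    # Is the hole_birth feature enabled/active on this pool?
    zpool get feature@hole_birth tank

    # Spot-check a file on both sides after a send/receive:
    sha256 -q /tank/data/somefile
    sha256 -q /backup/data/somefile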

>> I think that ZFS send/receive offers a higher level of safety than mirroring to a second (or third) JBOD box.
>> With mirroring you will still have only one ZFS pool.
> 
> This is a reasonable assumption.
> 
>> However, if send/receive makes the receiving pool an exact 1:1 copy of the sending pool, then whatever corrupted the sending pool could reach (and corrupt) the receiving pool... I don't know whether this could occur, and if it ever does, whether we would have the chance to revert to a previous snapshot, at least on the receiving side...
> 
> Zfs receive does not result in a 1:1 copy.  The underlying data organization can be completely different and compression or other options can be changed.

Yes, so if we assume ZFS send/receive is bug-free, having a second pool which receives the data of the first one (itself mirrored across different JBOD boxes) makes sense.
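
To illustrate Bob's point that the receiving pool need not be a 1:1 copy: a plain send stream (without -p or -R) does not carry dataset properties, so the received dataset picks up e.g. compression from its new parent. A sketch, with hypothetical pool and dataset names:

    # On the receiving side, give the destination different on-disk options:
    zfs set compression=lz4 backup

    # Initial replication:
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | zfs receive backup/data

    # Later, incremental replication of only the changed blocks:
    zfs snapshot tank/data@snap2
    zfs send -i @snap1 tank/data@snap2 | zfs receive backup/data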

For the first pool, we could think about the following (a rough sketch follows below):
- server1 with its JBOD as an iSCSI target;
- server2, with the exact same JBOD, acts as the iSCSI initiator and hosts a ZFS pool which mirrors each of server2's disks with one of server1's disks.
If server2 ever fails, server1 imports the pool and brings the service back up.
When server2 comes back, it acts as the new iSCSI target and gives its disks to server1, which reconstructs the mirror.
Disk redundancy, and hardware redundancy.
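
On FreeBSD 10+ this could be wired up with the native iSCSI stack: ctld(8) on the target side and iscsictl(8) on the initiator side. A rough sketch with made-up addresses, target names and device nodes, not a tested configuration:

    # /etc/ctl.conf on server1, exporting one JBOD disk per LUN:
    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.0.2.1
    }
    target iqn.2016-05.org.example:jbod1 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            path /dev/da0
        }
    }

    # On server2: attach the remote disk, then mirror it with a local one
    # (the remote disk appears as a new daX device, da8 here as an example):
    iscsictl -A -p 192.0.2.1 -t iqn.2016-05.org.example:jbod1
    zpool create tank mirror da0 da8

The failover itself (server1 importing the pool when server2 dies) would still need some fencing/arbitration on top of this, e.g. CARP plus scripting, so that both heads never import the pool at once.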

And regularly, this pool is sent/received to a different pool on server3, you never know...
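
The snapshots retained on server3's pool are the safety net here: if a bad stream ever gets replicated, the receiving side can be rolled back past it. A hypothetical nightly job (hostnames and snapshot labels are placeholders):

    # On whichever server currently holds the pool, each night:
    zfs snapshot tank/data@2016-05-18
    zfs send -i @2016-05-17 tank/data@2016-05-18 | \
        ssh server3 zfs receive backup/data

    # If the latest replication turns out to be bad, on server3
    # (-r destroys the newer, bad snapshot on the destination):
    zfs rollback -r backup/data@2016-05-17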

Sounds good (to me at least :)

Ben

