ZFS...

Michelle Sullivan michelle at sorbs.net
Tue Apr 30 09:05:56 UTC 2019



Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 30 Apr 2019, at 18:44, rainer at ultra-secure.de wrote:
> 
> On 2019-04-30 10:09, Michelle Sullivan wrote:
> 
>> Now, yes, most production environments have multiple backing stores, so
>> there will be a server or ten to switch to whilst the store is being
>> recovered, but it still wouldn’t be a pleasant experience... not to
>> mention that if one store is corrupted there is a chance the other
>> store(s) would be affected in the same way if they are in the same DC
>> (e.g. a DC fire, which I have seen)... and if you have multi-DC stores
>> to protect against that, the size of the pipes between DCs clearly
>> comes into play.
> 
> 
> I have one customer with about 13T of ZFS - and because it would take a while to restore from actual backups, it zfs-sends delta snapshots every hour to a standby system.
> 
> It was handy when we had to rebuild the system with different HBAs.
> 
> 

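For reference, the hourly delta-snapshot replication described above is usually just an incremental zfs send piped into a zfs receive on the standby box. A minimal sketch (the pool, dataset, host and snapshot names here are made up):

    # on the primary: take the hourly snapshot, then send only the delta
    # since the previous hour's snapshot to the standby
    zfs snapshot tank/data@hourly-10
    zfs send -i tank/data@hourly-09 tank/data@hourly-10 | \
        ssh standby zfs receive -F backup/data

The standby only ever has to absorb an hour's worth of changed blocks, which is what makes it a workable stand-in for a slow full restore.
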
I wonder what would happen if you scaled that up by just 10x (storage) and had the master blow up so that it needed to be restored from backup... how long would one be praying to higher powers that there is no problem with the backup? (As in, no outage or error causing a complete outage.) Don’t get me wrong, we all get to that position at some time, but in my recent experience two issues colliding at the same time results in disaster.

13T is really not something I have issues with, as I can usually cobble something together with 16T (at least, that was true until 6T drives became a viable option in terms of cost and availability at short notice, and even 10T is becoming easier to get hold of now). But I have a measly 96T here, and when I need to restore it, it takes weeks even with gigabit bonded interfaces.
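
For scale, a rough back-of-envelope on the 96T case (assuming, say, two bonded gigabit links actually sustaining 2 Gbit/s end to end, which a real restore rarely does):

    96 TB ~= 96 x 10^12 bytes x 8 ~= 7.7 x 10^14 bits
    at 2 Gbit/s: 7.7 x 10^14 / (2 x 10^9) ~= 385,000 s ~= 4.5 days, flat out
    at a few hundred Mbit/s of effective restore throughput (backup software,
    small files, verification passes), that stretches to multiple weeks

So "weeks" is not an exaggeration once real-world restore throughput is factored in.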

