Daniel Kalchev daniel at digsys.bg
Tue Apr 30 13:53:01 UTC 2019

> On 30 Apr 2019, at 16:11, Karl Denninger <karl at denninger.net> wrote:
> My experience is that ZFS is materially more-resilient but there is no
> such thing as "can never be corrupted by any set of events."  Backup
> strategies for moderately large (e.g. many Terabytes) to very large
> (e.g. Petabytes and beyond) get quite complex but they're also very
> necessary.

I can only second that statement. Being paranoid with your data (keeping many copies, having many backups) is never enough.

A colleague complained just the other day that they had lost a zpool and that ZFS didn’t save their data… They had built a non-redundant pool, the hard drive trashed its heads, and there were no backups. The unreadable area of the drive fell in the metadata, so the pool cannot be imported.

I keep an HDD around that, since it was brand new, has run flawlessly under every OS. Rock solid, that is… except that only ZFS complains it reads back data it didn’t write. Before that, I would have thought UFS was fine; since then, I don’t build a single installation that does not have at least a mirrored ZFS pool. And “archive servers” (meaning backup servers) have become the central focus of my work. These are never enough.
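That kind of silent corruption is exactly what a mirrored pool plus regular scrubs guards against: with a second copy of every block, ZFS can detect a checksum mismatch on one side and repair it from the other. A minimal sketch, assuming hypothetical FreeBSD disk names (ada1, ada2) and a pool name of "tank" — adjust for your own hardware:

```shell
# Create a pool that can survive one failed (or lying) disk.
# ada1 and ada2 are placeholder device names.
zpool create tank mirror ada1 ada2

# Periodically re-read every block and verify it against its checksum;
# on a mirror, bad copies are rewritten from the good side.
zpool scrub tank

# Show only pools with problems; checksum errors found during the
# scrub appear in the CKSUM column of the full status output.
zpool status -x tank
```

On a single-disk (non-redundant) pool a scrub can still detect such errors, but there is no second copy to repair from — which is why the pool above is a mirror.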


More information about the freebsd-stable mailing list