borjam at sarenet.es
Tue Apr 30 15:37:31 UTC 2019
> On 30 Apr 2019, at 15:30, Michelle Sullivan <michelle at sorbs.net> wrote:
>> I'm sorry, but that may well be what nailed you.
>> ECC is not just about the random cosmic ray. It also saves your bacon
>> when there are power glitches.
> No. Sorry no. If the data is only half to disk, ECC isn't going to save you at all... it's all about power on the drives to complete the write.
Not necessarily. Depending on the power outage, things can get really strange during the power loss event. 25+ years ago I witnessed
a severe 2-second voltage drop, and during that time the hard disk in our SCO Unix server went really crazy. Even the low-level format
was corrupted; the damage was way beyond mere filesystem corruption.
During the start of a power outage (especially when it's not a clean power cut but is preceded by some voltage swings), data
corruption can be extensive. As far as I know, high-end systems include power management elements to reduce the impact.
I have other war stories about UPS systems providing an extremely dirty waveform and causing format problems in disks. That happened
in 1995 or so.
>> Unfortunately however there is also cache memory on most modern hard
>> drives, most of the time (unless you explicitly shut it off) it's on for
>> write caching, and it'll nail you too. Oh, and it's never, in my
>> experience, ECC.
> No comment on that - you're right in the first part, I can't comment if there are drives with ECC.
Even with cache corruption, ZFS, being transaction-oriented, should offer a reasonable guarantee of integrity. You may lose
one minute or five minutes of changes, but there should be stable, committed data on the disk.
Unless the electronics went insane for some milliseconds during the outage event (see above).
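To illustrate the transaction rewind idea: when a pool won't import cleanly after an unclean shutdown, ZFS can be asked to discard the last few transaction groups and fall back to the most recent consistent state. A minimal sketch (the pool name "tank" is hypothetical; this is an administrative command outline, not something from the thread):

```shell
# Dry run first: -n with -F reports whether a recovery-mode
# rewind would succeed, without actually importing the pool.
zpool import -Fn tank

# Recovery-mode import: rolls the pool back to the last
# consistent transaction group, potentially discarding the
# final few seconds of writes (the "1 minute, 5 minutes of
# changes" trade-off described above).
zpool import -F tank
```

This is exactly the trade-off being made: you sacrifice the most recent uncommitted writes in exchange for a pool that imports in a consistent state.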
>> Oh that is definitely NOT true.... again, from hard experience,
>> including (but not limited to) on FreeBSD.
>> My experience is that ZFS is materially more-resilient but there is no
>> such thing as "can never be corrupted by any set of events."
> The latter part is true - and my blog and my current situation is not limited to or aimed at FreeBSD specifically, FreeBSD is my experience. The former part... it has been very resilient, but I think (based on this certain set of events) it is easily corruptible and I have just been lucky. You just have to hit a certain write to activate the issue, and whilst that write and issue might be very very difficult (read: hit and miss) to hit in normal every day scenarios it can and will eventually happen.
>> strategies for moderately large (e.g. many Terabytes) to very large
>> (e.g. Petabytes and beyond) get quite complex but they're also very
> and therein lies the problem. If you don't have a many 10's of thousands of dollars backup solutions, you're either:
> 1/ down for a looooong time.
> 2/ losing all data and starting again...
> ..and that's the problem... ufs you can recover most (in most situations) and providing the *data* is there uncorrupted by the fault you can get it all off with various tools even if it is a complete mess.... here I am with the data that is apparently ok, but the metadata is corrupt (and note: as I had stopped writing to the drive when it started resilvering the data - all of it - should be intact... even if a mess.)
The advantage of ZFS is that it makes it feasible to replicate data. If you keep a mirrored storage server, your disaster recovery won't require restoring a full backup (which can take an inordinate amount of time), just reconfiguring the replica server to assume the role of the master.
Again, being transaction-based somewhat reduces the likelihood that a software bug on the master will propagate to the slave and cause extensive corruption. Rewinding to
the previous snapshot should help.
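The replication-plus-rewind workflow above can be sketched as follows (pool, dataset, and host names are hypothetical; this outlines the standard zfs send/recv mechanism rather than any setup described in the thread):

```shell
# Initial full replication from the master to the standby host:
zfs snapshot tank/data@hourly-01
zfs send tank/data@hourly-01 | ssh replica zfs recv backup/data

# Subsequent runs send only the incremental delta between snapshots:
zfs snapshot tank/data@hourly-02
zfs send -i tank/data@hourly-01 tank/data@hourly-02 | \
    ssh replica zfs recv backup/data

# If a bug on the master did propagate corruption, rewind the
# replica to the last known-good snapshot instead of restoring
# a full backup:
ssh replica zfs rollback backup/data@hourly-01
```

Because each snapshot is an immutable, committed transaction state, the replica keeps a chain of recovery points; promoting it to master is then a configuration change rather than a multi-hour restore.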
More information about the freebsd-stable mailing list