how to measure microsd wear

Karl Denninger karl at denninger.net
Sun Jan 22 04:29:37 UTC 2017


On 1/21/2017 18:24, Hal Murray wrote:
> karl at denninger.net said:
>> and this one is not a low-hour failure either, nor is it an off-brand --
>> it's a Sandisk Ultra 32Gb and the machine has roughly a year of 24x7x365
>> uptime on it.
> Any idea how many writes it did?
Offhand, no.  I did not expect this particular device to have a problem
given its workload, but it did.  It could have been a completely random
event (e.g. a cosmic ray hits the "wrong" place in the controller's mapping
tables, damages the data in a critical way, and the controller throws
up its hands and says "screw you, it's over.")  There's no real way to
know - the card is effectively junk as the controller has write-locked
it, so all I can do (and did) is get the config files and application it
runs under the OS off it and put them on the new one.

The other failures were less-surprising; in particular the box on my
desk, given that I compile on it frequently and that produces a lot of
small write I/O activity, didn't shock me all that much when it failed.

One of the big problems with NAND flash (in any form) is that it can
only be written to "zeros."  That is, a blank page is all "1s" at a bit
level, and a write actually just writes the zeros.  This leads to what
is called "write amplification" because changing one byte in a page
requires reading the page in and writing an entire new page out, then
(usually later) erasing the former page; you cannot update in-place.  If
a page is 4k in size then writing a single byte results in an actual
write of 4k bytes, or ~4,000 times as much as you think you wrote.  This
is also one of the reasons that random small-block write performance is
much slower than big writes; if you write an even multiple of an on-card
block the controller can simply lay down the new data onto pre-erased
space, whereas if you write small pieces of data it cannot do that and
winds up doing a lot of read/write cycling.  It gets worse (by a lot) if
there's file metadata to update with each write as well because that
metadata almost-certainly winds up carrying a (large) amount of write
amplification irrespective of the file data itself.  All of this is a
big part of why write I/O performance to these cards for actual
filesystem use is stinky in the general case compared against
pretty-much anything else.
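As a rough illustration of the arithmetic above -- the 4k page size and
the "round up to a whole page" model are simplifying assumptions; real
controllers add mapping-table and metadata writes on top of this:

```python
# Rough model of NAND write amplification (illustrative only; real
# controllers also rewrite mapping tables and filesystem metadata).
PAGE = 4096  # assumed flash page size in bytes


def physical_bytes_written(logical_bytes: int, page: int = PAGE) -> int:
    """A sub-page update forces a read-modify-write of every page it touches."""
    pages_touched = -(-logical_bytes // page)  # ceiling division
    return pages_touched * page


def amplification(logical_bytes: int, page: int = PAGE) -> float:
    """Ratio of bytes physically written to bytes the caller asked to write."""
    return physical_bytes_written(logical_bytes, page) / logical_bytes


# Changing a single byte rewrites a whole 4 KiB page: ~4,000x amplification.
print(amplification(1))         # 4096.0
# A write that is an even multiple of the page size has no amplification.
print(amplification(8 * PAGE))  # 1.0
```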

The controller's internal logic has much voodoo in it from a user's
perspective; the manufacturers consider exactly how they do what they do
to be proprietary and simply present to you an opaque block-level
interface.  There are rumors that some controllers "know" about certain
filesystems (specifically exFAT) and are optimized for it, which implies
they may behave less-well if you're using something else.  How true this
actually might be is unknown but a couple of years ago I had a card that
appeared dead -- until it was reformatted with exFAT, at which point it
started working again.  I didn't trust it, needless to say.

SSDs typically have a published endurance rating and a reasonable
interface to get a handle on how much "wear" they have experienced. 
I've never seen either in any meaningful form for SD cards of any sort. 
In addition SSDs can (and do) "cheat" in that they all have RAM in them
and thus can collate writes together before physically committing them
in some instances, plus they typically will report that a write is
"complete" when it is in RAM (and not actually in NAND!)  Needless to
say if there's no proper power protection sufficient to flush that RAM
if the power fails unexpectedly very bad things will happen to your
data, and very few SSDs have said proper power protection (Intel 7xx and
3xxx series are two that are known to do this correctly; I have a bunch
of the 7xx series drives in service and have never had a problem with
any of them even under intentional cord-yank scenarios intended to test
their power-loss protection.)  I'm unaware of SD cards that do any of
this and I suspect their small size precludes it, never mind that they
were not designed for a workload where this would be terribly useful. 
The use envisioned for most SD cards, and their intent when designed, is
the sequential writing of anywhere from large to huge files (video or
still pictures) and the later sequential reading back of same, all under
some form of a FAT filesystem (exFAT for the larger cards now available.)
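For an SSD that does publish an endurance rating, turning the published
TBW figure and the SMART "Total LBAs Written" counter (as reported by
smartctl, for example) into a wear estimate is simple arithmetic -- a
sketch, with made-up example numbers:

```python
# Sketch: fraction of an SSD's rated endurance consumed, from its published
# TBW rating and the SMART total-LBAs-written counter.  The drive rating and
# counter value below are hypothetical examples, not real measurements.
LBA_SIZE = 512  # bytes per LBA, the usual SMART reporting unit


def endurance_used(total_lbas_written: int, rated_tbw: float) -> float:
    """Fraction of rated endurance consumed (0.0 = new, 1.0 = at rating)."""
    bytes_written = total_lbas_written * LBA_SIZE
    return bytes_written / (rated_tbw * 1e12)


# Hypothetical drive rated for 200 TBW with ~50 TB written so far:
print(f"{endurance_used(97_656_250_000, 200.0):.1%}")  # 25.0%
```

SD cards expose no comparable counter, which is the point of the
paragraph above: there is no equivalent calculation you can do for them.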

IMHO the best you can do with these cards in this application is to
minimize writes to the extent you can, especially small and frequent
writes of little actual value (e.g. mount with noatime!) and make sure
you can reasonably recover from failures in a rational fashion.
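A minimal example of the noatime advice as an fstab entry -- the device
name here is a placeholder; substitute whatever your board actually
boots from:

```
# /etc/fstab -- device node is hypothetical; adjust for your system.
# Device          Mountpoint  FStype  Options     Dump  Pass
/dev/ufs/rootfs   /           ufs     rw,noatime  1     1
```

With noatime set, merely reading a file no longer triggers a metadata
write to update its access time, eliminating one common source of the
small, frequent writes discussed above.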

-- 
Karl Denninger
karl at denninger.net <mailto:karl at denninger.net>
/The Market Ticker/
/[S/MIME encrypted email preferred]/


More information about the freebsd-arm mailing list