Re: Re: Desperate with 870 QVO and ZFS

From: <egoitz_at_ramattack.net>
Date: Fri, 08 Apr 2022 17:41:11 UTC
Hi Stefan, 

Again extremely grateful. It's an absolute honor to receive your help..
really.... 

I have read this mail now but I need to read it slower and in a more
relaxed way.... When I do that I'll answer you (during the weekend or on
Monday at most). 

Don't worry I will keep you updated with news :) :) . I promise :) :) 

Cheers!

On 2022-04-08 13:14, Stefan Esser wrote:

> On 07.04.22 at 14:30, egoitz@ramattack.net wrote: On 2022-04-06 23:49, Stefan Esser wrote: 
> 
> On 2022-04-06 17:43, Stefan Esser wrote: 
> 
> On 06.04.22 at 16:36, egoitz@ramattack.net wrote: Hi Rainer!
> 
> Thank you so much for your help :) :)
> 
> Well, I assume they are in a datacenter, so there should not be power outages....
> 
> About dataset size... yes... ours are big... they can easily be 3-4 TB each
> dataset.....
> 
> We bought them because they are for mailboxes, and mailboxes grow and
> grow.... so we needed space for hosting them... 
> Which mailbox format (e.g. mbox, maildir, ...) do you use? 
> 
> I'M RUNNING CYRUS IMAP SO SORT OF MAILDIR... TOO MANY LITTLE FILES NORMALLY..... SOMETIMES DIRECTORIES WITH TONS OF LITTLE FILES....

Assuming that many mails are much smaller than the erase block size of
the SSD, this may cause issues. (You may know the following ...) 

For example, if you have message sizes of 8 KB and an erase block size
of 64 KB (just guessing), then 8 mails will be in an erase block. If
half the mails are deleted, then the erase block will still occupy 64
KB, but only hold 32 KB of useful data (and the SSD will only be aware
of this fact if TRIM has signaled which data is no longer relevant). The
SSD will copy several partially filled erase blocks together in a
smaller number of free blocks, which then are fully utilized. Later
deletions will repeat this game, and your data will be copied multiple
times until it has aged (and the user is less likely to delete further
messages). This leads to "write amplification" - data is internally
moved around and thus written multiple times. 

STEFAN!! YOU ARE NICE!! I THINK THIS COULD EXPLAIN OUR WHOLE PROBLEM. SO,
THAT'S WHY WE SEE THE MOST RANDOMNESS IN OUR PERFORMANCE DEGRADATION, AND
WHY IT DOES NOT NECESSARILY HAVE TO MATCH THE PEAK IO HOURS... AND WHY I
COULD CAUSE THAT PERFORMANCE DEGRADATION JUST BY DELETING A COUPLE OF
HUGE MAIL FOLDERS (PERHAPS 200,000 MAILS) AT A MID-TRAFFIC HOUR!!

Yes, if deleting large amounts of data triggers performance issues (and
the disk does not have a deficient TRIM implementation), then the issue
is likely to be due to internal garbage collections colliding with other
operations.

>> THE PROBLEM IS THAT, AS FAR AS I KNOW, THE ERASE BLOCK SIZE OF AN SSD IS SOMETHING FIXED IN THE DISK FIRMWARE. I DON'T REALLY KNOW IF IT COULD PERHAPS BE MODIFIED WITH SAMSUNG MAGICIAN OR THAT KIND OF SAMSUNG TOOL.... OTHERWISE I DON'T REALLY SEE A WAY OF IMPROVING IT... BECAUSE, APART FROM THAT, YOU ARE DELETING A FILE IN A RAIDZ-2 ARRAY... NOT JUST ON ONE DISK... I ASSUME THAT ALIGNING CHUNK SIZE WITH RECORD SIZE AND WITH THE "SECRET" ERASE SIZE OF THE SSD COULD PERHAPS COMPENSATE FOR IT SLIGHTLY?.

The erase block size is a fixed hardware feature of each flash chip.
There is a block size for writes (e.g. 8 KB) and many such blocks are
combined in one erase block (of e.g. 64 KB, probably larger in today's
SSDs); they can only be returned to the free block pool all together.
And if some of these writable blocks hold live data, they must be
preserved by collecting them in newly allocated free blocks. 

An example of what might happen, showing a simplified layout of files 1,
2, 3 (with writable blocks 1a, 1b, ..., 2a, 2b, ... and "--" for stale
data of deleted files, ".." for erased/writable flash blocks) in an SSD
might be: 

erase block 1: |1a|1b|--|--|2a|--|--|3a| 

erase block 2: |--|--|--|2b|--|--|--|1c| 

erase block 3: |2c|1d|3b|3c|--|--|--|--| 

erase block 4: |..|..|..|..|..|..|..|..| 

This is just a random example of how data could be laid out on the
physical storage array. It is assumed that the 3 erase blocks once were
completely occupied.

In this example, 10 of 32 writable blocks are occupied, and only one
free erase block exists. 

This situation cannot be allowed to persist, since the SSD needs more
empty erase blocks. Only 10/32 of the capacity is used for data, but 3/4
of the erase blocks are occupied and not immediately available for new
data. 

The garbage collection might combine erase blocks 1 and 3 into a
currently free one, e.g. erase block 4: 

erase block 1: |..|..|..|..|..|..|..|..| 

erase block 2: |--|--|--|2b|--|--|--|1c| 

erase block 3: |..|..|..|..|..|..|..|..| 

erase block 4: |1a|1b|2a|3a|2c|1d|3b|3c| 

Now only 2/4 of the capacity is not available for new data (which is
still a lot more than 10/32, but better than before). 

Now assume file 2 is deleted:

erase block 1: |..|..|..|..|..|..|..|..| 

erase block 2: |--|--|--|--|--|--|--|1c| 

erase block 3: |..|..|..|..|..|..|..|..| 

erase block 4: |1a|1b|--|3a|--|1d|3b|3c| 

There is now a new sparsely used erase block 4, and it will soon need to
be garbage collected, too - in fact it could be combined with the live
data from erase block 2, but this may be delayed until there is demand
for more erased blocks (since e.g. file 1 or 3 might also have been
deleted by then). 

The garbage collection does not know which data blocks belong to which
file, and therefore it cannot collect the data belonging to a file into
a single erase block. Blocks are allocated as data comes in (as long as
enough SLC cells are available in this area, else directly in QLC
cells). Your many parallel updates will cause fractions of each larger
file to be spread out over many erase blocks. 

As you can see, a single file that is deleted may affect many erase
blocks, and you have to take redundancy into consideration, which will
multiply the effect by a factor of up to 3 for small files (one ZFS
allocation block). And remember: deleting a message in mdir format will
free the data blocks, but will also remove the directory entry, causing
additional meta-data writes (again multiplied by the raid redundancy). 

A consumer SSD would normally see only very few parallel writes, and
sequential writes of full files will have a high chance to put the data
of each file contiguously in the minimum number of erase blocks,
allowing multiple complete erase blocks to be freed when such a file is
deleted, and thus obviating the need for many garbage collection copies
(that occur if data from several independent files is in one erase
block). 

Actual SSDs have many more cells than advertised. Some 10% to 20% may be
kept as a reserve for aging blocks, e.g. blocks that may have failed a
kind of "read-after-write test" (implemented in the write function, which adds
charges to the cells until they return the correct read-outs). 

BTW: Having an ashift value that is lower than the internal write block
size may also lead to higher write amplification values, but a large
ashift may lead to more wasted capacity, which may become an issue if
typical file lengths are much smaller than the allocation granularity
that results from the ashift value. 
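
As a rough sketch of how to inspect and set this (pool and device names
are hypothetical; the ashift pool property assumes a recent OpenZFS):

  # Show the ashift currently in effect for a pool (alternatively:
  # zdb -C mypool | grep ashift)
  zpool get ashift mypool

  # ashift is fixed per vdev at creation time, so it has to be chosen
  # when the pool (or a new vdev) is created; 12 means 4 KB allocations
  zpool create -o ashift=12 mypool raidz2 da0 da1 da2 da3 da4 da5 da6 da7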

>> Larger mails are less of an issue since they span multiple erase blocks, which will be completely freed when such a message is deleted. 
>> 
>> I SEE I SEE STEFAN... 
>> 
>> Samsung has a lot of experience and generally good strategies to deal with such a situation, but SSDs specified for use in storage systems might be much better suited for that kind of usage profile. 
>> 
>> YES... AND THE DISKS FOR OUR PURPOSE... PERHAPS WEREN'T QVOS....

You should have got (much more expensive) server grade SSDs, IMHO. 

But even 4 * 2 TB QVO (or better EVO) drives in place of each 8 TB QVO drive
would result in better performance (but would need a lot of extra SATA
ports). 

In fact, I'm not sure whether rotating media and a reasonable L2ARC
consisting of a fast M.2 SSD plus a mirror of small SSDs for a LOG
device would not be a better match for your use case. Reading the L2ARC
would be very fast, writes would be purely sequential and relatively
slow, you could choose a suitable L2ARC strategy (caching of file data
vs. meta data), and the LOG device would support fast fsync() operations
required for reliable mail systems (which confirm data is on stable
storage before acknowledging the reception to the sender).
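
A rough sketch of that alternative layout (device names are
hypothetical, assuming a recent OpenZFS on FreeBSD):

  # Add a fast NVMe partition as L2ARC (read cache) and a small mirrored
  # pair of SSDs as a dedicated LOG device for synchronous writes
  zpool add mypool cache nvd0p1
  zpool add mypool log mirror ada1p1 ada2p1

  # Choose the L2ARC strategy per dataset: cache metadata only, or everything
  zfs set secondarycache=metadata mypool/mail

The secondarycache property is what selects between caching file data
and only meta data, as mentioned above.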

> We knew they had some speed issues, but we thought (as Samsung explains on
> the QVO site) those issues only started after exceeding the speed buffer
> these disks have. We thought that as long as you didn't exceed its capacity
> (the capacity of the speed buffer) no speed problem would arise. Perhaps
> we were wrong?. 
> These drives are meant for small loads in a typical PC use case,
> i.e. some installations of software in the few GB range, else only
> files of a few MB being written, perhaps an import of media files
> that range from tens to a few hundred MB at a time, but less often
> than once a day. 
> 
> WE MOVE, YOU KNOW... LOTS OF LITTLE FILES... AND LOTS OF DIFFERENT CONCURRENT MODIFICATIONS BY THE 1500-2000 CONCURRENT IMAP CONNECTIONS WE HAVE...

I do not expect the read load to be a problem (except possibly when the
SSD is moving data from SLC to QLC blocks, but even then reads will get
priority). But writes and trims might very well overwhelm the SSD,
especially when it's getting full. Keeping a part of the SSD unused
(excluded from the partitions created) will lead to a large pool of
unused blocks. This will reduce the write amplification - there are many
free blocks in the "unpartitioned part" of the SSD, and thus there is
less urgency to compact partially filled blocks. (E.g. if you include
only 3/4 of the SSD capacity in a partition used for the ZPOOL, then 1/4
of each erase block could be free due to deletions/TRIM without any
compactions required to hold all this data.) 

Keeping a significant percentage of the SSD unallocated is a good
strategy to improve its performance and resilience. 

WELL, WE HAVE ALLOCATED ALL THE DISK SPACE... BUT NOT USED... JUST
ALLOCATED.... YOU KNOW... WE DO A ZPOOL CREATE WITH THE WHOLE DISKS.....


I think the only chance for a solution that does not require new
hardware is to make sure, only some 80% of the SSDs are used (i.e.
allocate only 80% for ZFS, leave 20% unallocated). This will
significantly reduce the rate of garbage collections and thus reduce the
load they cause. 
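
A hedged sketch of how that could be done with gpart on FreeBSD (disk
names, labels and the exact size are hypothetical, assuming 8 TB drives
of which roughly 80% is handed to ZFS); note that the unpartitioned
range only helps if those blocks have never been written or have been
erased/TRIMmed:

  # Partition each disk with GPT and give only ~6.4 TB to ZFS,
  # leaving the remaining ~20% of the device unpartitioned
  gpart create -s gpt da0
  gpart add -t freebsd-zfs -a 1m -s 6400g -l mail-da0 da0

  # Build the pool from the labeled partitions instead of whole disks
  zpool create mypool raidz2 gpt/mail-da0 gpt/mail-da1 gpt/mail-da2 \
      gpt/mail-da3 gpt/mail-da4 gpt/mail-da5 gpt/mail-da6 gpt/mail-da7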

I'd use a fast compression algorithm (zstd - choose a level that does not
overwhelm the CPU, there are benchmark results for ZFS with zstd, and I
found zstd-2 to be best for my use case). This will more than make up
for the space you left unallocated on the SSDs. 
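
As a minimal sketch (dataset name hypothetical, assuming an OpenZFS
version with zstd support):

  # Enable zstd level 2 compression; only data written from now on is
  # compressed, existing records stay as they are until rewritten
  zfs set compression=zstd-2 mypool/mail

Since ZFS allows switching compression algorithms at any time, this can
be changed again with another zfs set without risk to existing data.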

A different mail box format might help, too - I'm happy with dovecot's
mdbox format, which is as fast but much more efficient than mdir.

> As the SSD fills, the space available for the single level write
> cache gets smaller 
> 
> THE SINGLE LEVEL WRITE CACHE IS THE CACHE THESE SSD DRIVES HAVE FOR COMPENSATING FOR THE SPEED ISSUES THEY HAVE DUE TO USING QLC MEMORY?. DO YOU REFER TO THAT?. SORRY, I DON'T UNDERSTAND THIS PARAGRAPH WELL.

Yes, the SSD is specified to hold e.g. 1 TB at 4 bits per cell. The SLC
cache has only 1 bit per cell, thus a 6 GB SLC cache needs as many cells
as 24 GB of data in QLC mode. 

OK, TRUE.... YES.... 

A 100 GB SLC cache would reduce the capacity of a 1 TB SSD to 700 GB
(600 GB in 150 tn QLC cells plus 100 GB in 100 tn SLC cells). 

AHH! YOU MEAN THAT THE SLC CAPACITY FOR SPEEDING UP THE QLC DISKS IS
OBTAINED FROM EACH SINGLE LAYER OF THE QLC?. 

There are no specific SLC cells. A fraction of the QLC capable cells is
written with only 1 instead of 4 bits. This is a much simpler process,
since only 2 charge levels per cell are used, while QLC uses 16 charge
levels, and since you can only add charge (you must not overshoot), only
small increments are added until the correct value can be read out. 

But since SLC cells take away specified capacity (which is calculated
assuming all cells hold 4 bits each, not only 1 bit), their number is
limited and shrinks as demand for QLC cells grows. 

The advantage of the SLC cache is fast writes, but also that data in it
may have become stale (trimmed) and thus will never be copied over into
a QLC block. But as the SSD fills and the size of the SLC cache shrinks,
this capability will be mostly lost, and lots of very short lived data
is stored in QLC cells, which will quickly become partially stale and
thus need compaction as explained above.

> Therefore, the fraction of the cells used as an SLC cache is reduced when it gets full (e.g. ~1 TB in ~250 tn QLC cells, plus 6 GB in 6 tn SLC cells). 
> 
> SORRY I DON'T GET THIS LAST SENTENCE... DON'T UNDERSTAND IT BECAUSE I DON'T REALLY KNOW THE MEANING OF TN... 
> 
> BUT I THINK I'M GETTING THE IDEA IF YOU SAY THAT EACH QLC LAYER HAS ITS OWN SLC CACHE OBTAINED FROM THE DISK SPACE AVAILABLE FOR EACH QLC LAYER.... 
> 
> And with less SLC cells available for short term storage of data the probability of data being copied to QLC cells before the irrelevant messages have been deleted is significantly increased. And that will again lead to many more blocks with "holes" (deleted messages) in them, which then need to be copied possibly multiple times to compact them. 
> 
> IF I'M CORRECT ABOVE, I THINK I GOT THE IDEA, YES.... 
> 
> (on many SSDs, I have no numbers for this particular device), and thus
> the amount of data that can be written at single cell speed shrinks as
> the SSD gets full. 
> 
> I have just looked up the size of the SLC cache, it is specified
> to be 78 GB for the empty SSD, 6 GB when it is full (for the 2 TB
> version, smaller models will have a smaller SLC cache). 
> 
> ASSUMING YOU WERE TALKING ABOUT THE CACHE FOR COMPENSATING SPEED WE PREVIOUSLY COMMENTED, I SHOULD SAY THESE ARE THE 870 QVO BUT THE 8TB VERSION. SO THEY SHOULD HAVE THE BIGGEST CACHE FOR COMPENSATING THE SPEED ISSUES...

I have looked up the data: the larger versions of the 870 QVO have the
same SLC cache configuration as the 2 TB model, 6 GB minimum and up to
72 GB more if there are enough free blocks. 

OURS IS THE 8TB MODEL, SO I ASSUME IT COULD HAVE BIGGER LIMITS. THE
DISKS ARE MOSTLY EMPTY, REALLY.... SO... FOR INSTANCE.... 

ZPOOL LIST
NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
ROOT_DATASET  448G   2.29G  446G   -        -         1%    0%   1.00X  ONLINE  -
MAIL_DATASET  58.2T  11.8T  46.4T  -        -         26%   20%  1.00X  ONLINE  -

Ok, it seems you have got 8 * 8 TB in a raidz2 configuration. 

Only 20% of the mail dataset is in use; the situation will become much
worse when the pool fills up!

>> I SUPPOSE FRAGMENTATION AFFECTS TOO....

On magnetic media fragmentation means that a file is spread out over the
disk in a non-optimal way, causing access latencies due to seeks and
rotational delay. That kind of fragmentation is not really relevant for
SSDs, which allow for fast random access to the cells. 

And the FRAG value shown by the "zpool list" command is not about
fragmentation of files at all, it is about the structure of free space.
Anyway, it is less relevant for SSDs than for classic hard disk drives.

> But after writing those few GB at a speed of some 500 MB/s (i.e.
> after 12 to 150 seconds), the drive will need several minutes to
> transfer those writes to the quad-level cells, and will operate
> at a fraction of the nominal performance during that time.
> (QLC writes max out at 80 MB/s for the 1 TB model, 160 MB/s for the
> 2 TB model.) 
> 
> WELL, WE ARE ON THE 8TB MODEL. I THINK I HAVE UNDERSTOOD WHAT YOU WROTE IN THE PREVIOUS PARAGRAPH. YOU SAID THEY CAN BE FAST, BUT NOT CONSTANTLY, BECAUSE LATER THEY HAVE TO WRITE ALL THAT FROM THE CACHE TO THEIR PERMANENT STORAGE, AND THAT'S SLOW. AM I WRONG?. EVEN ON THE 8TB MODEL, YOU THINK, STEFAN?.

The controller in the SSD supports a given number of channels (e.g. 4),
each of which can access a Flash chip independently of the others. Small
SSDs often have less Flash chips than there are channels (and thus a
lower throughput, especially for writes), but the larger models often
have more chips than channels and thus the performance is capped. 

THIS IS TOTALLY LOGICAL. IF A QVO DISK PERFORMED AS WELL AS OR BETTER
THAN AN INTEL WITHOUT CONSEQUENCES.... WHO WOULD BUY AN EXPENSIVE INTEL
ENTERPRISE DRIVE?.

The QVO is bandwidth limited due to the SATA data rate of 6 Gbit/s
anyway, and it is optimized for reads (which are not significantly
slower than offered by the TLC models). This is a viable concept for a
consumer PC, but not for a server.

> In the case of the 870 QVO, the controller supports 8 channels, which allows it to write 160 MB/s into the QLC cells. The 1 TB model apparently has only 4 Flash chips and is thus limited to 80 MB/s in that situation, while the larger versions have 8, 16, or 32 chips. But due to the limited number of channels, the write rate is limited to 160 MB/s even for the 8 TB model. 
> 
> TOTALLY LOGICAL STEFAN... 
> 
> If you had 4 * 2 TB instead, the throughput would be 4 * 160 MB/s in this limit. 
> THE MAIN PROBLEM WE ARE FACING IS THAT IN SOME PEAK MOMENTS, WHEN THE MACHINE SERVES CONNECTIONS FOR ALL THE INSTANCES IT HAS, AND ONLY, AS SAID, IN SOME PEAK MOMENTS... LIKE 09AM OR 11AM.... THE MACHINE SEEMS TO BECOME SLOWER... AS IF THE DISKS WEREN'T ABLE TO SERVE ALL THEY HAVE TO SERVE.... IN THESE MOMENTS, NO BIG FILES ARE MOVED... BUT AS WE HAVE 1800-2000 CONCURRENT IMAP CONNECTIONS... NORMALLY EACH ONE IS MAKING... LITTLE CHANGES IN ITS MAILBOX. DO YOU THINK PERHAPS THESE DISKS ARE NOT APPROPRIATE FOR THIS KIND OF USAGE?

I'd guess that the drives get into a state in which they have to recycle
lots of partially free blocks (i.e. perform kind of a garbage
collection) and then three kinds of operations are competing with each
other: 

 	1. reads (generally prioritized)
 	2. writes (filling the SLC cache up to its maximum size)
 	3. compactions of partially filled blocks (required to make free blocks
available for re-use)

Writes can only proceed if there are sufficient free blocks, which on a
filled SSD with partially filled erase blocks means that operations of
type 3. need to be performed with priority to not stall all writes. 

My assumption is that this is what you are observing under peak load. 

IT COULD BE, ALTHOUGH THE DISKS ARE NOT FULL.... THE POOL IS AT 20 OR
30% OF CAPACITY AND FRAGMENTATION IS AT 20%-30% (AS ZPOOL LIST STATES).

Yes, and that means that your issues will become much more critical over
time when the free space shrinks and garbage collections will be
required at an even faster rate, with the SLC cache becoming less and
less effective at weeding out short lived files as an additional factor
that will increase write amplification.
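
If you want to verify this while it happens, one way (a sketch; the -l
and -w options assume a recent OpenZFS) is to watch per-vdev latency
during a peak hour and compare it with a quiet hour:

  # Per-vdev bandwidth and average latency, refreshed every 5 seconds
  zpool iostat -vl mypool 5

  # Latency histograms, useful to spot multi-second outliers
  zpool iostat -w mypool 5

  # Per-provider busy percentage and queue depth on FreeBSD
  gstat -p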

> And cheap SSDs often have no RAM cache (not checked, but I'd be
> surprised if the QVO had one) and thus cannot keep bookkeeping data
> in such a cache, further limiting the performance under load. 
> 
> THIS BROCHURE (HTTPS://SEMICONDUCTOR.SAMSUNG.COM/RESOURCES/BROCHURE/870_SERIES_BROCHURE.PDF AND THE DATASHEET HTTPS://SEMICONDUCTOR.SAMSUNG.COM/RESOURCES/DATA-SHEET/SAMSUNG_SSD_870_QVO_DATA_SHEET_REV1.1.PDF) SAYS, IF I HAVE READ PROPERLY, THAT THE 8TB DRIVE HAS 8GB OF RAM?. I ASSUME THAT IS WHAT THEY CALL THE TURBO WRITE CACHE?.

No, the turbo write cache consists of the cells used in SLC mode (which
can be any cells, not only cells in a specific area of the flash chip). 

I SEE I SEE.... 

The RAM is needed for fast lookup of the position of data for reads and
of free blocks for writes. 

OUR ONES... SEEM TO HAVE 8GB OF LPDDR4 RAM.... AS THE DATASHEET STATES.... 

Yes, and it makes sense that the RAM size is proportional to the
capacity since a few bytes are required per addressable data block. 

If the block size was 8 KB the RAM could hold 8 bytes (e.g. a pointer
and some status flags) for each logically addressable block. But there
is no information about the actual internal structure of the QVO that I
know of. [...]

>> I SEE.... IT'S EXTREMELY MISLEADING, YOU KNOW... BECAUSE... YOU CAN COPY FIVE MAILBOXES OF 50GB CONCURRENTLY, FOR INSTANCE.... AND YOU FLOOD A GIGABIT INTERFACE WHILE COPYING (OBVIOUSLY BECAUSE THE DISKS CAN KEEP UP WITH THAT THROUGHPUT)... BUT LATER... YOU SEE... YOU ARE AT AN HOUR WHERE YESTERDAY, AND EVEN 4 DAYS BEFORE, YOU HAD NO ISSUES AT ALL... AND THAT DAY... YOU SEE THE ISSUE I DESCRIBED... EVEN WITHOUT BEING EXACTLY AT A PEAK HOUR (PERHAPS EVEN TWO HOURS AFTER THE PEAK HOUR)... I WASN'T AWARE OF ALL THE THINGS YOU EXPLAIN IN THIS EMAIL.... 
>> 
>> I have seen advice to not use compression in a high load scenario in some other reply. 
>> 
>> I tend to disagree: Since you seem to be limited when the SLC cache is exhausted, you should get better performance if you compress your data. I have found that zstd-2 works well for me (giving a significant overall reduction of size at reasonable additional CPU load). Since ZFS allows to switch compressions algorithms at any time, you can experiment with different algorithms and levels. 
>> 
>> I SEE... YOU SAY COMPRESSION SHOULD BE ENABLED.... THE MAIN REASON WE HAVE NOT ENABLED IT YET IS TO KEEP THE SYSTEM AS CLOSE AS POSSIBLE TO THE CONFIG DEFAULTS... YOU KNOW... SO THAT LATER WE CAN ASK ON THESE MAILING LISTS IF WE HAVE AN ISSUE... BECAUSE YOU KNOW... IT IS FAR EASIER TO ASK ABOUT SOMETHING STRANGE YOU ARE SEEING WHEN THAT STRANGE THING IS CLOSE TO A WELL TESTED CONFIG, LIKE THE DEFAULT CONFIG.... 
>> 
>> BUT NOW YOU SAY, STEFAN... IF YOU SWITCH BETWEEN COMPRESSION ALGORITHMS, YOU WILL END UP WITH A MIX OF FILES COMPRESSED IN DIFFERENT MANNERS... ISN'T THAT A BIT OF A DISASTER LATER?. DOESN'T IT AFFECT PERFORMANCE IN SOME MANNER?.
The compression used is stored in the per-file information; each file
in a dataset could have been written with a different compression method
and level. Blocks are independently compressed - a file level
compression may be more effective. Large mail files will contain
incompressible attachments (already compressed), but stored in base64
encoding; this should still allow a compression ratio of ~1.3. Small
files will be plain text or HTML, offering much better compression
factors.
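
That factor of roughly 1.3 is simply the 4/3 expansion of base64 being
recovered; a quick way to check the claim on the command line (a sketch,
assuming FreeBSD's dd syntax and the zstd userland utility being
available):

  # 16 MB of incompressible data, base64 encoded and then compressed:
  # the compressed size should come back close to the original 16 MB,
  # i.e. the ~33% base64 overhead is recovered (ratio of about 1.3)
  dd if=/dev/random bs=1m count=16 | openssl base64 | zstd -2 -c | wc -c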

>> One advantage of ZFS compression is that it applies to the ARC, too. And a compression factor of 2 should easily be achieved when storing mail (not for .docx, .pdf, .jpg files though). Having more data in the ARC will reduce the read pressure on the SSDs and will give them more cycles for garbage collections (which are performed in the background and required to always have a sufficient reserve of free flash blocks for writes). 
>> 
>> WE WOULD USE I ASSUME THE LZ4... WHICH IS THE LESS "EXPENSIVE" COMPRESSION ALGORITHM FOR THE CPU... AND I ASSUME TOO FOR AVOIDING DELAY ACCESSING DATA... DO YOU RECOMMEND ANOTHER ONE?. DO YOU ALWAYS RECOMMEND COMPRESSION THEN?.

I'd prefer zstd over lz4 since it offers a much higher compression
ratio. 

Zstd offers higher compression ratios than lz4 at similar or better
decompression speed, but may be somewhat slower compressing the data.
But in my opinion this is outweighed by the higher effective amount of
data in the ARC/L2ARC possible with zstd. 

For some benchmarks of different compression algorithms available for
ZFS and compared to uncompressed mode see the extensive results
published by Allan Jude:

https://docs.google.com/spreadsheets/d/1TvCAIDzFsjuLuea7124q-1UtMd0C9amTgnXm2yPtiUQ/edit?usp=sharing

The SQL benchmarks might best resemble your use case - but remember that
a significant reduction of the amount of data being written to the SSDs
might be more important than the highest transaction rate, since your
SSDs put a low upper limit on that when highly loaded.

>> I'd give it a try - and if it reduces your storage requirements by 10% only, then keep 10% of each SSD unused (not assigned to any partition). That will greatly improve the resilience of your SSDs, reduce the write-amplification, will allow the SLC cache to stay at its large value, and may make a large difference to the effective performance under high load. 
>> 
>> BUT WHEN YOU ENABLE COMPRESSION... ONLY GETS COMPRESSED THE NEW DATA MODIFIED OR ENTERED. AM I WRONG?.
 Compression is per file system data block (at most 1 MB if you set the
blocksize to that value). Each such block is compressed independently of
all others, to not require more than 1 block to be read and decompressed
when randomly reading a file. If a block does not shrink when compressed
(it may contain compressed file data) the block is written to disk as-is
(uncompressed).
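
So, to the question above: yes, only newly written blocks are
compressed; already stored data remains as it is until it is rewritten
(e.g. when a message is copied or moved). The effect on the existing
data can be checked per dataset (names hypothetical):

  # Overall compression ratio of the data currently stored in the dataset
  zfs get compressratio,recordsize mypool/mail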

>> BY THE WAY, WE HAVE MORE OR LESS 1/4 OF EACH DISK USED (12 TB ALLOCATED IN A POOL AS STATED BY ZPOOL LIST, DIVIDED BETWEEN 8 DISKS OF 8TB...)... DO YOU THINK WE COULD BE SUFFERING FROM WRITE AMPLIFICATION AND SO ON... EVEN HAVING SO LITTLE DISK SPACE USED ON EACH DISK?.
Your use case will cause a lot of garbage collections and thus
particularly high write amplification values.
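
If you want to put rough numbers on that, the drive's SMART attributes
give at least a hint (a sketch; this assumes smartmontools is installed,
and the exact attribute names vary by vendor and firmware):

  # Host write volume and wear/reserve indicators reported by the SSD
  smartctl -a /dev/ada0 | egrep -i 'Total_LBAs_Written|Wear_Leveling|Used_Rsvd'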

>> Regards, STefan 
>> 
>> HEY MATE, YOUR MAIL IS INCREDIBLE. IT HAS HELPED US A LOT. CAN WE BUY YOU A CUP OF COFFEE OR A BEER THROUGH PAYPAL OR SIMILAR?. CAN I HELP YOU IN SOME MANNER?.

Thanks, I'm glad to help, and I'd appreciate hearing whether you get
your setup optimized for the purpose (and how well it holds up when you
approach the capacity limits of your drives). 

I'm always interested in experience of users with different use cases
than I have (just being a developer with too much archived mail and
media collected over a few decades). 

Regards, STefan