Date: Wed, 06 Apr 2022 17:48:31 +0200
From: egoitz@ramattack.net
To: freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org, Freebsd performance
Cc: owner-freebsd-fs@freebsd.org
Subject: Re: Desperate with 870 QVO and ZFS
In-Reply-To: <28e11d7ec0ac5dbea45f9f271fc28f06@ramattack.net>
Message-ID: <29f0eee5b502758126bf4cfa2d8e3517@ramattack.net>
I have been thinking and.... these are the tunables I have now:

vfs.zfs.arc_meta_strategy: 0
vfs.zfs.arc_meta_limit: 17179869184
kstat.zfs.misc.arcstats.arc_meta_min: 4294967296
kstat.zfs.misc.arcstats.arc_meta_max: 19386809344
kstat.zfs.misc.arcstats.arc_meta_limit: 17179869184
kstat.zfs.misc.arcstats.arc_meta_used: 16870668480
vfs.zfs.arc_max: 68719476736

and top says:

ARC: 19G Total, 1505M MFU, 12G MRU, 6519K Anon, 175M Header, 5687M Other

Even when vfs.zfs.arc_max was set to 128GB (instead of the 64GB I have set now), the ARC never got close to its maximum usable size.... Could that have something to do with the fact that the ARC meta values are almost at the configured limit? Perhaps increasing vfs.zfs.arc_meta_limit or kstat.zfs.misc.arcstats.arc_meta_limit (I suppose the first one is the one to increase) would improve performance and let the system actually take advantage of the 64GB of ARC I have set? I say it because right now it never uses more than 19GB of ARC memory in total.... (A small sysctl sketch follows after the quoted thread below.)

As always, any opinion or idea would be very highly appreciated.

Cheers,

On 2022-04-06 17:30, egoitz@ramattack.net wrote:

> One perhaps important note!!
>
> When this happens, almost all processes appear in top in one of the following states:
>
> txg state or
>
> txg->
>
> bio....
>
> Perhaps vfs.zfs.dirty_data_max, vfs.zfs.txg.timeout, or vfs.zfs.vdev.async_write_active_max_dirty_percent should be increased or decreased.... I'm afraid of making some change and ending up with an unstable server.... I'm not an expert in handling these values....
>
> Any recommendation?
>
> Best regards,
>
> On 2022-04-06 16:36, egoitz@ramattack.net wrote:
>
> > Hi Rainer!
> >
> > Thank you so much for your help :) :)
> >
> > Well, I assume that since they are in a datacenter there should not be a power outage....
> >
> > About dataset size... yes, ours are big... each dataset can easily be 3-4 TB.....
> >
> > We bought them because they are for mailboxes, and mailboxes grow and grow.... so we needed the space to host them...
> >
> > We knew they had some speed issues, but we thought (as Samsung explains on the QVO site) that those issues only start after exceeding the turbo-write buffer these disks have. We thought that as long as you didn't exceed its capacity (the capacity of that buffer), no speed problem would arise. Perhaps we were wrong?
> >
> > Best regards,
> >
> > On 2022-04-06 14:56, Rainer Duffner wrote:
> >
> > > On 06.04.2022 at 13:15, egoitz@ramattack.net wrote:
> > >
> > > > I don't really know if perhaps the QVO technology could be the culprit here.... because... they say these are desktop-computer disks... but still.
> > >
> > > Yeah, they are.
> > >
> > > Most likely, they don't have some sort of super-cap.
> > >
> > > A power failure might totally toast the filesystem.
> > >
> > > These disks are - IMO - designed to accelerate read operations. Their sustained write performance is usually mediocre, at best.
> > >
> > > They might work well for small data sets - because that is really written to some cache and the firmware just claims it's "written" - but once the data set becomes big enough, they are about as fast as a fast SATA disk.
> > > https://www.tomshardware.com/reviews/samsung-970-evo-plus-ssd,5608.html
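
Coming back to the ARC metadata question at the top of this mail: below is a minimal sketch of how one could check and, tentatively, raise the metadata limit. It assumes the legacy tunable names shown above (on newer OpenZFS the sysctl is spelled differently, or the separate limit may not exist at all), and the 32 GiB figure is only an illustrative value, not a recommendation.

# Current metadata limit and usage, in bytes:
sysctl vfs.zfs.arc_meta_limit
sysctl kstat.zfs.misc.arcstats.arc_meta_used

# Example only (left commented out): raise the limit to 32 GiB at runtime.
# Whether this sysctl is writable at runtime depends on the FreeBSD/OpenZFS
# version; if it is not, set it in /boot/loader.conf and reboot.
# sysctl vfs.zfs.arc_meta_limit=34359738368    # 34359738368 = 32 * 1024^3

# To keep a change across reboots, add the corresponding line to
# /etc/sysctl.conf or /boot/loader.conf:
# vfs.zfs.arc_meta_limit=34359738368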

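And regarding the turbo-write buffer question in the quoted part above: a crude way to test that theory would be a sustained single-stream write clearly larger than the drive's rated buffer, while watching per-disk throughput. This is only a sketch; pool/scratch, the file path and the roughly 200 GB size are placeholders, and it needs a dataset with compression disabled so the zeros are not stored as holes before they ever reach the disks.

# In one terminal, watch per-disk throughput and %busy:
gstat -p

# In another, write a large stream to a dataset with compression off
# (with compression on, ZFS detects the all-zero blocks and writes nothing):
zfs set compression=off pool/scratch
dd if=/dev/zero of=/pool/scratch/testfile bs=1m count=200000

# If the write starts near the rated SATA speed and collapses to HDD-like
# figures long before dd finishes, the turbo-write buffer has been exhausted.
rm /pool/scratch/testfile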