Re: Re: Re: Desperate with 870 QVO and ZFS

From: <egoitz_at_ramattack.net>
Date: Thu, 07 Apr 2022 08:49:15 UTC
Good morning Eugene!! 

Thank you so much for your help mate :) :) really :) :) 

Ok, I've taken good note of everything you've replied below :) :) 

Very very thankful for your help really :) 

Cheers,

On 2022-04-06 20:10, Eugene Grosbein wrote:

> 06.04.2022 23:51, egoitz@ramattack.net wrote:
> 
>> About your recommendations... Eugene, if some of them don't work as expected,
>> could we revert some or all of them?
> 
> Yes, it can all be reverted.
> Just write down the original sysctl values if you are going to change them.
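
A minimal way to do that, assuming the stock FreeBSD sysctl(8) tool (the output file name is just an example):

# Record the current values of the tunables discussed in this thread,
# so any change can be reverted later:
sysctl vfs.zfs.txg.timeout vfs.zfs.dirty_data_max > zfs-sysctl-before.txt

# Reverting a runtime change is then just setting the old value back:
# sysctl vfs.zfs.txg.timeout=5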
> 
>> 1) Make sure the pool has enough free space, because ZFS can slow to a crawl otherwise.
>> 
>> *This is just one example... but all the pools look similar:*
>> 
>> *zpool list*
>> *NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT*
>> *zroot             448G  2.27G   446G        -         -     1%     0%  1.00x  ONLINE  -*
>> *mail_dataset  58.2T  19.4T  38.8T        -         -    32%    33%  1.00x  ONLINE  -*
> 
> It's all right.
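
For completeness, free space can also be checked per dataset with zfs(8); a sketch using the pool name from the listing above:

# Show used/available space for every dataset in the mail pool
zfs list -r -o name,used,avail,refer mail_dataset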
> 
>> 2) Increase recordsize up to 1MB for file systems located in the pool,
>> so ZFS is allowed to use bigger request sizes for read/write operations
>> 
>> *We have the default... so 128K...*
> 
> It will not hurt to increase it up to 1MB.
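
A sketch of the change itself, using the dataset name from the zpool listing above; note that recordsize only applies to files written after the change, existing files keep the block size they were created with:

# Check the current value (128K by default)
zfs get recordsize mail_dataset

# Allow records up to 1MB for new writes
zfs set recordsize=1M mail_dataset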
> 
>> 5) If you have a good power supply and a stable (non-crashing) OS, try increasing
>> sysctl vfs.zfs.txg.timeout from the default 5 sec, but do not be extreme (e.g. up to 10 sec).
>> Maybe it will increase the number of long writes and decrease the number of short writes, which is good.
>> 
>> *Well, I have sync disabled in the datasets... do you still think it's good to change it?*
> 
> Yes, try it. Disabling sync makes sense if you have lots of fsync() operations,
> but other small writes are not affected unless you raise vfs.zfs.txg.timeout.
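
A sketch of that change on FreeBSD, keeping the default value on hand for reverting:

# Current value (default is 5 seconds)
sysctl vfs.zfs.txg.timeout

# Raise it moderately, as suggested above
sysctl vfs.zfs.txg.timeout=10

# To persist across reboots, add this line to /etc/sysctl.conf:
# vfs.zfs.txg.timeout=10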
> 
>> *What about vfs.zfs.dirty_data_max and vfs.zfs.dirty_data_max_max? Would you increase them from the 4GB they are set to now?*
> 
> Never tried that and cannot tell.
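
For anyone who wants to experiment anyway, the current limits can at least be inspected safely; a sketch, assuming FreeBSD's OpenZFS sysctl names (on many FreeBSD versions vfs.zfs.dirty_data_max is writable at runtime, while vfs.zfs.dirty_data_max_max is a boot-time tunable set in /boot/loader.conf):

# Inspect the current limits before touching anything
sysctl vfs.zfs.dirty_data_max vfs.zfs.dirty_data_max_max

# dirty_data_max can be raised at runtime, e.g. to 8GB (untested value):
# sysctl vfs.zfs.dirty_data_max=8589934592

# dirty_data_max_max caps the above and is typically set at boot, in
# /boot/loader.conf:
# vfs.zfs.dirty_data_max_max=8589934592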