Re: Desperate with 870 QVO and ZFS

From: Steven Hartland <killing_at_multiplay.co.uk>
Date: Wed, 06 Apr 2022 15:28:59 UTC
What does gstat -pd report?
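[Editorial note: a minimal invocation of the command Steven asks about, assuming a FreeBSD host; the flags below are from FreeBSD's gstat(8).]

```shell
# Refresh once per second. -p restricts output to physical providers
# (the disks themselves, not partitions or labels), and -d adds columns
# for delete (BIO_DELETE, i.e. TRIM) operations.
gstat -p -d -I 1s
```

The columns to watch here are %busy, ms/w (average write latency), and the d/s and ms/d delete columns, which would reveal whether TRIM traffic coincides with the slowdowns.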

On Wed, 6 Apr 2022 at 15:59, John F Carr <jfc@mit.edu> wrote:

> On Apr 6, 2022, at 07:15 , egoitz@ramattack.net wrote:
> >
> > Good morning,
> >
> > I write this post with the expectation that perhaps someone could help
> > me.
> >
> > I am running some mail servers with FreeBSD and ZFS. They use 870 QVO
> disks (not EVO or other Samsung SSD models) as storage. They can easily have
> from 1500 to 2000 concurrent connections. The machines have 128 GB of RAM
> and the CPU is almost completely idle. The disk IO is normally at 30 or 40%
> at most.
> >
> > The problem I'm facing is that they can be running just fine and then
> suddenly, at some peak hour, the IO goes to 60 or 70% and the machine
> becomes extremely slow. ZFS is all at defaults, except the sync property,
> which is set to disabled. Apart from that, the ARC is limited to 64 GB. But
> even this is extremely odd: the used ARC is near 20 GB. I have seen that
> the metadata cache in the ARC is very near the limit that FreeBSD sets
> automatically based on the ARC size you configure. It seems that almost all
> of the ARC is used by metadata cache. I have seen this effect on all my
> mail servers with this hardware and software config.
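[Editorial note: a sketch of how the settings described above could be confirmed on a FreeBSD/OpenZFS system. The dataset name is hypothetical, and the arcstats sysctl names are those used by OpenZFS releases current at the time of this thread; later releases rename some of them.]

```shell
# Check the sync property (reported as "disabled" in this setup);
# "zroot/mail" is a hypothetical dataset name.
zfs get sync zroot/mail

# Total ARC size and metadata usage/limit, in bytes:
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.arc_meta_used \
       kstat.zfs.misc.arcstats.arc_meta_limit

# The configured ARC maximum (64 GB in this report):
sysctl vfs.zfs.arc_max
```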
> >
>
> My system with
>
> nvd0: <Samsung SSD 970 EVO 1TB> NVMe namespace
>
> has a problem with high write volume.  If I build llvm with debugging
> symbols, which writes about 70 GB, the filesystem nearly grinds to a halt.
> I have to use a spinning disk to get decent performance on this workload.
> There is some old talk on the mailing lists about certain drives handling
> TRIM commands badly.  See comment by Ted Ts'o here:
> https://forums.freebsd.org/threads/ssd-trim-maintenance.56951/
>
> Unfortunately, the documentation for adjusting TRIM settings is out of date.
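[Editorial note: since the documented tunable names have drifted between releases, a safe approach is to enumerate what the running kernel actually exposes rather than trust stale documentation. The pool name below is hypothetical; zpool trim and zpool status -t are available on FreeBSD 13 / OpenZFS 2.x.]

```shell
# List every TRIM-related sysctl present on the running system:
sysctl -a | grep -i trim

# On recent FreeBSD, a manual trim can also be issued per pool
# ("zroot" is a hypothetical pool name):
zpool trim zroot
zpool status -t zroot   # -t shows per-vdev TRIM status
```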
>