NVMe aborting outstanding I/O

Patrick M. Hausen hausen at punkt.de
Fri Apr 5 07:33:20 UTC 2019


Hi all,

> On 04.04.2019 at 17:11, Warner Losh <imp at bsdimp.com> wrote:
> There's a request that was sent down to the drive. It took longer than 30s to respond. One of them, at least, was a trim request.
> […]

Thanks for the explanation.

This also explains why I was seeing a lot more of those timeouts, and why the
system occasionally froze for a couple of seconds, after I had increased these:

vfs.zfs.vdev.async_write_max_active: 10
vfs.zfs.vdev.async_read_max_active: 3
vfs.zfs.vdev.sync_write_max_active: 10
vfs.zfs.vdev.sync_read_max_active: 10

as recommended by Allan Jude, on the reasoning that NVMe devices can work on
up to 64 requests in parallel. I have since reverted that change and am
running with the defaults again.
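
For the archives: reverting was just a matter of writing the old values back
at runtime with sysctl(8). A minimal sketch, with a placeholder value since I
am not quoting the exact stock defaults here:

    # inspect one of the vdev queue depths
    sysctl vfs.zfs.vdev.async_write_max_active
    # write a depth back at runtime (placeholder value)
    sysctl vfs.zfs.vdev.async_write_max_active=10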

If I understand correctly, this:

>         hw.nvme.per_cpu_io_queues=0

essentially limits the rate at which the system throws commands at the devices. Correct?
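
For anyone finding this in the archives: as far as I know that is a boot-time
tunable, so it goes into /boot/loader.conf and takes effect after a reboot. A
sketch of what I believe the entry looks like:

    # /boot/loader.conf
    # fall back to shared I/O queue pair(s) instead of one pair per CPU
    hw.nvme.per_cpu_io_queues="0"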

So it is a workaround rather than a real fix, and there is nothing fundamentally
wrong with the per-CPU queue or interrupt implementation. I will look into new
firmware for my Intel devices and try tweaking vfs.zfs.vdev.trim_max_active
and related parameters.
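
If throttling TRIM at the vdev level helps, I assume the experiment would look
roughly like this at runtime - the value is just my first guess, not a
recommendation:

    # limit concurrent TRIM requests per vdev (guessed value)
    sysctl vfs.zfs.vdev.trim_max_active=32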

Out of curiosity: what happens if I disable TRIM? My knowledge here is rather
superficial, and I had simply filed it under "TRIM is absolutely essential,
lest performance suffer severely and your devices die - plus bad karma, of
course ..." ;-)
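
Should I actually try it, I assume the switch would be the vfs.zfs.trim.enabled
loader tunable - strictly as an experiment, given the above:

    # /boot/loader.conf
    # disable ZFS TRIM entirely (test only)
    vfs.zfs.trim.enabled="0"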

Kind regards,
Patrick
-- 
punkt.de GmbH			Internet - Dienstleistungen - Beratung
Kaiserallee 13a			Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe			info at punkt.de	http://punkt.de
AG Mannheim 108285		Gf: Juergen Egeling
