svn commit: r292074 - in head/sys/dev: nvd nvme

Warner Losh imp at bsdimp.com
Fri Mar 11 04:34:19 UTC 2016


Some Intel NVMe drives behave badly when the LBA range crosses a 128k
boundary: their performance is worse for those transactions than for
ones that don't cross the 128k boundary.

Warner

On Thu, Mar 10, 2016 at 11:01 AM, Alan Somers <asomers at freebsd.org> wrote:

> Are you saying that Intel NVMe controllers perform poorly for all I/Os
> that are less than 128KB, or just for I/Os of any size that cross a 128KB
> boundary?
>
> On Thu, Dec 10, 2015 at 7:06 PM, Steven Hartland <smh at freebsd.org> wrote:
>
>> Author: smh
>> Date: Fri Dec 11 02:06:03 2015
>> New Revision: 292074
>> URL: https://svnweb.freebsd.org/changeset/base/292074
>>
>> Log:
>>   Limit stripesize reported from nvd(4) to 4K
>>
>>   Intel NVMe controllers have a slow path for I/Os that span a 128KB
>> stripe boundary but ZFS limits ashift, which is derived from d_stripesize,
>> to 13 (8KB) so we limit the stripesize reported to geom(8) to 4KB.
>>
>>   This may result in a small number of additional I/Os to require
>> splitting in nvme(4), however the NVMe I/O path is very efficient so these
>> additional I/Os will cause very minimal (if any) difference in performance
>> or CPU utilisation.
>>
>>   This can be controlled by the new sysctl
>> kern.nvme.max_optimal_sectorsize.
>>
>>   MFC after:    1 week
>>   Sponsored by: Multiplay
>>   Differential Revision:        https://reviews.freebsd.org/D4446
>>
>> Modified:
>>   head/sys/dev/nvd/nvd.c
>>   head/sys/dev/nvme/nvme.h
>>   head/sys/dev/nvme/nvme_ns.c
>>   head/sys/dev/nvme/nvme_sysctl.c
>>
>>


More information about the svn-src-head mailing list