svn commit: r355831 - head/sys/cam/nvme

Warner Losh imp at bsdimp.com
Tue Dec 17 02:53:28 UTC 2019


On Mon, Dec 16, 2019, 5:28 PM Steven Hartland <
steven.hartland at multiplay.co.uk> wrote:

> Be aware that ZFS already does a pretty decent job of this, so the
> statement about upper layers isn't true for all of them. It even has
> different priorities for different request types, so I'm a little
> concerned that doing it at both layers could cause issues.
>

ZFS' BIO_DELETE scheduling works well for enterprise drives, but it needs
tuning the further you get from enterprise-class performance. I don't
anticipate any performance impact here since this isn't enabled by default,
unless I've messed something up (and if I have, please let me know). I
honestly haven't tried enabling these things on top of ZFS.

> In addition to this, if it's anything like SSDs, the number of requests
> is only a small part of the story, with total trim size being the other
> one. In this case you could hit the total desired size with just one
> BIO_DELETE request.
>
> With this code, what's the impact of that?
>

You're correct. It tends to be the number of segments and/or the total size
of the segments that matters. This change targets the cases where the number
of segments dominates. For cases where total size dominates, you're often
better off using the I/O scheduler to rate-limit the size of the trims.
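
Roughly, the pacing amounts to a "fire when full or stale" check on the
queued deletes. A minimal sketch of that idea (this is not the actual
cam_iosched code; the function and parameter names are illustrative):

    #include <stdbool.h>

    /*
     * Decide whether the BIO_DELETEs queued so far should be collapsed
     * into a single DSM TRIM: either we have reached the goal number of
     * deletes, or the oldest queued delete has waited too many ticks.
     */
    static bool
    trim_ready(int queued, int goal, int ticks_waiting, int max_ticks)
    {
            if (queued == 0)
                    return (false);
            if (queued >= goal)
                    return (true);
            if (max_ticks != 0 && ticks_waiting >= max_ticks)
                    return (true);
            return (false);
    }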

This feature is designed to let a large number of files be deleted at once
while the resulting trims are issued a little at a time to even out the
load.
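
Since both knobs land as CTLFLAG_RDTUN sysctls (see the diff below), tuning
them is a boot-time tunable matter. Something like the following, with
purely illustrative values and assuming the global kern.cam.nda node from
the sysctl declarations:

    # /boot/loader.conf (illustrative values)
    kern.cam.nda.goal_trim=64     # try to gather 64 BIO_DELETEs per DSM TRIM
    kern.cam.nda.trim_ticks=50    # but wait at most ~50 ticks for them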

Warner


> On 17/12/2019 00:11, Warner Losh wrote:
> > Author: imp
> > Date: Tue Dec 17 00:11:48 2019
> > New Revision: 355831
> > URL: https://svnweb.freebsd.org/changeset/base/355831
> >
> > Log:
> >    NVME trim stuff.
> >
> >    Add two sysctls to control pacing of nvme
> >    trims. kern.cam.nda.X.goal_trim is the number of upper layer
> >    BIO_DELETE requests to try to collect before sending TRIM down to
> >    the nvme drive. trim_ticks is the number of ticks, at most, to wait
> >    for at least goal_trim BIO_DELETE requests to come in.
> >
> >    Trim pacing is useful when a large number of disjoint trims are
> >    coming in from the upper layers. Since we have no way to chain
> >    together trims from the upper layers that are sent down, this acts as
> >    a heuristic to group trims into reasonably sized chunks. What's
> >    reasonable varies from drive to drive.
> >
> >    Sponsored by: Netflix
> >
> > Modified:
> >    head/sys/cam/nvme/nvme_da.c
> >
> > Modified: head/sys/cam/nvme/nvme_da.c
> >
> > ==============================================================================
> > --- head/sys/cam/nvme/nvme_da.c       Tue Dec 17 00:10:19 2019        (r355830)
> > +++ head/sys/cam/nvme/nvme_da.c       Tue Dec 17 00:11:48 2019        (r355831)
> > @@ -177,6 +177,14 @@ static int nda_max_trim_entries = NDA_MAX_TRIM_ENTRIES
> >   SYSCTL_INT(_kern_cam_nda, OID_AUTO, max_trim, CTLFLAG_RDTUN,
> >       &nda_max_trim_entries, NDA_MAX_TRIM_ENTRIES,
> >       "Maximum number of BIO_DELETE to send down as a DSM TRIM.");
> > +static int nda_goal_trim_entries = NDA_MAX_TRIM_ENTRIES / 2;
> > +SYSCTL_INT(_kern_cam_nda, OID_AUTO, goal_trim, CTLFLAG_RDTUN,
> > +    &nda_goal_trim_entries, NDA_MAX_TRIM_ENTRIES / 2,
> > +    "Number of BIO_DELETE to try to accumulate before sending a DSM TRIM.");
> > +static int nda_trim_ticks = 50;      /* 50ms ~ 1000 Hz */
> > +SYSCTL_INT(_kern_cam_nda, OID_AUTO, trim_ticks, CTLFLAG_RDTUN,
> > +    &nda_trim_ticks, 50,
> > +    "Number of ticks to hold BIO_DELETEs before sending down a trim");
> >
> >   /*
> >    * All NVMe media is non-rotational, so all nvme device instances
> > @@ -741,6 +749,9 @@ ndaregister(struct cam_periph *periph, void *arg)
> >               free(softc, M_DEVBUF);
> >               return(CAM_REQ_CMP_ERR);
> >       }
> > +     /* Statically set these for the moment */
> > +     cam_iosched_set_trim_goal(softc->cam_iosched, nda_goal_trim_entries);
> > +     cam_iosched_set_trim_ticks(softc->cam_iosched, nda_trim_ticks);
> >
> >       /* ident_data parsing */
> >
>
>

