svn commit: r353429 - in head: share/man/man4 sys/kern sys/vm
Conrad Meyer
cem at freebsd.org
Fri Oct 11 06:03:46 UTC 2019
Thanks Cy, you’re totally right. That explains the crazy cachefree/xfree
numbers I was seeing. Should be fixed in r353430.
Thanks,
Conrad
On Thu, Oct 10, 2019 at 21:20 Cy Schubert <Cy.Schubert at cschubert.com> wrote:
> In message <201910110131.x9B1VV1R047982 at repo.freebsd.org>, Conrad Meyer
> writes:
> > Author: cem
> > Date: Fri Oct 11 01:31:31 2019
> > New Revision: 353429
> > URL: https://svnweb.freebsd.org/changeset/base/353429
> >
> > Log:
> > ddb: Add CSV option, sorting to 'show (malloc|uma)'
> >
> > Add /i option for machine-parseable CSV output. This allows ready
> > copy/pasting into more sophisticated tooling outside of DDB.
> >
> > Add total zone size ("Memory Use") as a new column for UMA.
> >
> > For both, sort the displayed list on size (print the largest zones/types
> > first). This is handy for quickly diagnosing "where has my memory gone?" at
> > a high level.
> >
> > Submitted by: Emily Pettigrew <Emily.Pettigrew AT isilon.com> (earlier version)
> > Sponsored by: Dell EMC Isilon
> >
> > Modified:
> > head/share/man/man4/ddb.4
> > head/sys/kern/kern_malloc.c
> > head/sys/vm/uma_core.c
> >
> > Modified: head/share/man/man4/ddb.4
> >
> > ==============================================================================
> > --- head/share/man/man4/ddb.4 Fri Oct 11 00:02:00 2019 (r353428)
> > +++ head/share/man/man4/ddb.4 Fri Oct 11 01:31:31 2019 (r353429)
> > @@ -60,7 +60,7 @@
> > .\"
> > .\" $FreeBSD$
> > .\"
> > -.Dd September 9, 2019
> > +.Dd October 10, 2019
> > .Dt DDB 4
> > .Os
> > .Sh NAME
> > @@ -806,11 +806,15 @@ is included in the kernel.
> > .It Ic show Cm locktree
> > .\"
> > .Pp
> > -.It Ic show Cm malloc
> > +.It Ic show Cm malloc Ns Op Li / Ns Cm i
> > Prints
> > .Xr malloc 9
> > memory allocator statistics.
> > -The output format is as follows:
> > +If the
> > +.Cm i
> > +modifier is specified, format output as machine-parseable comma-separated
> > +values ("CSV").
> > +The output columns are as follows:
> > .Pp
> > .Bl -tag -compact -offset indent -width "Requests"
> > .It Ic Type
> > @@ -1076,11 +1080,15 @@ Currently, those are:
> > .Xr rmlock 9 .
> > .\"
> > .Pp
> > -.It Ic show Cm uma
> > +.It Ic show Cm uma Ns Op Li / Ns Cm i
> > Show UMA allocator statistics.
> > -Output consists five columns:
> > +If the
> > +.Cm i
> > +modifier is specified, format output as machine-parseable comma-separated
> > +values ("CSV").
> > +The output contains the following columns:
> > .Pp
> > -.Bl -tag -compact -offset indent -width "Requests"
> > +.Bl -tag -compact -offset indent -width "Total Mem"
> > .It Cm "Zone"
> > Name of the UMA zone.
> > The same string that was passed to
> > @@ -1094,9 +1102,18 @@ Number of slabs being currently used.
> > Number of free slabs within the UMA zone.
> > .It Cm "Requests"
> > Number of allocations requests to the given zone.
> > +.It Cm "Total Mem"
> > +Total memory in use (either allocated or free) by a zone, in bytes.
> > +.It Cm "XFree"
> > +Number of free slabs within the UMA zone that were freed on a different NUMA
> > +domain than allocated.
> > +(The count in the
> > +.Cm "Free"
> > +column is inclusive of
> > +.Cm "XFree" . )
> > .El
> > .Pp
> > -The very same information might be gathered in the userspace
> > +The same information might be gathered in the userspace
> > with the help of
> > .Dq Nm vmstat Fl z .
> > .\"
> >
> > Modified: head/sys/kern/kern_malloc.c
> >
> > ==============================================================================
> > --- head/sys/kern/kern_malloc.c Fri Oct 11 00:02:00 2019 (r353428)
> > +++ head/sys/kern/kern_malloc.c Fri Oct 11 01:31:31 2019 (r353429)
> > @@ -1205,35 +1205,90 @@ restart:
> > }
> >
> > #ifdef DDB
> > +static int64_t
> > +get_malloc_stats(const struct malloc_type_internal *mtip, uint64_t *allocs,
> > + uint64_t *inuse)
> > +{
> > + const struct malloc_type_stats *mtsp;
> > + uint64_t frees, alloced, freed;
> > + int i;
> > +
> > + *allocs = 0;
> > + frees = 0;
> > + alloced = 0;
> > + freed = 0;
> > + for (i = 0; i <= mp_maxid; i++) {
> > + mtsp = zpcpu_get_cpu(mtip->mti_stats, i);
> > +
> > + *allocs += mtsp->mts_numallocs;
> > + frees += mtsp->mts_numfrees;
> > + alloced += mtsp->mts_memalloced;
> > + freed += mtsp->mts_memfreed;
> > + }
> > + *inuse = *allocs - frees;
> > + return (alloced - freed);
> > +}
> > +
> > DB_SHOW_COMMAND(malloc, db_show_malloc)
> > {
> > - struct malloc_type_internal *mtip;
> > - struct malloc_type_stats *mtsp;
> > + const char *fmt_hdr, *fmt_entry;
> > struct malloc_type *mtp;
> > - uint64_t allocs, frees;
> > - uint64_t alloced, freed;
> > - int i;
> > + uint64_t allocs, inuse;
> > + int64_t size;
> > + /* variables for sorting */
> > + struct malloc_type *last_mtype, *cur_mtype;
> > + int64_t cur_size, last_size;
> > + int ties;
> >
> > - db_printf("%18s %12s %12s %12s\n", "Type", "InUse", "MemUse",
> > - "Requests");
> > - for (mtp = kmemstatistics; mtp != NULL; mtp = mtp->ks_next) {
> > - mtip = (struct malloc_type_internal *)mtp->ks_handle;
> > - allocs = 0;
> > - frees = 0;
> > - alloced = 0;
> > - freed = 0;
> > - for (i = 0; i <= mp_maxid; i++) {
> > - mtsp = zpcpu_get_cpu(mtip->mti_stats, i);
> > - allocs += mtsp->mts_numallocs;
> > - frees += mtsp->mts_numfrees;
> > - alloced += mtsp->mts_memalloced;
> > - freed += mtsp->mts_memfreed;
> > + if (modif[0] == 'i') {
> > + fmt_hdr = "%s,%s,%s,%s\n";
> > + fmt_entry = "\"%s\",%ju,%jdK,%ju\n";
> > + } else {
> > + fmt_hdr = "%18s %12s %12s %12s\n";
> > + fmt_entry = "%18s %12ju %12jdK %12ju\n";
> > + }
> > +
> > + db_printf(fmt_hdr, "Type", "InUse", "MemUse", "Requests");
> > +
> > + /* Select sort, largest size first. */
> > + last_mtype = NULL;
> > + last_size = INT64_MAX;
> > + for (;;) {
> > + cur_mtype = NULL;
> > + cur_size = -1;
> > + ties = 0;
> > +
> > + for (mtp = kmemstatistics; mtp != NULL; mtp = mtp->ks_next) {
> > + /*
> > + * In the case of size ties, print out mtypes
> > + * in the order they are encountered. That is,
> > + * when we encounter the most recently output
> > + * mtype, we have already printed all preceding
> > + * ties, and we must print all following ties.
> > + */
> > + if (mtp == last_mtype) {
> > + ties = 1;
> > + continue;
> > + }
> > + size = get_malloc_stats(mtp->ks_handle, &allocs,
> > + &inuse);
> > + if (size > cur_size && size < last_size + ties) {
> > + cur_size = size;
> > + cur_mtype = mtp;
> > + }
> > }
> > - db_printf("%18s %12ju %12juK %12ju\n",
> > - mtp->ks_shortdesc, allocs - frees,
> > - (alloced - freed + 1023) / 1024, allocs);
> > + if (cur_mtype == NULL)
> > + break;
> > +
> > + size = get_malloc_stats(cur_mtype->ks_handle, &allocs, &inuse);
> > + db_printf(fmt_entry, cur_mtype->ks_shortdesc, inuse,
> > + howmany(size, 1024), allocs);
> > +
> > if (db_pager_quit)
> > break;
> > +
> > + last_mtype = cur_mtype;
> > + last_size = cur_size;
> > }
> > }
> >
> >
> > Modified: head/sys/vm/uma_core.c
> >
> > ==============================================================================
> > --- head/sys/vm/uma_core.c Fri Oct 11 00:02:00 2019 (r353428)
> > +++ head/sys/vm/uma_core.c Fri Oct 11 01:31:31 2019 (r353429)
> > @@ -4341,39 +4341,100 @@ uma_dbg_free(uma_zone_t zone, uma_slab_t slab, void *i
> >
> > #ifdef DDB
> > +static int64_t
> > +get_uma_stats(uma_keg_t kz, uma_zone_t z, uint64_t *allocs, uint64_t *used,
> > + uint64_t *sleeps, uint64_t *xdomain, long *cachefree)
>
> xdomain and cachefree are reversed by callers of this function. Probably
> simpler to change the definition here than the two use instances below.
>
> > +{
> > + uint64_t frees;
> > + int i;
> > +
> > + if (kz->uk_flags & UMA_ZFLAG_INTERNAL) {
> > + *allocs = counter_u64_fetch(z->uz_allocs);
> > + frees = counter_u64_fetch(z->uz_frees);
> > + *sleeps = z->uz_sleeps;
> > + *cachefree = 0;
> > + *xdomain = 0;
> > + } else
> > + uma_zone_sumstat(z, cachefree, allocs, &frees, sleeps,
> > + xdomain);
> > + if (!((z->uz_flags & UMA_ZONE_SECONDARY) &&
> > + (LIST_FIRST(&kz->uk_zones) != z)))
> > + *cachefree += kz->uk_free;
> > + for (i = 0; i < vm_ndomains; i++)
> > + *cachefree += z->uz_domain[i].uzd_nitems;
> > + *used = *allocs - frees;
> > + return (((int64_t)*used + *cachefree) * kz->uk_size);
> > +}
> > +
> > DB_SHOW_COMMAND(uma, db_show_uma)
> > {
> > + const char *fmt_hdr, *fmt_entry;
> > uma_keg_t kz;
> > uma_zone_t z;
> > - uint64_t allocs, frees, sleeps, xdomain;
> > + uint64_t allocs, used, sleeps, xdomain;
> > long cachefree;
> > - int i;
> > + /* variables for sorting */
> > + uma_keg_t cur_keg;
> > + uma_zone_t cur_zone, last_zone;
> > + int64_t cur_size, last_size, size;
> > + int ties;
> >
> > - db_printf("%18s %8s %8s %8s %12s %8s %8s %8s\n", "Zone", "Size", "Used",
> > - "Free", "Requests", "Sleeps", "Bucket", "XFree");
> > - LIST_FOREACH(kz, &uma_kegs, uk_link) {
> > - LIST_FOREACH(z, &kz->uk_zones, uz_link) {
> > - if (kz->uk_flags & UMA_ZFLAG_INTERNAL) {
> > - allocs = counter_u64_fetch(z->uz_allocs);
> > - frees = counter_u64_fetch(z->uz_frees);
> > - sleeps = z->uz_sleeps;
> > - cachefree = 0;
> > - } else
> > - uma_zone_sumstat(z, &cachefree, &allocs,
> > - &frees, &sleeps, &xdomain);
> > - if (!((z->uz_flags & UMA_ZONE_SECONDARY) &&
> > - (LIST_FIRST(&kz->uk_zones) != z)))
> > - cachefree += kz->uk_free;
> > - for (i = 0; i < vm_ndomains; i++)
> > - cachefree += z->uz_domain[i].uzd_nitems;
> > + /* /i option produces machine-parseable CSV output */
> > + if (modif[0] == 'i') {
> > + fmt_hdr = "%s,%s,%s,%s,%s,%s,%s,%s,%s\n";
> > + fmt_entry = "\"%s\",%ju,%jd,%ld,%ju,%ju,%u,%jd,%ju\n";
> > + } else {
> > + fmt_hdr = "%18s %6s %7s %7s %11s %7s %7s %10s %8s\n";
> > + fmt_entry = "%18s %6ju %7jd %7ld %11ju %7ju %7u %10jd %8ju\n";
> > + }
> >
> > - db_printf("%18s %8ju %8jd %8ld %12ju %8ju %8u %8ju\n",
> > - z->uz_name, (uintmax_t)kz->uk_size,
> > - (intmax_t)(allocs - frees), cachefree,
> > - (uintmax_t)allocs, sleeps, z->uz_count, xdomain);
> > - if (db_pager_quit)
> > - return;
> > + db_printf(fmt_hdr, "Zone", "Size", "Used", "Free", "Requests",
> > + "Sleeps", "Bucket", "Total Mem", "XFree");
> > +
> > + /* Sort the zones with largest size first. */
> > + last_zone = NULL;
> > + last_size = INT64_MAX;
> > + for (;;) {
> > + cur_zone = NULL;
> > + cur_size = -1;
> > + ties = 0;
> > + LIST_FOREACH(kz, &uma_kegs, uk_link) {
> > + LIST_FOREACH(z, &kz->uk_zones, uz_link) {
> > + /*
> > + * In the case of size ties, print out zones
> > + * in the order they are encountered. That is,
> > + * when we encounter the most recently output
> > + * zone, we have already printed all preceding
> > + * ties, and we must print all following ties.
> > + */
> > + if (z == last_zone) {
> > + ties = 1;
> > + continue;
> > + }
> > + size = get_uma_stats(kz, z, &allocs, &used,
> > + &sleeps, &cachefree, &xdomain);
>
> cachefree and xdomain are reversed from the function header above.
>
> > + if (size > cur_size && size < last_size + ties) {
> > + cur_size = size;
> > + cur_zone = z;
> > + cur_keg = kz;
> > + }
> > + }
> > }
> > + if (cur_zone == NULL)
> > + break;
> > +
> > + size = get_uma_stats(cur_keg, cur_zone, &allocs, &used,
> > + &sleeps, &cachefree, &xdomain);
>
> cachefree and xdomain are reversed from the function header above.
>
> > + db_printf(fmt_entry, cur_zone->uz_name,
> > + (uintmax_t)cur_keg->uk_size, (intmax_t)used, cachefree,
> > + (uintmax_t)allocs, (uintmax_t)sleeps,
> > + (unsigned)cur_zone->uz_count, (intmax_t)size, xdomain);
> > +
> > + if (db_pager_quit)
> > + return;
> > + last_zone = cur_zone;
> > + last_size = cur_size;
> > }
> > }
> >
> >
>
>
>
> --
> Cheers,
> Cy Schubert <Cy.Schubert at cschubert.com>
> FreeBSD UNIX: <cy at FreeBSD.org> Web: http://www.FreeBSD.org
>
> The need of the many outweighs the greed of the few.
>
>
>