Accounting for mbufs and clusters assigned to a socket buffer

gnn at freebsd.org
Mon Apr 28 00:19:31 UTC 2008


At Fri, 25 Apr 2008 15:58:28 +0200,
andre wrote:
> 
> gnn at freebsd.org wrote:
> > Howdy,
> > 
> > The following patch updates the kernel (CURRENT as of 23 April or so)
> > and netstat(1) to show not only the bytes in the receive and send
> > queues but also the mbuf and cluster usage per socket buffer.  I'd be
> > interested in people's comments on this.  I'd like to extend such
> > counting to show more information, in particular how much of a cluster
> > or mbuf is actually in use.
> 
> The intent of tracking that information is good.  However, there are some
> problems with your approach: M_EXT does not mean the mbuf has a 2k cluster
> attached.  It could be any external storage: a 2k (classic) cluster,
> a 4k (page-size) cluster, a 9k cluster, a VM page (sendfile), and so on.

Yup, this is a first cut.
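
To make that concrete, here is a rough sketch of the kind of per-type
classification a later cut might do while walking a socket buffer's
mbuf chain.  The counter struct and function name are made up for
illustration; M_EXT and the EXT_* constants are the ones in
sys/mbuf.h.

	#include <sys/param.h>
	#include <sys/mbuf.h>

	struct sb_extstats {
		u_long	plain;	/* internal mbuf storage only */
		u_long	cl2k;	/* EXT_CLUSTER: classic 2k cluster */
		u_long	cl4k;	/* EXT_JUMBOP: page-size cluster */
		u_long	cl9k;	/* EXT_JUMBO9 */
		u_long	cl16k;	/* EXT_JUMBO16 */
		u_long	sfbuf;	/* EXT_SFBUF: sendfile(2) VM page */
		u_long	other;	/* any other external storage */
	};

	static void
	sb_count_ext(const struct mbuf *m, struct sb_extstats *st)
	{
		for (; m != NULL; m = m->m_next) {
			if ((m->m_flags & M_EXT) == 0) {
				st->plain++;
				continue;
			}
			switch (m->m_ext.ext_type) {
			case EXT_CLUSTER:
				st->cl2k++;
				break;
			case EXT_JUMBOP:
				st->cl4k++;
				break;
			case EXT_JUMBO9:
				st->cl9k++;
				break;
			case EXT_JUMBO16:
				st->cl16k++;
				break;
			case EXT_SFBUF:
				st->sfbuf++;
				break;
			default:
				st->other++;
				break;
			}
		}
	}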

> The field sb_mbcnt already gives you the aggregated gross storage
> space in use for the socket.  And sb_cc tells how much actual
> payload it contains.

Right, but it does not tell us the mix of clusters vs. mbufs, which is
useful for tuning.

For instance, if you find you have high cluster usage only because
mbufs are 256 bytes (that is, your application's normal traffic
averages 300-byte packets), that might lead you to push mbufs to 512
bytes.
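
A back-of-the-envelope illustration of that tuning argument (HDR_OVHD
is an assumed round number for the mbuf/pkthdr header overhead, not
the real MHLEN arithmetic from sys/mbuf.h):

	#include <stdio.h>

	#define MCLBYTES	2048	/* classic 2k cluster */
	#define HDR_OVHD	56	/* assumed header overhead per pkthdr mbuf */

	static void
	show(int msize, int payload)
	{
		int space = msize - HDR_OVHD;	/* usable data space in the mbuf */

		if (payload <= space)
			printf("%d-byte payload fits in a %d-byte mbuf, no cluster\n",
			    payload, msize);
		else
			printf("%d-byte payload needs a 2k cluster, wasting %d bytes\n",
			    payload, MCLBYTES - payload);
	}

	int
	main(void)
	{
		show(256, 300);	/* today: every such packet drags in a cluster */
		show(512, 300);	/* with 512-byte mbufs it would fit inline */
		return (0);
	}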

> Just printing the already available sb_mbcnt in netstat is probably
> sufficient to get a good real memory usage picture.  sb_mbcnt is
> already exported in xsb and doesn't require a KPI change.
> 

Well, it is perhaps interesting in that it shows space wastage, but I
don't think it is as complete a solution for showing the mix of
cluster and mbuf usage.

I do think I should come up with a way to print it, though, which
probably requires a new flag to netstat so that the normal output is
not cluttered.
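
For the sb_mbcnt route, something along these lines could be printed
from the fields already exported in struct xsockbuf (sb_mbcnt and
sb_cc); the helper name and output format here are hypothetical:

	#include <sys/param.h>
	#include <sys/queue.h>
	#include <sys/socket.h>
	#include <sys/socketvar.h>
	#include <stdio.h>

	static void
	print_sb_usage(const char *dir, const struct xsockbuf *sb)
	{
		u_long gross = sb->sb_mbcnt;	/* mbuf + cluster storage allocated */
		u_long payload = sb->sb_cc;	/* bytes of data actually queued */
		double pct = gross != 0 ? (double)payload * 100.0 / gross : 0.0;

		printf("%s: %lu/%lu bytes used (%.1f%%)\n", dir, payload, gross, pct);
	}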

Best,
George

