Re: AF_UNIX socketpair dgram queue sizes

From: Jan Schaumann via freebsd-net <>
Date: Wed, 10 Nov 2021 05:05:33 UTC
Mark Johnston <> wrote:

> There is an additional factor: wasted space.  When writing data to a
> socket, the kernel buffers that data in mbufs.  All mbufs have some
> amount of embedded storage, and the kernel accounts for that storage,
> whether or not it's used.  With small byte datagrams there can be a lot
> of overhead;

I'm observing two mbufs being allocated for each
datagram for small datagrams, but only one mbuf for
larger datagrams.

That seems counter-intuitive to me?

> The kern.ipc.sockbuf_waste_factor sysctl controls the upper limit on
> total bytes (used or not) that may be enqueued in a socket buffer.  The
> default value of 8 means that we'll waste up to 7 bytes per byte of
> data, I think.  Setting it higher should let you enqueue more messages.

Ah, this looks like something relevant.

With kern.ipc.sockbuf_waste_factor=1, I can only
write 8 1-byte datagrams.  Each increase of the waste
factor by one lets me write another 8 1-byte
datagrams, up until the waste factor exceeds 29, at
which point we hit recvspace: 30 * 8 = 240, so 240
1-byte datagrams with 16 bytes of dgram overhead each
means we get 240 * 17 = 4080 bytes, which just fits
(well, with room for one empty 16-byte dgram) into
recvspace = 4096.
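The numbers above are consistent with each small datagram being
accounted as two full-size mbufs (the two-mbuf allocation I observed
earlier).  A back-of-the-envelope sketch -- assuming FreeBSD's
MSIZE of 256 bytes per mbuf, which is my assumption, not something
stated in this thread:

```python
# Back-of-the-envelope check of the counts above.
# Assumption (not from the thread): MSIZE = 256 bytes of accounted
# storage per mbuf, and each small AF_UNIX datagram is accounted as
# two mbufs, i.e. 512 bytes per 1-byte datagram.

MSIZE = 256          # accounted bytes per mbuf (assumed)
MBUFS_PER_DGRAM = 2  # two mbufs per small datagram, as observed
RECVSPACE = 4096     # net.local.dgram.recvspace default
DGRAM_OVERHEAD = 16  # per-datagram overhead noted above

def max_small_dgrams(waste_factor):
    """1-byte datagrams admitted before hitting either limit."""
    # The waste factor caps total accounted mbuf storage at
    # waste_factor * recvspace; each 1-byte dgram accounts 512 bytes.
    by_waste = (waste_factor * RECVSPACE) // (MBUFS_PER_DGRAM * MSIZE)
    # recvspace itself caps data + per-dgram overhead at 4096 bytes.
    by_recvspace = RECVSPACE // (1 + DGRAM_OVERHEAD)
    return min(by_waste, by_recvspace)

print(max_small_dgrams(1))   # -> 8: matches the observation above
print(max_small_dgrams(29))  # -> 232: still waste-factor limited
print(max_small_dgrams(30))  # -> 240: now recvspace is the limit
```

This would explain the factor of 8 per waste-factor step:
4096 / (2 * 256) = 8.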

But I still don't get the direct relationship between
the waste factor and the recvspace / buffer queue:
with a waste_factor of 1 and a datagram with 1972
bytes, I'm able to write one dgram with 1972 bytes +
1 dgram with 1520 bytes = 3492 bytes (plus 2 * 16
bytes overhead = 3524 bytes).  There'd still have been
space for 572 more bytes in the second dgram.

Likewise, trying to write a single 1973-byte dgram
fills the queue and no additional bytes can be written
in a second dgram, but I can write a single 2048-byte
dgram.
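For reference, the kind of measurement I'm describing can be
sketched like this (a minimal harness, not my exact test program;
the counts depend on the sysctls discussed above and will differ
on other systems):

```python
# Minimal sketch: count how many size-byte datagrams fit into one
# side of an AF_UNIX SOCK_DGRAM socketpair before the queue fills.
# The exact counts depend on kern.ipc.sockbuf_waste_factor and
# net.local.dgram.recvspace on FreeBSD; other OSes cap the queue
# differently.
import socket

def count_dgrams(size):
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    a.setblocking(False)
    payload = b"x" * size
    n = 0
    try:
        while True:
            a.send(payload)  # raises BlockingIOError once full
            n += 1
    except BlockingIOError:
        pass  # queue is full
    finally:
        a.close()
        b.close()
    return n

if __name__ == "__main__":
    for size in (1, 1972, 1973, 2048):
        print(size, count_dgrams(size))
```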

Still confused...