BeagleBone slow inbound net I/O

Tim Kientzle tim at kientzle.com
Sat Mar 14 18:12:34 UTC 2015


Paul’s data looks more like what I expect from a healthy network;
a few explanations below:

> On Mar 14, 2015, at 8:42 AM, Paul Mather <paul at gromit.dlib.vt.edu> wrote:
> 
> Here is another data point from my BBB:
> 
> pmather at beaglebone:~ % sysctl dev.cpsw
> dev.cpsw.0.stats.GoodRxFrames: 4200799
> dev.cpsw.0.stats.RxStartOfFrameOverruns: 1708

In Paul's case, the only non-zero “error” counter was
RxStartOfFrameOverruns, which affected only about 0.04%
of all RX frames (1708 of 4200799).

This is comparable to what I see on my network.
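
For reference, here is roughly how that ratio falls out of the
counters above.  This is just a quick /bin/sh sketch (it assumes
the same dev.cpsw.0 sysctl names from your output):

    # overrun rate as a percentage of good RX frames
    good=$(sysctl -n dev.cpsw.0.stats.GoodRxFrames)
    over=$(sysctl -n dev.cpsw.0.stats.RxStartOfFrameOverruns)
    echo "$over $good" | awk '{ printf "%.2f%%\n", 100 * $1 / $2 }'

With your numbers that is 1708 / 4200799, or about 0.04%.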

> dev.cpsw.0.queue.tx.totalBuffers: 128
> dev.cpsw.0.queue.tx.maxActiveBuffers: 7
> dev.cpsw.0.queue.tx.longestChain: 4

Paul’s stress tests managed to get 7 mbufs onto
the hardware TX queue at the same time (out of the
128 slots reserved for it).  At some point, a single
TX packet required a chain of 4 mbufs.
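
If you want to watch those high-water marks while a stress
test is running, something like this should work (plain
/bin/sh, same sysctl names as in your output):

    # poll the TX queue high-water marks every few seconds
    while :; do
        sysctl dev.cpsw.0.queue.tx.maxActiveBuffers \
               dev.cpsw.0.queue.tx.longestChain
        sleep 5
    done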


> dev.cpsw.0.queue.rx.totalBuffers: 384
> dev.cpsw.0.queue.rx.maxAvailBuffers: 55

Paul managed to stress the RX side a little harder:
At one point, there were 55 unprocessed mbufs
on the hardware RX queue.

That is still well short of the 384 total, so the RX
queue never actually saturated.  If you did manage to
saturate it, that could also lead to packet loss, though
TCP should adapt automatically; I wouldn’t expect a
saturated queue to cause the kind of throughput
degradation you would get from more random errors.
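
To see how close the RX queue ever came to filling up, you can
compare that peak against the total, along these lines (again
plain /bin/sh; the 90% threshold here is just an arbitrary
choice of mine):

    # warn if the peak RX backlog ever came near the queue size
    total=$(sysctl -n dev.cpsw.0.queue.rx.totalBuffers)
    peak=$(sysctl -n dev.cpsw.0.queue.rx.maxAvailBuffers)
    if [ "$peak" -ge $((total * 9 / 10)) ]; then
        echo "RX queue nearly saturated: peak $peak of $total"
    else
        echo "plenty of headroom: peak $peak of $total"
    fi

In your case that would report 55 of 384, i.e. plenty of headroom.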

Tim


