if_em, legacy nic and GbE saturation

Harald Schmalzbauer h.schmalzbauer at omnilan.de
Mon Aug 26 09:28:58 UTC 2013


 Regarding Adrian Chadd's message of 26.08.2013 10:34 (localtime):
> Hi,
>
> There's bus limits on how much data you can push over a PCI bus. You
> can look around online to see what 32/64 bit, 33/66MHz PCI throughput
> estimates are.
>
> It changes massively if you use small versus large frames as well.
>
> The last time I tried it i couldn't hit gige on PCI; I only managed to
> get to around 350mbit doing TCP tests.

Thanks, I'm roughly aware of the PCI bus limit, but I guess it should
be good for almost GbE: 33 MHz * 32 bit = 1056 Mbit/s theoretical, so
even allowing for overhead and other bus-blocking things (nothing of
significance is active on the PCI bus in this case), I'd expect at
least 800 Mbit/s, which is what I get with jumbo frames.
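For reference, the back-of-the-envelope arithmetic behind those numbers
(the ~80% efficiency factor is my own rough assumption for arbitration
and protocol overhead, not a measured value):

```python
# Theoretical peak of a conventional 32-bit/33 MHz PCI bus,
# as quoted in the text above.
PCI_CLOCK_HZ = 33e6      # 33 MHz PCI clock
PCI_WIDTH_BITS = 32      # 32-bit data path

theoretical_mbit = PCI_CLOCK_HZ * PCI_WIDTH_BITS / 1e6
print(f"theoretical peak: {theoretical_mbit:.0f} Mbit/s")   # 1056 Mbit/s

# Assuming roughly 80% is achievable after bus arbitration and
# transaction overhead (an illustrative guess):
print(f"~80% of peak:     {theoretical_mbit * 0.8:.0f} Mbit/s")  # ~845 Mbit/s
```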
I also know that lagg won't help with regard to concurrent throughput
because of the PCI limit; it's for redundancy that I also use 2 NICs
in that parking machine.

I just have no explanation for the noticeable difference I see between
MTU 1500 and 9000 on the legacy if_em NIC, which doesn't show up with
the second, on-board NIC (82566), which uses different if_em code.
I can imagine that it's related to PCI transfer limits (the 82566 is
ICH9-integrated and connects via DMI to the CPU, so there's no PCI
constraint), but if someone has more than a guess, an explanation
would be highly appreciated :-)
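One plausible angle (my speculation, not a confirmed explanation): the
per-packet cost of DMA descriptor setup and PCI transactions scales with
the frame rate, and the frame rate at GbE line rate differs hugely
between the two MTUs:

```python
# Frames/s needed to saturate GbE at MTU 1500 vs. 9000.
# 38 bytes = Ethernet header + FCS + preamble + inter-frame gap.
LINE_RATE_BPS = 1e9

for mtu in (1500, 9000):
    frame_bytes = mtu + 38
    pps = LINE_RATE_BPS / (frame_bytes * 8)
    print(f"MTU {mtu}: ~{pps:,.0f} frames/s")
# MTU 1500 requires roughly 6x the per-packet bus transactions of MTU 9000.
```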

But if you've seen similar constraints on other (non-if_em?)
PCI-connected NICs, I'll leave it as it is. I just wanted some
confirmation that it's normal for single-GbE not to play well with PCI.

Thank you,

-Harry




More information about the freebsd-stable mailing list