Handling 100,000 packets/sec or more

Andre Oppermann andre at freebsd.org
Thu Jan 15 03:00:30 PST 2004


Vlad Galu wrote:
> 
> Adrian Penisoara <ady at freebsd.ady.ro> writes:
> 
> |Hi again,
> |
> |  Thanks for all your answers.
> |
> |  A small comment though.
> |
> |Vlad Galu wrote:
> |
> |>      Try fxp. It has better polling support, and there's the
> |>advantage of
> |>the link0 flag. When it's set, the interface won't send interrupts to
> |
> | The man page says that only some versions of the chipset support this
> |(microcode download). Do you (or anyone else) know the exact version(s)
> |of the EtherExpress chip that support this (and perhaps you have
> |tried it)?
> |
> | Oh well, looking at the source code it seems you can identify the
> |supported revisions here: sys/dev/fxp/rcvbundl.h (Intel source) and
> |sys/dev/fxp/if_fxp.c (at the end of the file).
> |
> | In summary:
> |
> |   FXP_REV_82558_A4
> |   FXP_REV_82558_B0
> |   FXP_REV_82559_A0
> |   FXP_REV_82559S_A
> |   FXP_REV_82550
> |   FXP_REV_82550_C
> |
> | Or by Intel revision codes:
> |
> |D101 A-step, D101 B-step, D101M (B-step only), D101S, D102 B-step,
> |D102 B-step with TCO workaround and D102 C-step.
> |
> |  I did not quite understand whether the embedded ICH3/4 network
> |interfaces are also "link0"-enabled.
> |
> |>the kernel for each packet it catches from the wire, but instead will
> |>wait until its own buffer is full, and generate an interrupt
> |>afterwards.
> |>It should be a great improvement when combined with device
> |>polling. As you surely know, when the kernel receives an interrupt
> |>from an interface, it masks all further interrupts and schedules a
> |>polling task instead.
> |
> |[...]
> |
> |>|  On a side note: what would be an adequate formula to calculate the
> |>|NMBCLUSTERS and MBUFS we should set on this server (via boot-time
> |>|kern.ipc.nmbclusters and kern.ipc.nmbufs) ?
> |>|
> |>
> |>      I'm still thinking about that ...
> |>
> |
> |  Did you come up with anything ?
> 
> 
>         In the mbuf man page they say that a packet can span multiple
> mbuf structs. The mbuf memory is divided into mbuf clusters, each of
> them MCLBYTES (2048 bytes) in size. OK, now try to allocate as many
> NMBCLUSTERS as you can, while reserving some memory for userspace. If
> you want to reserve, let's say, 256 MB of KVM for this, you could then
> have 131072 mbuf clusters. Scaled by 4, this gives 524288 - the
> total number of mbufs available to the system. The larger this number,
> the more packets your system can process.

I'm sorry, but this assumption is incorrect.

In all packet forwarding applications it is not the amount of packet
buffer memory that matters as such (it does, but only to a certain
extent) but how fast the machine can actually process (receive and
send) the packets.  If you fall behind on either side, no amount of
packet buffer memory can save you: at 100,000 packets/sec, for
example, 131072 clusters of buffering absorb only a little over one
second of backlog before they overflow.
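
That is where the polling and link0 suggestions from the quoted mail
come in, since they cut the per-packet interrupt cost.  Roughly, on a
4.x/5.x-era box the setup being discussed looks like this (fxp0,
HZ=1000 and the global polling sysctl are only examples for that era;
check polling(4) and fxp(4) for your version):

  # kernel config: compile in device polling and raise the clock rate,
  # since the polling work is done HZ times per second
  options DEVICE_POLLING
  options HZ=1000

  # enable polling at runtime (global switch in 4.x/5.x)
  sysctl kern.polling.enable=1

  # set link0 so the interrupt-bundling microcode is downloaded
  # (only effective on the chip revisions listed above)
  ifconfig fxp0 link0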
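
For completeness, the sizing Vlad sketches above translates into boot
loader tunables along these lines (the 256 MB figure is just the
example from his mail, not a recommendation):

  # /boot/loader.conf
  # 256 MB of KVM / 2048 bytes per cluster = 131072 clusters
  kern.ipc.nmbclusters="131072"
  # plain mbufs are conventionally scaled to 4x the cluster count
  kern.ipc.nmbufs="524288"

But again, larger numbers only buy you tolerance for short bursts,
not a higher sustained forwarding rate.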

-- 
Andre

