FBSD 1GBit router?

Erik Trulsson ertr1013 at student.uu.se
Sun Mar 2 02:12:15 UTC 2008


On Sat, Mar 01, 2008 at 04:39:57PM -0800, Barney Cordoba wrote:
> 
> --- Erik Trulsson <ertr1013 at student.uu.se> wrote:
> 
> > On Sat, Mar 01, 2008 at 01:27:46PM -0800, Barney
> > Cordoba wrote:
> > > 
> > > --- Ingo Flaschberger <if at xip.at> wrote:
> > > 
> > > > Dear Barney,
> > > > 
> > > > > It seems absolutely ridiculous to buy such
> > > > > hardware and not install a PCIx or 4x PCIe card
> > > > > for another $100 or less. Saying a 1x is "fast
> > > > > enough" is like saying a Celeron is "fast enough".
> > > > 
> > > > The box is a small 1HE appliance and can boot
> > > > from a CF-Card.
> > > > I trust them more than an "el cheapo" pc.
> > > > 1x axiomtek NA-820
> > > > 1x P4 3Ghz cpu
> > > > 1x 1gb ddr2
> > > > ---
> > > > 850eur without taxes.
> > > > 
> > > > A good chipset, good cpu, good ram, good harddisk,
> > > > good powersupply has the same price.
> > > > And don't forget that in exchanges you pay for
> > > > each HE.
> > > > 
> > > > And back to 1x is not fast enough:
> > > > There are no 1gbit single port network cards that
> > > > support more than 1 lane, even if you plug it into
> > > > a 16 lane slot.
> > > > (and I'm not talking about 10gbit cards; if you
> > > > have 10gbit upstream you have enough $$ to buy
> > > > good gear)
> > > 
> > > Ok, well I've never seen a router with 1 port.  I
> > > thought we were talking about building a router? 
> > 
> > He did not say anything about a single port router.
> > He talked about single port network cards.  You can
> > use more than one of them when building a router.
> 
> His argument is that there are only 1x PCIe cards that
> have 1 port. Since he needs 2 ports, and there are 2
> port PCIe cards, then his argument makes no sense. But
> the point is that PCIe NICs are implemented with 1 port
> per lane in the chip. So a 2 port card will use 2 lanes.

No, the argument was that all 1-port PCI-E cards are only
x1, ergo a x1 card must be fast enough for 1 port, else there
would be 1-port cards manufactured that used more than
one lane.



> 
> > 
> > > 
> > > The lack of PCIe cards is a good reason to
> > > consider a PCIX machine.
> > 
> > What lack of PCI-E cards?  These days there are
> > quite a
> > few to choose between.
> 
> Yes, but they are all 1x, while there are many 1 and 2
> port PCIx cards which are twice as fast.

There are dual- and quad-port PCI-E cards available too, and they are
generally x4 lane models.

Today there really is very little reason to use PCI-X instead
of PCI-E when one is putting together a brand new system.



> 
> > 
> > > On the systems that we have, the 1x PCIe
> > > ports are a lot slower than a PCI-X card in the
> > > slot.
> > > 
> > > You need 4Gb/s of throughput to handle a gigabit
> > > router. (1 Gb/s full duplex times 2).  1x is 4Gb/s
> > > maximum. In my view, you always need twice the
> > > bandwidth on the bus to avoid contention issues.
> > 
> > What contention issues?  With PCI-E each device is
> > essentially on its own
> > bus and does not need to contend with other devices
> > for bandwidth on that
> > bus.
> > 
> > A PCI-E 1x connection provides more bandwidth than
> > one gigabit ethernet
> > connection can use.
> 
> Does each PCIe slot have its own dedicated memory
> controller?  The concept that there is some sort of
> mutually exclusive, independent path for each
> controller is simply not the case in practice. You're
> accessing the same memory;  you're going through
> shared hubs and bridges. You're doing I/O on the same
> bus. North bridges typically have a 512 Byte payload
> maximum, so you can't even burst a full packet.  You
> have transaction overhead for each transfer. There are
> many factors that will chip away at your realizable
> bandwidth. It's not like a hose gushing a continuous
> stream of water.

Sure, but those limitations apply equally regardless of 
what kind of slots you use.

A x8 lane PCI-E card will suffer just as much contention
on the FSB between the CPU and the chipset as a x1 lane card
will.  Having a wider channel between the NIC and the chipset
will not help if the bottleneck is elsewhere.
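
To put some rough numbers on that (all of them assumed: an
800 MHz, 64-bit FSB and worst-case line-rate traffic on two
gigabit ports; sustained memory bandwidth is of course lower
than the theoretical peak):

    # Rough estimate of the load a 2-port gigabit router puts on the
    # shared FSB/memory path, independent of which slot the NICs use.
    fsb_gbps     = 800e6 * 8 * 8 / 1e9   # ~51 Gb/s theoretical FSB peak
    ingress_gbps = 2.0                   # both ports receiving at 1 Gb/s
    egress_gbps  = 2.0                   # both ports sending at 1 Gb/s
    # Every forwarded packet is DMAed into RAM once (rx) and read back
    # out once (tx), no matter whether the NIC sits on PCI-E or PCI-X.
    dma_gbps = ingress_gbps + egress_gbps
    print("FSB peak: %.0f Gb/s, DMA load: %.0f Gb/s (%.0f%%)"
          % (fsb_gbps, dma_gbps, 100.0 * dma_gbps / fsb_gbps))
    # -> well under 10% of the shared path; per-packet CPU cost is what
    #    usually saturates first, not the width of the slot.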



> 
> Another factor is that "server" chipsets do PCIe
> better than "desktop" chipsets. Server chipsets are
> optimized to handle multiple devices
> more efficiently. So don't expect your "desktop"
> chipset to be as efficient as a server chipset at the
> same task. There's a reason that Intel has desktop and
> server chipsets. 

Intel uses the same southbridge chips for all their single-CPU
chipsets, regardless of whether the northbridge is
labeled as a server or desktop part.  Most I/O devices
are normally connected to the southbridge.


> 
> I've tested dual port cards based on the 82546 and
> 82571 parts on the same system that has both PCIX and
> PCIe slots. The parts are essentially the same, and
> the driver is essentially the same, with the bus being
> the only real difference. The PCIe card is a 2x card.

PCI-E cards and slots only come in 4 physical sizes:
x1, x4, x8, and x16.
It is not uncommon for a slot to have fewer lanes
attached to it than the physical size allows, so there
do exist some 'x2' slots.  I have however never seen any 
PCI-E cards being sold as x2, only as one of the standard
widths.


> so it's the equivalent of 2 PCIe cards with dedicated
> 1x lanes. The results are that the PCIx card is simply
> more efficient, in that it uses less CPU with the same
> load (an indication of less contention for I/O), and
> the PCIX card has a higher capacity (that is, the
> point at which the cpu is saturated is higher by about
> 10%).

On that particular system, with those particular cards, it
might well be the case that the PCI-X card performed better
than the PCI-E card.  That says essentially nothing about
the relative performance of PCI-X vs PCI-E in general, and
I doubt that there was any lack of bandwidth between either
of the cards and the chipset anyway.

How the different buses and chips are connected on the motherboard
can cause quite a bit of difference.

If, for example, the PCI-X bus was connected to the Northbridge of the
chipset, while the PCI-E x1 slots were connected to the Southbridge, and
there was a fairly narrow connection (equivalent to a x4 lane PCI-E bus)
between the north- and south-bridge, then you could well get the results you
saw.  On another motherboard, with a different topology, you might get the
opposite results from those same cards.
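
As a sketch of that (again with assumed numbers: PCI-E 1.x
lanes at 2.5 GT/s with 8b/10b encoding, and a north-south
interconnect equivalent to 4 such lanes):

    # Hypothetical layout: both gigabit NICs behind the southbridge,
    # north-south interconnect roughly equivalent to 4 PCI-E 1.x lanes.
    lane_gbps      = 2.5 * 8.0 / 10.0    # ~2.0 Gb/s usable per lane
    interlink_gbps = 4 * lane_gbps       # ~8.0 Gb/s per direction (assumed)
    rx_up_gbps   = 2.0   # both ports at line rate: DMA writes toward RAM
    tx_down_gbps = 2.0   # both ports at line rate: DMA reads toward NICs
    print("northbound: %.0f of %.0f Gb/s" % (rx_up_gbps, interlink_gbps))
    print("southbound: %.0f of %.0f Gb/s" % (tx_down_gbps, interlink_gbps))
    # Disk, USB, etc. share that interconnect, so a board that hangs its
    # PCI-X bus directly off the northbridge can come out ahead even if
    # the individual slots look "slower" on paper.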




> 
> In practice your little box is never going to be
> routing bi-directional, full gigabit traffic, but if you
> only have "just enough" bandwidth, your CPU is going
> to get less and less efficient as you get higher and
> higher usage. You'll need more cpu to do the same task
> with less bus.

But a x1 lane PCI-E connection does not provide "just enough"
bandwidth for a gigabit ethernet connection.  It provides
"more than enough" bandwidth. (The theoretical bandwidth
of such a x1 lane is twice what is needed for the ethernet
connection, which is quite enough even taking into account
various overheads.)
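
For reference, a quick back-of-the-envelope calculation
(assuming PCI-E 1.x signalling at 2.5 GT/s per lane with
8b/10b encoding; adjust if your hardware differs):

    # One PCI-E 1.x lane vs. one gigabit ethernet port, per direction.
    pcie_raw_gbps  = 2.5                        # 2.5 GT/s per lane
    pcie_data_gbps = pcie_raw_gbps * 8.0 / 10.0 # 8b/10b -> 2.0 Gb/s usable
    gige_gbps      = 1.0                        # gigabit ethernet line rate
    print("x1 lane usable:   %.1f Gb/s each way" % pcie_data_gbps)
    print("gigabit ethernet: %.1f Gb/s each way" % gige_gbps)
    print("headroom:         %.1fx" % (pcie_data_gbps / gige_gbps))
    # -> roughly 2x headroom per direction, before TLP/transaction
    #    overhead, which does not come close to eating the other half.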


> 
> I'm not claiming that PCIX is faster than PCIe at the
> same "speed", but that the limitation of 1x per NIC
> causes PCIX to be a better choice for a 2 port system
> in most cases.

There is no inherent "limitation of 1x per NIC", but even
if there were, it would not be a problem as long as you do
not put more than 1 port per NIC.



-- 
<Insert your favourite quote here.>
Erik Trulsson
ertr1013 at student.uu.se

