ng_fec and cisco 2931
Lister
lister at primetime.com
Mon Feb 28 20:17:05 GMT 2005
David Gilbert wrote:
>lister> I have set up ng_fec on a machine with a quad ethernet NIC :
>
>lister> de0: <Digital 21140A Fast Ethernet> port 0xd000-0xd07f mem
>
>Our own testing with this card (not using fec ... just traffic on the
>4 ports) has determined that it appears to have a 100 megabit limit to
>the total of 4 ports on the card. Now... this could be a FreeBSD
>driver issue ... or a PCI bus issue, but in all our tests with several
>motherboards and many versions of FreeBSD (from 3.2 or so through
>about 4.5) we were never able to achieve more than 100 megabit on the
>card in total.
>
>Our application was an NFS server that had hundreds of diskless nodes
>running from it. We suspected that this could be some interaction
>between the speed of the disks (and their PCI cost) and the card, so
>we isolated the card by doing straight packet tests (no meaningful
>data) and still found the card maxing out at 100 megabit total over
>the 4 ports.
>
>Dave.
This might explain why I got it for $50 :) Did you go with
another quad that performed better?
Some more observations on ng_fec: it _appears_ that it (along
with the Cisco you plug it into) balances on a host-per-port
basis, i.e. all traffic to and from a given remote host sticks
to a single link. It seems geared more toward a server than a
client.
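That per-host behavior matches how FEC/EtherChannel typically picks an
outgoing port: a hash of the MAC addresses (on many platforms an XOR of
their low-order bits, modulo the number of links) selects the link, so
frames between one host pair never spread across ports. A minimal sketch
of that selection; the octet values and link count below are made-up
examples, not taken from any real config:

```shell
#!/bin/sh
# Sketch of a common EtherChannel port-selection scheme: XOR the
# low-order octets of the source and destination MACs, then take the
# result modulo the number of links in the bundle.
src_last=0x1d    # low octet of source MAC (example value)
dst_last=0x0c    # low octet of destination MAC (example value)
links=2          # ports in the bundle

port=$(( (src_last ^ dst_last) % links ))
echo "traffic for this host pair uses port $port"
```

With a single client talking to a single server, every frame hashes to
the same result, so one flow only ever sees one link's worth of
bandwidth.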
I am trying one2many next, à la:
http://bsdvault.net/sections.php?op=viewarticle&artid=98
with similar results. I have a 3Com (xl0) and a Davicom 9102
(dc0) joined via one2many thus:
ifconfig xl0 up
ifconfig dc0 up
kldload /modules/ng_ether.ko
# Create a one2many node on xl0's upper (protocol) hook:
ngctl mkpeer xl0: one2many upper one
# Wire each NIC's lower (raw ethernet) hook to a many hook:
ngctl connect xl0: xl0:upper lower many0
ngctl connect dc0: xl0:upper lower many1
# dc0 must accept frames addressed to xl0's MAC and must not
# rewrite the source address on the way out:
ngctl msg dc0: setpromisc 1
ngctl msg dc0: setautosrc 0
# Round-robin transmit; links enabled manually, both on:
ngctl msg xl0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 ] }"
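If the round-robin side is working, both many hooks should show transmit
counters climbing. As I read the ng_one2many man page, the node keeps
per-link stats that can be read back with a control message taking the
link number (a sketch against the node set up above; verify the message
name on your system):

```shell
# Read per-link packet/byte counters for each many hook:
ngctl msg xl0:upper getstats 0
ngctl msg xl0:upper getstats 1
# Tear the whole arrangement down again if needed:
ngctl shutdown xl0:upper
```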
I have an NFS mount to a machine with a gigabit NIC, a
RocketRAID 646 controller, and a Maxtor ATA-133 7200 RPM
drive on it. I NFS-mount a directory and I still can't crack
100 Mbit/s ... :\
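To separate disk cost from link cost (the same isolation David did with
straight packet tests), a quick sequential write over the mount can be
converted to Mbit/s and compared against the 100 Mbit ceiling. The mount
point and the dd figures below are example values, not real measurements:

```shell
# Quick check over the NFS mount (the path is an example):
#   dd if=/dev/zero of=/mnt/nfs/ddtest bs=64k count=4096   # 256 MB
# Then convert dd's reported byte count and elapsed time to Mbit/s:
bytes=268435456     # bytes dd reported writing (example value)
seconds=22          # elapsed seconds dd reported (example value)
mbits=$(( bytes / 1000000 * 8 / seconds ))
echo "$mbits Mbit/s"    # prints "97 Mbit/s" for these example numbers
```

If the raw network test and the NFS test both stall near the same
figure, the bottleneck is the link or the balancing, not the disk.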
The card in the server is an intel pro 1000 :
em0: <Intel(R) PRO/1000 Network Connection, Version - 1.7.35> port
0x1800-0x183f mem 0xf0000000-0xf001ffff irq 5 at device 9.0 on pci0
em0: Speed:N/A Duplex:N/A
Should I enable polling on the server? I am running 4.11
on all the machines.
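For what it's worth, polling on 4.x needs a kernel built with
DEVICE_POLLING (and usually a higher HZ) before the sysctls do anything;
a sketch of the usual recipe, not verified against em(4) on 4.11
specifically, so check polling(4) and the driver's man page first:

```shell
# In the kernel config, then rebuild and reboot:
#   options DEVICE_POLLING
#   options HZ=1000
# Enable at runtime (4.x sysctl needs -w to set values) and leave
# userland a share of the CPU:
sysctl -w kern.polling.enable=1
sysctl -w kern.polling.user_frac=50
```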
TIA for any thoughts ...
More information about the freebsd-performance
mailing list