Polling For 100 Mbps Connections? (Was Re: FreeBSD Theme Song)

Sasa Stupar sasa at stupar.homelinux.net
Thu Dec 15 12:53:35 PST 2005



--On 15. december 2005 6:33 -0800 Drew Tomlinson <drew at mykitchentable.net> 
wrote:

> On 12/15/2005 12:33 AM Sasa Stupar wrote:
>
>>
>>
>> --On 14. december 2005 20:01 -0800 Ted Mittelstaedt
>> <tedm at toybox.placo.com> wrote:
>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Danial Thom [mailto:danial_thom at yahoo.com]
>>>> Sent: Wednesday, December 14, 2005 11:14 AM
>>>> To: Ted Mittelstaedt; Drew Tomlinson
>>>> Cc: freebsd-questions at freebsd.org
>>>> Subject: RE: Polling For 100 Mbps Connections? (Was Re: FreeBSD Theme
>>>> Song)
>>>>
>>>
>>>>> Well, if polling does no good for fxp, due to the
>>>>> hardware doing controlled interrupts, then why does
>>>>> the fxp driver even let you set it as an option?
>>>>> And why have many people who have enabled it on
>>>>> fxp seen an improvement?
>>>>
>>>>
>>>> They haven't. FreeBSD's accounting doesn't work
>>>> properly with polling enabled, and "they" don't
>>>> have the ability to "know" if they are getting
>>>> better performance, because "they", like you,
>>>> have no clue what they're doing. How about all
>>>> the idiots running MP with FreeBSD 4.x, when we
>>>> know it's just a waste of time? "They" all think
>>>> they're getting worthwhile performance, because
>>>> "they" are clueless.
>>>>
>>>
>>> I would call them idiots if they are running MP under
>>> FreeBSD and assuming that they are getting better
>>> performance without actually testing for it.  But
>>> if they are just running MP because they happen to be
>>> using an MP server, and they want to see if it will
>>> work or not, who cares?
>>>
>>>> Maybe it's tunable because the guy who wrote the
>>>> driver made it a tunable? Duh. I've yet to see
>>>> one credible, controlled test that compares polling
>>>> against properly tuned interrupt-driven operation.
>>>>
>>>
>>> Hm, OK, I believe that.  As I recall, I asked you earlier to
>>> post the test setup you used for your own tests
>>> "proving" that polling is worse, and you haven't
>>> done so yet.  Now you are saying you have never seen
>>> a credible controlled test of polling vs.
>>> interrupt-driven.  So I guess either you were blind
>>> when you ran your own tests, or your own tests
>>> were not a credible, controlled comparison of polling
>>> against properly tuned interrupt-driven operation.
>>> As I have been saying all along.  Now you're agreeing with me.
>>>
>>>> The only advantage of polling is that it will
>>>> drop packets instead of going into livelock. The
>>>> disadvantage is that it will drop packets when
>>>> you have momentary bursts that would harmlessly
>>>> put the machine into livelock. That's about it.
>>>>
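The drop-vs-livelock tradeoff Danial describes comes straight from how the polling loop is structured. A minimal C sketch of the idea (names simplified for illustration; the real implementation is in sys/kern/kern_poll.c):

    /*
     * Sketch only: on every clock tick, ask each registered driver
     * to process at most a fixed burst of packets.  Anything beyond
     * the burst waits in the NIC's RX ring (and is dropped by the
     * hardware if the ring overflows), instead of interrupt work
     * monopolizing the CPU (livelock).
     */
    #define POLL_BURST_MAX  150             /* cf. kern.polling.burst_max */

    struct pollrec {
            void    (*handler)(int count); /* a driver's poll routine */
    };

    static struct pollrec   pr[32];
    static int              poll_handlers;

    /* Called HZ times per second from the clock interrupt. */
    void
    hardclock_device_poll(void)
    {
            int i;

            for (i = 0; i < poll_handlers; i++)
                    pr[i].handler(POLL_BURST_MAX);
    }

Because the loop bounds how much packet work one tick may do, overload is shed at the ring instead of starving the rest of the system.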
>>>
>>> Ah, now I think I suddenly see what the chip on your
>>> shoulder is.  You would rather have your router based
>>> on FreeBSD go into livelock while packets stack up
>>> than drop anything.  You tested the polling code and found
>>> that, yipes, it drops packets.
>>>
>>> What, may I ask, do you think a Cisco or other
>>> router does when you shove 10 Mbit/s of traffic into its
>>> Ethernet interface destined for a host behind a T1 that
>>> is plugged into the other end?  (And no, source quench
>>> is not the correct answer.)
>>>
>>> I think the scenario where it is better to momentarily go into
>>> livelock during an overload is only applicable to one case:
>>> where the two interfaces in the router are the same capacity,
>>> as in Ethernet-to-Ethernet routers.  Most certainly not
>>> Ethernet-to-serial routers, which is what most routers
>>> that aren't on DSL lines are.
>>>
>>> If you have a different understanding then please explain.
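What Ted is describing is ordinary output queueing with tail drop: when the inbound link outruns the outbound one, the router buffers what it can and silently drops the rest. A minimal C sketch of that bounded queue (all names invented for illustration):

    #define QLEN_MAX 50                     /* bounded output queue */

    struct pkt {
            struct pkt *next;
            /* payload omitted in this sketch */
    };

    static struct pkt *q_head, *q_tail;
    static int q_len;

    /* Enqueue toward the slow (T1) side; tail-drop when full. */
    static int
    output_enqueue(struct pkt *p)
    {
            if (q_len >= QLEN_MAX)
                    return (-1);            /* drop: the T1 can't keep up */
            p->next = NULL;
            if (q_tail != NULL)
                    q_tail->next = p;
            else
                    q_head = p;
            q_tail = p;
            q_len++;
            return (0);
    }

Dropping at the queue tail is what lets TCP congestion control back the senders off; no source quench required.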
>>>
>>>>>
>>>>> I've read those datasheets as well, and the thing I
>>>>> don't understand is this: if you are pumping 100 Mbit/s
>>>>> into an EtherExpress Pro/100, and the card will not
>>>>> interrupt more than this throttled rate you keep
>>>>> talking about, then the card's interrupt throttling
>>>>> is going to limit the inbound bandwidth to below
>>>>> 100 Mbit/s.
>>>>
>>>>
>>>> Wrong again, Ted. It scares me that you consider
>>>> yourself knowledgeable about this. You can process
>>>> (number of interrupts) x (ring size) packets, not one
>>>> packet per interrupt. You're only polling 1000x per second
>>>> (or whatever you have hz set to), so why do you
>>>> think that you have to interrupt for every packet
>>>> to do 100Mb/s?
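The packets-per-interval arithmetic here is easy to check. Assuming worst-case minimum-size Ethernet frames (64 bytes, plus 8 bytes of preamble and a 12-byte inter-frame gap, i.e. 672 bits on the wire - an assumption, since real traffic mixes frame sizes), 100 Mbit/s works out to roughly 148,800 packets per second:

    #include <stdio.h>

    int
    main(void)
    {
            const double link_bps = 100e6;  /* 100 Mbit/s */
            /* 64-byte min frame + 8-byte preamble + 12-byte gap */
            const double bits_per_frame = (64 + 8 + 12) * 8;
            const double pps = link_bps / bits_per_frame;

            printf("worst-case packets/sec:           %.0f\n", pps);
            printf("packets per interrupt at 5000/s:  %.1f\n", pps / 5000);
            printf("packets per poll tick at HZ=1000: %.1f\n", pps / 1000);
            return (0);
    }

That is about 30 packets per interrupt at 5000 interrupts/s, or about 149 per tick at HZ=1000 - either scheme keeps up, provided the RX ring holds that many descriptors between services, which is Danial's point.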
>>>
>>>
>>> I never said anything about interrupting for every
>>> packet, did I?  Of course not, since I know what
>>> you're talking about.  However, it is you who are throwing
>>> around the numbers - or were in your prior post -
>>> regarding the fxp driver and hardware.  Why should
>>> I have to do the work of digging around in the datasheets
>>> and doing the math?
>>>
>>> Since you seem to want to argue this from a
>>> theoretical standpoint, your only option is to do the
>>> math.  Go ahead, look up the datasheet for the 82557 -
>>> I'm sure it's online somewhere - and tell us what it says
>>> about throttled interrupts, and run your numbers.
>>>
>>>> Do you not understand that packet
>>>> processing is the same whether it's done on a
>>>> clock tick or a hardware interrupt? Do you not
>>>> understand that a clock tick has more overhead
>>>> (because of other assigned tasks)? Do you not
>>>> understand that getting exactly 5000 hardware
>>>> interrupts is much more efficient than having
>>>> 5000 clock tick interrupts per second? What part
>>>> of this don't you understand?
>>>>
>>>
>>> Well, one part I don't understand is why, when
>>> one of those 5000 clock ticks happens and the fxp driver
>>> finds no packets to take off the card, it should take
>>> the driver the same amount of time to process
>>> as when it does find packets.
>>> At least, that seems to be what you're arguing.
>>>
>>> As I've stated before, once, probably twice: polling
>>> is obviously less efficient at lower bandwidth.  In
>>> interrupt-driven mode, to get 5000 interrupts per second you
>>> most likely have a lot of traffic coming in,
>>> whereas in polling mode you could get no traffic at all
>>> in 5000 clock ticks.  So clearly, the comparison is always
>>> stacked towards polling being a competitor only at high bandwidth.
>>> Why you insist on using low-bandwidth scenarios as
>>> examples I cannot understand, because nobody
>>> in this debate so far has claimed that polling is better
>>> at low bandwidth.
>>>
>>> I am as suspicious of testimonials as the next guy, and
>>> it is quite true that so far everyone promoting polling
>>> in this thread has posted no test suites any better
>>> than yours - you are basically blowing air at each other.
>>> But there are a lot of others on the Internet who seem to
>>> think it works great.  I gave you some openings to
>>> discredit them and you haven't taken them.
>>>
>>> I myself have never tried polling, so I
>>> am certainly not going to argue against a logical, reasoned
>>> explanation of why it's no good at high bandwidth.  So
>>> far, however, you have not posted anything like that.  And
>>> I am still waiting for the test suites behind
>>> your claim that the networking in 5.4 and later is worse;
>>> I don't see why you want to diverge into this side issue
>>> on polling when the real issue is the allegedly worse networking
>>> in the newer FreeBSD versions.
>>>
>>> Ted
>>
>>
>> Hmmm, here is a test I did with iperf, with and without polling:
>> **************
>> ------------------------------------------------------------
>> Client connecting to 192.168.1.200, TCP port 5001
>> TCP window size: 8.00 KByte (default)
>> ------------------------------------------------------------
>> [1816] local 192.168.10.249 port 1088 connected with 192.168.1.200 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [1816]  0.0-10.0 sec   108 MBytes  90.1 Mbits/sec
>>
>> This is with the Device polling option enabled on m0n0.
>>
>> If I disable this option, the transfer is worse:
>> ------------------------------------------------------------
>> Client connecting to 192.168.1.200, TCP port 5001
>> TCP window size: 8.00 KByte (default)
>> ------------------------------------------------------------
>> [1816] local 192.168.10.249 port 1086 connected with 192.168.1.200 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [1816]  0.0-10.0 sec  69.7 MBytes  58.4 Mbits/sec
>> ***************
>>
>> BTW: my router is m0n0wall (FBSD 4.11).
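For anyone wanting to reproduce this on stock FreeBSD 4.x rather than m0n0wall (which just exposes the same knob as a checkbox), polling(4) is a kernel option plus a sysctl, typically paired with a faster clock:

    # kernel configuration
    options DEVICE_POLLING
    options HZ=1000

    # enable at runtime
    sysctl -w kern.polling.enable=1

The iperf numbers above would then be gathered the usual way: iperf -s on 192.168.1.200 and iperf -c 192.168.1.200 from the client, toggling polling between runs.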
>>
>
> Thanks for your post.  Can you please tell me what network card and
> driver your machine uses?
>
>
> Thanks,
>
> Drew

I have 3 Intel Pro/100S NICs and am using the fxp driver.

-- 
Sasa Stupar

