Interrupt performance

Bruce Evans brde at optusnet.com.au
Sat Jan 29 12:54:17 UTC 2011


On Sat, 29 Jan 2011, Slawa Olhovchenkov wrote:

> On Sat, Jan 29, 2011 at 02:43:07PM +1100, Bruce Evans wrote:
>
>> On Sat, 29 Jan 2011, Slawa Olhovchenkov wrote:
>>
>>> On Sat, Jan 29, 2011 at 07:52:11AM +1100, Bruce Evans wrote:
>>>>
>>>> To see how much CPU is actually available, run something else and see how
>>>> fast it runs.  A simple counting loop works well on UP systems.
>>>
>>> ===
>>> #include <stdio.h>
>>> #include <stdlib.h>		/* for atol() */
>>> #include <sys/time.h>
>>>
>>> volatile int Dummy;		/* volatile so the loop isn't optimized away */
>>>
>>> int
>>> main(int argc, char *argv[])
>>> {
>>> 	long count, i, dt;
>>> 	struct timeval st, et;
>>>
>>> 	count = atol(argv[1]);
>>>
>>> 	gettimeofday(&st, NULL);
>>> 	for (i = count; i; i--)
>>> 		Dummy++;
>>> 	gettimeofday(&et, NULL);
>>> 	dt = (et.tv_sec - st.tv_sec) * 1000000 + et.tv_usec - st.tv_usec;
>>> 	printf("Elapsed %ld us\n", dt);
>>> 	return (0);
>>> }
>>> ===
>>>
>>> Is this OK?
>>
>> It's better not to compete with the interrupt handler in the kernel by
>> spinning while making syscalls, but that will do for a start.
>
> In this program the inner loop doesn't contain any syscalls.
> Would a loop with syscalls be a better variant?

Oops.  It is already as I meant.  You could try it with syscalls and/or
heavy memory accesses to see if there is a problem with memory resource
contention (maybe more cache misses).
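
Something like the following untested sketch would do -- the same timing
skeleton, but the loop body either makes a cheap syscall or strides
through a buffer chosen to be larger than the caches.  The buffer size,
the 64-byte stride and the use of getppid() are arbitrary choices;
compile with -DSYSCALL_LOOP for the syscall variant.

===
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>

#define BUFSIZE (16 * 1024 * 1024)	/* deliberately larger than the caches */

int
main(int argc, char *argv[])
{
	long count, i, dt;
	struct timeval st, et;
	volatile char *buf;

	count = atol(argv[1]);
	buf = malloc(BUFSIZE);
	for (i = 0; i < BUFSIZE; i++)	/* touch every page up front */
		buf[i] = 0;

	gettimeofday(&st, NULL);
	for (i = count; i; i--) {
#ifdef SYSCALL_LOOP
		(void)getppid();	/* cheap syscall that libc doesn't cache */
#else
		buf[(i % (BUFSIZE / 64)) * 64]++;	/* 64-byte stride */
#endif
	}
	gettimeofday(&et, NULL);
	dt = (et.tv_sec - st.tv_sec) * 1000000 + et.tv_usec - st.tv_usec;
	printf("Elapsed %ld us\n", dt);
	return (0);
}
===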

>>> ./loop 2000000000
>>>
>>> FreeBSD
>>> 1 process: Elapsed 7554193 us
>>> 2 process: Elapsed 14493692 us
>>> netperf + 1 process: Elapsed 21403644 us
>>
>> This shows about 35% user 65% network.
>>
>>> Linux
>>> 1 process: Elapsed 7524843 us
>>> 2 process: Elapsed 14995866 us
>>> netperf + 1 process: Elapsed 14107670 us
>>
>> This shows about 53% user 47% network.
>>
>> So FreeBSD has about 18% more network overhead (absolute: 65-47), or
>> about 38% more network overhead (relative: (65-47)/47).  Not too
>> surprising -- the context switches alone might cost that.
>
> For only 14K vs. 56K interrupts -- that's 152% more network overhead per interrupt.

No, FreeBSD does 4 times as much work per interrupt.  4 times as much
(300%) "overhead" per interrupt is to be expected, since most (hopefully
more than half :-) of the "overhead" is actual work.
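
(Spelling out the arithmetic: the user/network splits above just follow
from the elapsed times -- for FreeBSD, 7554193/21403644 is about 35%
user, leaving about 65% for the network load, and for Linux
7524843/14107670 is about 53%/47%.  The "4 times as much work per
interrupt" is 56K/14K = 4, assuming the 14K figure is FreeBSD's and the
56K figure is Linux's for roughly the same traffic.)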

> And I see a dramatically lower number of context switches in the Linux
> stats (from dstat).

FreeBSD uses ithreads for most interrupts, so of course it does many
more context switches (at least 2 per interrupt).  This doesn't make
much difference provided there are not too many.  I think the version
of re that you are using actually uses "fast" interrupts and a task
queue.  This also seems to be making little difference.  You get a
relatively lightweight "fast" interrupt followed by a
context switch to and from the task.  IIRC, your statistics showed 
about twice as many context switches as interrupts, so the task queue
isn't doing much to reduce the "interrupt overhead" -- it just gives
context switches to the task instead of to an ithread.
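
For reference, the "fast" filter + task queue pattern looks roughly like
this (an untested sketch, not the actual re(4) code; the mydev_* names,
softc layout and status check are made up):

===
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/taskqueue.h>

struct mydev_softc {
	struct task	sc_task;	/* deferred (heavy) work */
	/* ... device registers, locks, etc. ... */
};

static int  mydev_intr_pending(struct mydev_softc *);	/* hypothetical */
static void mydev_rxtx(struct mydev_softc *);		/* hypothetical */

/* Filter: runs in primary interrupt context; must be short and not block. */
static int
mydev_filter(void *arg)
{
	struct mydev_softc *sc = arg;

	if (!mydev_intr_pending(sc))
		return (FILTER_STRAY);
	/* Ack/mask the hardware interrupt here, then defer the real work. */
	taskqueue_enqueue(taskqueue_fast, &sc->sc_task);
	return (FILTER_HANDLED);
}

/* Task: runs later in a taskqueue thread -- this is the context switch. */
static void
mydev_task(void *arg, int pending)
{
	struct mydev_softc *sc = arg;

	mydev_rxtx(sc);		/* process rx/tx rings, re-enable the interrupt */
}

/*
 * In attach, roughly:
 *	TASK_INIT(&sc->sc_task, 0, mydev_task, sc);
 *	bus_setup_intr(dev, irq_res, INTR_TYPE_NET | INTR_MPSAFE,
 *	    mydev_filter, NULL, sc, &intr_cookie);
 */
===

Each interrupt then costs the filter plus a switch to and from the
taskqueue thread, which matches the roughly 2 context switches per
interrupt in your statistics.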

>>> I think the next server will support PMC.
>>> Is reporting from PMC still poor?
>>
>> It should be adequate, but I prefer my version of perfmon, which can
>> count cache misses precisely for every function.  But without patches,
>> perfmon is even more broken than high resolution kernel profiling.
>
> Can I use your version of perfmon? How? I don't have experience with
> any kernel profiling.

Not the place to start.

>> [FILTER] means "fast".  re used to unconditionally use "fast" interrupts
>> and a task queue, which IMO is a bad way to program an interrupt
>> handler, but yongari@ recently overhauled re (again :-) so that it now
>> doesn't use fast interrupts by default for the MSI/MSIX case.  (BTW,
>
> Interesting, but I don't think this helps in this case -- old PCI
> chip, old CPU, old RAM, old motherboard -- everything is old.  I'm not
> trying to get wire-speed gigabit performance from this old box; I'm
> trying to compare the relative performance of FreeBSD vs. Linux (lately
> I have gotten a lot of feedback about poor network performance of
> FreeBSD vs. Linux).

Old hardware will certainly amplify any overheads.  For a fixed network
load, 50% overhead becomes 100% if the system is 2 times slower...

Bruce

