Interrupt performance
Slawa Olhovchenkov
slw at zxy.spb.ru
Tue Feb 1 14:46:33 UTC 2011
On Tue, Feb 01, 2011 at 04:15:01PM +0300, Slawa Olhovchenkov wrote:
> On Tue, Feb 01, 2011 at 02:23:32PM +0200, Stefan Lambrev wrote:
>
> > >> Also, in the past ENOBUFS was not handled properly in Linux.
> > >>
> > >> http://wiki.freebsd.org/AvoidingLinuxisms - Do not rely on Linux-specific socket behaviour. In particular, default socket buffer sizes are different (call setsockopt() with SO_SNDBUF and SO_RCVBUF), and while Linux's send() blocks when the socket buffer is full, FreeBSD's will fail and set ENOBUFS in errno.
> > >
> > > Yes, ENOBUFS with a UDP socket is what I was talking about.
> > > Solaris shows this behaviour (blocking on UDP socket send) too.
> > > I don't know which behaviour is right.
> >
> > Well, according to the man pages in Linux and FreeBSD, the BSD behaviour is right. I looked into this a long time ago on some Red Hat Linux.
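For reference, the portable pattern implied by the wiki advice quoted above is to size the socket buffer explicitly with SO_SNDBUF and to treat ENOBUFS from a UDP send as transient and retryable. A minimal sketch, untested; the 10.200.0.1:9999 target, payload size, and back-off interval are placeholders of mine, not from the thread:

/*
 * Portable UDP-send pattern from the wiki advice above: size the
 * buffer explicitly with SO_SNDBUF, and retry when FreeBSD's send
 * fails with ENOBUFS (where Linux would simply have blocked).
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s == -1) { perror("socket"); return 1; }

    int sndbuf = 128 * 1024;             /* match the 128K used in the tests */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) == -1)
        perror("setsockopt(SO_SNDBUF)");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);          /* placeholder port */
    dst.sin_addr.s_addr = inet_addr("10.200.0.1");

    char payload[1024] = {0};
    for (int i = 0; i < 100000; i++) {
        while (sendto(s, payload, sizeof(payload), 0,
                      (struct sockaddr *)&dst, sizeof(dst)) == -1) {
            if (errno == ENOBUFS) {      /* transient: back off and retry */
                usleep(1000);
                continue;
            }
            perror("sendto");
            close(s);
            return 1;
        }
    }
    close(s);
    return 0;
}

On Linux the ENOBUFS branch should essentially never trigger, because send() blocks instead of failing when the buffer is full.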
>
> I have no opinion on blocking UDP sockets other than benchmark results.
>
> Now I am testing TCP and I see a strange result.
>
> # netperf -H 10.200.0.1 -t TCP_STREAM -C -c -l 60 -- -s 128K -S 128K
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.200.0.1 (10.200.0.1) port 0 AF_INET
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % U      % U      us/KB   us/KB
>
> 131072 131072  131072   60.00       522.08   -1.00    -1.00    0.000   -0.314
>
> Now I run ./loop 2000000000 (a CPU burner; see the hypothetical sketch below) and the transmit stalls.
>
> procs memory page disk faults cpu
> r b w avm fre flt re pi po fr sr ad0 in sy cs us sy id
> 2 0 0 107M 435M 0 0 0 0 0 0 0 15939 618 39502 0 77 23
> 1 0 0 107M 435M 0 0 0 0 0 0 0 15904 619 39355 0 75 25
> 1 0 0 107M 435M 0 0 0 0 0 0 0 16193 615 40085 0 79 21
> 1 0 0 107M 435M 0 0 0 0 0 0 0 16028 623 39708 1 74 26
> 1 0 0 107M 435M 0 0 0 0 0 0 0 15965 615 39475 0 77 23
> 1 0 0 107M 435M 0 0 0 0 0 0 0 16012 636 39666 0 84 16 <-- run ./loop 2000000000
> 2 0 0 109M 435M 46 0 0 0 9 0 0 9632 507 24041 48 51 1
> 2 0 0 109M 435M 0 0 0 0 0 0 0 6592 319 16419 73 27 0
> 2 0 0 109M 435M 0 0 0 0 0 0 0 455 136 1250 100 0 0
> 2 0 0 109M 435M 0 0 0 0 0 0 0 420 127 1170 99 1 0
> 2 0 0 109M 435M 0 0 0 0 0 0 0 395 127 1127 100 0 0
> 2 0 0 109M 435M 0 0 0 0 0 0 0 428 127 1209 100 0 0
> 2 0 0 109M 435M 0 0 0 0 0 0 0 537 130 1434 99 1 0
> 2 0 0 109M 435M 0 0 0 0 0 0 0 449 136 1255 100 0 0
> 1 0 0 107M 435M 14 0 0 0 37 0 0 7634 400 19044 56 30 14 <- end ./loop (Elapsed 8470990 us)
> 1 0 0 107M 435M 0 0 0 0 0 0 0 14893 579 37088 0 75 25
> 1 0 0 107M 435M 0 0 0 0 0 0 0 16123 615 40163 0 78 22
> 1 0 0 107M 435M 0 0 0 0 0 0 0 15220 582 37939 0 72 28
>
> Wtf?
Only with the ULE scheduler.
No effect with the 4BSD scheduler (./loop Elapsed 30611224 us).
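(The loop program itself is not published in this thread; judging from the usage ./loop 2000000000 and the "Elapsed ... us" lines above, it is presumably a plain CPU burner. A hypothetical reconstruction, purely for illustration:)

/* Hypothetical reconstruction of the unpublished ./loop CPU burner:
 * spin for argv[1] iterations and report wall-clock time in us,
 * matching output such as "Elapsed 8470990 us" above. */
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    long long n = argc > 1 ? atoll(argv[1]) : 0;
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (volatile long long i = 0; i < n; i++)
        ;                                /* volatile defeats optimization */
    gettimeofday(&t1, NULL);
    printf("Elapsed %lld us\n",
           (long long)(t1.tv_sec - t0.tv_sec) * 1000000 +
           (t1.tv_usec - t0.tv_usec));
    return 0;
}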
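The scheduler is selected at kernel build time (options SCHED_ULE vs. options SCHED_4BSD in the kernel config). Which one the running kernel uses can be confirmed from the kern.sched.name sysctl; a small check:

/* Print which scheduler the running FreeBSD kernel was built with,
 * via the standard kern.sched.name sysctl ("ULE" or "4BSD"). */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int main(void) {
    char name[32];
    size_t len = sizeof(name);
    if (sysctlbyname("kern.sched.name", name, &len, NULL, 0) == -1) {
        perror("sysctlbyname(kern.sched.name)");
        return 1;
    }
    printf("scheduler: %s\n", name);
    return 0;
}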
w/o CPU load:
x# netperf -H 10.200.0.1 -t TCP_STREAM -C -c -l 60 -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.200.0.1 (10.200.0.1) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % U      % U      us/KB   us/KB

131072 131072  131072   60.00       520.15   -1.00    -1.00    0.000   -0.315
with CPU load:
x# netperf -H 10.200.0.1 -t TCP_STREAM -C -c -l 60 -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.200.0.1 (10.200.0.1) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % U      % U      us/KB   us/KB

131072 131072  131072   60.00       519.58   -1.00    -1.00    0.000   -0.315
w/o CPU load and with TOE enabled on re0:
x# netperf -H 10.200.0.1 -t TCP_STREAM -C -c -l 60 -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.200.0.1 (10.200.0.1) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % U      % U      us/KB   us/KB

131072 131072  131072   60.00       634.03   -1.00    -1.00    0.000   -0.258
(The maximum on Linux was 576.27.)