Interrupt performance
Slawa Olhovchenkov
slw at zxy.spb.ru
Thu Mar 31 15:31:04 UTC 2011
On Tue, Feb 01, 2011 at 05:46:29PM +0300, Slawa Olhovchenkov wrote:
> On Tue, Feb 01, 2011 at 04:15:01PM +0300, Slawa Olhovchenkov wrote:
>
> > On Tue, Feb 01, 2011 at 02:23:32PM +0200, Stefan Lambrev wrote:
> >
> > > >> Also, in the past ENOBUFS was not handled properly in Linux.
> > > >>
> > > >> http://wiki.freebsd.org/AvoidingLinuxisms - Do not rely on Linux-specific socket behaviour. In particular, default socket buffer sizes are different (call setsockopt() with SO_SNDBUF and SO_RCVBUF), and while Linux's send() blocks when the socket buffer is full, FreeBSD's will fail and set ENOBUFS in errno.
> > > >
> > > > Yes, I meant ENOBUFS on UDP sockets.
> > > > Solaris shows this behaviour too (blocking on UDP socket send).
> > > > I don't know which behaviour is right.
> > >
> > > Well, according to the man pages on Linux and FreeBSD, the BSD behaviour is right. I looked into this a long time ago with some Red Hat Linux.
> >
> > I don't have any use for blocking UDP sockets other than benchmarks.
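(Aside: the ENOBUFS behaviour quoted above means a portable UDP sender has to treat ENOBUFS as a transient condition rather than a fatal error. A minimal sketch in C; the helper name `send_retry` is an assumption of mine, not code from this thread.)

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * On FreeBSD, send(2) on a UDP socket fails with ENOBUFS when the
 * interface queue is full, while Linux blocks instead.  A portable
 * sender backs off briefly and retries on ENOBUFS.
 * (send_retry is a hypothetical helper, not code from this thread.)
 */
static ssize_t
send_retry(int fd, const void *buf, size_t len)
{
	for (;;) {
		ssize_t n = send(fd, buf, len, 0);
		if (n >= 0)
			return (n);
		if (errno == ENOBUFS || errno == EINTR) {
			usleep(1000);	/* transient: wait, then retry */
			continue;
		}
		return (-1);	/* real error: let the caller handle it */
	}
}
```

On Linux the retry path is normally never taken, since send(2) blocks instead of returning ENOBUFS; the wrapper only changes behaviour on systems with the BSD semantics.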
> >
> > Now I am testing TCP and see a strange result.
> >
> > # netperf -H 10.200.0.1 -t TCP_STREAM -C -c -l 60 -- -s 128K -S 128K
> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.200.0.1 (10.200.0.1) port 0 AF_INET
> > Recv   Send    Send                          Utilization       Service Demand
> > Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> > Size   Size    Size     Time     Throughput  local    remote   local   remote
> > bytes  bytes   bytes    secs.    10^6bits/s  % U      % U      us/KB   us/KB
> >
> > 131072 131072 131072    60.00       522.08   -1.00    -1.00    0.000   -0.314
> >
> > Now I run ./loop 2000000000 and see the transmit stop.
> >
> > procs memory page disk faults cpu
> > r b w avm fre flt re pi po fr sr ad0 in sy cs us sy id
> > 2 0 0 107M 435M 0 0 0 0 0 0 0 15939 618 39502 0 77 23
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 15904 619 39355 0 75 25
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 16193 615 40085 0 79 21
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 16028 623 39708 1 74 26
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 15965 615 39475 0 77 23
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 16012 636 39666 0 84 16 <-- run ./loop 2000000000
> > 2 0 0 109M 435M 46 0 0 0 9 0 0 9632 507 24041 48 51 1
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 6592 319 16419 73 27 0
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 455 136 1250 100 0 0
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 420 127 1170 99 1 0
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 395 127 1127 100 0 0
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 428 127 1209 100 0 0
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 537 130 1434 99 1 0
> > 2 0 0 109M 435M 0 0 0 0 0 0 0 449 136 1255 100 0 0
> > 1 0 0 107M 435M 14 0 0 0 37 0 0 7634 400 19044 56 30 14 <- end ./loop (Elapsed 8470990 us)
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 14893 579 37088 0 75 25
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 16123 615 40163 0 78 22
> > 1 0 0 107M 435M 0 0 0 0 0 0 0 15220 582 37939 0 72 28
> >
> > Wtf?
>
> Only with the ULE scheduler.
> No effect with the 4BSD scheduler (./loop Elapsed 30611224 us).
With the latest ULE patches:
===
ts->ts_rltick = ticks;
td->td_lastcpu = td->td_oncpu;
td->td_oncpu = NOCPU;
- td->td_flags &= ~TDF_NEEDRESCHED;
+ if ((flags & SW_PREEMPT) == 0)
+ td->td_flags &= ~TDF_NEEDRESCHED;
td->td_owepreempt = 0;
tdq->tdq_switchcnt++;
===
Strange effect:
0 0 0 97724K 436M 0 0 0 0 0 0 0 4 115 305 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 5 123 318 0 1 99
0 0 0 97724K 436M 0 0 0 0 0 0 0 3 115 303 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 3 115 335 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 3 115 305 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 4 115 305 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 4 123 310 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 5 136 342 0 0 100
0 0 0 97724K 436M 0 0 0 0 0 0 0 4 119 315 0 0 100
1 0 0 99M 436M 144 0 0 0 16 0 0 16866 732 42530 0 65 35 <-- start netperf
1 0 0 99M 436M 0 0 0 0 0 0 0 18059 614 45335 0 77 23
1 0 0 99M 436M 0 0 0 0 0 0 0 18202 622 45790 0 74 26
1 0 0 99M 436M 0 0 0 0 0 0 0 18152 615 45894 0 77 23
1 0 0 99M 436M 0 0 0 0 0 0 0 18098 615 45393 0 76 24
./loop 2000000000
1 0 0 99M 436M 2 0 0 0 0 0 0 17936 635 44907 0 84 16 <-- start loop
2 0 0 100M 436M 46 0 0 0 9 0 0 10522 499 26780 50 47 3
2 0 0 100M 436M 0 0 0 0 0 0 0 18207 622 45415 24 76 0
2 0 0 100M 436M 0 0 0 0 0 0 0 16818 584 42217 30 70 0
2 0 0 100M 436M 0 0 0 0 0 0 0 18090 615 45412 19 81 0
2 0 0 100M 436M 0 0 0 0 0 0 0 18005 614 44891 17 83 0
2 0 0 100M 436M 0 0 0 0 0 0 0 18221 614 45560 23 77 0
2 0 0 100M 436M 0 0 0 0 0 0 0 18062 623 44903 27 73 0
2 0 0 100M 436M 0 0 0 0 0 0 0 18165 607 45104 26 74 0 <-- begin degradation
2 0 0 100M 436M 0 0 0 0 0 0 0 522 127 1449 100 0 0
2 0 0 100M 436M 0 0 0 0 0 0 0 478 128 1311 100 0 0
procs memory page disk faults cpu
r b w avm fre flt re pi po fr sr ad0 in sy cs us sy id
3 0 0 100M 436M 0 0 0 0 0 0 0 453 128 1267 100 0 0
2 0 0 100M 436M 0 0 0 0 0 0 0 503 136 1383 100 0 0
2 0 0 100M 436M 0 0 0 0 0 0 0 551 129 1521 100 0 0 <-- end loop
Elapsed 13665127 us