freebsd tuning

Коньков Евгений kes-kes at yandex.ru
Fri Nov 18 21:04:50 UTC 2011


CO> Kes,

CO> First, understand that the Realtek (re0) cards have significant
CO> network problems when trying to saturate a network. If you have the
CO> ability, try switching to an Intel card (em0) for a lot better
CO> performance, lower interrupts and less CPU usage.
I know about the problems with Realtek.

CO> Why are interrupts not handled by more than one CPU? This is probably
CO> the way the driver was built. It is a single process which is using
CO> the "big lock" method. This keeps all activity for the driver bound to
CO> a single CPU core.
# sysctl net.isr
net.isr.maxthreads: 3
net.isr.direct: 0
net.isr.direct_force: 0
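
For reference, assuming a stock 9.x netisr: net.isr.maxthreads is a
boot-time tunable, while the direct-dispatch knobs are runtime sysctls,
so the values above could be configured like this:

/boot/loader.conf (read at boot, before the network stack starts):
net.isr.maxthreads="3"

/etc/sysctl.conf (defer protocol work to the netisr threads instead of
doing it in the NIC's interrupt thread):
net.isr.direct=0
net.isr.direct_force=0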

# sysctl -a | grep HZ
options HZ=4000
# sysctl -a | grep hz
kern.clockrate: { hz = 4000, tick = 250, profhz = 8128, stathz = 127 }
kern.hz: 4000
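
Side note: "options HZ=4000" does not strictly require a kernel rebuild;
kern.hz is also a boot-time tunable, so the same clock rate can be set
from /boot/loader.conf:

kern.hz="4000"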

#top -SIHP
last pid: 54308;  load averages:  1.08,  1.43,  1.55     up 0+13:17:32  22:49:42
211 processes: 5 running, 187 sleeping, 19 waiting
CPU 0:  4.8% user,  0.0% nice, 14.3% system, 22.2% interrupt, 58.7% idle
CPU 1:  0.0% user,  0.0% nice,  6.3% system, 22.2% interrupt, 71.4% idle
CPU 2:  0.0% user,  0.0% nice, 11.1% system, 20.6% interrupt, 68.3% idle
CPU 3:  0.0% user,  0.0% nice,  9.5% system, 17.5% interrupt, 73.0% idle
Mem: 242M Active, 1731M Inact, 200M Wired, 316K Cache, 112M Buf, 1725M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root       155 ki31     0K    32K CPU1    1 539:41 80.71% {idle: cpu1}
   11 root       155 ki31     0K    32K RUN     2 541:42 79.39% {idle: cpu2}
   11 root       155 ki31     0K    32K CPU3    3 546:52 78.81% {idle: cpu3}
   11 root       155 ki31     0K    32K CPU0    0 532:04 77.39% {idle: cpu0}
   12 root       -72    -     0K   152K WAIT    1 184:33 24.56% {swi1: netisr 2}
   12 root       -72    -     0K   152K WAIT    2 281:46 22.07% {swi1: netisr 0}
   12 root       -72    -     0K   152K WAIT    3  89:43 13.96% {swi1: netisr 3}
   12 root       -92    -     0K   152K WAIT    0 112:43 13.67% {irq256: re0}
   13 root       -16    -     0K    32K sleep   1  50:04  4.93% {ng_queue3}
   13 root       -16    -     0K    32K sleep   1  50:01  4.93% {ng_queue2}
   13 root       -16    -     0K    32K sleep   3  49:59  4.93% {ng_queue1}
   13 root       -16    -     0K    32K sleep   2  50:02  4.88% {ng_queue0}
 6989 root        21    0 13408K  5576K select  0  17:37  2.39% snmpd
 5523 root        20    0 76928K 52252K select  2  11:31  0.05% {mpd5}

In this case I get *all* CPUs working.

I will watch it to see whether it can now push speeds over 400 Mbit/s;
before, that was the limit.
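
One simple way to watch it is to sample per-second traffic and error
counters on the interface (re0 here, as on my router):

# netstat -w 1 -I re0
# systat -ifstat

The "errs" and "drops" columns from netstat will show whether packets are
being lost on the card itself.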

I think these notes will be useful for people with this NIC.


CO> Or does one CPU handle interrupts from one card, so I need two NICs? Two
CO> NICs would be a very good idea. You will see better performance and less
CO> IRQ splitting.
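
True. With two cards each one gets its own IRQ vector, and each vector
can then be pinned to its own core with cpuset(1). A sketch, assuming the
vector numbers reported by vmstat -i turn out to be 256 and 257:

# cpuset -l 0 -x 256
# cpuset -l 1 -x 257

That keeps the two interrupt threads from competing for the same CPU.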

CO> Why does it drop by half? CPU load means the CPU is busy and
CO> cannot be used by any other process. This does _not_ mean that
CO> processing is going on, just that the CPU is unavailable. IRQs are
CO> like locks: they keep the CPU from being used and hold on to it.
CO> So, irq256 is holding onto the CPU, but not actually processing
CO> any data. This is not very efficient, as you can see.
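
For what it's worth, the per-device interrupt rate behind that CPU time
can be checked directly:

# vmstat -i | grep re0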
This router handles 300 Mbit/s with ease, but when traffic rises to
400 Mbit/s and holds there for about 5 minutes, it falls to 200 Mbit/s.
It can handle 200 Mbit/s easily, yet it stays hooked at 100% load while
the other 3 CPUs have plenty of idle time. I think the drop by half is
tied to the TCP stack: it sees losses and sends data at half the rate.
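
That halving matches TCP's multiplicative decrease: on packet loss a
sender cuts its congestion window roughly in half. This happens on the
endpoints, not on the router itself, but on a FreeBSD 9.x sender the
congestion control is modular and can be inspected or, after loading the
module, switched at runtime (H-TCP, for example, backs off less
drastically than newreno):

# sysctl net.inet.tcp.cc.available
# kldload cc_htcp
# sysctl net.inet.tcp.cc.algorithm=htcp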

CO> Try changing cards to an Intel variety and use two NICs in total: one
CO> for incoming connections and one for outgoing. On the network
CO> performance page we specify the cards we are currently using. Intel
CO> PRO/1000 GT PCI PWLA8391GT can be found on newegg for as little as $31
CO> each.

I have an Intel card, but I would say it is unnecessary to buy expensive
hardware; in many cases a budget solution works very well. =)

CO> Hope this helps.
Thank you. I hope to see these notes in the article, and I will be happy
if they help other people. Thank you again.

CO> --
CO>    Calomel @ https://calomel.org
CO>    Open Source Research and Reference


CO> On Fri, Nov 18, 2011 at 02:41:15AM -0500, Коньков Евгений wrote:
>>Hi.
>>
>>FreeBSD 9.0-CURRENT FreeBSD 9.0-CURRENT #4: Fri Jun 10 01:30:12 UTC 2011 :/usr/obj/usr/src/sys/PAE_KES  i386
>>
>>
>>I have some questions about FreeBSD tuning: https://calomel.org/network_performance.html
>>
>>I have an re0 Gigabit Ethernet NIC (NDIS 6.0) (RTL8168/8111/8111c) and a Core i3 2100,
>>with two VLANs on it: one for incoming and the other for outgoing packets.
>>
>>#top -SIHP
>>last pid: 14902;  load averages:  1.92,  2.12,  1.96    up 0+17:47:31  19:59:04
>>226 processes: 12 running, 197 sleeping, 17 waiting
>>CPU 0:  0.6% user,  0.0% nice,  1.2% system, 88.3% interrupt,  9.8% idle
>>CPU 1:  1.8% user,  0.0% nice, 29.4% system,  0.0% interrupt, 68.7% idle
>>CPU 2:  3.7% user,  0.0% nice, 30.7% system,  0.0% interrupt, 65.6% idle
>>CPU 3:  3.1% user,  0.0% nice, 25.8% system,  0.0% interrupt, 71.2% idle
>>Mem: 264M Active, 1641M Inact, 272M Wired, 832K Cache, 112M Buf, 1721M Free
>>Swap: 4096M Total, 4096M Free
>>
>>  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
>>   12 root       -92    -     0K   152K CPU0    0 354:30 96.78% {irq256: re0}
>>   11 root       155 ki31     0K    32K RUN     1 929:16 77.83% {idle: cpu1}
>>   11 root       155 ki31     0K    32K RUN     3 922:41 72.95% {idle: cpu3}
>>   11 root       155 ki31     0K    32K RUN     2 904:02 71.63% {idle: cpu2}
>>   13 root       -16    -     0K    32K CPU3    1  71:11 18.65% {ng_queue1}
>>   13 root       -16    -     0K    32K RUN     1  71:10 18.36% {ng_queue3}
>>   13 root       -16    -     0K    32K RUN     3  71:18 17.63% {ng_queue0}
>>   13 root       -16    -     0K    32K RUN     1  71:11 17.14% {ng_queue2}
>>   11 root       155 ki31     0K    32K RUN     0 682:25 10.55% {idle: cpu0}
>>55709 root        20    0 13408K  5840K select  2  15:50  1.71% snmpd
>>14902 cacti       33    0 11960K  3480K select  1   0:00  1.12% snmpget
>>14864 cacti       46    0 11116K  2836K piperd  3   0:00  1.12% perl5.10.1
>>14867 root        46    0  9728K  1956K select  3   0:00  1.12% sudo
>>
>>as you can see, irq256 takes all of CPU0's time, and packets traveling
>>through the router see about 5-10% loss; the CPU is 100% loaded when traffic
>>reaches 400 Mbit/s, and then throughput drops by half
>>
>>Now questions
>>1. Why are interrupts not handled by more than one CPU?
>>
>>2. Or does one CPU handle interrupts from one card, so I need two NICs?
>>
>>3. Why does it drop by half?
>>
>>Thank you.
>>



-- 
Best regards,
 Коньков                          mailto:kes-kes at yandex.ru


