iperf results

Dave+Seddon dave-sender-1932b5 at seddon.ca
Wed Sep 21 17:27:06 PDT 2005


Greetings, 

We would all be very interested to see the complete report, particularly if 
you fix up the results for FreeBSD :) 

Chuck's right, we need waaay more info.  We don't even know what version of 
FreeBSD you're running. 
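
For example, something along these lines (plus the exact iperf commands and 
the iperf version you used) would help pin it down.  This is just a sketch -- 
flags may differ on your iperf build, and the client address/rate here are 
made up: 

 ---------------------------------
# on the FreeBSD box, so we know what we're dealing with
uname -a
sysctl kern.osreldate 

# a typical UDP run that reports loss and jitter
# server side:
iperf -s -u
# client side (hypothetical peer address and rate):
iperf -c 192.168.0.2 -u -b 90M -t 30 -i 1
 ---------------------------------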

There are lots of sysctl variables to adjust.  Here's a bunch I played with; 
importantly, you don't need to recompile the kernel to adjust most of the 
settings.  /etc/sysctl.conf & /boot/loader.conf should do it.  See the 
defaults in /boot/defaults/loader.conf
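
For reference, the runtime-tunable ones can also be flipped on a live box 
with sysctl(8) before you commit them to /etc/sysctl.conf; a rough sketch, 
using a couple of values from the config below: 

 ---------------------------------
# set values on the running kernel
sysctl net.inet.tcp.delayed_ack=0
sysctl kern.ipc.somaxconn=1024 

# check the current value first if you're unsure
sysctl net.inet.tcp.sendspace
 ---------------------------------
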
 ---------------------------------
> cat /etc/sysctl.conf
#kern.polling.enable=1
kern.polling.enable=1 

#kern.polling.user_frac: 50
#kern.polling.reg_frac: 20
##kern.polling.user_frac=70
##kern.polling.reg_frac=40 

#kern.polling.burst: 5
#kern.polling.each_burst: 5
#kern.polling.burst_max: 150  #default for 100MB/s 

##kern.polling.burst=50
kern.polling.each_burst=50
kern.polling.burst_max=1500 

#example I found on the web
#kern.polling.burst: 1000
#kern.polling.each_burst: 80
#kern.polling.burst_max: 1000 

#net.inet.tcp.sendspace: 32768
#net.inet.tcp.recvspace: 65536
#net.inet.tcp.sendspace=65536
#net.inet.tcp.recvspace=65536
#DO NOT SET THIS HIGHER THAN 65536 * 2 (FREEBSD BUG)
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072 

#sysctl net.inet.tcp.rfc1323=1  Activate window scaling and timestamp
#options according to RFC 1323.
#net.inet.tcp.rfc1323=1
net.inet.tcp.delayed_ack=0
net.inet.icmp.icmplim=1000 

#kern.ipc.maxsockbuf: 262144
###kern.ipc.maxsockbuf=20480000 

#The kern.ipc.somaxconn sysctl variable limits the size of the listen queue
#for accepting new TCP connections. The default value of 128 is typically too
#low for robust handling of new connections in a heavily loaded web server
#environment.
#kern.ipc.somaxconn: 128
kern.ipc.somaxconn=1024 

#The TCP Bandwidth Delay Product Limiting is similar to TCP/Vegas in NetBSD.
#It can be enabled by setting the net.inet.tcp.inflight.enable sysctl variable
#to 1. The system will attempt to calculate the bandwidth delay product for
#each connection and limit the amount of data queued to the network to just
#the amount required to maintain optimum throughput.
#This feature is useful if you are serving data over modems, Gigabit
#Ethernet, or even high speed WAN links (or any other link with a high
#bandwidth delay product), especially if you are also using window scaling or
#have configured a large send window. If you enable this option, you should
#also be sure to set net.inet.tcp.inflight.debug to 0 (disable debugging),
#and for production use setting net.inet.tcp.inflight.min to at least 6144
#may be beneficial. 

#these are the defaults
#net.inet.tcp.inflight.enable: 1
#net.inet.tcp.inflight.debug: 0
#net.inet.tcp.inflight.min: 6144
#net.inet.tcp.inflight.max: 1073725440
#net.inet.tcp.inflight.stab: 20 

#Disable entropy harvesting for ethernet devices and interrupts.  There are
#optimizations present in 6.x that have not yet been backported that improve
#the overhead of entropy harvesting, but you can get the same benefits by
#disabling it.  In your environment, it's likely not needed. I hope to
#backport these changes in a couple of weeks to 5-STABLE.
kern.random.sys.harvest.ethernet=0
kern.random.sys.harvest.interrupt=0 


#################################################
#/boot/loader stuff 

#kern.ipc.maxsockets: 131072
#sysctl: Tunable values are set in /boot/loader.conf 

#sysctl kern.ipc.nmbclusters    View maximum number of mbuf clusters. Used
#for storage of data packets to/from the network interface. Can only be set
#at boot time - see above.
#kern.ipc.nmbclusters: 25600
 --------------------------------- 
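
The loader tunables at the end (kern.ipc.maxsockets, kern.ipc.nmbclusters) 
can't be changed by sysctl at runtime, so they go in /boot/loader.conf with 
quoted values.  Roughly like this -- the numbers are illustrative, not a 
recommendation: 

 ---------------------------------
# /boot/loader.conf -- read at boot, see /boot/defaults/loader.conf
kern.ipc.nmbclusters="32768"    # mbuf clusters (default here was 25600)
kern.ipc.maxsockets="131072"
kern.hz="1000"                  # per Chuck's HZ suggestion below
 ---------------------------------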

 

Regards,
Dave 


Chuck Swiger writes: 

> Matthew Jakeman wrote:
>> Some colleagues and I have performed some simple tests on various OSes 
>> using iperf, simply firing packets from one PC to another over Ethernet 
>> to test a few characteristics, such as packet loss and jitter, for both 
>> IPv4 and IPv6. The configurations for all three OSes were 'out of the 
>> box' installs. The results we got back are strange for FreeBSD with 
>> regard to the packet loss iperf reports, and I was wondering if anyone 
>> has any ideas why they might be as they are. The image at the link below 
>> shows the packet loss results for Windows, Linux and FreeBSD for 
>> comparison. As you can see, the packet loss for v6 is substantially less 
>> than for v4 on FreeBSD, but it is still substantially larger than for 
>> the other two OSes. Does anyone have any idea why this might be? 
>> 
>> http://www.mjakeman.co.uk/images/4v6tests.jpg
> 
> You're probably getting packet loss either because you are filling up the 
> network buffer space without pausing until it drains, or are running into 
> ICMP response limits.  If you're going to be testing latency around the 
> millisecond level, you'll need to increase HZ to at least 1000, if not 
> better. 
> 
> For example, set "sysctl net.inet.icmp.icmplim=20" on a machine called 
> shot. 
> 
> # ping -c 1000 -i 0.01 -s 1280 shot
> PING shot (199.103.21.228): 1280 data bytes
> 1288 bytes from 199.103.21.228: icmp_seq=0 ttl=64 time=0.935 ms
> [ ... ]
> --- shot ping statistics ---
> 1000 packets transmitted, 220 packets received, 78% packet loss
> round-trip min/avg/max/stddev = 0.842/0.877/1.234/0.077 ms 
> 
> With "sysctl net.inet.icmp.icmplim=2000": 
> 
> [ ... ]
> 1288 bytes from 199.103.21.228: icmp_seq=999 ttl=64 time=0.870 ms 
> 
> --- shot ping statistics ---
> 1000 packets transmitted, 1000 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.838/0.858/1.068/0.020 ms 
> 
> ...or even: 
> 
> # ping -c 1000 -i 0.001 -s 1280 shot
> [ ... ]
> 1288 bytes from 199.103.21.228: icmp_seq=999 ttl=64 time=0.849 ms 
> 
> --- shot ping statistics ---
> 1000 packets transmitted, 1000 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.839/0.856/1.010/0.015 ms 
> 
> 	----- 
> 
> You haven't provided a test methodology.  You haven't provided the source 
> code for the benchmark program you are using.  You also haven't provided 
> any details about the hardware being used, the network topology, or even 
> what some of the values in this .jpg image mean.  (For example, what is 
> the first column, "duration", measuring?) 
> 
> -- 
> -Chuck 
> 
 


