NFS Performance issue against NetApp
Marc G. Fournier
scrappy at hub.org
Tue Apr 23 23:26:08 UTC 2013
Morning …
I'm trying to figure out where the performance issues are arising, and I suspect it's a lack of tuning on the FreeBSD side …
Hardware-wise, I have an HP ProLiant DL360p Gen8 server, 16G of RAM, bge ethernet … I have two ethernet ports in use, one as a private backend link to the NFS filer, the other for the public-facing IP.
The switch is an HP 2910al-24G.
The NetApp is a 3xxx-series machine, with its private IP assigned to a two-port 1G trunk into the HP switch.
The network itself is pretty much dead, since we haven't gone into production yet … the only time you see any traffic on the switch is when running tests, and even then the *max* is ~45% utilization.
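To rule the wire itself in or out, raw TCP throughput between the jboss box and another host on the same switch can be checked with iperf from ports (just a sketch; the address is an example, and the filer itself can't run iperf, so this only exercises the client / switch path):

iperf -s                          # on a second host on the same switch
iperf -c 192.168.1.20 -t 30 -i 5  # on the jboss box: 30s test, report every 5s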
The application is jboss … on a standalone machine, startup takes <60s … on the NFS-mounted one, it takes >4m … I expect some discrepancy, but 4x?
I talked to NetApp first, and they had me run perfstat to gather information while running through the jboss startup; the numbers / graphs all show *very* low latencies … read latency down around 0.6ms, for example.
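For the FreeBSD side of the same test, nfsstat should give the client RPC picture, retransmits and timeouts in particular. A rough sketch (zero the counters first, run the startup, then look):

nfsstat -z               # reset the NFS counters before the test
# ... run the jboss startup ...
nfsstat -c               # client RPC stats: look at the retries / timeouts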
I did a search of NetApp's KB and found an article that talks about several kernel settings, but they are for FreeBSD 4.x … do any (or all) of them still apply?
===
It is recommended that the latest stable release of the FreeBSD kernel be used, that is, currently version 4.11, which is also the last of the 4-STABLE branch releases.
The latest and final FreeBSD release from the 5-STABLE branch is 5.5, and was released in May 2006. As for the 6-STABLE release, FreeBSD version 6.1 was released on May 8, 2006.
Make the kernel parameter changes listed below. These can be added to /etc/sysctl.conf and the system rebooted, or they can be set temporarily with sysctl -w <option>=<value>. If the latter is done, run killall -9 nfsd and restart NFS.
vfs.vmiodirenable=1
kern.maxfiles=65536
kern.maxfilesperproc=32768
kern.ipc.maxsockbuf=2097152
kern.ipc.somaxconn=8192
kern.ipc.maxsockets=16424
net.inet.tcp.rfc1323=1
net.inet.tcp.delayed_ack=0
net.inet.tcp.sendspace=65535
net.inet.tcp.recvspace=65535
net.local.stream.recvspace=65535
net.local.stream.sendspace=65535
kern.ipc.somaxconn=4096
In many instances, using net.inet.tcp.rfc1323=0 instead of net.inet.tcp.rfc1323=1 results in better performance.
===
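For what it's worth, this is how I read the article's instructions for applying a value on the fly (whether restarting nfsd is even relevant on a pure client is part of what I'm unsure about):

sysctl net.inet.tcp.delayed_ack=0                        # takes effect immediately
echo 'net.inet.tcp.delayed_ack=0' >> /etc/sysctl.conf    # persist across reboots
killall -9 nfsd                                          # per the article's note
service nfsd onestart                                    # assumes the stock rc script; adjust to whatever is actually running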
My current settings for the above list are:
vfs.vmiodirenable: 1
kern.maxfiles: 131068
kern.maxfilesperproc: 11095
kern.ipc.maxsockbuf: 2097152
kern.ipc.somaxconn: 128
kern.ipc.maxsockets: 25600
net.inet.tcp.rfc1323: 1
net.inet.tcp.delayed_ack: 1
net.inet.tcp.sendspace: 32768
net.inet.tcp.recvspace: 65536
net.local.stream.recvspace: 8192
net.local.stream.sendspace: 8192
kern.ipc.somaxconn: 128
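To save anyone squinting back and forth between the two lists, here's a quick sh sketch that diffs the running values against the article's numbers (hard-coded from the quote above; I've used the 8192 value for somaxconn since the article lists it twice):

#!/bin/sh
# compare running sysctl values against the NetApp article's recommendations
for pair in \
    vfs.vmiodirenable=1 \
    kern.maxfiles=65536 \
    kern.maxfilesperproc=32768 \
    kern.ipc.maxsockbuf=2097152 \
    kern.ipc.somaxconn=8192 \
    kern.ipc.maxsockets=16424 \
    net.inet.tcp.rfc1323=1 \
    net.inet.tcp.delayed_ack=0 \
    net.inet.tcp.sendspace=65535 \
    net.inet.tcp.recvspace=65535 \
    net.local.stream.recvspace=65535 \
    net.local.stream.sendspace=65535
do
    oid=${pair%%=*}
    want=${pair#*=}
    have=$(sysctl -n "$oid")
    [ "$have" != "$want" ] && echo "$oid: have $have, article recommends $want"
done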
Thoughts / suggestions?
Thank you …