NFS poor performance in ipfw_nat
Rick Macklem
rmacklem at uoguelph.ca
Wed Sep 19 13:57:06 UTC 2018
KIRIYAMA Kazuhiko wrote:
[good stuff snipped]
>
> Thanks for your advice. After adding '-lro' and '-tso' to ifconfig, the
> transfer rate went up to almost native NIC speed:
>
> # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 10.688162 secs (100460852 bytes/sec)
> #
>
> BTW, in a VM on bhyve, the transfer rate to an NFS mount of the VM host
> (the bhyve server) is noticeably lower:
>
> # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 32.094448 secs (33455687 bytes/sec)
>
> This was limited by disk transfer speed:
>
> # dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes transferred in 21.692358 secs (49498623 bytes/sec)
> #
It sounds like this is resolved, thanks to Andrey.
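In case it is useful to others hitting the same thing, the LRO/TSO change mentioned
above can be made persistent across reboots via rc.conf. A minimal sketch, assuming
the interface is ix0 with a static address (both made up here; substitute your actual
NIC and any existing options):

/etc/rc.conf:
ifconfig_ix0="inet 192.0.2.10/24 -lro -tso"

Or applied immediately on the running system:

# ifconfig ix0 -lro -tso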
If you have more problems like this, another thing to try is reducing the I/O
size with mount options at the client.
For example, you might try adding "rsize=4096,wsize=4096" to your mount and
then increase the size by powers of 2 (8192, 16384, 32768) to see which size
works best. (This is another way to work around TSO problems. It also helps
when a net interface or packet filter can't keep up with a burst of 40+ Ethernet
packets, which is what gets generated when 64K I/O is used.)
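A rough sketch of what that looks like (the server name, export path and mount
point below are made up; adjust them to your setup):

# mount -t nfs -o rsize=4096,wsize=4096 server:/export /mnt

or as an /etc/fstab entry:

server:/export  /mnt  nfs  rw,rsize=4096,wsize=4096  0  0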
Btw, doing "nfsstat -m" on the client will show you what mount options are
actually being used. This can be useful information.
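For example, on the client:

# nfsstat -m

The output lists each NFS mount along with the options in effect, including the
rsize and wsize actually negotiated with the server.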
Good to hear it has been resolved, rick
[more stuff snipped]