Slow disk write speeds over network

Terry Lambert tlambert2 at mindspring.com
Wed Jun 11 02:44:08 PDT 2003


Sean Chittenden wrote:
> > >...and yet more sysctl's for this:
> > >
> > >     kern.polling.enable=1
> > >     kern.polling.user_frac=50       # 0..100; whatever works best
> > >
> > >If you've got a really terrible Gigabit Ethernet card, then
> > >you may be copying all your packets over again (e.g. m_pullup()),
> > >and that could be eating your bus, too.
> >
> > Ok, so the end result is that after playing around with sysctl's,
> > I've found that the tcp transfers are doing 20MB/s over FTP, but my
> > NFS is around 1-2MB/s - still slow.. So we've cleared up some tcp
> > issues, but yet still NFS is stinky..
> >
> > Any more ideas?
> 
> I'm using UDP NFS over a 100Mbit/FD link with the following settings
> and get about 12-14Mbps:


Numbers taken in the context of the original poster... YMMV:

> net.inet.tcp.recvspace=65536

This is most important for writes.  The sendspace is generally not
going to help you out unless you are starvation deadlocked, and it
didn't look like you were, from your previous posting.  BTW: I
believe this is the default.
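
For reference, you can check what the kernel is currently using
before changing anything (quick sketch; verify the defaults on your
own box rather than trusting my memory):

    # Inspect the current TCP socket buffer sizes.
    sysctl net.inet.tcp.recvspace
    sysctl net.inet.tcp.sendspace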

> net.inet.tcp.sendspace=65536

Double the default.  Might not be a good idea unless you have a
ton of memory.  You will potentially use 64K send + 64K receive
times the number of sockets.  Assuming 4G and near-perfect tuning,
you will be limited to 16384 simultaneous connections, fully packed,
before memory pressure causes your machine to crash.  I tend to
like smaller buffers and more connections.  If you only have 512M,
that drops to about 2048 simultaneous connections with all buffers
full.
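
To spell the arithmetic out (the "half of RAM" factor is my rough
assumption for how much you can realistically spend on socket
buffers before other things start to hurt):

    64K send + 64K receive             = 128K per connection
    ~2G usable of a 4G box    / 128K   = ~16384 connections
    ~256M usable of a 512M box / 128K  = ~2048 connections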

> kern.maxfiles=65536

Seems like overkill for the number of connections you can support
without overcommit, and for the number of client machines you say
you have.

> kern.ipc.maxsockbuf=2097152
> kern.ipc.somaxconn=8192

IPC numbers; not relevant here.

> net.inet.tcp.delayed_ack=0

This will make it more responsive, at some cost in overhead (you
end up sending more ACK packets).
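
If you want to experiment with these, the usual drill (a sketch;
plain stock FreeBSD mechanics) is to poke the value at runtime first
and only persist it once you like the result:

    # Try the value on the running system...
    sysctl -w net.inet.tcp.delayed_ack=0
    # ...then make it stick across reboots:
    echo 'net.inet.tcp.delayed_ack=0' >> /etc/sysctl.conf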

> net.inet.udp.recvspace=65536
> net.inet.udp.maxdgram=57344

These are important for UDP NFS, which I do not recommend.
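
If you take that advice, a TCP NFSv3 mount on the client looks
something like this (sketch only; the host and sizes are made up,
so check mount_nfs(8) on your release):

    # TCP transport, NFSv3, 32K read/write sizes instead of the default.
    mount_nfs -T -3 -r 32768 -w 32768 fileserver:/export /mnt/export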

> net.local.stream.sendspace=65536
> net.local.stream.recvspace=65536

IPC numbers; not relevant here.

> vfs.nfs.async=1

This is very dangerous if you care about your data.  It permits
NFS to ACK writes before they have been committed to stable
storage.  With a large enough window size, this should not be
necessary.
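
In other words, leave it at the default and get your throughput
from the buffer and window sizes instead (sketch):

    # Default (safe) behavior: writes are not ACKed until they are
    # on stable storage.
    sysctl -w vfs.nfs.async=0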


> net.inet.udp.log_in_vain=1

This is just overhead; I recommend turning it off.

> net.inet.icmp.icmplim=20000

This is only useful for TCP, but it can be very useful.  Basically,
this is "connection rate limiting".  If you have a ton of clients,
or are trying to "netbench" the system, then raise this number.  For
100 NFS clients, it likely does not matter.
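
The stock limit is quite low (200 responses per second, if I
remember the default right), so bumping it looks like this (the
20000 is the value from the posting above, not a number I am
endorsing):

    # Rate limit on ICMP (and closed-port RST) responses per second.
    sysctl -w net.inet.icmp.icmplim=20000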


> I'm not taking into account jumbo frames or anything like that, so you
> may want to increase the size of some of these values where
> appropriate, but some of these may be a start.  -sc

In my experience, Intel GigE cards do not play nice with others
when it comes to jumbo frames or negotiation.  I much prefer the
Tigon/Alteon/Broadcom/whoever-they-are-this-week cards (still no
firmware), though I would obviously like the same firmware access
to the Tigon III's that they used to give us for the Tigon II's.

-- Terry

