[Bug 208130] smbfs is slow because it (apparently) doesn't do any caching/buffering

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Fri Mar 18 22:55:44 UTC 2016


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208130

            Bug ID: 208130
           Summary: smbfs is slow because it (apparently) doesn't do any
                    caching/buffering
           Product: Base System
           Version: 10.2-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs at FreeBSD.org
          Reporter: noah.bergbauer at tum.de
                CC: freebsd-amd64 at FreeBSD.org

I set up an smbfs mount on FreeBSD 10.2-RELEASE today and noticed that it's
very slow. How slow? Some numbers: reading a 600MB file from the share with dd
runs at around 1 MB/s, while doing the same from a Linux VM running inside
bhyve on this very same machine yields a whopping 100 MB/s. I conclude that
the SMB server itself is not the bottleneck in this case.
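
For completeness, the FreeBSD-side test boils down to something like this
(placeholder paths, the real ones don't matter):

  mount_smbfs //user@server/share /mnt/smb
  dd if=/mnt/smb/testfile of=/dev/null

and the Linux guest in bhyve mounts the same share via cifs and runs the
equivalent dd.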

There's a recent discussion about this on freebsd-hackers
(https://lists.freebsd.org/pipermail/freebsd-hackers/2015-November/048597.html)
which reveals an interesting detail: the situation can be improved massively,
to around 60 MB/s, on the FreeBSD side just by using a larger dd buffer size
(e.g. 1MB). Interestingly, using very small buffers has only a negligible
impact on Linux (until the whole affair gets CPU-bottlenecked, of course).
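
Reproducing that difference is just a matter of varying bs on the same read
(placeholder path again):

  dd if=/mnt/smb/testfile of=/dev/null bs=1k
  dd if=/mnt/smb/testfile of=/dev/null bs=1m

which on this machine shows roughly the 1 MB/s vs. ~60 MB/s split described
above (the exact numbers will of course depend on the network and the server).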

I know little about SMB but a quick network traffic analysis gives some
insights: FreeBSD's smbfs seems to translate every read() call from dd directly
into an SMB request. So with a small buffer size of e.g. 1k, something like
this seems to happen:

* client requests 1k of data
* client waits for a response (network round-trip)
* client receives response
* client hands data to dd which then issues another read()
* client requests 1k of data
* ...
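
(For the record, this pattern comes straight from a packet capture; running
something like

  tcpdump -n -i <iface> port 445 or port 139

while dd is reading is enough to see the strictly lock-step request/response
exchange.)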

Note how we're spending most of our time waiting for network round trips.
Because a bigger buffer means larger SMB requests, this obviously leads to
better link utilization and less time wasted waiting.
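
A quick back-of-the-envelope check based on the numbers above: 1 MB/s with 1k
reads means on the order of 1000 request/response cycles per second, i.e.
roughly 1 ms per round trip, which is a perfectly plausible figure for network
latency plus server-side processing. With 1MB requests that per-request cost
is paid about a thousand times less often, which is where the jump to tens of
MB/s comes from.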

I'm unable to spot a similar pattern on Linux. Here, a steady flow of data is
maintained even with small buffer sizes, so apparently some caching/buffering
must be happening. Linux's cifs has a "cache" mount option, and indeed,
disabling it produces exactly the same performance (and network) behavior I'm
seeing on FreeBSD.
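
For reference, disabling that cache on the Linux side is just a mount option;
the comparison was done with something along the lines of

  mount -t cifs //server/share /mnt/cifs -o cache=none

(plus the usual credential options) versus a plain mount with the default
cache behavior.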


So to sum things up: the fact that smbfs doesn't do anything like Linux's cifs
caching/buffering causes up to a 100-fold performance hit for reads done in
small chunks. Obviously, that's a problem.
