ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE
Bartosz Stec
bartosz.stec at it4pro.pl
Mon Feb 28 20:16:37 UTC 2011
On 2011-02-24 08:55, Jeremy Chadwick wrote:
> (...snip...)
> Samba
> =======================
> Rebuild the port (ports/net/samba35) with AIO_SUPPORT enabled. To use
> AIO you will need to load the aio.ko kernel module (kldload aio) first.
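(For reference, the rebuild-and-load steps described above can be sketched
like this - the standard ports workflow; the AIO_SUPPORT knob is set in the
port's config dialog:)

```shell
# Rebuild Samba from ports with AIO support
cd /usr/ports/net/samba35
make config            # enable AIO_SUPPORT in the dialog
make install clean

# Load the aio kernel module now, and verify it is present
kldload aio
kldstat | grep aio

# Make the module load persistent across reboots
echo 'aio_load="YES"' >> /boot/loader.conf
```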
>
> Relevant smb.conf tunings:
>
> [global]
> socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
> use sendfile = no
> min receivefile size = 16384
> aio read size = 16384
> aio write size = 16384
> aio write behind = yes
>
>
>
> ZFS pools
> =======================
> pool: backups
> state: ONLINE
> scrub: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> backups ONLINE 0 0 0
> ada2 ONLINE 0 0 0
>
> errors: No known data errors
>
> pool: data
> state: ONLINE
> scrub: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> data ONLINE 0 0 0
> ada1 ONLINE 0 0 0
>
> errors: No known data errors
>
>
>
> ZFS tunings
> =======================
> Your tunings here are "wild" (meaning all over the place). Your use
> of vfs.zfs.txg.synctime="1" is probably hurting you quite badly, in
> addition to your choice to enable prefetching (every FreeBSD ZFS system
> I've used has benefited tremendously from having prefetching disabled,
> even on systems with 8GB RAM and more). You do not need to specify
> vm.kmem_size_max, so please remove that. Keeping vm.kmem_size is fine.
> Also get rid of your vdev tunings; I'm not sure why you have those.
>
> My relevant /boot/loader.conf tunings for 8.2-RELEASE (note to readers:
> the version of FreeBSD you're running, and build date, matters greatly
> here so do not just blindly apply these without thinking first):
>
> # We use Samba built with AIO support; we need this module!
> aio_load="yes"
>
> # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
> vm.kmem_size="8192M"
> vfs.zfs.arc_max="6144M"
>
> # Disable ZFS prefetching
> # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
> # Increases overall speed of ZFS, but when disk flushing/writes occur,
> # system is less responsive (due to extreme disk I/O).
> # NOTE: Systems with 8GB of RAM or more have prefetch enabled by
> # default.
> vfs.zfs.prefetch_disable="1"
>
> # Decrease ZFS txg timeout value from 30 (default) to 5 seconds. This
> # should increase throughput and decrease the "bursty" stalls that
> # happen during immense I/O with ZFS.
> # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
> # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
> vfs.zfs.txg.timeout="5"
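(These are loader-time tunables, so they only take effect after a reboot; a
quick way to confirm they were actually picked up afterwards:)

```shell
# Loader tunables are read-only at runtime; check the live values
sysctl vm.kmem_size vfs.zfs.arc_max
sysctl vfs.zfs.prefetch_disable
sysctl vfs.zfs.txg.timeout
```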
>
>
>
> sysctl tunings
> =======================
> Please note that the below kern.maxvnodes tuning is based on my system
> usage, and yours may vary, so you can remove or comment out this option
> if you wish. The same goes for vfs.ufs.dirhash_maxmem. As for
> vfs.zfs.txg.write_limit_override, I strongly suggest you keep this
> commented out for starters; it effectively "rate limits" ZFS I/O, and
> this smooths out overall performance (otherwise I was seeing what
> appeared to be incredible network transfer speed, then the system would
> churn hard for quite some time on physical I/O, then fast network speed,
> physical I/O, etc... very "bursty", which I didn't want).
>
> # Increase send/receive buffer maximums from 256KB to 16MB.
> # FreeBSD 7.x and later will auto-tune the size, but only up to the max.
> net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.recvbuf_max=16777216
>
> # Double send/receive TCP datagram memory allocation. This defines the
> # amount of memory taken up by default *per socket*.
> net.inet.tcp.sendspace=65536
> net.inet.tcp.recvspace=131072
>
> # dirhash_maxmem defaults to 2097152 (2048KB). dirhash_mem has reached
> # this limit a few times, so we should increase dirhash_maxmem to
> # something like 16MB (16384*1024).
> vfs.ufs.dirhash_maxmem=16777216
>
> #
> # ZFS tuning parameters
> # NOTE: Be sure to see /boot/loader.conf for additional tunings
> #
>
> # Increase number of vnodes; we've seen vfs.numvnodes reach 115,000
> # at times. Default max is a little over 200,000. Playing it safe...
> kern.maxvnodes=250000
>
> # Set TXG write limit to a lower threshold. This helps "level out"
> # the throughput rate (see "zpool iostat"). A value of 256MB works well
> # for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
> # disks which have 64MB cache.
> vfs.zfs.txg.write_limit_override=1073741824
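(To experiment with the values above before committing them to
/etc/sysctl.conf, they can be set on the fly; `zpool iostat` then makes the
"levelling" effect on the throughput rate easy to watch:)

```shell
# Apply the sysctl values immediately, without a reboot
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl kern.maxvnodes=250000
sysctl vfs.zfs.txg.write_limit_override=1073741824

# Watch the pool's write rate while copying a large file over SMB;
# with the limit in place the bursts should flatten out
zpool iostat 1
```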
>
>
>
> Good luck.
>
Jeremy, you're just invaluable! :)
In short: I applied the tips suggested above (the only differences were
vfs.zfs.txg.write_limit_override set to 128MB, and sendfile, which I
still have enabled), and it's the first time _ever_ I've seen Samba
perform this fast on FreeBSD (on a 100Mb link)!
Long story: I'm using an old, crappy, low-memory desktop PC as a home
router / test server / (very small) storage box:
FreeBSD 9.0-CURRENT #2 r219090: Mon Feb 28 03:06:13 CET 2011
CPU: mobile AMD Athlon(tm) XP 2200+ (1800.10-MHz 686-class CPU)
real memory = 1610612736 (1536 MB)
avail memory = 1562238976 (1489 MB)
ad0: 39205MB <Maxtor 6E040L0 NAR61590> at ata0-master UDMA133
ad1: 38166MB <SAMSUNG SP0411N TW100-08> at ata0-slave UDMA100
ad2: 39205MB <Maxtor 6E040L0 NAR61590> at ata1-master UDMA133
xl0: <3Com 3c905B-TX Fast Etherlink XL>
It's a ZFS-only system (just updated to pool v28) in a RAIDZ1
configuration, attached to a cheap Belkin 100Mb switch on the home
network.
For a couple of months I had experienced pathetic SMB transfer rates -
from 20kB/s to 200kB/s - especially when the system was idle. The
funniest thing about it: transfers were much better when the system was
busy (csup or make world, for instance); SMB throughput jumped to
2-4MB/s then (well, from time to time at least).
I had been using the following settings and tunings while experiencing
this issue:
smb.conf:
[global]
socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536
use sendfile = yes
min receivefile size = 16384
aio read size = 16384
aio write size = 16384
aio write behind = true
loader.conf:
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="1024M"
aio_load="YES"
sysctl.conf:
kern.ipc.maxsockbuf=2097152
net.inet.tcp.recvspace=262144
net.inet.tcp.mssdflt=1452
net.inet.udp.recvspace=65535
net.inet.udp.maxdgram=65535
net.local.stream.recvspace=65535
net.local.stream.sendspace=65535
After applying the tunables from Jeremy, my configs look like this:
smb.conf:
[global]
socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
use sendfile = yes
min receivefile size = 16384
aio read size = 16384
aio write size = 16384
aio write behind = yes
loader.conf:
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="1024M"
vfs.zfs.txg.timeout="5"
aio_load="YES"
sysctl.conf:
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=131072
vfs.ufs.dirhash_maxmem=16777216
kern.maxvnodes=250000
vfs.zfs.txg.write_limit_override=134217728
Test: copying a 1GB file in both directions.
Result: a stable 8MB/s both ways!
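For what it's worth, the test is easy to reproduce; something along these
lines (the //server/share path and user name are placeholders, not the
actual share from this report):

```shell
# Create a 1GB file of zeros to copy (block size given in bytes)
dd if=/dev/zero of=/tmp/testfile bs=1048576 count=1024

# Transfer it in each direction; smbclient prints the average
# throughput after each put/get
smbclient //server/share -U user -c 'put /tmp/testfile testfile'
smbclient //server/share -U user -c 'get testfile /tmp/testfile.back'
```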
Thank you very much!
--
Bartosz Stec