Performance issues with 8.0 ZFS and sendfile/lighttpd

Jeremy Chadwick freebsd at jdc.parodius.com
Sat Nov 7 07:31:54 UTC 2009


On Fri, Nov 06, 2009 at 11:41:12PM +0100, Miroslav Lachman wrote:
> Thomas Backman wrote:
> >On Nov 6, 2009, at 7:36 PM, Miroslav Lachman wrote:
> >
> >>Ivan Voras wrote:
> >>>Miroslav Lachman wrote:
> >>>>Ivan Voras wrote:
> >>>>>Miroslav Lachman wrote:
> >>>>
> >>>>[..]
> >>>>
> >>>>>>I have an even stranger issue with Lighttpd in a jail on top of
> >>>>>>ZFS. Lighttpd is serving static content (MP3 downloads through a
> >>>>>>Flash player). It runs fine for a relatively small number of
> >>>>>>parallel clients, with bandwidth around 30 Mbps, but once some
> >>>>>>number of clients is reached (about 50-60 in parallel) the
> >>>>>>throughput drops to 6 Mbps.
> >>>>>>
> >>>>>>I can serve hundreds of clients on the same HW using Lighttpd not
> >>>>>>in a jail, with UFS2 + gjournal instead of ZFS, reaching 100 Mbps
> >>>>>>(maybe more).
> >>>>>>
> >>>>>>I don't know if it is a ZFS or a jail issue.
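> >>>>>>
> >>>>>>Since the subject mentions sendfile, one thing I have not tried yet
> >>>>>>is taking sendfile out of the picture entirely, e.g. with something
> >>>>>>like this in lighttpd.conf (an untested sketch on my side):
> >>>>>>
> >>>>>># force the writev backend instead of sendfile
> >>>>>>server.network-backend = "writev"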
> >>>>>
> >>>>>Do you have actual disk IO, or is the vast majority of your data
> >>>>>served from the caches? (Actually - the same question to the OP.)
> >>>>
> >>>>I had a ZFS zpool as a mirror of two SATA II drives (500 GB), and at
> >>>>peak iostat (or systat -vm, or gstat) showed about 80 tps / 60% busy.
> >>>>
> >>>>In the UFS case, I am using gmirrored 1 TB SATA II drives, working
> >>>>nicely at 160 or more tps.
> >>>>
> >>>>Both setups are using FreeBSD 7.x amd64 with GENERIC kernel, 4GB of
> >>>>RAM.
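> >>>>
> >>>>For completeness, this is roughly how I watched the disks during the
> >>>>peaks (the device names below are just examples from this box):
> >>>>
> >>>># per-device stats, extended output, refreshed every second
> >>>>iostat -x -w 1 ad4 ad6
> >>>># the same picture at the GEOM level
> >>>>gstat -I 1s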
> >>>>
> >>>>As ZFS + Lighttpd in a jail was unreliable, I am no longer using it,
> >>>>but if you want more info for debugging, I can set it up again.
> >>>
> >>>For what it's worth, I have just set up a little test on a production
> >>>machine with three 500 GB SATA drives in RAIDZ, FreeBSD 7.2-RELEASE.
> >>>The total data set is some 2 GB in 5000 files, but the machine has
> >>>only 2 GB RAM total, so there is some disk IO - about 40 IOPS per
> >>>drive. I'm also using Apache-worker, not lighty, and siege to
> >>>benchmark with 10 concurrent users.
> >>>
> >>>In this setup, the machine has no problem saturating a 100 Mbit/s
> >>>link - it's not on a LAN, but the latency is close enough, and I get
> >>>~11 MB/s.
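> >>>
> >>>The siege run was along these lines (the duration and the URL file
> >>>are just illustrative, not the exact command):
> >>>
> >>># 10 concurrent users for 2 minutes, URLs read from a file
> >>>siege -c 10 -t 2M -f urls.txt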
> >>
> >>[...]
> >>/boot/loader.conf:
> >>
> >>## eLOM support
> >>hw.bge.allow_asf="1"
> >>## gmirror RAID1
> >>geom_mirror_load="YES"
> >>## ZFS tuning
> >>vm.kmem_size="1280M"
> >>vm.kmem_size_max="1280M"
> >>kern.maxvnodes="400000"
> >>vfs.zfs.prefetch_disable="1"
> >>vfs.zfs.arc_min="16M"
> >>vfs.zfs.arc_max="128M"
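> >>
> >>(To see what the kernel actually ended up with at runtime - the output
> >>is of course machine-specific - the matching sysctls can be queried:)
> >>
> >># loader-set limits as the kernel sees them
> >>sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max
> >># current ARC size in bytes, to see whether the cap is being hit
> >>sysctl kstat.zfs.misc.arcstats.size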
> 
> >I won't pretend to know much about this area, but your ZFS values here
> >are very low. May I assume that they are remnants of the times when the
> >ARC grew insanely large and caused a kernel panic?
> >You're effectively forcing ZFS to use no more than 128 MB of cache,
> >which doesn't sound like a great idea if you've got 2+ GB of RAM. I've
> >had no trouble without any tuning whatsoever on 2 GB for a long time
> >now. The kmem lines can probably be omitted if you're on amd64, too
> >(the default value of kmem_size_max is about 307 GB on my machine).
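> >
> >Purely as an illustration (the number is a guess for a 4 GB amd64 box,
> >not a recommendation), a less restrictive loader.conf could be just:
> >
> >## let the kernel auto-size kmem; only cap the ARC
> >vfs.zfs.arc_max="1024M"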
> 
> Yes, those loader values are a year old, from when I installed this
> machine. But I think the auto-tuning was committed by Kip Macy after
> 7.2-RELEASE, so some of them are still needed - or am I wrong? (This
> is 7.2-RELEASE.) ...

We don't know, because none of the people currently maintaining ZFS have
actually responded to this question:

http://lists.freebsd.org/pipermail/freebsd-stable/2009-October/052256.html

The community really needs an official answer to this question, one from
people familiar with the code.

-- 
| Jeremy Chadwick                                  jdc at parodius.com |
| Parodius Networking                        http://www.parodius.com/ |
| UNIX Systems Administrator                   Mountain View, CA, USA |
| Making life hard for others since 1977.             PGP: 4BD6C0CB   |

