Performance issues with 8.0 ZFS and sendfile/lighttpd

Thomas Backman serenity at exscape.org
Fri Nov 6 18:45:51 UTC 2009


On Nov 6, 2009, at 7:36 PM, Miroslav Lachman wrote:

> Ivan Voras wrote:
>> Miroslav Lachman wrote:
>>> Ivan Voras wrote:
>>>> Miroslav Lachman wrote:
>>>
>>> [..]
>>>
>>>>> I have a stranger issue with Lighttpd in a jail on top of ZFS.
>>>>> Lighttpd is serving static content (mp3 downloads through a
>>>>> flash player). It runs fine for a relatively small number of
>>>>> parallel clients with a bandwidth of about 30 Mbps, but once a
>>>>> certain number of clients is reached (about 50-60 parallel
>>>>> clients) the throughput drops to 6 Mbps.
>>>>>
>>>>> I can serve hundreds of clients on the same HW using Lighttpd
>>>>> outside a jail on UFS2 with gjournal instead of ZFS, reaching
>>>>> 100 Mbps (maybe more).
>>>>>
>>>>> I don't know if it is a ZFS or a jail issue.
>>>>
>>>> Do you have actual disk IO, or is the vast majority of your data
>>>> served from the caches? (Actually - the same question to the OP.)
>>>
>>> I had a ZFS zpool as a mirror of two SATA II drives (500 GB), and
>>> at peak iostat (or systat -vm or gstat) showed about 80 tps /
>>> 60% busy.
>>>
>>> In the case of UFS, I am using gmirrored 1 TB SATA II drives,
>>> working nicely at 160 or more tps.
>>>
>>> Both setups are using FreeBSD 7.x amd64 with a GENERIC kernel and
>>> 4 GB of RAM.
>>>
>>> As the ZFS + Lighttpd in a jail setup was unreliable, I am no
>>> longer using it, but if you want some more info for debugging, I
>>> can set it up again.
>>
>> For what it's worth, I have just set up a little test on a
>> production machine with three 500 GB SATA drives in RAIDZ, FreeBSD
>> 7.2-RELEASE. The total data set is some 2 GB in 5000 files, but the
>> machine has only 2 GB of RAM total, so there is some disk IO -
>> about 40 IOPS per drive. I'm also using Apache-worker, not lighty,
>> and siege to benchmark with 10 concurrent users.
>>
>> In this setup, the machine has no problem saturating a 100 Mbit/s
>> link - it's not on a LAN, but the latency is close enough and I get
>> ~11 MB/s.
>
> [...]
> /boot/loader.conf:
>
> ## eLOM support
> hw.bge.allow_asf="1"
> ## gmirror RAID1
> geom_mirror_load="YES"
> ## ZFS tuning
> vm.kmem_size="1280M"
> vm.kmem_size_max="1280M"
> kern.maxvnodes="400000"
> vfs.zfs.prefetch_disable="1"
> vfs.zfs.arc_min="16M"
> vfs.zfs.arc_max="128M"
I won't pretend to know much about this area, but your ZFS values
here are very low. May I assume they are remnants of the days when
the ARC grew insanely large and caused kernel panics?
You're effectively forcing ZFS to use no more than 128 MB of cache,
which doesn't sound like a great idea if you've got 2+ GB of RAM.
I've had no trouble without any tuning whatsoever on 2 GB for a long
time now. The kmem lines can probably be omitted if you're on amd64,
too (the default value for kmem_size_max is about 307 GB on my
machine).
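
If you want to see what the ARC is actually doing before and after
dropping those lines, the sysctls below should tell you (names as I
see them on my box - treat this as a sketch and adjust if yours
differ):

# current ARC size vs. its configured ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max
# kmem limits the kernel picked at boot
sysctl vm.kmem_size vm.kmem_size_max

If arcstats.size sits pinned at arc_max under load, the 128M cap is
probably what's hurting you.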

Regards,
Thomas

