zfs very poor performance compared to ufs due to lack of cache?

Steven Hartland killing at multiplay.co.uk
Wed Sep 15 15:04:52 UTC 2010


----- Original Message ----- 
From: "Andriy Gapon" <avg at freebsd.org>

>> Indeed. Where would this need to be addressed as ufs doesn't suffer from this?
> 
> In ZFS.  But I don't think that this is going to happen any time soon if at all.
> Authors of ZFS specifically chose to use a dedicated cache, which is ARC.
> Talk to them, or don't use ZFS, or get used to it.
> ARC has a price, but it supposedly has benefits too.
> Changing ZFS to use buffer cache is a lot of work and effectively means not using
> ARC, IMO.

Hmm, so taking a different tack on the issue, is there a way to make sendfile use data
directly from the ARC instead of having to copy it first?
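
Just so we're talking about the same path, here's a minimal sketch of the sendfile(2)
call I have in mind (hypothetical file/socket descriptors and error handling, not the
actual nginx code):

/*
 * Minimal sketch of the FreeBSD sendfile(2) call under discussion.
 * "path" and "sock_fd" are hypothetical; servers like nginx do this
 * internally when their sendfile option is enabled.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

static off_t
send_whole_file(const char *path, int sock_fd)
{
	off_t sent = 0;
	int file_fd = open(path, O_RDONLY);

	if (file_fd == -1)
		err(1, "open");

	/*
	 * nbytes == 0 asks the kernel to keep sending until EOF; the
	 * pages are supposed to come straight from the page/buffer
	 * cache without an extra copy through userland.
	 */
	if (sendfile(file_fd, sock_fd, 0, 0, NULL, &sent, 0) == -1)
		err(1, "sendfile");

	close(file_fd);
	return (sent);
}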

> Well, I thought that you hurried when you applied the patches and changed the
> settings at the same time.  This made it impossible for you to judge properly what
> patches do and don't do for you.

No hurry, just applying the patches that were suggested, retesting, applying the next one,
retesting, etc., but in parallel I've been reading up on the ARC tunables.

>> Now we have a very simple setup so we can make sensible values for min / max but
>> it still means that for every file being sent when sendfile is enabled:
>> 1. There are two copies in memory which is still going to mean that only half the
>> amount files can be successfully cached and served without resorting to disk IO.
> 
> Can't really say, depends on the size of the files.
> Though, it's approximately a half of what could have fit in memory with e.g. UFS, yes.

Out of interest, if a copy of the data is being made from the ARC, what ties those
two copies together in order to prevent the next request for the same file having to
create a third copy, etc.?
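
In the meantime I'll keep an eye on how much of physical memory the ARC itself is
holding, which is relevant to the "half of what could have fit" point above. A rough,
untested sketch using the stock sysctls:

/*
 * Rough sketch: report current ARC size against physical memory using
 * the standard FreeBSD sysctls kstat.zfs.misc.arcstats.size and
 * hw.physmem.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t arc_size = 0, physmem = 0;
	size_t len;

	len = sizeof(arc_size);
	if (sysctlbyname("kstat.zfs.misc.arcstats.size", &arc_size, &len,
	    NULL, 0) == -1)
		return (1);

	len = sizeof(physmem);
	if (sysctlbyname("hw.physmem", &physmem, &len, NULL, 0) == -1)
		return (1);

	printf("ARC %ju MB of %ju MB physical memory\n",
	    (uintmax_t)(arc_size >> 20), (uintmax_t)(physmem >> 20));
	return (0);
}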

>> 2. sendfile isn't achieving what it states it should be, i.e. a zero-copy. Does this
>> explain the other odd behaviour we noticed, high CPU usage from nginx?
> 
> sendfile should achieve zero copy with all the patches applied once both copies of
> data are settled in memory.  If you have insufficient memory to hold the workset,
> then that's a different issue of moving competing data in and out of memory. And
> that may explain the CPU load, but it's just a speculation.

Yes, more investigation needed ;-)
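
One thing I'll try is watching the ARC hit/miss counters under load to see whether the
working set is being evicted and re-read. Something along these lines (untested sketch,
standard arcstats sysctls):

/*
 * Untested sketch: sample the ARC hit/miss counters twice, ten seconds
 * apart, to get a feel for whether requests are being served from the
 * ARC or going back to disk.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t
arcstat(const char *name)
{
	uint64_t v = 0;
	size_t len = sizeof(v);

	(void)sysctlbyname(name, &v, &len, NULL, 0);
	return (v);
}

int
main(void)
{
	uint64_t h0 = arcstat("kstat.zfs.misc.arcstats.hits");
	uint64_t m0 = arcstat("kstat.zfs.misc.arcstats.misses");

	sleep(10);

	uint64_t h1 = arcstat("kstat.zfs.misc.arcstats.hits");
	uint64_t m1 = arcstat("kstat.zfs.misc.arcstats.misses");

	printf("last 10s: %ju ARC hits, %ju misses\n",
	    (uintmax_t)(h1 - h0), (uintmax_t)(m1 - m0));
	return (0);
}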
 
> At present I don't see any other way but brute force - throw even more RAM at the
> problem.
> 
> Perhaps, a miracle would happen and someone would post patches that radically
> change ZFS behavior with respect to caches.  But I don't expect it
> (pessimist/realist).

Or, alternatively, make sendfile work directly from the ARC; would that be possible?

Thanks for all the info :)

    Regards
    Steve



