zfs very poor performance compared to ufs due to lack of cache?
avg at freebsd.org
Tue Sep 21 12:19:11 UTC 2010
on 21/09/2010 15:02 Steven Hartland said the following:
> ----- Original Message ----- From: "Andriy Gapon" <avg at freebsd.org>
>> Yes, you really need to understand how VM works first.
>> Think of "what sendfile populates" as an L1 cache and ARC as an L2 cache with an
>> inclusive relationship (i.e. the same data can be in both). One difference from CPUs
>> is that the balance of sizes between L1 and L2 is established dynamically. Another
>> difference is that some operations, like read(2), bypass L1 and go to L2 directly.
>> If you use operations that work through L1 and most of your data is already in L1,
>> then why would you want L2 to be large?
> Thanks for bearing with me; I think a quick example might help explain my
> understanding, which I hope you will correct ;-)
> We have nginx serving some files and we have:-
> * machine with 7G RAM
> * file1 & file2 both 1GB in size
> * max ARC can grow to is 1.5GB
> * Inactive can grow to 5GB
> Now the process:-
> First client requests file1; it's loaded from disk into ARC and then transferred, and
> in doing so populates pages which are "inactive".
> Client 1 result:
> * all of file1 in ARC and in Inactive
> Second client requests file2; it's also loaded into ARC, pushing 500MB of file1 out
> as max ARC is 1.5GB. It is then transferred and hence populates inactive pages.
> Client 2 result:
> * half of file1 in ARC but all still in Inactive
> * all of file2 in ARC and Inactive
> Third client requests file1; now what happens here? Do we have to go back to disk to
> get the 500MB of file1 which is now not in ARC, or is the file transferred directly
> from the Inactive pages, which were never touched?
Yes, it should be transferred from the Inactive cache.
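The arithmetic in the scenario above can be sketched with a toy two-level inclusive
cache. This is purely illustrative: the sizes come from the example, and simple LRU
eviction stands in for ZFS's actual ARC replacement policy, which is more elaborate:

```python
from collections import OrderedDict

ARC_MAX = 1536       # MiB, the example's 1.5 GB ARC cap
INACTIVE_MAX = 5120  # MiB, the example's 5 GB Inactive ceiling

class ToyLRU:
    """Toy LRU cache; one entry represents one 1 MiB chunk of a file."""
    def __init__(self, capacity_mib):
        self.capacity = capacity_mib
        self.entries = OrderedDict()

    def touch(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh on hit
        else:
            self.entries[key] = True
            while len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used

    def mib_of(self, name):
        return sum(1 for (f, _) in self.entries if f == name)

arc = ToyLRU(ARC_MAX)            # "L2"
inactive = ToyLRU(INACTIVE_MAX)  # "L1", the page cache

def serve(name, size_mib):
    # The read path fills ARC; sendfile leaves the pages in Inactive,
    # so with an inclusive relationship both caches are populated.
    for chunk in range(size_mib):
        arc.touch((name, chunk))
        inactive.touch((name, chunk))

serve("file1", 1024)
serve("file2", 1024)
print(arc.mib_of("file1"))       # 512: half of file1 evicted from ARC
print(inactive.mib_of("file1"))  # 1024: all of file1 still in Inactive
```

This matches the "Client 2 result" above: the third request for file1 finds every
page in Inactive, so no disk read should be needed.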
> From my tests it seems that to serve a file to a client using sendfile, without
> having to read it from disk, you need said file in ARC. See the results with
> sendfile on and primarycache set to metadata.
Yeah, I am puzzled by that.
But, OTOH, I don't know that feature of ZFS well enough to say what additional
pessimizations may have happened.
> So to use your caching analogy, it seems that sendfile can't use the L1 cache unless
> the data is also present in L2, for whatever reason.
It's possible that this is how it works for you because of some bug.
But I don't see anything in the code that would lead to that behavior _after_ the
change that was committed in r212650. The change in r212782 might be useful too.
In VM theory the data should be just taken from "L1" aka "Inactive" aka page cache.
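To illustrate the sendfile path being discussed, here is a minimal sketch using
Python's os.sendfile as a stand-in for the sendfile(2) syscall: the kernel transmits
the file's pages directly from the page cache to the socket, with no copy through
userland buffers. The socket pair is a hypothetical stand-in for an nginx client
connection:

```python
import os
import socket
import tempfile

SIZE = 32768  # small enough to fit the default socket send buffer

# Write a file; its pages now sit in the page cache ("L1")
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * SIZE)
    path = f.name

# A connected socket pair stands in for a client connection
srv, cli = socket.socketpair()
fd = os.open(path, os.O_RDONLY)

# sendfile(2): kernel pushes page-cache pages straight to the socket
sent = 0
while sent < SIZE:
    n = os.sendfile(srv.fileno(), fd, sent, SIZE - sent)
    if n == 0:
        break
    sent += n

# Drain the other end to confirm the bytes arrived intact
received = b""
while len(received) < sent:
    received += cli.recv(65536)

print(sent, len(received))  # 32768 32768

os.close(fd)
srv.close()
cli.close()
os.unlink(path)
```

Whether those pages also have to be resident in the ARC for this path to avoid a
disk read is exactly the question raised in this thread.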