zfs very poor performance compared to ufs due to lack of cache?

Andriy Gapon avg at icyb.net.ua
Tue Sep 7 07:26:28 UTC 2010


on 06/09/2010 21:34 Wiktor Niesiobedzki said the following:
> As far as I have checked recently, nginx uses sendfile by default.
> There is already a reported bug against ZFS+sendfile
> (http://www.freebsd.org/cgi/query-pr.cgi?pr=141305&cat=) which results
> in bad performance.
> 
> The quickest workaround is to set:
> sendfile        off;
> 
> in the http {} section of nginx.conf.
> 
> What I have personally observed is that memory used by sendfile, once
> freed, lands in the Inact group, and the ARC is not able to force this
> memory to be freed.

Well, there is a patch for this, but that's beside the point of the sendfile issue.
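
For reference, the workaround quoted above would sit in nginx.conf roughly like
this (a sketch; the surrounding directives are illustrative, only the sendfile
line matters here):

    http {
        sendfile  off;   # avoid the ZFS+sendfile path from PR 141305
        ...
    }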

> In my case, where I have 1G of ARC, after sending a 2G file my ARC is
> barely at its minimum level, and my ARC hit ratio drops to ~50%.
> 
> If I remove the file that was sent through sendfile, the memory is moved
> from Inact to free, from where the ARC happily grabs what it wants, and
> the ARC hit ratio comes back to normal (~99%).
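
(For anyone wanting to watch the effect described above on a FreeBSD box, the
usual counters can be polled with stock tools; the sysctl names below are the
standard arcstats and VM counters, given purely as an illustration:)

    # ARC size and hit/miss counters:
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
    # inactive page count, which grows while the file is being sent:
    sysctl vm.stats.vm.v_inactive_count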

Interesting.  I briefly looked at the code in mappedread() in zfs_vnops.c, and I
have a VM question: shouldn't we mark the corresponding page bits as valid after
reading data into the page?
I am speaking specifically of the block that starts with the following line:
} else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) {
I am taking mdstart_swap as an example; it does m->valid = VM_PAGE_BITS_ALL.
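
To illustrate, the change I have in mind would be along these lines (an
untested sketch; everything except the m->valid assignment stands in for the
existing code in that block):

    } else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) {
            /* ... existing code reads the file block into the page ... */

            /* proposed: mark the whole page valid once the data is in
             * place, the way mdstart_swap does: */
            m->valid = VM_PAGE_BITS_ALL;
    }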

-- 
Andriy Gapon


More information about the freebsd-fs mailing list