vop_getpages for zfs

Andriy Gapon avg at freebsd.org
Thu Nov 4 16:56:18 UTC 2010


on 10/10/2010 16:47 Kostik Belousov said the following:
> I think that readahead requests should be satisfied, if possible. Esp.
> in the case the readahead does not cause additional i/o.

So, per Kostik's suggestion, here is a patch for your review and testing:
http://people.freebsd.org/~avg/zfs_getpages.2.diff

The patch tries to fill as many of the passed-in pages as possible, as long as that
can be done from the block (or even two blocks) that has to be read anyway to fill
the requested page.
This means that the optimization kicks in only if a vnode has a block size greater
than the page size.
I mentioned "two blocks" above because I tried to optimize the case where, for
example, the block size is 6KB, the page size is 4KB, and the requested page covers
parts of two adjacent blocks (e.g. the range from 4KB to 8KB).  Not sure whether
this was worth the trouble.

In other words, the code tries to avoid reading more blocks than are strictly
required, because additional blocks may be far apart on the disk and thus reading
them could increase latency.
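
To illustrate the arithmetic with the 6KB/4KB example above, here is a small
standalone sketch (names and exact form are mine, not taken from the patch):

#include <stdio.h>

#define PAGESZ	4096

int
main(void)
{
	long blksz = 6 * 1024;		/* example block ('record') size */
	long reqoff = 4 * 1024;		/* offset of the requested page */

	/* block-aligned range that has to be read anyway: [0KB, 12KB) */
	long blkstart = reqoff / blksz * blksz;
	long blkend = (reqoff + PAGESZ + blksz - 1) / blksz * blksz;

	/* pages that lie fully inside that range: 0KB, 4KB and 8KB */
	long pgstart = (blkstart + PAGESZ - 1) / PAGESZ * PAGESZ;
	long pgend = blkend / PAGESZ * PAGESZ;

	printf("pages [%ld, %ld) can be filled without extra i/o\n",
	    pgstart, pgend);
	return (0);
}

So reading the two blocks needed for the page at 4KB also lets us fill the pages
at 0KB and 8KB for free.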

You will also notice that I added a vop_bmap method to the ZFS vops.
This is needed to satisfy the logic in vnode_pager_haspage(); otherwise, for example,
vm_fault() would never request any additional pages besides the required one.
The zfs_bmap() implementation is really simple: it returns zeros for a_runp and
a_runb (and some reasonable-ish but unused values in the other fields), and
vnode_pager_haspage() figures out the number of eligible pages from the block size
(f_iosize, actually[*]) and the page size.
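
For reference, a minimal implementation along these lines would look roughly like
the following (my sketch based on the description above; see the patch for the
exact code):

static int
zfs_bmap(struct vop_bmap_args *ap)
{
	/*
	 * Reasonable-ish but effectively unused values: point at the
	 * vnode's bufobj and map the logical block to itself.
	 */
	if (ap->a_bop != NULL)
		*ap->a_bop = &ap->a_vp->v_bufobj;
	if (ap->a_bnp != NULL)
		*ap->a_bnp = ap->a_bn;

	/* Report no read-ahead and no read-behind runs. */
	if (ap->a_runp != NULL)
		*ap->a_runp = 0;
	if (ap->a_runb != NULL)
		*ap->a_runb = 0;

	return (0);
}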

Now that ZFS has a vop_getpages method, it is safe to provide this dummy-ish
implementation of vop_bmap, because the operation will not be called from anywhere
other than vnode_pager_haspage().  The code in vfs_bio.c and vfs_cluster.c is
never called for ZFS, because ZFS does not use the buffer cache layer.

[*] There are a few interesting questions with respect to the use of f_iosize in
vnode_pager_haspage() with ZFS.
For one, ZFS has a variable block size, and f_iosize for ZFS holds the currently
configured maximum block ('record' in ZFS terms) size.  This should not result in
sub-optimal behavior for files with smaller block sizes, because such files are
themselves small (their size does not exceed their block size).
Unless, of course, the recordsize property is modified after some files have been
created on a ZFS filesystem.  In that case there could be large files with a smaller
(previously maximum) block size, or files with a block size larger than the
current recordsize == f_iosize.  Such situations may lead to sub-optimal
performance.
Another issue related to the above is that f_iosize for ZFS can change on the fly
when an administrator runs zfs set recordsize=NNN, and that may happen
concurrently with vnode_pager_haspage() using f_iosize for its calculations.
Not sure how to handle this correctly.
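
For clarity, the calculation in question is roughly the following (my simplified
paraphrase of what vnode_pager_haspage() does with zero-length runs, not a
verbatim quote of vm/vnode_pager.c; the function name is made up):

/*
 * Given a page index, the filesystem block size (f_iosize) and the
 * page size, compute how many neighbouring pages share a block with
 * the requested page and are therefore eligible for read-ahead.
 */
static void
pages_in_same_block(long pindex, long bsize, long pagesz,
    long *before, long *after)
{
	long pagesperblock = bsize / pagesz;
	long poff = pindex % pagesperblock;	/* position inside the block */

	*before = poff;				/* eligible pages before */
	*after = pagesperblock - (poff + 1);	/* eligible pages after */
}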

I will appreciate your comments, test reports, reviews and suggestions.
Thanks a lot!
-- 
Andriy Gapon

