portmaster, zfs metadata caching [Was: UPDATING 20110730]

Andriy Gapon avg at FreeBSD.org
Tue Aug 2 15:06:12 UTC 2011


on 02/08/2011 16:14 Andriy Gapon said the following:
> And now to my side of the problem.
> While "profiling" pkg_info with ktrace I see that getdirentries(2) calls
> sometimes take quite a while.  And since I have > 1000 ports installed, all
> those calls add up.
> DTrace shows that the calls are quite fast (~0.3 ms) when there is no actual
> disk access, but when disk access does occur it introduces a delay on the
> order of 1 to 100 milliseconds.
> I am really in doubt about what is happening here.  It seems that the
> directory data either isn't kept in the ZFS ARC for long enough or is
> squeezed out of it by some other data (without additional pressure it should
> easily fit into the ARC).  It also seems that the disk accesses somehow have
> quite a large latency, even though svc_t according to iostat is smaller
> (5 - 10 ms).  DTrace shows that the thread spends the time in cv_wait, so
> it's possible that the scheduler is also involved here, as its decisions may
> add a delay before the thread becomes runnable again.
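
For reference, the per-call latency numbers quoted above can be obtained with
a small DTrace script along these lines (a sketch, not the exact script I
used; the aggregation key is arbitrary):

```d
/* Aggregate getdirentries(2) latency as a power-of-two histogram. */
syscall::getdirentries:entry
{
        self->ts = timestamp;
}

syscall::getdirentries:return
/self->ts/
{
        @lat["getdirentries latency (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}
```

Run it with something like dtrace -s script.d -c "pkg_info -O print/printme",
or system-wide and press Ctrl-C to print the histogram.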

Reporting further, just in case anyone follows this.
(You may want to scroll down to my conclusions at the end of the message).

I tracked my ZFS problem down to my experiments with ZFS tuning.
I had limited my ARC size to a value that I considered large enough to cache
my working set of data and _metadata_.  Little did I know that by default ZFS
sets aside only a quarter of the ARC size for metadata
(vfs.zfs.arc_meta_limit), so that was already significantly smaller than I
expected.  On top of that, it seems that a large piece of that metadata
portion is permanently occupied by some non-evictable data (not sure what it
actually is; I haven't tracked it down yet).  In the end only a small portion
of my ARC was available for holding metadata, which includes the directory
contents.
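
To illustrate the arithmetic (the ~1.2 GB ARC cap here is inferred from the
arc_meta_limit value shown below; treat it as an assumed number):

```shell
# Sketch of the default metadata sizing, assuming an ARC capped at ~1.2 GB.
arc_max=1258291200            # bytes; assumed vfs.zfs.arc_max cap
meta_limit=$((arc_max / 4))   # ZFS default: 1/4 of the ARC for metadata
echo "$meta_limit"            # prints 314572800, the limit seen below
```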

So this is what I had with the old settings:
vfs.zfs.arc_meta_limit: 314572800
vfs.zfs.arc_meta_used: 401064272
and
$ for i in $(jot 5) ; do /usr/bin/time -p pkg_info -O print/printme ; done
The following installed package(s) has print/printme origin:
real 12.55
user 0.02
sys 2.51
The following installed package(s) has print/printme origin:
real 12.65
user 0.03
sys 1.99
The following installed package(s) has print/printme origin:
real 10.57
user 0.02
sys 1.57
The following installed package(s) has print/printme origin:
real 8.85
user 0.03
sys 0.17
The following installed package(s) has print/printme origin:
real 9.28
user 0.02
sys 0.20

I think that you should get the picture.

Now I have bumped the limit; these are the values right after doing so:
vfs.zfs.arc_meta_limit: 717225984
vfs.zfs.arc_meta_used: 414439800
and
$ for i in $(jot 5) ; do /usr/bin/time -p pkg_info -O print/printme ; done
The following installed package(s) has print/printme origin:
real 9.08
user 0.01
sys 0.18
The following installed package(s) has print/printme origin:
real 7.48
user 0.04
sys 0.14
The following installed package(s) has print/printme origin:
real 0.08
user 0.00
sys 0.07
The following installed package(s) has print/printme origin:
real 0.95
user 0.03
sys 0.04
The following installed package(s) has print/printme origin:
real 0.08
user 0.00
sys 0.07

Two runs to "warm up" the ARC, and then everything works just perfectly.
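
For anyone who wants to try the same change: the value here is just the one
from my sysctl output above, so pick something that fits your own ARC size.
On kernels where vfs.zfs.arc_meta_limit is a writable sysctl it can be bumped
at runtime with sysctl(8); otherwise set it as a loader tunable:

```
# /boot/loader.conf -- persistent setting (value from my system above)
vfs.zfs.arc_meta_limit="717225984"
```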

I think that this is an important discovery, for two reasons:
1. I learned a new thing about the ZFS ARC.
2. This problem demonstrates that portmaster currently depends on the
filesystem cache being able to hold a significant amount of ports/packages
(meta)data.

-- 
Andriy Gapon


More information about the freebsd-ports mailing list