Reoccurring ZFS performance problems [RESOLVED]
Karl Denninger
karl at denninger.net
Tue Mar 18 13:50:36 UTC 2014
On 3/18/2014 8:26 AM, Adrian Gschwend wrote:
> On 18.03.14 11:26, mikej wrote:
>
>> I am a little surprised this thread has been so quiet. I have been
>> running with this patch and my desktop is more pleasant when memory
>> demands are great - no more swapping - and wired no longer grows
>> uncontrollably.
>>
>> Is more review coming? The silence is deafening.
> Same here; it works very nicely so far, and memory growth looks much
> more controlled now. Before, my server had all 16 GB of RAM wired in
> no time; now it grows only slowly.
>
> It's too early to say whether my performance degradation is gone, but
> it certainly looks very good so far.
>
> Thanks again to Karl for the patch! Hope others test it and integrate it
> soon.
>
Watch zfs-stats -A; you will see what the system has adapted to as
opposed to the hard limits in arc_max and arc_min.
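(If you would rather poll those numbers directly instead of going
through zfs-stats, the minimal C sketch below reads them via sysctl.
It assumes the stock FreeBSD names vfs.zfs.arc_min, vfs.zfs.arc_max,
kstat.zfs.misc.arcstats.c and kstat.zfs.misc.arcstats.size.)

    /* arcwatch.c -- print the ARC's adaptive target alongside the
     * hard limits.  Build with: cc -o arcwatch arcwatch.c */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t
    get64(const char *name)
    {
        uint64_t val = 0;
        size_t len = sizeof(val);

        if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
            perror(name);
        return (val);
    }

    int
    main(void)
    {
        printf("arc_min (hard floor):   %ju\n",
            (uintmax_t)get64("vfs.zfs.arc_min"));
        printf("arc_max (hard ceiling): %ju\n",
            (uintmax_t)get64("vfs.zfs.arc_max"));
        printf("arc target (adapted):   %ju\n",
            (uintmax_t)get64("kstat.zfs.misc.arcstats.c"));
        printf("arc size (current):     %ju\n",
            (uintmax_t)get64("kstat.zfs.misc.arcstats.size"));
        return (0);
    }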
Changes upward in reservation percentage are reflected almost instantly
in a reduced allocation, whereas changes downward grow back only slowly.
(There is a timed lockdown in the cache code that prevents it from
grabbing more space immediately after it has been throttled back, and
the ARC in general only grows when I/O that is not already in the cache
occurs, so that new data becomes available to cache for later re-use.)
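In other words the adjustment is deliberately asymmetric: shrink at
once, grow back only after a hold-down expires and only as uncached I/O
arrives.  The sketch below is purely illustrative of that logic; the
names and the 60-second constant are hypothetical, not the actual patch
code.

    /* Illustrative only -- hypothetical names, not the patch itself. */
    #include <stdint.h>
    #include <time.h>

    #define GROW_HOLDDOWN_SECS 60   /* hypothetical timed lockdown */

    static time_t last_throttle;    /* last time we were forced to shrink */

    static uint64_t
    adjust_target(uint64_t target, uint64_t wanted, time_t now)
    {
        if (wanted < target) {
            /* Memory pressure: give space back immediately. */
            last_throttle = now;
            return (wanted);
        }
        /* Growth: wait out the hold-down, and even then the ARC only
         * actually fills as I/O misses bring in new data to cache. */
        if (now - last_throttle < GROW_HOLDDOWN_SECS)
            return (target);
        return (wanted);
    }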
The nice thing about the way it behaves now is that it releases memory
immediately when other demands on the system require it, but if your
active and inactive page counts shrink as process images release RAM
back through the cache and then to the free list, it is also allowed to
expand as I/O demand diversity warrants.
That was clearly the original design intent, but it was being badly
frustrated by the former cache-memory allocation behavior.
There is an argument for not including cache pages in the "used" bucket
(that is, for counting them as "free" instead); the way I coded it is a
bit more conservative than going the other way.  Given the design of
the VM subsystem either choice is arguably acceptable, since a cache
page can be freed when RAM is demanded.  I decided not to do so for two
reasons.  First, a page in the cache bucket could be reactivated, and
if it is you will then have to release that ARC cache memory again;
economy of action suggests not doing something you may quickly have to
undo.  Second, my experience over roughly a decade of using FreeBSD
supports the argument that the VM implementation is arguably FreeBSD's
greatest strength, especially under stress, and by letting it do its
job rather than trying to "push" it toward a particular outcome we
maintain the philosophy of trusting the component that is believed to
know what it is doing.
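For anyone who wants to see what the two accounting choices look like
in practice, this small sketch reads the VM counters via sysctl and
prints "free" memory both ways.  The sysctl names are the stock FreeBSD
ones of that era; the comparison itself is only illustrative.

    /* Print "free" memory with and without counting cache-queue pages.
     * Sketch only; which figure to gate ARC growth on is the point
     * under discussion, not this program. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    static u_long
    get_ul(const char *name)
    {
        u_int v = 0;
        size_t len = sizeof(v);

        if (sysctlbyname(name, &v, &len, NULL, 0) != 0)
            perror(name);
        return ((u_long)v);
    }

    int
    main(void)
    {
        u_long pagesize = get_ul("hw.pagesize");
        u_long free_pg  = get_ul("vm.stats.vm.v_free_count");
        u_long cache_pg = get_ul("vm.stats.vm.v_cache_count");

        /* Conservative: cache pages may be reactivated, so treat them
         * as used when deciding whether the ARC may grow. */
        printf("free (cache counted as used): %lu bytes\n",
            free_pg * pagesize);

        /* Liberal: cache pages can be reclaimed on demand, so count
         * them as free -- at the risk of having to undo an ARC grow
         * if a page is reactivated. */
        printf("free (cache counted as free): %lu bytes\n",
            (free_pg + cache_pg) * pagesize);
        return (0);
    }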
--
-- Karl
karl at denninger.net