zfs very poor performance compared to ufs due to lack of cache?

jhell jhell at DataIX.net
Mon Sep 6 00:21:53 UTC 2010


On 09/05/2010 19:57, Steven Hartland wrote:
> 
>> On 09/05/2010 16:13, Steven Hartland wrote:
>>>> 3656:  uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count
>>>> 3657:      + cnt.v_cache_count);
>> 
>>> earlier at 3614 I have what I think you're after, which is:
>>> uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count);
>> 
>> Alright, change this to the above, recompile, and re-run your tests.
>> Effectively, before this change (which apparently still needs to be
>> MFC'd or MFS'd), ZFS was not allowed to look at or use
>> cnt.v_cache_count. To sum it up: "available mem = cache + free".
>> 
>> This could possibly cause what you're seeing, but there might be
>> other changes still TBD. I'll look into what else has changed from
>> RELEASE -> STABLE.
>> 
>> Also do you check out your sources with svn(1) or csup(1) ?
> 
> Based on Jeremy's comments I'm updating the box to stable. It's
> building now, but it will be morning before I can reboot to activate
> the changes, as I need to deactivate the stream instance and wait for
> all active connections to finish.
> 
> That said, the problem doesn't seem to be cache + free but rather
> cache + free + inactive, with inactive being the large chunk, so I'm
> not sure this change would make any difference?
> 

If I remember correctly, I thought that was already factored into the
mix, but I could be wrong. I remember a discussion about it before:
free was inactive + free, and for ARC the cache was never being
accounted for, so not enough paging was happening, which would result
in a situation like the one you have now. MAYBE!
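
For clarity, the two variants being compared (as quoted above from
Steven's arc.c, line numbers his) boil down to roughly the following.
This is just a paraphrase for illustration, not a drop-in patch:

    uint64_t available_memory;

    /* stable/8 after the change (lines 3656-3657): pages on the
     * cache queue count as available to the ARC as well. */
    available_memory = ptoa((uintmax_t)cnt.v_free_count + cnt.v_cache_count);

    /* the older variant (line 3614): only the free list counts. */
    /* available_memory = ptoa((uintmax_t)cnt.v_free_count); */

Note that neither variant counts cnt.v_inactive_count, which is the
pool Steven reports growing large, so that remains the open question.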

> How does UFS deal with this; does it take inactive into account? It
> seems a bit silly for inactive pages to prevent reuse for extended
> periods when the memory could be better used as cache.
> 

I agree; see my comment above.

> As an experiment I compiled a little app which malloced a large block
> of memory, 1.3G in this case, and then freed it. This does indeed pull
> the memory out of inactive and back into the free pool, where ZFS is
> then happy to re-expand the ARC and once again cache large files. It
> seems a bit extreme to have to do this, though.

Maybe we should add that code to zfs(1) and call it with
gimme-my-mem-back: 1 for all of it, 2 for half of it, and 3 for panic ;)
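
For anyone wanting to repeat the experiment, a minimal sketch of such a
test program might look like the one below. The ~1.3G size matches
Steven's mail; the memset() to touch every page is an assumption on my
part (without it the pages may never be faulted in, and nothing would
get reclaimed):

    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
            size_t sz = (size_t)1300 * 1024 * 1024;  /* ~1.3G */
            char *p = malloc(sz);

            if (p == NULL)
                    return (1);
            /* Touch every page so the VM actually has to reclaim
             * inactive pages to back the allocation. */
            memset(p, 0, sz);
            /* Freeing (and exiting) hands the pages back to the free
             * list, where the ARC can grow into them again. */
            free(p);
            return (0);
    }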

> 
> Will see what happens with stable tomorrow though :)
> 

Good luck Steve, I look forward to hearing the result. If you are happy
with the result you get from stable/8, I would recommend patching to v15,
which is much more stable than the v14 code.

The specific patches you would want are: (in order)
http://people.freebsd.org/~mm/patches/zfs/v15/stable-8-v15.patch
http://people.freebsd.org/~mm/patches/zfs/zfs_metaslab_v2.patch
http://people.freebsd.org/~mm/patches/zfs/zfs_abe_stat_rrwlock.patch
and then the needfree.patch I already posted.

The maxusers.patch is optional.


-- 

 jhell,v

