[CFT] Improved ZFS metaslab code (faster write speed)

jhell jhell at DataIX.net
Sat Aug 28 01:24:48 UTC 2010


On 08/27/2010 19:50, Artem Belevich wrote:
> Another "me too" here.
> 
> 8-stable/amd64 + v15 (zpool still uses v14) + metaslab +
> abe_stat_rrwlock + A.Gapon's vm_paging_needed() + uma defrag patches.
> 
> The box survived a few days of pounding on it without any signs of trouble.
> 

	I must have missed the UMA defrag patches, but according to the code
those patches should have no effect on ZFS on your system, because
vfs.zfs.zio.use_uma defaults to off unless you have manually turned it
on or the patch reverts that facility back to its original form.
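
	(If you do want to test that path yourself, the knob is a loader
tunable; a loader.conf line like the one below should turn it on,
assuming the stock tunable name on 8-STABLE:)

vfs.zfs.zio.use_uma=1  # defaults to 0; routes ZIO buffers through UMA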


	Running on a full ZFSv15 system with the metaslab & rrwlock patches
and a slightly modified patch from avg@ for vm_paging_needed(), I was
able to achieve the read and write op results I was looking for.

The modified portion of avg@'s patch is:

#ifdef _KERNEL
                /*
                 * Reclaim is called for: clear the low-memory flag and
                 * wake any threads sleeping on it.  (avg@'s original
                 * patch tested if (needfree) here instead.)
                 */
                if (arc_reclaim_needed()) {
                        needfree = 0;
                        wakeup(&needfree);
                }
#endif

	I still moved that down below the #ifdef _KERNEL for the obvious
reasons.  But when I was using the original patch with if (needfree), I
noticed performance degradation after ~12 hours of use, with and
without UMA turned on.  So far, in ~48 hours of testing with the latter
half of that using the above change, I have not seen any further
degradation past that ~12 hour mark.
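
For clarity, the change relative to avg@'s original patch is just the
condition being tested:

- if (needfree) {
+ if (arc_reclaim_needed()) {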

After another 12 hours of testing with UMA turned off, I'll be turning
UMA back on and testing for another 24 hours.  Before that third patch
from avg@ came along, I had turned UMA on and saw no performance loss
for ~7 hours.  Obviously I had to reboot after applying avg@'s patch,
and I decided to test strictly without UMA at that point.

There seems to be a problem in the logic behind the use of needfree
and/or arc_reclaim_needed() that should be worked out, but at least for
this i386 8.1-STABLE system, where my code is at right now "Is
STABLE!".
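
	For anyone following along, the interplay in question is roughly
this (a from-memory sketch of the 8-STABLE code paths, not verbatim
arc.c):

/* Pagedaemon side (arc_lowmem(), the vm_lowmem eventhandler): raise
 * the flag, kick the reclaim thread and sleep on &needfree until it
 * is cleared. */
needfree = 1;
cv_signal(&arc_reclaim_thr_cv);
while (needfree)
        tsleep(&needfree, 0, "zfs:lowmem", hz / 5);

/* Reclaim thread side, with the modification above: clear the flag
 * and wake the sleeper whenever arc_reclaim_needed() fires, rather
 * than only when needfree itself was set. */
if (arc_reclaim_needed()) {
        needfree = 0;
        wakeup(&needfree);
}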


=======================================================================
For reference I have also adjusted these: (arc.c)

- /* Start out with 1/8 of all memory */
- arc_c = kmem_size() / 8;
+ /* Start out with 1/4 of all memory */
+ arc_c = kmem_size() / 4;

And this: (arc.c)

- arc_c = MIN(arc_c, vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 8);
+ arc_c = MIN(arc_c, vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 4);

	There is currently no adaptive way to scale these to the amount of
memory in the system; the code sets a blind default of 1/8.  With a 2GB
kmem map that is ~256MB, but since arc_c is derived from kmem_size() as
shown above, if you set KVA_PAGES to 512 as suggested (implying a kmem
map of about 512MB) you end up with an arc_c of only 64MB.  So unless
you adjust your kmem_size accordingly to make up for the 1/8th problem,
your ZFS install is going to suffer.  This is more of a problem for
systems below the 2GB memory range.  For systems with quite a lot of
memory, 8GB for example, you are really only using 1GB, and short of
adjusting the source it will be fairly hard to make ZFS use more RAM
without inherently affecting something else in the system by bumping
vm.kmem_size*.
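
To put numbers on it, using the expressions above with a 2GB kmem map
and the 512MB map that the KVA_PAGES=512 case implies:

arc_c = kmem_size / 8:  2048MB / 8 = 256MB    512MB / 8 =  64MB
arc_c = kmem_size / 4:  2048MB / 4 = 512MB    512MB / 4 = 128MB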
=======================================================================

1GB RAM on ZFSv15 with the patches mentioned (loader.conf); adjust
accordingly to your own system's environment:
kern.maxdsiz="640M"
kern.maxusers="512" # Overcome the max calculated 384 for >1G of MEM.
                    # See: /sys/kern/subr_param.c for details. ???
vfs.zfs.arc_min="62M"
vfs.zfs.arc_max="496M"
vfs.zfs.prefetch_disable=0
vm.kmem_size="512M"
vm.kmem_size_max="768M"
vm.kmem_size_min="128M"


Regards,

-- 
 jhell,v

