Reoccurring ZFS performance problems [RESOLVED]

Johan Hendriks joh.hendriks at gmail.com
Tue Mar 18 13:07:34 UTC 2014


Karl Denninger wrote:
> On 3/18/2014 5:26 AM, mikej wrote:
>> On 2014-03-14 19:04, Matthias Gamsjager wrote:
>>> Much better thx :)
>>>
>>> Will this patch be reviewed by some kernel devs and merged?
>>
>> I am a little surprised this thread has been so quiet.  I have been
>> running with this patch and my desktop is more pleasant when memory
>> demands are great - no more swapping - and wired no longer grows
>> uncontrollably.
>>
>> Is more review coming?  The silence is deafening.
>>
> It makes an utterly enormous difference here.
>
> This is what one of my "nasty-busy" servers looks like this morning
> (it's got a very busy blog on it along with other things, and is
> pretty quiet right now -- but it won't be in a couple of hours):
>
>     1 users    Load  0.22  0.25  0.21                  Mar 18 05:55
>
> Mem:KB    REAL            VIRTUAL                       VN PAGER   SWAP PAGER
>         Tot   Share      Tot    Share    Free           in   out     in   out
> Act 4238440   31700  7953812    53652 2993908  count
> All  16025k   39644  8680436   249960          pages
> Proc:                                                          Interrupts
>   r   p   d   s   w   Csw  Trp  Sys  Int  Sof  Flt        ioflt  2083 total
>             204      7321 1498 6416  665  313  707    207 cow      12 uart0 4
>                                                       428 zfod     20 uhci0 16
>  0.4%Sys   0.1%Intr  0.6%User  0.0%Nice 99.0%Idle         ozfod        pcm0 17
> |    |    |    |    |    |    |    |    |    |            %ozfod       ehci0 uhci
>                                                           daefr        uhci1 21
>                                            dtbuf      417 prcfr    455 uhci3 ehci
> Namei     Name-cache   Dir-cache    485892 desvn     1197 totfr     16 twa0 30
>    Calls    hits   %    hits   %    136934 numvn          react    994 cpu0:timer
>     8063    8009  99                121473 frevn          pdwak     42 mps0 256
>                                                       871 pdpgs     15 em0:rx 0
> Disks  ada0   da0   da1   da2   da3   da4   da5           intrn     20 em0:tx 0
> KB/t   0.00 20.46 19.92  0.00  0.00 22.06 44.21  17177460 wire         em0:link
> tps       0     7     7     0     0     7    11   2131860 act       45 em1:rx 0
> MB/s   0.00  0.15  0.15  0.00  0.00  0.15  0.47   2158808 inact     38 em1:tx 0
> %busy     0     7     7     0     0     0     0      7512 cache        em1:link
>                                                   2986396 free         ahci0:ch0
>                                                           buf       16 cpu1:timer
>                                                                     23 cpu11:time
>                                                                     17 cpu5:timer
>                                                                     13 cpu9:timer
>                                                                     44 cpu4:timer
>                                                                     35 cpu15:time
>                                                                     26 cpu6:timer
>                                                                     16 cpu14:time
>                                                                     28 cpu7:timer
>                                                                     23 cpu13:time
>                                                                     23 cpu3:timer
>                                                                     43 cpu10:time
>                                                                     50 cpu2:timer
>                                                                     29 cpu12:time
>                                                                     40 cpu8:timer
>
>
> Here's the ARC cache....
>
> [karl@NewFS ~]$ zfs-stats -A
>
> ------------------------------------------------------------------------
> ZFS Subsystem Report                            Tue Mar 18 05:56:42 2014
> ------------------------------------------------------------------------
>
> ARC Summary: (HEALTHY)
>         Memory Throttle Count:                  0
>
> ARC Misc:
>         Deleted:                                1.55m
>         Recycle Misses:                         66.33k
>         Mutex Misses:                           1.55k
>         Evict Skips:                            4.14m
>
> ARC Size:                               60.01%  13.40   GiB
>         Target Size: (Adaptive)         60.01%  13.40   GiB
>         Min Size (Hard Limit):          12.50%  2.79    GiB
>         Max Size (High Water):          8:1     22.33   GiB
>
> ARC Size Breakdown:
>         Recently Used Cache Size:       79.13%  10.60   GiB
>         Frequently Used Cache Size:     20.87%  2.80    GiB
>
> ARC Hash Breakdown:
>         Elements Max:                           1.34m
>         Elements Current:               62.76%  840.43k
>         Collisions:                             7.02m
>         Chain Max:                              13
>         Chains:                                 247.65k
>
> ------------------------------------------------------------------------
>
> Note the scale-down from the maximum -- this is with:
>
> [karl@NewFS ~]$ sysctl -a | grep percent
> vfs.zfs.arc_freepage_percent_target: 10
>
> My test machine has a lot less memory in it and there the default 
> (25%) appears to be a good value.
>
> Before this change was applied, this system would have tried to grab
> the entire 22GB to the exclusion of anything else.  What I used to do
> was limit it to 16GB via arc_max, which was fine in the mornings and
> overnight, but during the day it didn't cut it -- and there was no way
> to change it without a reboot either.  This particular machine has
> 24GB of RAM in it and provides services both externally and internally
> (separate interfaces).
>
> How efficient is the cache?
>
> [karl@NewFS ~]$ zfs-stats -E
>
> ------------------------------------------------------------------------
> ZFS Subsystem Report                            Tue Mar 18 05:59:01 2014
> ------------------------------------------------------------------------
>
> ARC Efficiency:                                 81.13m
>         Cache Hit Ratio:                97.84%  79.38m
>         Cache Miss Ratio:               2.16%   1.75m
>         Actual Hit Ratio:               69.81%  56.64m
>
>         Data Demand Efficiency:         99.09%  50.37m
>         Data Prefetch Efficiency:       28.77%  1.46m
>
>         CACHE HITS BY CACHE LIST:
>           Anonymously Used:             28.48%  22.61m
>           Most Recently Used:           6.81%   5.40m
>           Most Frequently Used:         64.54%  51.23m
>           Most Recently Used Ghost:     0.03%   24.86k
>           Most Frequently Used Ghost:   0.13%   104.39k
>
>         CACHE HITS BY DATA TYPE:
>           Demand Data:                  62.88%  49.91m
>           Prefetch Data:                0.53%   419.73k
>           Demand Metadata:              8.28%   6.57m
>           Prefetch Metadata:            28.31%  22.47m
>
>         CACHE MISSES BY DATA TYPE:
>           Demand Data:                  26.03%  456.20k
>           Prefetch Data:                59.29%  1.04m
>           Demand Metadata:              9.84%   172.53k
>           Prefetch Metadata:            4.84%   84.81k
>
> ------------------------------------------------------------------------
>
>
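For reference, the two tuning approaches Karl describes boil down to roughly the following. The 16 GiB figure matches his old cap but is otherwise illustrative, and vfs.zfs.arc_freepage_percent_target only exists once the patch is applied:

# Old approach: hard-cap the ARC in /boot/loader.conf;
# a change here only takes effect after a reboot.
vfs.zfs.arc_max="17179869184"    # 16 GiB

# With the patch: the free-memory target is an ordinary sysctl
# that can be raised or lowered on a running system.
sysctl vfs.zfs.arc_freepage_percent_target=10

The point of the percentage target is that the ARC gives memory back under pressure on its own instead of being pinned against a fixed ceiling.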
How do I apply the patch?

regards
Johan
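
For anyone else wondering the same thing, a minimal sketch, assuming the change is posted as a unified diff against a /usr/src checkout (the file name arc-freepage.patch and the -p level are placeholders; check the header of the diff that was actually posted, and substitute your own kernel config name for GENERIC):

cd /usr/src
patch -C -p0 < /path/to/arc-freepage.patch   # -C: dry run, verify it applies cleanly
patch -p0 < /path/to/arc-freepage.patch
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now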

