ARC size limit

George Kola georgekola at gmail.com
Tue Nov 4 16:53:40 UTC 2014


> On Nov 4, 2014, at 1:28 AM, Ronald Klop <ronald-lists at klop.ws> wrote:
> 
> On Tue, 04 Nov 2014 07:29:48 +0100, George Kola <georgekola at gmail.com> wrote:
> 
>> Hi All,
>>       This is my first post to freebsd-stable, fresh off MeetBSD California 2014. We are switching our entire production to FreeBSD. Our storage servers have 256 GB of RAM, 4 TB of SSD, and 40 TB of spinning disks. We are running ZFS root with the SSD configured as L2ARC, on FreeBSD 10.1-RC3.
>>       I am finding that on all our machines the ARC is somehow limited to < 64 GB of memory and we have a huge amount of inactive memory (180 GB). The surprising thing is that the ARC seems to hit almost the same limit (< 64 GB) on all of our storage boxes, and the ARC is not growing even though the L2ARC hit rate shows there would be an advantage in growing it.
>>       Any help/pointers would be appreciated.
>>       What I am trying to do is tune ZFS for our workload. We are hoping to get a high hit rate.
>>       Thanks to Justin Gibbs and Allan Jude for the initial pointers and help. They suggested posting to the mailing list to get further help.
>> 
>>       I have pasted the top output and zfs-stats output below, and yes, UMA is enabled.
>> 
>> Thanks,
>> George
> 
> What is your actual problem? Do you just want to see higher ARC usage numbers, or is your system slower than expected?
> And what is your usage of these 40 TB? Are all 40 TB accessed a lot, or is 64 GB of ARC equal to the working set of your data?
> 
> Ronald.
> 



          The problem is that the system is slower than expected. We bumped the RAM up from 96 GB on our current illumos-based production systems to 256 GB on the new FreeBSD-based systems, expecting a higher ARC hit rate and less disk access. Our rough initial calculation estimated that our working set would be between 96 and 128 GB.
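
For anyone following along, these are the sorts of things I am planning to check next. The 160G figure below is only a placeholder for illustration, not a value we have settled on:

# Current ARC size, target and hard limit, plus the configured cap
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c
sysctl kstat.zfs.misc.arcstats.c_max
sysctl vfs.zfs.arc_max

# Raw demand hit/miss counters, to sanity-check the zfs-mon percentages
sysctl kstat.zfs.misc.arcstats.demand_data_hits
sysctl kstat.zfs.misc.arcstats.demand_data_misses

# If the auto-sized limit really is the bottleneck, the cap can be set
# explicitly in /boot/loader.conf and the box rebooted, for example:
#   vfs.zfs.arc_max="160G"   (or the equivalent value in bytes)

I am holding off on forcing arc_max until I understand why the auto-sized limit is stopping so far short of the available RAM.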


Thanks,
-George

> 
> 
>> 
>> 
>> top
>> last pid: 27458;  load averages:  3.30,  5.42,  5.34                                                                                                up 6+09:59:30  05:38:49
>> 71 processes:  1 running, 70 sleeping
>> CPU:  4.2% user,  0.0% nice,  4.6% system,  0.2% interrupt, 90.9% idle
>> Mem: 11G Active, 181G Inact, 52G Wired, 1368M Cache, 4266M Free
>> ARC: 47G Total, 1555M MFU, 41G MRU, 35M Anon, 3984M Header, 709M Other
>> Swap: 64G Total, 2874M Used, 61G Free, 4% Inuse
>> 
>> 
>> 
>> sysctl vfs.zfs.zio.use_uma
>> vfs.zfs.zio.use_uma: 1
>> 
>> 
>> 
>> 
>> zfs-mon -a output
>> 
>> ZFS real-time cache activity monitor
>> Seconds elapsed:  62
>> 
>> Cache hits and misses:
>>                                  1s    10s    60s    tot
>>                     ARC hits:   124    126    103    101
>>                   ARC misses:    35     46     29     28
>>         ARC demand data hits:    55     90     61     61
>>       ARC demand data misses:    20     32     18     17
>>     ARC demand metadata hits:    69     36     42     40
>>   ARC demand metadata misses:     9     13     10      9
>>       ARC prefetch data hits:     0      0      0      0
>>     ARC prefetch data misses:     6      1      1      1
>>   ARC prefetch metadata hits:     0      0      0      0
>> ARC prefetch metadata misses:     0      0      0      0
>>                   L2ARC hits:    16     28     14     14
>>                 L2ARC misses:    19     18     15     14
>>                  ZFETCH hits:   592   2842   2098   2047
>>                ZFETCH misses:   308   1326    507    494
>> 
>> Cache efficiency percentage:
>>                          10s    60s    tot
>>                  ARC:  73.26  78.03  78.29
>>      ARC demand data:  73.77  77.22  78.21
>>  ARC demand metadata:  73.47  80.77  81.63
>>    ARC prefetch data:   0.00   0.00   0.00
>> ARC prefetch metadata:   0.00   0.00   0.00
>>                L2ARC:  60.87  48.28  50.00
>>               ZFETCH:  68.19  80.54  80.56
>> 
>> 