Another ZFS ARC memory question
Luke Marsden
luke-lists at hybrid-logic.co.uk
Fri Mar 2 10:16:20 UTC 2012
On Fri, 2012-03-02 at 10:25 +0100, Alexander Leidinger wrote:
> Quoting Slawa Olhovchenkov <slw at zxy.spb.ru> (from Thu, 1 Mar 2012
> 18:28:26 +0400):
>
> > On Tue, Feb 28, 2012 at 05:14:37AM +1100, Peter Jeremy wrote:
> >
> >> > * what is the community's advice for production machines running
> >> > ZFS on FreeBSD, is manually limiting the ARC cache (to ensure
> >> > that there's enough actually free memory to handle a spike in
> >> > application memory usage) the best solution to this
> >> > spike-in-memory-means-crash problem?
> >>
> >> Are you swapping onto a ZFS vdev?
We are not swapping onto a ZFS vdev (we've been down that road and know
it's a bad idea). Our issue is primarily that ARC eviction happens too
slowly, or not at all, when there is a spike in application memory
usage, which causes machines to hang.
We are presently working around it by limiting arc_max to 4G on our 24G
RAM production boxes (which seems like a massive waste of performance)
and by doing very careful, aggressive application-level management of
memory usage to ensure stability (limits.conf didn't work for us, so we
rolled our own). A better solution would be welcome, though, so that we
can utilise all the free memory we're presently keeping around as a
safety margin - ideally it would be used as ARC.
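For anyone following along, the cap itself is just a loader tunable -
roughly this in /boot/loader.conf (the 4G value is what we use in
production; adjust to taste, and it only takes effect at the next boot):

# /boot/loader.conf - cap the ZFS ARC at 4 GB (value in bytes)
vfs.zfs.arc_max="4294967296"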
Two more questions, again wrt 8.2-RELEASE:
1. Is it expected, with arc_max limited to 4G, that we should see wired
memory usage of around 7-8G? I understand that the kernel has to use
some memory, but is 3-4G of non-ARC wired data really reasonable?
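(A rough way to compare the two, in case anyone wants to reproduce our
numbers - v_wire_count is in pages, so multiply by hw.pagesize to get
bytes, and arcstats.size is the ARC's share of wired memory:

$ sysctl -n hw.pagesize vm.stats.vm.v_wire_count kstat.zfs.misc.arcstats.size

Non-ARC wired is then v_wire_count * pagesize minus arcstats.size.)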
2. We have some development machines with only 3G of RAM. Previously
they had no arc_max set and were left to tune themselves, and they were
quite unstable. Now we've set arc_max to 256M but things have got
worse: there's a big disk I/O performance hit (untarring a ports
tarball now takes 20 minutes), wired memory usage is up around 2.5GB,
and the machines are swapping a lot and crashing more frequently.
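(The untar test is nothing clever - roughly the following, with an
illustrative path:

$ mkdir -p /tmp/ports-test
$ /usr/bin/time tar -xzf /path/to/ports.tar.gz -C /tmp/ports-test

)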
Below is arc_summary.pl output from one of the troubled dev machines,
showing the ARC at over five times its target size, with uname output
after it. My second question, then, is: have there been fixes between
8.2-RELEASE and 8.3-BETA1 or 9.0-RELEASE which solve this ARC
over-usage problem?
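(The Memory Throttle Count in the summary below can also be polled
directly if you don't have arc_summary.pl to hand:

$ sysctl kstat.zfs.misc.arcstats.memory_throttle_count

)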
hybrid at node5:~$ ./arc_summary.pl
------------------------------------------------------------------------
ZFS Subsystem Report Fri Mar 2 09:55:00 2012
------------------------------------------------------------------------
System Memory:
8.92% 264.89 MiB Active, 6.43% 190.75 MiB Inact
80.91% 2.35 GiB Wired, 1.97% 58.46 MiB Cache
1.74% 51.70 MiB Free, 0.03% 864.00 KiB Gap
Real Installed: 3.00 GiB
Real Available: 99.56% 2.99 GiB
Real Managed: 97.04% 2.90 GiB
Logical Total: 3.00 GiB
Logical Used: 90.20% 2.71 GiB
Logical Free: 9.80% 300.91 MiB
Kernel Memory: 1.08 GiB
Data: 98.75% 1.06 GiB
Text: 1.25% 13.76 MiB
Kernel Memory Map: 2.83 GiB
Size: 26.80% 775.56 MiB
Free: 73.20% 2.07 GiB
Page: 1
------------------------------------------------------------------------
ARC Summary: (THROTTLED)
Storage pool Version: 15
Filesystem Version: 4
Memory Throttle Count: 53.77m
ARC Misc:
Deleted: 1.99m
Recycle Misses: 6.84m
Mutex Misses: 6.96k
Evict Skips: 6.96k
ARC Size: 552.16% 1.38 GiB
Target Size: (Adaptive) 100.00% 256.00 MiB
Min Size (Hard Limit): 36.23% 92.75 MiB
Max Size (High Water): 2:1 256.00 MiB
ARC Size Breakdown:
Recently Used Cache Size: 16.97% 239.90 MiB
Frequently Used Cache Size: 83.03% 1.15 GiB
ARC Hash Breakdown:
Elements Max: 83.19k
Elements Current: 84.72% 70.48k
Collisions: 2.53m
Chain Max: 9
Chains: 18.94k
Page: 2
------------------------------------------------------------------------
ARC Efficiency: 126.65m
Cache Hit Ratio: 95.07% 120.41m
Cache Miss Ratio: 4.93% 6.24m
Actual Hit Ratio: 95.07% 120.41m
Data Demand Efficiency: 99.45% 111.87m
Data Prefetch Efficiency: 0.00% 235.34k
CACHE HITS BY CACHE LIST:
Most Recently Used: 4.14% 4.99m
Most Frequently Used: 95.85% 115.42m
Most Recently Used Ghost: 0.24% 292.53k
Most Frequently Used Ghost: 3.73% 4.50m
CACHE HITS BY DATA TYPE:
Demand Data: 92.40% 111.26m
Prefetch Data: 0.00% 0
Demand Metadata: 7.60% 9.15m
Prefetch Metadata: 0.00% 2.73k
CACHE MISSES BY DATA TYPE:
Demand Data: 9.79% 610.82k
Prefetch Data: 3.77% 235.34k
Demand Metadata: 85.67% 5.35m
Prefetch Metadata: 0.78% 48.47k
Page: 3
------------------------------------------------------------------------
VDEV Cache Summary: 5.33m
Hit Ratio: 91.14% 4.86m
Miss Ratio: 8.59% 458.07k
Delegations: 0.27% 14.34k
Page: 6
------------------------------------------------------------------------
ZFS Tunable (sysctl):
kern.maxusers 384
vm.kmem_size 3112275968
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 329853485875
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 4866048
vfs.zfs.mfu_ghost_metadata_lsize 185315328
vfs.zfs.mfu_ghost_size 190181376
vfs.zfs.mfu_data_lsize 4608
vfs.zfs.mfu_metadata_lsize 3072
vfs.zfs.mfu_size 254041600
vfs.zfs.mru_ghost_data_lsize 0
vfs.zfs.mru_ghost_metadata_lsize 0
vfs.zfs.mru_ghost_size 0
vfs.zfs.mru_data_lsize 0
vfs.zfs.mru_metadata_lsize 0
vfs.zfs.mru_size 520685568
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 20846592
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 0
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 67108864
vfs.zfs.arc_meta_used 1479184192
vfs.zfs.mdcomp_disable 0
vfs.zfs.arc_min 97258624
vfs.zfs.arc_max 268435456
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 1
vfs.zfs.check_hostid 1
vfs.zfs.recover 0
vfs.zfs.txg.write_limit_override 0
vfs.zfs.txg.synctime 5
vfs.zfs.txg.timeout 30
vfs.zfs.scrub_limit 10
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 10485760
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.ramp_rate 2
vfs.zfs.vdev.time_shift 6
vfs.zfs.vdev.min_pending 4
vfs.zfs.vdev.max_pending 10
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_disable 0
vfs.zfs.zio.use_uma 0
vfs.zfs.version.zpl 4
vfs.zfs.version.spa 15
vfs.zfs.version.dmu_backup_stream 1
vfs.zfs.version.dmu_backup_header 2
vfs.zfs.version.acl 1
vfs.zfs.debug 0
vfs.zfs.super_owner 0
Page: 7
------------------------------------------------------------------------
hybrid at node5:~$ uname -a
FreeBSD node5.hybridlogiclabs.com 8.2-RELEASE FreeBSD 8.2-RELEASE #0:
Thu Feb 17 02:41:51 UTC 2011
root at mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
Thanks!
Luke Marsden
--
CTO, Hybrid Logic
+447791750420 | +1-415-449-1165 | www.hybrid-cluster.com