memory exhaustion on 10.1 AMD64 ZFS storage system
Joseph Mingrone
jrm at ftfl.ca
Mon Jan 12 20:20:14 UTC 2015
Hello,
We've had this storage system running 9.x without problems. After
upgrading to 10.1, we've been seeing "out of swap space" messages in the logs.
Dec 13 04:29:12 storage2 kernel: pid 723 (rpc.statd), uid 0, was killed:
out of swap space
...
Jan 11 23:23:51 storage2 kernel: pid 642 (mountd), uid 0, was killed:
out of swap space
What's the best way to determine whether this is a ZFS problem? I've read
in the 10.1 release notes that vfs.zfs.zio.use_uma has been re-enabled.
Has this caused problems for anyone else on 10.1?
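One way I could imagine narrowing this down (a sketch, not from the original message; the sysctl names below are the stock FreeBSD 10.1 ZFS counters) is to watch ARC and UMA memory over time and see whether wired memory climbs past arc_max before processes start getting killed:

```shell
# Snapshot the ARC/UMA-related knobs and counters (FreeBSD 10.1 sysctl names).
sysctl vfs.zfs.zio.use_uma           # 1 means ZIO buffers come from UMA (re-enabled in 10.1)
sysctl vfs.zfs.arc_max               # configured ARC ceiling
sysctl kstat.zfs.misc.arcstats.size  # current ARC size in bytes

# Per-zone UMA usage; a leaking zio_* zone would keep growing here.
vmstat -z | grep -i zio

# Wired memory includes the ARC; if Wired keeps climbing well past
# arc_max, the leak is probably outside the ARC proper.
top -b | head -12
```

Running these periodically (e.g. from cron) and comparing snapshots in the days before a kill would show whether the growth is in the ARC, in the ZIO UMA zones, or elsewhere in the kernel.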
Below is information about the server.
Joseph
# cat /boot/loader.conf
zfs_load=YES
vfs.root.mountfrom="zfs:zroot"
vfs.zfs.arc_max=24G
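If UMA does turn out to be implicated, a workaround to try (an assumption on my part, not a confirmed fix for this machine) is reverting the tunable, or lowering the ARC ceiling to leave more headroom, in /boot/loader.conf:

```
# /boot/loader.conf -- candidate mitigations; try one at a time and reboot
vfs.zfs.zio.use_uma=0    # revert to the pre-10.1 (non-UMA) ZIO allocation
vfs.zfs.arc_max=16G      # leave more headroom under the 32G of RAM
```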
# zfs-stats -F
------------------------------------------------------------------------
ZFS Subsystem Report Mon Jan 12 15:52:21 2015
------------------------------------------------------------------------
System Information:
Kernel Version: 1001000 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64
FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014 root
3:52PM up 30 mins, 1 user, load averages: 0.14, 0.15, 0.14
------------------------------------------------------------------------
# zfs-stats -M
------------------------------------------------------------------------
ZFS Subsystem Report Mon Jan 12 15:52:56 2015
------------------------------------------------------------------------
System Memory Statistics:
Physical Memory: 32706.64M
Kernel Memory: 164.14M
DATA: 84.30% 138.38M
TEXT: 15.70% 25.76M
------------------------------------------------------------------------
# zfs-stats -p
------------------------------------------------------------------------
ZFS Subsystem Report Mon Jan 12 15:53:20 2015
------------------------------------------------------------------------
ZFS pool information:
Storage pool Version (spa): 5000
Filesystem Version (zpl): 5
------------------------------------------------------------------------
# zfs-stats -A
------------------------------------------------------------------------
ZFS Subsystem Report Mon Jan 12 15:53:43 2015
------------------------------------------------------------------------
ARC Misc:
Deleted: 20
Recycle Misses: 0
Mutex Misses: 0
Evict Skips: 0
ARC Size:
Current Size (arcsize): 0.17% 40.87M
Target Size (Adaptive, c): 100.00% 24576.00M
Min Size (Hard Limit, c_min): 12.50% 3072.00M
Max Size (High Water, c_max): ~8:1 24576.00M
ARC Size Breakdown:
Recently Used Cache Size (p): 50.00% 12288.00M
Freq. Used Cache Size (c-p): 50.00% 12288.00M
ARC Hash Breakdown:
Elements Max: 1583
Elements Current: 100.00% 1583
Collisions: 0
Chain Max: 0
Chains: 0
ARC Eviction Statistics:
Evicts Total: 172032
Evicts Eligible for L2: 97.62% 167936
Evicts Ineligible for L2: 2.38% 4096
Evicts Cached to L2: 0
ARC Efficiency
Cache Access Total: 44696
Cache Hit Ratio: 95.38% 42632
Cache Miss Ratio: 4.62% 2064
Actual Hit Ratio: 85.21% 38084
Data Demand Efficiency: 97.50%
Data Prefetch Efficiency: 8.51%
CACHE HITS BY CACHE LIST:
Anonymously Used: 10.67% 4548
Most Recently Used (mru): 39.98% 17044
Most Frequently Used (mfu): 49.35% 21040
MRU Ghost (mru_ghost): 0.00% 0
MFU Ghost (mfu_ghost): 0.00% 0
CACHE HITS BY DATA TYPE:
Demand Data: 48.37% 20619
Prefetch Data: 0.01% 4
Demand Metadata: 40.97% 17465
Prefetch Metadata: 10.66% 4544
CACHE MISSES BY DATA TYPE:
Demand Data: 25.63% 529
Prefetch Data: 2.08% 43
Demand Metadata: 52.18% 1077
Prefetch Metadata: 20.11% 415
------------------------------------------------------------------------
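As a quick sanity check on the efficiency numbers above (just re-deriving the printed percentages from the raw hit/miss counters; not part of the original report):

```shell
# Re-derive the ARC efficiency percentages from the raw counters.
hits=42632; misses=2064
total=$((hits + misses))                # 44696 cache accesses, matching the report
awk -v h="$hits" -v t="$total" 'BEGIN { printf "hit ratio:  %.2f%%\n", 100*h/t }'
awk -v m="$misses" -v t="$total" 'BEGIN { printf "miss ratio: %.2f%%\n", 100*m/t }'
```

Both come out to the reported 95.38% / 4.62%, so the counters are internally consistent; the ARC is tiny (40M of a 24G target) only because the box had been up for 30 minutes.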
# zpool list
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
tank 24.5T 11.1T 13.4T 14% - 45% 1.00x ONLINE -
zroot 55.5G 6.11G 49.4G 5% - 11% 1.00x ONLINE -
# zpool get "all" tank
NAME PROPERTY VALUE SOURCE
tank size 24.5T -
tank capacity 45% -
tank altroot - default
tank health ONLINE -
tank guid 8322714406813719098 default
tank version - default
tank bootfs - default
tank delegation on default
tank autoreplace off default
tank cachefile - default
tank failmode wait default
tank listsnapshots off default
tank autoexpand off default
tank dedupditto 0 default
tank dedupratio 1.00x -
tank free 13.4T -
tank allocated 11.1T -
tank readonly off -
tank comment - default
tank expandsize 0 -
tank freeing 0 default
tank fragmentation 14% -
tank leaked 0 default
tank feature at async_destroy enabled local
tank feature at empty_bpobj enabled local
tank feature at lz4_compress active local
tank feature at multi_vdev_crash_dump enabled local
tank feature at spacemap_histogram active local
tank feature at enabled_txg active local
tank feature at hole_birth active local
tank feature at extensible_dataset enabled local
tank feature at embedded_data active local
tank feature at bookmarks enabled local
tank feature at filesystem_limits enabled local
# zdb -C tank
MOS Configuration:
version: 5000
name: 'tank'
state: 0
txg: 12614760
pool_guid: 8322714406813719098
hostid: 1722087693
hostname: 'storage2.mathstat.dal.ca'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 8322714406813719098
children[0]:
type: 'raidz'
id: 0
guid: 5865699514822950384
nparity: 3
metaslab_array: 31
metaslab_shift: 37
ashift: 12
asize: 27005292380160
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 6285638336980483158
path: '/dev/label/storage_disk0'
phys_path: '/dev/label/storage_disk0'
whole_disk: 1
DTL: 106
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 9541693314532360771
path: '/dev/label/storage_disk1'
phys_path: '/dev/label/storage_disk1'
whole_disk: 1
DTL: 105
create_txg: 4
children[2]:
type: 'disk'
create_txg: 4
...
children[0]:
type: 'disk'
id: 0
guid: 310723121207304329
path: '/dev/gpt/disk0'
phys_path: '/dev/gpt/disk0'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 16696203411283061195
path: '/dev/gpt/disk1'
phys_path: '/dev/gpt/disk1'
whole_disk: 1
create_txg: 4
More information about the freebsd-fs mailing list