What's up with the swapping since 10/stable

Karl Denninger karl at denninger.net
Thu Apr 3 19:42:57 UTC 2014


You mention that you're running ZFS -- if so see here:

http://www.freebsd.org/cgi/query-pr.cgi?pr=187594

With this change in my kernel and more than a week of uptime on a very
busy production machine running an Internet-facing web service,
PostgreSQL, and Samba for local Windows clients:

[karl@NewFS ~]$ pstat -s
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/swap1.eli  67108864        0 67108864     0%
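
If you want to watch for a recurrence rather than spot-check, here is
a minimal sketch of a watcher that logs ARC size against swap
consumption (the interval, log format, and single-swap-device
assumption are mine, not part of the fix):

    #!/bin/sh
    # Log ARC size (bytes) and swap used (KiB) once a minute so ARC
    # growth can be correlated with swap activity after the fact.
    # Assumes a single swap device (swapinfo output line 2).
    while :; do
        arc=$(sysctl -n kstat.zfs.misc.arcstats.size)
        swap=$(swapinfo -k | awk 'NR == 2 { print $3 }')
        echo "$(date +%s) arc_bytes=${arc} swap_used_kb=${swap}"
        sleep 60
    done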



On 4/3/2014 2:32 PM, Johan Broman wrote:
> Hi!
>
> I’m seeing the same thing since upgrading to 10/stable. Things seem to need swap although there is still available memory. I tend not to use swap on my virtual instances, but I’ve seen error messages like this since upgrading to 10/stable:
>
> pid 3028 (mysqld), uid 88, was killed: out of swap space
>
> Mem: 24M Active, 8012K Inact, 109M Wired, 2176K Cache, 69M Buf, 433M Free
>
>
> Looks like there should be enough memory to start mysql… (the above instance is a t1.micro FreeBSD AMI running on AWS EC2, created by Colin Percival)
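
As a stopgap while the underlying problem is chased down, FreeBSD 10
ships protect(1) (a thin wrapper around procctl(2)) which can exempt a
critical process from being killed when swap runs out. A sketch,
assuming a single running mysqld (the pgrep pattern is illustrative):

    # Exempt the running mysqld from the "killed: out of swap space"
    # reaper. Needs root; assumes exactly one mysqld process.
    protect -p $(pgrep -x mysqld)

That only moves the problem to the next-largest process, of course --
it doesn't explain why the pagedaemon got cornered in the first place.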
>
> Something seems to have changed since FreeBSD 9 in the memory manager / page eviction logic.
>
> Anyone else seeing this? Is it now impossible to run FreeBSD without a swap partition (and/or file)? This happens on my server as well, which has 8 GB RAM and plenty of free memory…
>
> I don’t want to start guessing, but perhaps this happens when there is some memory fragmentation…? I need to verify whether that is the case, though.
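
One quick way to eyeball that theory: the kernel exports its physical
memory free lists broken out by allocation order, and a pool with
plenty of order-0 pages but nothing in the higher orders is the
classic fragmentation signature. A minimal check, using only a stock
sysctl:

    # Dump the physical-memory free lists, per pool, by page order.
    sysctl vm.phys_free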
>
> Thanks
> Johan
>
>
> On 02 Feb 2014, at 18:00, Matthias Gamsjager <mgamsjager at gmail.com> wrote:
>
>> Hi,
>>
>> My ZFS NAS box seems to use some swap since the upgrade to 10/stable. This
>> machine only runs a couple of hours per week, and with 9/stable I never
>> witnessed any swapping when serving media files.
>>
>> The first thing that caught my eye was the difference between ARC and Wired.
>> At some point there is a 1+ GB difference, while all this machine does is
>> serve a single 10 GB MKV via AFP.
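
That gap is easy to put a number on. A quick sketch using stock
sysctls (wired is reported in pages, ARC in bytes, so the unit
conversion below is the only real work):

    # Wired memory that is *not* ARC, in MiB.
    wired=$(( $(sysctl -n vm.stats.vm.v_wire_count) * $(sysctl -n hw.pagesize) ))
    arc=$(sysctl -n kstat.zfs.misc.arcstats.size)
    echo "wired-but-not-arc: $(( (wired - arc) / 1048576 )) MiB"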
>>
>> The problem is that at some point performance degrades to the point that
>> streaming isn't possible.
>>
>> This is after a couple of videos watched and a scrub 99% done.
>>
>> No ZFS tuning in /boot/loader.conf
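
For what it's worth, the usual stopgap until the reclaim logic is
fixed is to cap the ARC in /boot/loader.conf so Wired can't crowd out
userland. A sketch -- the 4G figure is only an example for an 8 GiB
box, not a recommendation:

    # /boot/loader.conf -- takes effect at the next boot.
    vfs.zfs.arc_max="4G"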
>>
>> last pid:  2571;  load averages:  0.19,  0.20,  0.19    up 0+04:06:20  17:55:43
>>
>> 42 processes:  1 running, 41 sleeping
>>
>> CPU:  0.0% user,  0.0% nice,  2.3% system,  0.0% interrupt, 97.7% idle
>>
>> Mem: 32M Active, 14M Inact, 7563M Wired, 16M Cache, 273M Buf, 303M Free
>>
>> ARC: 6065M Total, 2142M MFU, 3309M MRU, 50K Anon, 136M Header, 478M Other
>>
>> Swap: 4096M Total, 66M Used, 4030M Free, 1% Inuse
>>
>>
>> System Information:
>>
>>
>> Kernel Version:  1000702 (osreldate)
>>
>> Hardware Platform:  amd64
>>
>> Processor Architecture:  amd64
>>
>>
>> ZFS Storage pool Version: 5000
>>
>> ZFS Filesystem Version:  5
>>
>>
>> FreeBSD 10.0-STABLE #0 r261210: Mon Jan 27 15:19:13 CET 2014 matty
>>
>> 5:57PM  up  4:08, 2 users, load averages: 0.31, 0.23, 0.21
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> System Memory:
>>
>>
>> 0.41% 32.43 MiB Active, 0.18% 14.11 MiB Inact
>>
>> 95.39% 7.39 GiB Wired, 0.21% 16.37 MiB Cache
>>
>> 3.81% 301.97 MiB Free, 0.01% 784.00 KiB Gap
>>
>>
>> Real Installed:  8.00 GiB
>>
>> Real Available:  99.50% 7.96 GiB
>>
>> Real Managed:  97.28% 7.74 GiB
>>
>>
>> Logical Total:  8.00 GiB
>>
>> Logical Used:  95.94% 7.68 GiB
>>
>> Logical Free:  4.06% 332.45 MiB
>>
>>
>> Kernel Memory:   196.21 MiB
>>
>> Data:  79.49% 155.96 MiB
>>
>> Text:  20.51% 40.25 MiB
>>
>>
>> Kernel Memory Map:  7.74 GiB
>>
>> Size:  71.72% 5.55 GiB
>>
>> Free:  28.28% 2.19 GiB
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> ARC Summary: (HEALTHY)
>>
>> Memory Throttle Count:  0
>>
>>
>> ARC Misc:
>>
>> Deleted:  34.10k
>>
>> Recycle Misses:  102.86k
>>
>> Mutex Misses:  10
>>
>> Evict Skips:  989.63k
>>
>>
>> ARC Size:  87.94% 5.93 GiB
>>
>> Target Size: (Adaptive) 90.63% 6.11 GiB
>>
>> Min Size (Hard Limit): 12.50% 863.10 MiB
>>
>> Max Size (High Water): 8:1 6.74 GiB
>>
>>
>> ARC Size Breakdown:
>>
>> Recently Used Cache Size: 65.86% 4.02 GiB
>>
>> Frequently Used Cache Size: 34.14% 2.09 GiB
>>
>>
>> ARC Hash Breakdown:
>>
>> Elements Max:  594.22k
>>
>> Elements Current: 100.00% 594.21k
>>
>> Collisions:  609.54k
>>
>> Chain Max:  15
>>
>> Chains:   122.92k
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> ARC Efficiency:   4.19m
>>
>> Cache Hit Ratio: 83.08% 3.48m
>>
>> Cache Miss Ratio: 16.92% 708.94k
>>
>> Actual Hit Ratio: 73.81% 3.09m
>>
>>
>> Data Demand Efficiency: 79.24% 456.96k
>>
>> Data Prefetch Efficiency: 2.94% 90.16k
>>
>>
>> CACHE HITS BY CACHE LIST:
>>
>>   Anonymously Used: 8.80% 306.18k
>>
>>   Most Recently Used: 23.42% 815.06k
>>
>>   Most Frequently Used: 65.43% 2.28m
>>
>>   Most Recently Used Ghost: 0.41% 14.36k
>>
>>   Most Frequently Used Ghost: 1.94% 67.65k
>>
>>
>> CACHE HITS BY DATA TYPE:
>>
>>   Demand Data:  10.40% 362.08k
>>
>>   Prefetch Data: 0.08% 2.65k
>>
>>   Demand Metadata: 76.84% 2.67m
>>
>>   Prefetch Metadata: 12.68% 441.47k
>>
>>
>> CACHE MISSES BY DATA TYPE:
>>
>>   Demand Data:  13.38% 94.88k
>>
>>   Prefetch Data: 12.34% 87.51k
>>
>>   Demand Metadata: 34.54% 244.88k
>>
>>   Prefetch Metadata: 39.73% 281.67k
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> L2ARC is disabled
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> File-Level Prefetch: (HEALTHY)
>>
>>
>> DMU Efficiency:   9.57m
>>
>> Hit Ratio:  73.77% 7.06m
>>
>> Miss Ratio:  26.23% 2.51m
>>
>>
>> Colinear:  2.51m
>>
>>   Hit Ratio:  0.06% 1.54k
>>
>>   Miss Ratio:  99.94% 2.51m
>>
>>
>> Stride:   6.92m
>>
>>   Hit Ratio:  99.99% 6.92m
>>
>>   Miss Ratio:  0.01% 594
>>
>>
>> DMU Misc:
>>
>> Reclaim:  2.51m
>>
>>   Successes:  0.85% 21.28k
>>
>>   Failures:  99.15% 2.49m
>>
>>
>> Streams:  137.84k
>>
>>   +Resets:  0.06% 79
>>
>>   -Resets:  99.94% 137.76k
>>
>>   Bogus:  0
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> VDEV cache is disabled
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> ZFS Tunables (sysctl):
>>
>> kern.maxusers                           845
>>
>> vm.kmem_size                            8313913344
>>
>> vm.kmem_size_scale                      1
>>
>> vm.kmem_size_min                        0
>>
>> vm.kmem_size_max                        1319413950874
>>
>> vfs.zfs.arc_max                         7240171520
>>
>> vfs.zfs.arc_min                         905021440
>>
>> vfs.zfs.arc_meta_used                   2166001368
>>
>> vfs.zfs.arc_meta_limit                  1810042880
>>
>> vfs.zfs.l2arc_write_max                 8388608
>>
>> vfs.zfs.l2arc_write_boost               8388608
>>
>> vfs.zfs.l2arc_headroom                  2
>>
>> vfs.zfs.l2arc_feed_secs                 1
>>
>> vfs.zfs.l2arc_feed_min_ms               200
>>
>> vfs.zfs.l2arc_noprefetch                1
>>
>> vfs.zfs.l2arc_feed_again                1
>>
>> vfs.zfs.l2arc_norw                      1
>>
>> vfs.zfs.anon_size                       51200
>>
>> vfs.zfs.anon_metadata_lsize             0
>>
>> vfs.zfs.anon_data_lsize                 0
>>
>> vfs.zfs.mru_size                        3476498432
>>
>> vfs.zfs.mru_metadata_lsize              1319031808
>>
>> vfs.zfs.mru_data_lsize                  2150589440
>>
>> vfs.zfs.mru_ghost_size                  361860096
>>
>> vfs.zfs.mru_ghost_metadata_lsize        210866688
>>
>> vfs.zfs.mru_ghost_data_lsize            150993408
>>
>> vfs.zfs.mfu_size                        2246172672
>>
>> vfs.zfs.mfu_metadata_lsize              32768
>>
>> vfs.zfs.mfu_data_lsize                  2050486272
>>
>> vfs.zfs.mfu_ghost_size                  6198800896
>>
>> vfs.zfs.mfu_ghost_metadata_lsize        2818404864
>>
>> vfs.zfs.mfu_ghost_data_lsize            3380396032
>>
>> vfs.zfs.l2c_only_size                   0
>>
>> vfs.zfs.dedup.prefetch                  1
>>
>> vfs.zfs.nopwrite_enabled                1
>>
>> vfs.zfs.mdcomp_disable                  0
>>
>> vfs.zfs.prefetch_disable                0
>>
>> vfs.zfs.zfetch.max_streams              8
>>
>> vfs.zfs.zfetch.min_sec_reap             2
>>
>> vfs.zfs.zfetch.block_cap                256
>>
>> vfs.zfs.zfetch.array_rd_sz              1048576
>>
>> vfs.zfs.top_maxinflight                 32
>>
>> vfs.zfs.resilver_delay                  2
>>
>> vfs.zfs.scrub_delay                     4
>>
>> vfs.zfs.scan_idle                       50
>>
>> vfs.zfs.scan_min_time_ms                1000
>>
>> vfs.zfs.free_min_time_ms                1000
>>
>> vfs.zfs.resilver_min_time_ms            3000
>>
>> vfs.zfs.no_scrub_io                     0
>>
>> vfs.zfs.no_scrub_prefetch               0
>>
>> vfs.zfs.metaslab.gang_bang              131073
>>
>> vfs.zfs.metaslab.debug                  0
>>
>> vfs.zfs.metaslab.df_alloc_threshold     131072
>>
>> vfs.zfs.metaslab.df_free_pct            4
>>
>> vfs.zfs.metaslab.min_alloc_size         10485760
>>
>> vfs.zfs.metaslab.prefetch_limit         3
>>
>> vfs.zfs.metaslab.smo_bonus_pct          150
>>
>> vfs.zfs.mg_alloc_failures               8
>>
>> vfs.zfs.write_to_degraded               0
>>
>> vfs.zfs.check_hostid                    1
>>
>> vfs.zfs.recover                         0
>>
>> vfs.zfs.deadman_synctime_ms             1000000
>>
>> vfs.zfs.deadman_checktime_ms            5000
>>
>> vfs.zfs.deadman_enabled                 1
>>
>> vfs.zfs.space_map_last_hope             0
>>
>> vfs.zfs.txg.timeout                     5
>>
>> vfs.zfs.vdev.cache.max                  16384
>>
>> vfs.zfs.vdev.cache.size                 0
>>
>> vfs.zfs.vdev.cache.bshift               16
>>
>> vfs.zfs.vdev.trim_on_init               1
>>
>> vfs.zfs.vdev.max_active                 1000
>>
>> vfs.zfs.vdev.sync_read_min_active       10
>>
>> vfs.zfs.vdev.sync_read_max_active       10
>>
>> vfs.zfs.vdev.sync_write_min_active      10
>>
>> vfs.zfs.vdev.sync_write_max_active      10
>>
>> vfs.zfs.vdev.async_read_min_active      1
>>
>> vfs.zfs.vdev.async_read_max_active      3
>>
>> vfs.zfs.vdev.async_write_min_active     1
>>
>> vfs.zfs.vdev.async_write_max_active     10
>>
>> vfs.zfs.vdev.scrub_min_active           1
>>
>> vfs.zfs.vdev.scrub_max_active           2
>>
>> vfs.zfs.vdev.aggregation_limit          131072
>>
>> vfs.zfs.vdev.read_gap_limit             32768
>>
>> vfs.zfs.vdev.write_gap_limit            4096
>>
>> vfs.zfs.vdev.bio_flush_disable          0
>>
>> vfs.zfs.vdev.bio_delete_disable         0
>>
>> vfs.zfs.vdev.trim_max_bytes             2147483648
>>
>> vfs.zfs.vdev.trim_max_pending           64
>>
>> vfs.zfs.max_auto_ashift                 13
>>
>> vfs.zfs.zil_replay_disable              0
>>
>> vfs.zfs.cache_flush_disable             0
>>
>> vfs.zfs.zio.use_uma                     1
>>
>> vfs.zfs.zio.exclude_metadata            0
>>
>> vfs.zfs.sync_pass_deferred_free         2
>>
>> vfs.zfs.sync_pass_dont_compress         5
>>
>> vfs.zfs.sync_pass_rewrite               2
>>
>> vfs.zfs.snapshot_list_prefetch          0
>>
>> vfs.zfs.super_owner                     0
>>
>> vfs.zfs.debug                           0
>>
>> vfs.zfs.version.ioctl                   3
>>
>> vfs.zfs.version.acl                     1
>>
>> vfs.zfs.version.spa                     5000
>>
>> vfs.zfs.version.zpl                     5
>>
>> vfs.zfs.trim.enabled                    1
>>
>> vfs.zfs.trim.txg_delay                  32
>>
>> vfs.zfs.trim.timeout                    30
>>
>> vfs.zfs.trim.max_interval               1
>>
>>
>> ------------------------------------------------------------------------
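
One thing that jumps out of the dump above: vfs.zfs.arc_meta_used
(2166001368) is well past vfs.zfs.arc_meta_limit (1810042880), i.e.
ARC metadata is running roughly 20% over its cap. A one-liner to keep
an eye on that ratio (stock sysctls; the awk formatting is mine):

    sysctl -n vfs.zfs.arc_meta_used vfs.zfs.arc_meta_limit |
        awk 'NR == 1 { u = $1 } NR == 2 { l = $1 }
             END { printf "arc_meta used/limit: %.0f%%\n", 100 * u / l }'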

-- 
-- Karl
karl at denninger.net

