ZFS repeatable reboot 8.0-RC1

grarpamp grarpamp at gmail.com
Thu Oct 15 06:47:02 UTC 2009


> Your machine is starving!

How can this be? There is over 500MiB of free RAM at all times, and I'm
running literally no userland apps other than X and xterms when it
reboots.

I think I may be hitting some kernel limit with this 366MiB-then-reboot
behavior. How can I tell what my kernel limits are on this platform?
Don't I have to keep the ARC within something like ( kern + kern headroom
+ ARC <= kernel addressability limit )?
Anything I should use for that besides sysctl -a and vmstat -z?
I looked at my wired history before using ZFS and set arc_max to 96MiB
so wired wouldn't even get close to 512.
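For reference, these are the knobs I know to check (assuming the usual
FreeBSD/i386 tunables; I may well be missing some):

# sysctl hw.physmem vm.kmem_size vm.kmem_size_max
# sysctl vfs.zfs.arc_min vfs.zfs.arc_max vfs.zfs.arc_meta_limit

and the corresponding /boot/loader.conf entries would look something like
this (the kmem value is just an example, not what I'm actually running):

vm.kmem_size="512M"            # size of the kernel memory map, example only
vfs.zfs.arc_max="100663296"    # 96MiB, matches the arc_max in the output below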

> what you can do to improve upon performance of the pool

Performance is not the problem. Yes, it's dog slow, but it's usable,
sort of :) The issue is that it's rebooting spontaneously. No OS
should do that. And unlike a userland process, which the kernel just
kills when it runs out of RAM, I don't see how the kernel could recover
when its own memory use bloats up.

I expect slow performance with this setup, especially if I'm blowing
out some cache somewhere. Take UFS2 with dirhash, for example: if a
directory is too big to hash within vfs.ufs.dirhash_maxmem, it just
slows down to spindle speed ... it doesn't reboot. No big deal.
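(For comparison, the dirhash knobs I mean, assuming the stock sysctl names
for the cap and current usage:)

# sysctl vfs.ufs.dirhash_maxmem vfs.ufs.dirhash_mem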

> is add a separate disk to the machine on which you can configure
> your ZIL (ZFS Intent Log) and a cache. Those two things can ...
> reduce eating up so much RAM that your system starts to starve
> itself.

Reduce RAM use? How so? I already have a ZIL in the main pool by
default, presumably using just as much RAM as a separate one would,
so there's no need for a separate log. Similarly for the cache, which
is in core (the ARC) in the default case. Separate devices only help
speed if they're on 'faster than spindles' hardware.

I suppose I could just as easily set vfs.zfs.zil_disable=1 as a test,
if it wouldn't risk losing the entire pool.
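If I did want to try separate devices anyway, my understanding (untested
here; the pool and device names below are hypothetical) is that it would
just be:

# zpool add tank log da1      # dedicated intent log (slog) device
# zpool add tank cache da2    # L2ARC cache device

and the ZIL test would presumably go in /boot/loader.conf as
vfs.zfs.zil_disable="1".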

> Add that other 1G of RAM

Isn't that a game of whack-a-mole? What happens when I rm a dir
with 1M files in it? Add more RAM? 2M files? ...

> The disk that you're doing your remove operation on? Is that being
> done on a ZFS GELI?

As mentioned, yes.

> PS: You can use thumb drives as caches and intent logs

I would presume their bit error rate is higher than that of platters.
There was a time when SSDs meant RAM drives, not flash drives.


# vmstat -m | egrep -i 'requests|zfs|zil|zio|arc|solaris|geom|eli'
         Type InUse MemUse HighUse Requests  Size(s)
         GEOM   208    27K       -     1708  16,32,64,128,512,1024,2048,4096
      solaris 145673 135033K       -  3413390  16,32,64,128,256,512,1024,2048,4096
     eli data     8     5K       -    23628  32,256,512,1024,2048,4096

# vmstat -z | egrep -i 'requests|zfs|zil|zio|arc|solaris|geom|eli'
ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES
zio_cache:                596,        0,        0,     4596,   489861,        0
arc_buf_hdr_t:            136,        0,     9367,       29,    15769,        0
arc_buf_t:                 40,        0,     2128,      264,    19641,        0
zil_lwb_cache:            176,        0,        3,       85,      488,        0
zfs_znode_cache:          232,        0,    19298,      473,    62000,        0

# sysctl -a vfs.zfs kstat.zfs
vfs.zfs.arc_meta_limit: 25165824
vfs.zfs.arc_meta_used: 39459076
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 16777216
vfs.zfs.arc_max: 100663296
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 1
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.arcstats.hits: 102514
kstat.zfs.misc.arcstats.misses: 12662
kstat.zfs.misc.arcstats.demand_data_hits: 8150
kstat.zfs.misc.arcstats.demand_data_misses: 741
kstat.zfs.misc.arcstats.demand_metadata_hits: 94364
kstat.zfs.misc.arcstats.demand_metadata_misses: 11921
kstat.zfs.misc.arcstats.prefetch_data_hits: 0
kstat.zfs.misc.arcstats.prefetch_data_misses: 0
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0
kstat.zfs.misc.arcstats.mru_hits: 49617
kstat.zfs.misc.arcstats.mru_ghost_hits: 2511
kstat.zfs.misc.arcstats.mfu_hits: 52897
kstat.zfs.misc.arcstats.mfu_ghost_hits: 1193
kstat.zfs.misc.arcstats.deleted: 1429
kstat.zfs.misc.arcstats.recycle_miss: 5314
kstat.zfs.misc.arcstats.mutex_miss: 0
kstat.zfs.misc.arcstats.evict_skip: 3645
kstat.zfs.misc.arcstats.hash_elements: 9362
kstat.zfs.misc.arcstats.hash_elements_max: 9363
kstat.zfs.misc.arcstats.hash_collisions: 8135
kstat.zfs.misc.arcstats.hash_chains: 2042
kstat.zfs.misc.arcstats.hash_chain_max: 5
kstat.zfs.misc.arcstats.p: 57449472
kstat.zfs.misc.arcstats.c: 100663296
kstat.zfs.misc.arcstats.c_min: 16777216
kstat.zfs.misc.arcstats.c_max: 100663296
kstat.zfs.misc.arcstats.size: 99728132
kstat.zfs.misc.arcstats.hdr_size: 1273504
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 961
kstat.zfs.misc.vdev_cache_stats.hits: 8593
kstat.zfs.misc.vdev_cache_stats.misses: 3377
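
To catch the next one in the act, I'll probably leave a crude watch loop
running on the console (a sketch only; arcstats.size is from the kstat
output above, vm.stats.vm.v_wire_count is the stock wired-page counter,
reported in pages):

# while :; do date; sysctl -n kstat.zfs.misc.arcstats.size vm.stats.vm.v_wire_count; sleep 60; done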

