7.2 dies in zfs

Stefan Esser se at freebsd.org
Wed Nov 25 09:52:27 UTC 2009


Am 24.11.2009 15:24, schrieb Ollivier Robert:
> According to Stefan Esser:
>> If your i386 based system has a lot of RAM (2GB or more), then you
>> should definitely increase KVA_PAGES. Not doing so will lead to
>> panics, not in spite of but exactly because of the large RAM.
> 
> I have upped KVA_PAGES of course, but this reduces the amount of
> memory available to processes.  If you set KVA_PAGES to cover 2GB,
> for example, every process will only be able to use the remaining
> 2 GB for its own memory, so there is a trade-off there.

Yes, I had mentioned that (going from 256 to 512 means spending 2MB of
RAM instead of 1MB on the kernel page table, and a 2GB rather than 3GB
user process address space; AFAIK, the limit on the user process size
has been the reason for not raising KVA_PAGES to 512 by default).
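To make that trade-off concrete, here is a rough sketch of the arithmetic
(my illustration, assuming the usual i386 layout where each KVA_PAGES unit
maps 4 MB of kernel address space and each mapped 4 MB costs one 4 KB
page-table page, out of a 4 GB total address space):

```shell
# Kernel VA size, page-table cost, and remaining user VA for a few
# KVA_PAGES settings on i386 (4 MB mapped per unit, 4 KB PT page each).
for kva_pages in 256 384 512; do
  kva_mb=$((kva_pages * 4))     # kernel address space in MB
  pt_kb=$((kva_pages * 4))      # page-table pages * 4 KB, in KB
  user_mb=$((4096 - kva_mb))    # what is left for user processes
  echo "KVA_PAGES=$kva_pages: KVA=${kva_mb}MB PT=${pt_kb}KB user=${user_mb}MB"
done
```

This reproduces the numbers above: 256 gives a 1MB page table and 3GB of
user space, 512 gives 2MB and 2GB.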

Maybe you can estimate the amount of kernel memory required by
measuring kmem statistics without ZFS and adding about twice the
ARC cache limit you want to impose. (The ARC can grow beyond
arc_max, IIRC because it is just the high-water mark at which the
cache is aggressively flushed, and also because metadata is not
counted against this limit.) E.g., if your system reports a
vm.kvm_free of 100MB, you may be able to fit in an ARC of 50MB.
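As a back-of-the-envelope sketch of that rule of thumb (the figure below
is made up; on a live FreeBSD system you would read the real value with
"sysctl vm.kvm_free"):

```shell
# Hypothetical free kernel VA, in MB (on a real box: sysctl vm.kvm_free).
kvm_free_mb=100

# Rule of thumb from above: budget only half of the free KVA for the ARC,
# since it can overshoot arc_max (high-water mark; metadata not counted).
arc_max_mb=$((kvm_free_mb / 2))
echo "vfs.zfs.arc_max=\"${arc_max_mb}M\""
```

The printed line can then go into /boot/loader.conf.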

>> I have been using ZFS on i386 since it became available, first for
>> testing and soon as only file-system (with UFS boot, initially, now
>> switching over to gptzfsboot). Systems range from Pentium-3 to
>> AMD64x2 and I see no problems even under significant load.
> 
> I've found that load is not a factor (if one defines load as many
> concurrent processes).  The machine is mostly idle and I've seen panics
> coming from a "cvs update" or a "svn up".  They are I/O intensive, but
> not that much, whereas the same machine can survive a buildworld just fine.
> 
> The machine I have is a dual Xeon @2.8 GHz with 4 GB of RAM and 200 GB
> of disk.
> 
> /boot/loader.conf
> -----
> #-- limits
> kern.maxdsiz="1024M"
> kern.maxssiz="256M"
> kern.dfldsiz="1024M"
> kern.dflssiz="128M"
> 
> #-- vm tuning
> vm.kmem_size="1024M"
> vm.kmem_size_max="1224M"
> vfs.zfs.arc_max="128M"
> vfs.zfs.prefetch_disable=1
> -----
> 
> options         KVA_PAGES=384           # 1.5GB of KVA

Well, my example was for the 512MB P3, which I use because of its
power efficiency (less than 30W idle power draw).

My home workstation is an AMD x2 with 2GB RAM, 3*1TB disk (RAIDZ1), and
with KVA_PAGES=512 and the following tunables set:

kern.maxssiz="128M"
vfs.root.mountfrom="zfs:raid1"          # ZFS root dataset
vm.kmem_size="1500M"            # Size of kernel memory (bytes)
vm.kmem_size_max="2G"           # Upper limit for kernel memory auto-sizing
zfs_load="YES"

The ARC size is not limited, currently, and auto-sizes to some 950MB.
But I have tried arc_max limits down to 200MB to study the impact.
The system is absolutely reliable (with regard to ZFS, but haunted by
LORs). I'm using a kernel with INVARIANTS and full WITNESS, since I want
to understand lock-ups apparently caused by the combination of Atheros
WLAN and SMP (sometimes accompanied by LORs). It survives not only CVS
and SVN updates, but also other operations that made ZFS panic before I
raised KVA_PAGES to its current value. Maybe the defaults for kmem_size
and kmem_size_max would suffice; I have not tried them for a while.

But KVA_PAGES=512 is essential for my system with 2GB RAM; I guess this
is even more true for your box with 4GB.

Regards, Stefan


More information about the freebsd-fs mailing list