vm.kmem_size settings doesn't affect loader?
admin at kkip.pl
Fri Sep 26 08:43:31 UTC 2008
Jeremy Chadwick wrote:
> On Thu, Sep 25, 2008 at 04:14:02PM +0200, Bartosz Stec wrote:
>>> Your options are:
>>> 1) Consider increasing it from 512M to something like 1.5GB; do not
>>> increase it past that on RELENG_7, as there isn't support for more than
>>> 2GB total. For example, on a 1GB memory machine, I often recommend
>>> 768M. On 2GB machines, 1536M. You will need to run -CURRENT if you
>>> want more.
>>> 2) Tune ZFS aggressively. Start by setting vfs.zfs.arc_min="16M"
>>> and vfs.zfs.arc_max="64M".
>>> If your machine has some small amount of memory (768MB, 1GB, etc.),
>>> then you probably shouldn't be using ZFS.
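[The examples in option 1 (768M for a 1GB machine, 1536M for a 2GB machine) follow a roughly 75%-of-RAM pattern, capped below the RELENG_7 2GB kmem limit discussed further down. A small sketch of that inferred rule of thumb -- the 75% factor and the helper name are my inference, not a formula stated in the thread:]

```python
# Inferred rule of thumb for vm.kmem_size on RELENG_7 (i386/amd64):
# roughly 75% of physical RAM, kept safely under the hard 2GB kmem limit.
def suggest_kmem_size(ram_mb: int) -> int:
    """Return a suggested vm.kmem_size in MB for a machine with ram_mb of RAM."""
    suggestion = int(ram_mb * 0.75)   # matches 768M @ 1GB and 1536M @ 2GB above
    return min(suggestion, 1536)      # stay below the 2GB RELENG_7 cap

print(suggest_kmem_size(1024))  # -> 768
print(suggest_kmem_size(2048))  # -> 1536
```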
>> The problem occurred on an i386 machine with 1GB of memory and 7.1-pre
>> (3 HDDs, 40GB, RAIDZ1). I know that i386 is not recommended for ZFS, but
>> it's just a home box for testing and learning purposes - I just want to
>> know what I'm doing and what I should expect when I decide to put ZFS on
>> server machines :) From posts on freebsd-fs, I currently conclude that
>> even with gigs of kmem and on AMD64, we can still experience a panic
>> from kmem_malloc.
> The i386 vs. amd64 argument is bogus, if you ask me. ZFS works on both.
> amd64 is recommended because ZFS contains code that makes heavy use of
> 64-bit values, and because amd64 offers large amounts of addressable
> memory without disgusting hacks like PAE.
> That said -- yes, even with "gigs of kmem and using AMD64", you can
> still panic due to kmem exhaustion. I have fairly decent experience
> with this problem, because it haunted me for quite some time.
> A large portion of the problem is that kmem_max, on i386 and amd64 (yes,
> you read that right) has a 2GB limit on RELENG_7. I repeat: a 2GB
> limit, regardless of i386 or amd64.
> This limit has been increased to 512GB on CURRENT, but there are no
> plans to MFC those changes, as they are too major.
> Let me tell you something I did this weekend. I had to copy literally
> 200GB of data from a ZFS raidz1 pool (spread across 3 disks) to two
> different places: 1) a UFS2 filesystem on a different disk, and 2)
> across a gigE network to a Windows machine. I had to do this because I
> was adding a disk to the vdev, which cannot be done without re-creating
> the pool (this is a known problem with ZFS, and has nothing to do with
> FreeBSD).
> The machine hosting the data runs RELENG_7 with amd64, and contains 4GB
> of memory. However, I've accomplished the same task with only 2GB of
> memory as well.
> These are the tuning settings I use:
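[The settings list itself appears to have been lost from the archive at this point. Reconstructing from the recommendations given earlier in this thread, the loader.conf probably looked something like the sketch below -- the exact figures and the prefetch line are an assumption, not the original list:]

```
# /boot/loader.conf -- reconstruction, not the original list from this mail.
# kmem sized per the 2GB-machine advice above; ARC limits per option 2.
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="64M"
# prefetch is mentioned later in the thread; disabling it is assumed here
vfs.zfs.prefetch_disable="1"
```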
> The entire copying process took almost 2 hours. Not once did I
> experience kmem exhaustion. I can *guarantee* that I would have crashed
> the box numerous times had I not tuned the machine with the values above.
>> Manual tuning is hard for me because I'm not familiar
>> with BSD kernel code or kernel memory management. I'm just an end-user
>> who loves the concepts of ZFS and is waiting for it to become (more)
>> stable. Of course, I've followed the tuning guide carefully.
> I'm an "experienced" end-user who has very little experience with BSD
> kernel code and absolutely no experience with kernel memory management.
> Proper tuning is all that's needed, regardless of your knowledge set.
> Please try installing 2GB of memory in your i386 box, and then use
> the exact loader.conf values I specified above.
Thank you for the hints.
Yesterday I added 512MB of memory to the box (1.5GB total) and set
vm.kmem_size and vm.kmem_size_max to "1024M". With modules of 1024MB,
512MB, 256MB, and 256MB available and only 3 memory slots, it is hard to
reach 2GB of RAM ;)
So far it has survived world cleaning/building/installing, bonnie++
benchmarking, filesystem scrubbing, and general usage. Memory usage seems
stable. If kmem exhaustion unfortunately happens again, I will experiment
with the ARC settings.
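[For watching whether the ARC actually stays inside its limits before the next exhaustion, a few stock FreeBSD commands should suffice. The kstat sysctl name comes from the ZFS port and is an assumption for 7.x; the "solaris" malloc type is the bucket ZFS kernel memory is charged to:]

```
# Check the configured limits and the live ARC size:
sysctl vm.kmem_size vm.kmem_size_max
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size   # current ARC usage in bytes (assumed name)
# Kernel memory charged to ZFS (the "solaris" malloc type):
vmstat -m | grep -i solaris
```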
IMHO you've patiently explained a lot of ZFS tuning concerns in this
thread, and they should be added to the tuning guide - especially the
explanations of the ARC and prefetch settings. Thanks again!
More information about the freebsd-stable mailing list