ZFS kmem_map too small.
Joao Barros
joao.barros at gmail.com
Sat Oct 6 18:29:35 PDT 2007
On 10/5/07, Pawel Jakub Dawidek <pjd at freebsd.org> wrote:
> Hi.
>
> We're about to branch RELENG_7 and I'd like to start a discussion with
> folks who experience the 'kmem_map too small' panic on the latest HEAD.
>
> I'm trying hard to reproduce it and I can't, so I need to gather more
> info on how you are able to provoke this panic.
>
> What I did was to rsync 200 FreeBSD src trees from one directory to
> another on the same ZFS file system. It worked fine.
>
> The system I'm using is i386 and the only tuning I did is a bigger
> kmem_map. From my /boot/loader.conf:
>
> vm.kmem_size=629145600
> vm.kmem_size_max=629145600
>
> The machine is dual core Pentium D 3GHz with 1GB of RAM. My pool is:
>
> lcf:root:/tank/0# zpool status
> pool: tank
> state: ONLINE
> scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           ad4       ONLINE       0     0     0
>           ad5       ONLINE       0     0     0
>           ad6       ONLINE       0     0     0
>           ad7       ONLINE       0     0     0
>
> errors: No known data errors
>
> If you can still see this panic, please let me know as soon as possible
> and try to describe what your workload looks like, how to reproduce it,
> etc. I'd really like ZFS to be rock-stable for 7.0 even on i386.
>
i386 with 1GB here. I used to get this when chown'ing a few thousand
files recursively via ssh.
The last time I got it was while unraring 2GB files from n x 95MB rars over NFS.
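Roughly, those two workloads looked like this (host, paths, and file
names below are made up for illustration, not what I actually ran):

    # recursive chown of a few thousand files, kicked off over ssh
    ssh me@xeon 'chown -R me:wheel /r4x320/some/big/tree'

    # extracting a ~2GB file from ~95MB rar volumes, with NFS in the path
    unrar x archive.part01.rar /r4x320/extracted/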
My system:
xeon# zpool status
pool: r4x320
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        r4x320      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad0s1d  ONLINE       0     0     0
            ad1s1d  ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
errors: No known data errors
xeon# cat /boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:r4x320"
vfs.zfs.prefetch_disable=1  # I have this to improve video playback
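A quick way to double-check after boot that the loader applied that
tunable (it should report 1 here):

    xeon# sysctl vfs.zfs.prefetch_disable
    vfs.zfs.prefetch_disable: 1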
xeon# sysctl vm | grep kmem
vm.kmem_size_scale: 3
vm.kmem_size_max: 335544320
vm.kmem_size_min: 0
vm.kmem_size: 335544320
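For readability, the same numbers in MB (plain sh arithmetic; both
divisions are exact):

    $ echo $((335544320 / 1048576))   # my vm.kmem_size in MB
    320
    $ echo $((629145600 / 1048576))   # Pawel's kmem_size tuning in MB
    600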
xeon# sysctl -a | grep vnodes
kern.maxvnodes: 52242
kern.minvnodes: 17414
vfs.freevnodes: 7797
vfs.wantfreevnodes: 17414
vfs.numvnodes: 8230
I usually set kern.maxvnodes to 50000 manually and everything is OK,
but I see that I forgot to after my last reboot and haven't seen any
problems yet:
xeon# uptime
1:56AM up 4 days, 15:06
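For anyone else who keeps forgetting: the usual way to make that
setting stick is /etc/sysctl.conf (50000 is just the value I use):

    xeon# sysctl kern.maxvnodes=50000                       # apply now
    xeon# echo 'kern.maxvnodes=50000' >> /etc/sysctl.conf   # reapply at every boot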
Off-topic: it's lovely not having to wait for an 874GB filesystem with
millions of files to be checked after a crash or power failure :-D
--
Joao Barros