zfs panic "evicting znode"
Charles Sprickman
spork at bway.net
Wed Jun 16 02:34:22 UTC 2010
Howdy,
I have a box running 8.0-RELEASE that recently started panicking every few
hours with the following message:
panic: evicting znode 0xa1eafe80
cpuid = 0
Uptime: 30m56s
Physical memory: 2034 MB
Dumping 569 MB (counts down to 458, then the box freezes hard)
The dump doesn't finish, so there's nothing for savecore to grab.
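For reference, the crash-dump setup here is the stock one; a minimal sketch of the configuration (the partition name below is a placeholder, not my actual layout):

```shell
# /etc/rc.conf -- let rc(8) arm the configured swap partition as the dump device
dumpdev="AUTO"

# or arm a specific partition by hand with dumpon(8)
# (gpt/swap0 is an example label; adjust for your disks)
dumpon /dev/gpt/swap0
```

Either way, savecore(8) should pick the dump up from swap at next boot, assuming the dump actually completes.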
It's a very basic zfs config - two scsi drives in a mirror:
[root at h21 /usr/local/etc/pdns]# zpool status
  pool: zroot
 state: ONLINE
 scrub: scrub completed after 0h5m with 0 errors on Tue Jun 15 21:32:11 2010
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors
This is running on an older dual-xeon (1.8GHz/32 bit) supermicro 1U server
w/2GB of RAM. The following zfs tunables are set in loader.conf:
[root at h21 /usr/local/etc/pdns]# cat /boot/loader.conf
zfs_load="YES"
vm.kmem_size_max="1000M"
vm.kmem_size="1000M"
vfs.zfs.arc_max="400M"
vfs.root.mountfrom="zfs:zroot"
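In case the panic correlates with kmem/ARC pressure, here's a sketch of the sysctls I've been polling while the box is up (names as they appear on 8.0 with ZFS loaded; no guarantee this is the right place to look):

```shell
# kernel virtual memory limits, as set in loader.conf above
sysctl vm.kmem_size vm.kmem_size_max

# current ARC size versus the vfs.zfs.arc_max cap
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
```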
Google turns up nothing on this panic except hits on the source code that
actually contains the panic message.
Any hints as to what this means?
Here's zdb output:
[root at h21 /usr/local/adm/bin]# zdb zroot
    version=13
    name='zroot'
    state=0
    txg=267691
    pool_guid=12059945251392529754
    hostid=1898595607
    hostname='h21.biglist.com'
    vdev_tree
        type='root'
        id=0
        guid=12059945251392529754
        children[0]
                type='mirror'
                id=0
                guid=14682767316808875040
                metaslab_array=23
                metaslab_shift=30
                ashift=9
                asize=142515896320
                is_log=0
                children[0]
                        type='disk'
                        id=0
                        guid=11600930948623447097
                        path='/dev/gpt/disk0'
                        whole_disk=0
                children[1]
                        type='disk'
                        id=1
                        guid=4279842263738814989
                        path='/dev/gpt/disk1'
                        whole_disk=0
Assertion failed: (rwlp->rw_count == 0), file
/usr/src/cddl/lib/libzpool/../../../cddl/contrib/opensolaris/lib/libzpool/common/kernel.c,
line 203.
Abort trap: 6 (core dumped)
Note: zdb only dumps core if the pool's name is specified; otherwise the
output is identical. I'm guessing this is just a bug in zdb, since it
happens on every 8.0 box I've got.
Any help is appreciated. We're really looking to get ZFS into production,
but this problem is a bit odd, and it's not like we can fsck to fix any
possible problems with the filesystem.
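The closest analog to fsck I know of is a scrub plus watching the error counters, which is what I've already been doing; for completeness, a sketch:

```shell
# verify every replicated block against its checksums
zpool scrub zroot

# check progress and per-vdev read/write/checksum error counters
zpool status -v zroot
```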
Thanks,
Charles