kern/125149: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
Weldon Godfrey
wgodfrey at ena.com
Tue Jul 1 14:20:03 UTC 2008
>Number: 125149
>Category: kern
>Synopsis: [zfs][nfs] changing into .zfs dir from nfs client causes endless panic loop
>Confidential: no
>Severity: critical
>Priority: low
>Responsible: freebsd-bugs
>State: open
>Quarter:
>Keywords:
>Date-Required:
>Class: sw-bug
>Submitter-Id: current-users
>Arrival-Date: Tue Jul 01 14:20:02 UTC 2008
>Closed-Date:
>Last-Modified:
>Originator: Weldon Godfrey
>Release: 7.0-RELEASE
>Organization:
Education Networks of America
>Environment:
FreeBSD store1.mail.ena.net 7.0-RELEASE FreeBSD 7.0-RELEASE #1: Tue Jun 24 08:49:19 CDT 2008 root at store1.mail.ena.net:/usr/obj/usr/src/sys/STORE1 amd64
>Description:
As soon as I tried (for the first time) to 'cd' into .zfs from a Red Hat NFS client, the FreeBSD server panicked and rebooted. As soon as it comes back up fully, it panics again and reboots; I had to disable ZFS to stop the endless rebooting. There are about seven snapshots on the file system, but the system panicked simply on 'cd .zfs'.
If you need access to the system, I can arrange that; it is not in production yet.
If you need me to try to create a crash dump or anything else, please let me know. The panic messages aren't reaching syslog. The panic mentions it is in nfsd. If needed, I'll re-enable ZFS and write down as much of the panic message as I can.
The ZFS file system was configured as one pool, one volume (tank/mail).
It uses 24 300GB SAS drives, configured as:
zpool create tank mirror da0 da12 mirror da1 da13 mirror da2 da14 mirror da3 da15 mirror da4 da16 mirror da5 da17 mirror da6 da18 mirror da7 da19 mirror da8 da20 mirror da9 da21 mirror da10 da22 spare da11 da23
da0-da11 (enclosure 0) and da12-da23 (enclosure 1) are IBM EXP3000s behind a 3ware 9690.
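For reference, the mirror/spare layout created by the command above can be double-checked with a couple of standard ZFS commands (a sketch; exact output formatting varies by ZFS version):

```shell
# Show the vdev tree: each "mirror" line should list one drive from each
# enclosure (da0/da12, da1/da13, ...), with da11 and da23 under "spares".
zpool status tank

# Confirm the single dataset and where it is mounted.
zfs list tank/mail
```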
store1# more loader.conf
vm.kmem_size_max="16106127360"
vm.kmem_size="1073741824"
vfs.zfs.cache_flush_disable="1"
kern.maxvnodes=800000
vfs.zfs.prefetch_disable=1
The ZFS exports file:
store1# more exports
# !!! DO NOT EDIT THIS FILE MANUALLY !!!
/var/mail -maproot=root -network 192.168.2.0 -mask 255.255.255.0
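The "DO NOT EDIT" header suggests this file is generated from the ZFS sharenfs property rather than maintained by hand; a sketch of how such an export is normally managed (the option string below simply mirrors the exports line above):

```shell
# Setting sharenfs makes zfs(8) rewrite the exports file that mountd reads,
# which is why the file itself must not be edited manually.
zfs set sharenfs="-maproot=root -network 192.168.2.0 -mask 255.255.255.0" tank/mail

# Verify the property and what the server is actually exporting.
zfs get sharenfs tank/mail
showmount -e localhost
```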
I am using the ULE scheduler.
dumpdev is set to AUTO; however, only minfree exists in /var/crash.
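Since dumpdev=AUTO has not produced a dump, pinning the dump device to a specific swap partition and running savecore(8) explicitly may help capture the panic. A sketch; the swap device name below is an assumption for this system, not taken from the report:

```shell
# Point the kernel at a real swap partition instead of AUTO
# (/dev/da24s1b is hypothetical -- substitute the actual swap device).
dumpon /dev/da24s1b

# After the next panic and reboot, extract the dump into /var/crash.
savecore /var/crash /dev/da24s1b

# Then pull a backtrace from the core with kgdb.
kgdb /boot/kernel/kernel /var/crash/vmcore.0
```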
store1# zdb
tank
    version=6
    name='tank'
    state=0
    txg=391
    pool_guid=9188286166961335303
    hostid=1607525555
    hostname='store1.mail.ena.net'
    vdev_tree
        type='root'
        id=0
        guid=9188286166961335303
        children[0]
            type='mirror'
            id=0
            guid=6940539032091406049
            metaslab_array=27
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=12177063734546800829
                path='/dev/da0'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=17756148780680423243
                path='/dev/da12'
                whole_disk=0
        children[1]
            type='mirror'
            id=1
            guid=15553657878513052422
            metaslab_array=25
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=159312058462001267
                path='/dev/da1'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=8427428225122586042
                path='/dev/da13'
                whole_disk=0
        children[2]
            type='mirror'
            id=2
            guid=8094557295401289097
            metaslab_array=24
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=3973142902767367128
                path='/dev/da2'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=5475429582146651394
                path='/dev/da14'
                whole_disk=0
        children[3]
            type='mirror'
            id=3
            guid=8422371545889157332
            metaslab_array=23
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=7876869405715517022
                path='/dev/da3'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=2311208437246479700
                path='/dev/da15'
                whole_disk=0
        children[4]
            type='mirror'
            id=4
            guid=13043784695933281991
            metaslab_array=22
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=2625736407033884883
                path='/dev/da4'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=12139830734620603195
                path='/dev/da16'
                whole_disk=0
        children[5]
            type='mirror'
            id=5
            guid=8537975538107565110
            metaslab_array=21
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=10811496881972791559
                path='/dev/da5'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=12467851920062622083
                path='/dev/da17'
                whole_disk=0
        children[6]
            type='mirror'
            id=6
            guid=6984776714311523782
            metaslab_array=20
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=17893231162439421521
                path='/dev/da6'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=7007733400839455331
                path='/dev/da18'
                whole_disk=0
        children[7]
            type='mirror'
            id=7
            guid=1900649043355843336
            metaslab_array=19
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=4593823921763348600
                path='/dev/da7'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=3227568170452807619
                path='/dev/da19'
                whole_disk=0
        children[8]
            type='mirror'
            id=8
            guid=4496327987998401292
            metaslab_array=18
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=9797945788221343409
                path='/dev/da8'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=15910496011831212127
                path='/dev/da20'
                whole_disk=0
        children[9]
            type='mirror'
            id=9
            guid=7070511984720364207
            metaslab_array=17
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=9502463708836649545
                path='/dev/da9'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=7131953916078442743
                path='/dev/da21'
                whole_disk=0
        children[10]
            type='mirror'
            id=10
            guid=13441563341814041750
            metaslab_array=15
            metaslab_shift=31
            ashift=9
            asize=294983827456
            children[0]
                type='disk'
                id=0
                guid=3403318897555406651
                path='/dev/da10'
                whole_disk=0
            children[1]
                type='disk'
                id=1
                guid=2334569660784776332
                path='/dev/da22'
                whole_disk=0
store1#
>How-To-Repeat:
With multiple snapshots taken on the file system,
go to an NFS client that has the file system mounted, and
cd .zfs
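The steps above can be sketched as a shell session (host name, mount point, and snapshot names below are illustrative, not from the report):

```shell
# On the FreeBSD server: take a few snapshots of the exported file system.
zfs snapshot tank/mail@snap1
zfs snapshot tank/mail@snap2

# On the Linux NFS client: mount the export and enter the control directory.
mount -t nfs store1.mail.ena.net:/var/mail /mnt/mail
cd /mnt/mail/.zfs    # this cd is what triggers the panic on the server
```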
>Fix:
>Release-Note:
>Audit-Trail:
>Unformatted: