Reliably trigger-able ZFS panic
Quake Lee
quakelee at geekcn.org
Tue Mar 4 02:20:33 UTC 2008
On Tue, 04 Mar 2008 03:27:35 +0800, Xin LI <delphij at delphij.net> wrote:
The kernel is:
FreeBSD fs12.sina.com.cn 7.0-STABLE FreeBSD 7.0-STABLE #0: Sun Mar 2
18:50:05 CST 2008 delphij at fs12.sina.com.cn:/usr/obj/usr/src/sys/ZFORK
amd64
The output of "zfs get all" is below:
fs12# zfs get all
NAME     PROPERTY       VALUE                  SOURCE
midpool  type           filesystem             -
midpool  creation       Fri Feb 29 15:01 2008  -
midpool  used           11.1M                  -
midpool  available      2.65T                  -
midpool  referenced     44.7K                  -
midpool  compressratio  1.00x                  -
midpool  mounted        yes                    -
midpool  quota          none                   default
midpool  reservation    none                   default
midpool  recordsize     128K                   default
midpool  mountpoint     /mnt/ztest             local
midpool  sharenfs       off                    default
midpool  checksum       on                     default
midpool  compression    off                    default
midpool  atime          on                     default
midpool  devices        on                     default
midpool  exec           on                     default
midpool  setuid         on                     default
midpool  readonly       off                    default
midpool  jailed         off                    default
midpool  snapdir        hidden                 default
midpool  aclmode        groupmask              default
midpool  aclinherit     secure                 default
midpool  canmount       on                     default
midpool  shareiscsi     off                    default
midpool  xattr          off                    temporary
midpool  copies         1                      default
fs12# zpool get all midpool
NAME     PROPERTY  VALUE  SOURCE
midpool  bootfs    -      default
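
For reference, a minimal sketch of how a similar test pool could be set up; the device names and disk count below are assumptions, since the thread only states RAID-Z2 with no hot spare and the mountpoint /mnt/ztest:

    # Sketch only: da0..da5 are assumed device names; the report does not
    # list the actual disks, only RAID-Z2 with no hot spare.
    zpool create midpool raidz2 da0 da1 da2 da3 da4 da5
    zfs set mountpoint=/mnt/ztest midpool

    # The iozone invocation quoted in the original report:
    /usr/local/bin/iozone -M -e -+u -T -t 128 -S 4096 -L 64 -R -r 4k -s 30g \
        -i 0 -i 1 -i 2 -i 8 -+p 70 -C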
> Pawel Jakub Dawidek wrote:
>> On Sun, Mar 02, 2008 at 03:49:03AM -0800, LI Xin wrote:
>>> Hi,
>>>
>>> The following iozone test case on ZFS would reliably trigger panic:
>>>
>>> /usr/local/bin/iozone -M -e -+u -T -t 128 -S 4096 -L 64 -R -r 4k -s 30g
>>> -i 0 -i 1 -i 2 -i 8 -+p 70 -C
>>
>> Thanks, I'll try to reproduce it.
>>
>> [...]
>>
>>> #19 0x000000000000b55d in z_deflateInit2_ (strm=0xffffff00042dc8e0,
>>> level=70109184, method=68351768,
>>> windowBits=68351600, memLevel=76231808, strategy=76231808,
>>> version=Cannot access memory at address 0xffffffff00040010
>>> )
>>> at
>>> /usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/zmod/deflate.c:318
>>
>> Can you send me your FS configuration? zfs get all your/file/system
>> I see that you use compression on this dataset?
>
> It was all default configuration. The pool was a RAID-Z2 without a
> hotspare disk. The box is now running some other tests (not FreeBSD) at
> our Beijing Lab and we don't have remote hands at night, so I'm
> afraid that I will not be able to provide further information at the
> moment. Please let me know if the test run does not provoke the problem
> and I will ask them to see if they can spare the box over the weekend
> for me.
>
> Cheers,
--
The Power to Serve