kern/150503: ZFS disks are UNAVAIL and corrupted after reboot

William FRANCK william.franck at oceasys.net
Sun Sep 12 16:30:05 UTC 2010


>Number:         150503
>Category:       kern
>Synopsis:       ZFS disks are UNAVAIL and corrupted after reboot
>Confidential:   no
>Severity:       critical
>Priority:       low
>Responsible:    freebsd-bugs
>State:          open
>Quarter:        
>Keywords:       
>Date-Required:
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Sun Sep 12 16:30:04 UTC 2010
>Closed-Date:
>Last-Modified:
>Originator:     William FRANCK
>Release:        CURRENT 9.0 cvs 2010-09-12
>Organization:
>Environment:
FreeBSD serveur 9.0-CURRENT FreeBSD 9.0-CURRENT #0: Sun Sep 12 11:34:34 CEST 2010     root at serveur:/usr/obj/usr/src/sys/K9NSLI-AMD64  amd64
>Description:
ZFS is fine right after creating the zpool, even before any ZFS filesystems are created.
After rebooting the system, all ZFS disks are marked UNAVAIL.

Tested with different disk formatting:
# dd if=/dev/zero of=/dev/ad4 bs=1m count=1
or 
# gpart create -s gpt ad8
# gpart add -b 34 -s 128 -t freebsd-boot ad8
# gpart add -b 162 -s 1465148973 -t freebsd-zfs ad8

Tested with and without any real data.

After reboot:
# zpool status
pool: tank
state: FAULTED
status: One or more devices could not be used because the label is missing 
	or invalid.  There are insufficient replicas for the pool to continue
	functioning.
action: Destroy and re-create the pool from a backup source.
  see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:
	NAME        STATE     READ WRITE CKSUM
	tank        FAULTED      0     0     0  corrupted data
	  raidz1    ONLINE       0     0     0
	    ad4p2   UNAVAIL      0     0     0  corrupted data
	    ad8p2   UNAVAIL      0     0     0  corrupted data
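
To see what ZFS actually finds on the devices after the reboot, the vdev labels can be dumped (assuming zdb behaves as usual on this build):
# zdb -l /dev/ad4p2
# zdb -l /dev/ad8p2
(or /dev/ad4 and /dev/ad8 when the pool was created on the whole disks)
This should tell whether the four labels are really unreadable, or whether only the cached pool configuration is stale.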

>How-To-Repeat:
# zpool destroy  tank 

CASE A (the same applies to ad4 and ad8):
# gpart create -s gpt ad8
# gpart add -b 34 -s 128 -t freebsd-boot ad8
# gpart show ad8
# gpart add -b 162 -s 1465148973 -t freebsd-zfs ad8
# fdisk -a /dev/ad8

Note: 1465148973 is the exact size of the remaining free space, in sectors, as reported by 'gpart show'.
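
To double-check what was created (a simple sanity check, nothing specific to this report):
# gpart show ad8
# ls /dev/ad8*
Both ad8p1 (freebsd-boot) and ad8p2 (freebsd-zfs) should show up.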

or CASE B:
# dd if=/dev/zero of=/dev/ad4 bs=1m count=1
# dd if=/dev/zero of=/dev/ad8 bs=1m count=1


THEN, IN BOTH CASES (for CASE A, use the freebsd-zfs partitions ad4p2 and ad8p2, as in the 'zpool status' output above):
# zpool create tank raidz ad4 ad8
# zfs create -p tank/ROOT/freebsd
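
Before rebooting, the pool looks healthy (see Description above); this can be confirmed, using the same tools, with:
# zpool status tank
# zdb -l /dev/ad4        (ad4p2 / ad8p2 for CASE A)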

REBOOT
# shutdown -r now
(restarted... login...)
# zpool status
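
A possibly useful extra data point (just a suggestion, not tested here): before rebooting, export and re-import the pool, to see whether the labels already fail to be found without a full reboot:
# zpool export tank
# zpool import
# zpool import tank
If the pool imports cleanly this way but only comes up FAULTED after a reboot, the on-disk labels are probably intact and the problem is more likely in how the devices are probed at boot time.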


>Fix:


>Release-Note:
>Audit-Trail:
>Unformatted:

