GEOM corrupt or invalid GPT detected on ZFS raidz on FreeBSD 8.0 amd64
Derrick Ryalls
ryallsd at gmail.com
Fri Jan 8 15:56:26 UTC 2010
Greetings,
After not receiving the daily system mails for a while and then suddenly
getting them again, I took a closer look and noticed these messages
appear after each boot:
+GEOM: ad4: corrupt or invalid GPT detected.
+GEOM: ad4: GPT rejected -- may not be recoverable.
+GEOM: label/disk1: corrupt or invalid GPT detected.
+GEOM: label/disk1: GPT rejected -- may not be recoverable.
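For reference, one way to see what GEOM is objecting to is to look at the disk's partition state directly (a sketch, using the ad4 device name from the messages above; output omitted since it depends on the disk's actual contents):

```shell
# Show any partition table GEOM can still parse on the raw disk
# (may print nothing if the GPT was rejected outright):
gpart show ad4

# Dump LBA 1, where the primary GPT header lives; an "EFI PART"
# signature here indicates leftover GPT structures on the disk:
dd if=/dev/ad4 bs=512 skip=1 count=1 | hexdump -C
```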
label/disk1 should be the same device as ad4, which is part of a
four-disk raidz. When I check the status of my pools, everything is
reported as fine:
[root@frodo ~]# zpool status
pool: backup
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
backup ONLINE 0 0 0
label/backup ONLINE 0 0 0
errors: No known data errors
pool: storage
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
storage ONLINE 0 0 0
raidz1 ONLINE 0 0 0
label/disk1 ONLINE 0 0 0
label/disk2 ONLINE 0 0 0
label/disk3 ONLINE 0 0 0
label/disk4 ONLINE 0 0 0
Checking the history of the logs, it looks like this started occurring
after I did a disk replacement test for ZFS. Going from memory, I
performed the following steps:
* Took the disk offline
* Powered down the system
* Replaced the physical disk
* Powered up the system
* Used glabel to label the new disk with the same name as old disk
* Told ZFS to replace the disk
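The steps above would have looked roughly like this (a sketch from memory; the pool name, label name, and ad4 device node match my setup, but the exact invocations are reconstructed, not copied from history):

```shell
# Offline the member being replaced so ZFS stops using it:
zpool offline storage label/disk1

# ...power down, swap the physical disk, power back up...

# Label the new disk with the same glabel name the pool expects
# (ad4 being the device node the new disk appeared as):
glabel label disk1 ad4

# Tell ZFS to resilver onto the relabeled device:
zpool replace storage label/disk1
```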
The operation appeared to be a success: the drive resilvered and the
pool is listed as online. Following advice in this thread
<http://forums.freebsd.org/showthread.php?t=8920&page=3> I tried:
[root@frodo ~]# zdb -l /dev/ad4
--------------------------------------------
LABEL 0
--------------------------------------------
version=13
name='storage'
state=0
txg=509115
pool_guid=3832644769924830246
hostid=400837641
hostname='myhost'
top_guid=7378337929137643727
guid=8898281456854820018
vdev_tree
type='raidz'
id=0
guid=7378337929137643727
nparity=1
metaslab_array=23
metaslab_shift=36
ashift=9
asize=8001576501248
is_log=0
children[0]
type='disk'
id=0
guid=8898281456854820018
path='/dev/label/disk1'
whole_disk=0
DTL=122
children[1]
type='disk'
id=1
guid=13535100006608832566
path='/dev/label/disk2'
whole_disk=0
DTL=126
children[2]
type='disk'
id=2
guid=2985688821708093695
path='/dev/label/disk3'
whole_disk=0
DTL=125
children[3]
type='disk'
id=3
guid=16498259053924061255
path='/dev/label/disk4'
whole_disk=0
DTL=124
--------------------------------------------
LABEL 1
--------------------------------------------
version=13
name='storage'
state=0
txg=509115
pool_guid=3832644769924830246
hostid=400837641
hostname='myhost'
top_guid=7378337929137643727
guid=8898281456854820018
vdev_tree
type='raidz'
id=0
guid=7378337929137643727
nparity=1
metaslab_array=23
metaslab_shift=36
ashift=9
asize=8001576501248
is_log=0
children[0]
type='disk'
id=0
guid=8898281456854820018
path='/dev/label/disk1'
whole_disk=0
DTL=122
children[1]
type='disk'
id=1
guid=13535100006608832566
path='/dev/label/disk2'
whole_disk=0
DTL=126
children[2]
type='disk'
id=2
guid=2985688821708093695
path='/dev/label/disk3'
whole_disk=0
DTL=125
children[3]
type='disk'
id=3
guid=16498259053924061255
path='/dev/label/disk4'
whole_disk=0
DTL=124
--------------------------------------------
LABEL 2
--------------------------------------------
version=13
name='storage'
state=0
txg=509115
pool_guid=3832644769924830246
hostid=400837641
hostname='myhost'
top_guid=7378337929137643727
guid=8898281456854820018
vdev_tree
type='raidz'
id=0
guid=7378337929137643727
nparity=1
metaslab_array=23
metaslab_shift=36
ashift=9
asize=8001576501248
is_log=0
children[0]
type='disk'
id=0
guid=8898281456854820018
path='/dev/label/disk1'
whole_disk=0
DTL=122
children[1]
type='disk'
id=1
guid=13535100006608832566
path='/dev/label/disk2'
whole_disk=0
DTL=126
children[2]
type='disk'
id=2
guid=2985688821708093695
path='/dev/label/disk3'
whole_disk=0
DTL=125
children[3]
type='disk'
id=3
guid=16498259053924061255
path='/dev/label/disk4'
whole_disk=0
DTL=124
--------------------------------------------
LABEL 3
--------------------------------------------
version=13
name='storage'
state=0
txg=509115
pool_guid=3832644769924830246
hostid=400837641
hostname='myhost'
top_guid=7378337929137643727
guid=8898281456854820018
vdev_tree
type='raidz'
id=0
guid=7378337929137643727
nparity=1
metaslab_array=23
metaslab_shift=36
ashift=9
asize=8001576501248
is_log=0
children[0]
type='disk'
id=0
guid=8898281456854820018
path='/dev/label/disk1'
whole_disk=0
DTL=122
children[1]
type='disk'
id=1
guid=13535100006608832566
path='/dev/label/disk2'
whole_disk=0
DTL=126
children[2]
type='disk'
id=2
guid=2985688821708093695
path='/dev/label/disk3'
whole_disk=0
DTL=125
children[3]
type='disk'
id=3
guid=16498259053924061255
path='/dev/label/disk4'
whole_disk=0
DTL=124
Since my output differs from that in the linked thread above, I did not
follow the steps listed there.
[root@frodo ~]# uname -a
FreeBSD myhost 8.0-RELEASE-p1 FreeBSD 8.0-RELEASE-p1 #0: Sun Dec 6
11:23:52 PST 2009 ryallsd@myhost:/usr/obj/usr/src/sys/FRODO amd64
[root@frodo ~]# kldstat
Id Refs Address Size Name
1 15 0xffffffff80100000 d17da8 kernel
2 1 0xffffffff81022000 f2a99 zfs.ko
3 1 0xffffffff81115000 199e opensolaris.ko
4 1 0xffffffff81117000 a3a0 geom_eli.ko
5 1 0xffffffff81122000 1ab9a crypto.ko
6 1 0xffffffff8113d000 a49e zlib.ko
Did I somehow glabel incorrectly or something? Is there any way to fix
this? It only seems to be an issue at boot, but I don't want to find
out my data is at risk when it is too late.
TIA
Derrick