Corrupt GPT on ZFS full-disks that shouldn't be using GPT

Chris Stankevitz chrisstankevitz at gmail.com
Sun Jun 28 21:12:11 UTC 2015


On Sat, Jun 27, 2015 at 11:52 PM, Quartz <quartz at sneakertech.com> wrote:
> First off, you should double check what's going on with your layout.


Thank you for your help.  I have four 11-drive raidz3 pools that, in a
prior life, lived in FreeNAS.  Of course, being in FreeNAS, they were
gpart-ed to have two partitions (one for zfs, one for swap).

I took these four groups of 11 drives over to my FreeBSD box.  For
each group of 11 drives I ran:
gpart destroy -F /dev/da0
gpart destroy -F /dev/da1
...
gpart destroy -F /dev/da10
zpool create poolname raidz3 /dev/da0 /dev/da1 ... /dev/da10
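
In case it's useful to anyone else, the destroy step can be looped
rather than typed out per drive.  A rough, untested sketch, assuming
the drives of a group really are da0 through da10:

# wipe any existing partition table on all 11 drives of one group
for i in $(jot 11 0); do
    gpart destroy -F /dev/da$i
done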

Unfortunately, on one of the 11-drive groups I forgot to perform the
"gpart destroy" step.  I did perform the "zpool create" step.  This is the
group of drives that triggers the dmesg "the primary GPT table is
corrupt or invalid" and "using the secondary instead -- recovery
strongly advised."
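
For anyone wanting to double-check their own disks, gpart will show
which ones GEOM still believes carry a partition table:

gpart show        # every disk GEOM currently sees a partition table on
gpart list da0    # per-disk detail, e.g. the first drive of that group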

>> I suppose I could offline and
>> resilver each of them.
>
>
> Simply resilvering is not guaranteed to fix the problem

I agree.  What I meant to say was "offline the drive, dd if=/dev/zero
over the whole drive, then resilver it."
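
Roughly what I have in mind per drive, sketched from the zpool man
page (untested; "poolname" and "da0" stand in for the real pool and
disk names):

zpool offline poolname da0
# wipe everything on the disk, including the stale GPT remnants
dd if=/dev/zero of=/dev/da0 bs=1m
# treat the blanked disk as a replacement for itself and resilver
zpool replace poolname da0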

>> I'm afraid to dd the secondary GPT header at
>> the last 512 bytes of the drive.  Perhaps there is a way I can ask ZFS
>> to do that for me?
>
>
> Zfs doesn't mess with gpt directly like that, so no. If you don't want to

What I meant here was: perhaps I can politely ask ZFS "hey, if you
are not using the last 512 bytes of these devices, would you mind
filling them with zeros?"  I would feel more comfortable if ZFS
offered a command like that, rather than me just running dd and
hoping it doesn't interfere with ZFS.
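
If I do end up risking the dd route, the sketch below is the sort of
thing I'd try -- completely untested, and exactly the step I'd rather
have ZFS do for me, since I believe ZFS keeps copies of its own vdev
labels near the end of the disk:

# UNTESTED: zero only the last sector, where the backup GPT header
# lives.  diskinfo's 2nd and 4th fields are the sector size and the
# sector count.  GEOM may refuse writes to a disk that is in use; I
# have read that "sysctl kern.geom.debugflags=0x10" lifts that
# protection, at your own risk.
secsize=$(diskinfo da0 | awk '{print $2}')
sectors=$(diskinfo da0 | awk '{print $4}')
dd if=/dev/zero of=/dev/da0 bs=$secsize seek=$((sectors - 1)) count=1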

Chris

