ZFS woes

Michael Powell nightrecon at hotmail.com
Tue Aug 10 21:21:14 UTC 2010


Graeme Dargie wrote:

> -----Original Message-----
> From: Dick Hoogendijk [mailto:dick at nagual.nl]
> Sent: 10 August 2010 21:10
> To: FreeBSD Questions
> Subject: ZFS woes
> 
>   FreeBSD-8.1/amd64 -> I spent all evening trying to create a ZFS mirror
> on my two 1 TB SATA2 drives formerly used under OpenSolaris (zfs22). I
> wiped out the first MB; I used sysinstall to create a FreeBSD slice; wiped
> it out again; booted Knoppix to create an EFI/GPT label; booted into
> OpenSolaris and created a zpool (v14), but nothing, nothing did the
> trick.
> Sometimes the GEOM GPT table (first / second) was bad; sometimes I saw
> other warnings; sometimes I *seemed* to be able to create a ZFS mirror
> and it *seemed* healthy. I could even write to it, but the moment I
> wanted to do a "zpool scrub tank" the system froze or gave me warnings
> like ZFS: vdev failure, zpool=tank type=vdev.bad.label

This 'vdev' reference nudges some dim recall of something like this 
discussed either on -current or -stable quite a while back. Didn't pay it 
any real attention because it didn't pertain to me, so I promptly forgot. 
Might be worth searching the lists for 'vdev' and ZFS.
 
> Whatever I did, I could not get rid of the errors and create a healthy
> zpool. It really drives me crazy, so if anyone can tell me HOW I can
> turn two drives into a state that I can use them for ZFS under FreeBSD,
> please tell me *in detail*.
> 
> I love to have ZFS back (I'm really used to it on opensolaris), but it
> has to be safe. It cannot be that one zpool scrub halts my system. I
> must have done something wrong then. But what?
> _______________________________________________
[snip] 
> 
> I could be oversimplifying what you are trying to do, but seeing as you
> did not mention it: what was wrong with FreeBSD and "zpool create tank
> mirror device1 device2"?
> 
> If you are getting warnings about the drives being part of a previous
> pool and you are not fussed about the data on the drives, try using the
> manufacturer's diagnostics to do a low-level format, then create your pool.
> 
> Regards
> 
> Graeme
> 
[snip]

GEOM stores its metadata in the last sector of the drive, so the old trick 
of wiping the MBR or just the front part of the drive may not be enough. 
You'd think that once the partition table was gone this sector would no 
longer matter, but it does. 
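Something along these lines should clear it (just a sketch, assuming the 
drive shows up as ad4 with 512-byte sectors -- adjust the device name to 
suit; diskinfo's fourth field is the sector count):

# assuming the disk is ad4 -- zero the very last sector
sysctl kern.geom.debugflags=16
dd if=/dev/zero of=/dev/ad4 bs=512 count=1 \
   oseek=$(( $(diskinfo /dev/ad4 | awk '{print $4}') - 1 ))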

The so-called "low-level" format for IDE/SATA drives isn't really a 
low-level format like you get with a SCSI drive and controller. It just 
writes zeros from one end of the drive to the other, and you can achieve 
the same result with dd.  
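For example (again just a sketch, assuming the drive is ad4; expect this 
to take hours on a 1 TB drive):

# assuming the disk is ad4 -- zero the whole drive
dd if=/dev/zero of=/dev/ad4 bs=1m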

The GENERIC kernel options GEOM_PART_GPT and GEOM_LABEL, if still present 
in your kernel, may be "tasting" that metadata sector if it is still around 
on the drive.
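You can get a rough idea of what GEOM is still picking up on the disk 
(once more assuming ad4) with:

gpart show ad4
glabel status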

I also had another experience a while back. A drive died and the spare I 
pulled from the shelf had 6.2 on it. The 8-RELEASE install would fail, 
something to do with the partition table and/or labels from the earlier 
install being invisible to the new one, so they could not be overwritten. 
This is what I had to do to install 8:

Boot a LiveFS CD, then at a root prompt do: 

sysctl kern.geom.debugflags=16
dd if=/dev/zero of=/dev/adX oseek=1 bs=512 count=1 

where X is your drive number. The sysctl lets you write to the raw disk 
device even though GEOM would normally refuse, and the dd zeroes the 
sector right after the MBR (oseek=1 skips the first 512-byte sector), 
which is where an old BSD label can live. Probably should only do this 
before a fresh install and NOT on a system with data you want to keep.

Doing a dd of zeros over the entire drive(s) will either make the problem 
go away or confirm it to be something else, i.e. not caused by any 
residual data present on the drive.
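If you don't want to wait for a full pass over 1 TB, it may be enough to 
zero just the first and last few megabytes, since ZFS keeps copies of its 
vdev labels at both ends of the device (once again only a sketch, assuming 
ad4; the second dd will stop with an error when it runs off the end of the 
disk, which is expected):

# assuming the disk is ad4 -- zero the first 4 MB
dd if=/dev/zero of=/dev/ad4 bs=1m count=4
# then zero from roughly 4 MB before the end to the end
# (diskinfo's third field is the size in bytes)
dd if=/dev/zero of=/dev/ad4 bs=1m \
   oseek=$(( $(diskinfo /dev/ad4 | awk '{print $3}') / 1048576 - 4 ))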

-Mike
