ZFS not usable on FreeBSD-8.1

George Hartzell hartzell at alerce.com
Sat Aug 14 19:47:42 UTC 2010


Dick Hoogendijk writes:
 >   I run FreeBSD-8.1/amd64. I have used ZFS for four years on 
 > (Open)Solaris, so I have some experience with it. It always served me 
 > very, very well. However, I cannot get it to work on my SATA2 drives. At 
 > first I thought they'd suffer from something left over from their 
 > OpenSolaris ZFS mirroring. So, I wiped the drives completely by writing 
 > zeros to them.
 > Then I created a ZFS zpool on one drive, destroyed it, and created a 
 > mirrored zpool on my 2 TB drives. It seemed OK; files could be written 
 > to and removed from it. A new ZFS filesystem worked OK too. *HOWEVER*, 
 > the moment I *do* something to the zpool like "zpool scrub pool" I get a 
 > vdev failure (type=vdev.bad_label) and the pool is ruined. It can't be 
 > destroyed or exported anymore. It's just a waste. I tested this 
 > behaviour on 10 different drives, four of them brand new. It happened 
 > every time.
 > 
 > It is not the drives! Booting into OpenSolaris b134 I am perfectly able 
 > to create workable ZFS mirrors out of the drives. I can also scrub them 
 > ;-) ;-) or whatever io related thing I want to do.
 > 
 > This leads me to the conclusion that something is definitely wrong with 
 > ZFS in FreeBSD-8.1/amd64.
 > For the moment I have created some gmirrors on a couple of drives, but 
 > man, how I'd like to have zpools. They are so much sweeter/easier to 
 > work with.
 > 
 > Am I alone in these matters? Are there any known issues regarding ZFS? I 
 > know there are some in FreeBSD-9 (at least I saw some reports of 
 > vdev.bad_label messages on nabble.com).

You haven't provided enough information for me to make a concrete
suggestion, but this kind of thing often seems to boil down to
something getting confused over slices and partitions when they both
have the same extent (start->end) on disk.  This used to bite me in
the gmirror world until I learned to make the partition one block
smaller than the slice it lived in.
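
If you want to check for that, something like the following (the device
names ada0/ada0s1 are just examples for an MBR-style layout) will show
whether the slice and the partition inside it share the same extent:

   gpart show ada0      # slices on the disk
   gpart show ada0s1    # partitions inside the first slice

If they do, recreating the inner partition with an explicit size one
sector smaller than the slice leaves room at the end for metadata.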

Are you using explicit device names to add the disks to your pool?  If
so, you'll gain robustness by using labels: either glabel(8) labels, as
described here:

   http://submesa.com/data/bsd/zfs

or, if you're in the GPT world, GPT labels as created by the gpart
commands illustrated here:

   http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror
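
In case it helps, the two setups look roughly like this (pool, label,
and device names are only placeholders, and the gpart lines assume
blank disks):

   # glabel: tag each whole disk, then build the pool on the labels
   glabel label -v disk0 /dev/ada0
   glabel label -v disk1 /dev/ada1
   zpool create tank mirror label/disk0 label/disk1

   # GPT: create a labeled freebsd-zfs partition on each disk
   gpart create -s gpt ada0
   gpart add -t freebsd-zfs -l disk0 ada0
   gpart create -s gpt ada1
   gpart add -t freebsd-zfs -l disk1 ada1
   zpool create tank mirror gpt/disk0 gpt/disk1

Note that glabel stores its metadata in the last sector of the
provider, which is one more reason not to let anything else run all the
way to the end of the disk.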

g.
