zpool vdev vs. glabel
Marius Nünnerich
marius at nuenneri.ch
Wed Feb 10 09:19:10 UTC 2010
2010/2/10 Gerrit Kühn <gerrit at pmp.uni-hannover.de>:
> On Tue, 9 Feb 2010 13:27:21 -0700 Elliot Finley <efinley.lists at gmail.com>
> wrote about Re: zpool vdev vs. glabel:
>
> EF> I ran into this same problem. You need to clean the beginning and end
> EF> of your disk before glabeling it and adding it to your pool. Clean
> EF> it with dd if=/dev/zero...
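The wipe Elliot describes could look like the sketch below: zeroing the first and last megabyte of the disk, since ZFS keeps backup labels at the end as well as the front. The device name /dev/da0 and the 1 MiB size are illustrative only, and these commands destroy on-disk metadata, so double-check the target device:

```shell
# Placeholder device -- substitute the disk you actually mean to wipe.
disk=/dev/da0

# Zero the first 1 MiB (front ZFS/GEOM metadata).
dd if=/dev/zero of=$disk bs=1m count=1

# diskinfo prints the media size in bytes as the third field;
# seek to the last 1 MiB and zero it too (backup ZFS labels).
disksize=$(diskinfo $disk | awk '{print $3}')
dd if=/dev/zero of=$disk bs=1m oseek=$(( disksize / 1048576 - 1 ))
```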
>
> Hm, I think I did that (at least for the beginning part).
> Maybe I was not quite clear about what I did below: I removed and
> re-attached the *same* disk, which was labelled with glabel and running
> fine before. The label was there when I inserted it back, but zfs went
> for the da device node anyway.
> If I see this problem again, I will try to wipe the complete disk before
> re-inserting it.
It seems there is some kind of race condition, with zfs picking up
either the disk itself or the label device for the same disk; I guess
it takes whichever it probes first. I wrote the GPT part of glabel for
use in situations like this, and I have not had a single report of this
kind of problem with GPT labels. Maybe you can try them too?
My zpool looks like this:
% zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            gpt/wd1  ONLINE       0     0     0
            gpt/wd2  ONLINE       0     0     0
            gpt/wd3  ONLINE       0     0     0
            gpt/wd4  ONLINE       0     0     0
            gpt/wd5  ONLINE       0     0     0

errors: No known data errors
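The gpt/wdN nodes in the listing above come from GPT partition labels, which are set with gpart rather than glabel. A sketch of how such a label might be created; the device da1 and the label name wd1 are examples, not taken from the thread:

```shell
# Create a GPT partition table on the disk (placeholder device da1).
gpart create -s gpt da1

# Add a single freebsd-zfs partition carrying the GPT label "wd1".
gpart add -t freebsd-zfs -l wd1 da1

# The labelled partition appears as /dev/gpt/wd1 and can be handed to zpool,
# e.g.: zpool create tank raidz2 gpt/wd1 gpt/wd2 gpt/wd3 gpt/wd4 gpt/wd5
```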
I already physically reordered the devices a few times and it always
worked out correctly.
More information about the freebsd-stable mailing list