Ghost ZFS pool prevents mounting root fs
Benjamin Lutz
benjamin.lutz at biolab.ch
Fri Nov 8 10:59:11 UTC 2013
Hello,
I have a server here that, after a reboot during the 9.2 update
process, refuses to mount the root file system, which is on a ZFS pool (tank).
The error message given is:
Trying to mount root from zfs:tank []...
Mounting from zfs:tank failed with error 5.
Adding a bit more verbosity by setting vfs.zfs.debug=1 yields one
additional crucial piece of information that probably explains why: the
kernel tries to find the disk /dev/label/disk7, but no such disk exists.
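For reference, this is how I turned on that extra verbosity (at the loader prompt for a single boot, or persistently in loader.conf):

```shell
# At the boot loader prompt, for one boot only:
set vfs.zfs.debug=1
boot

# Or persistently, by adding this line to /boot/loader.conf:
# vfs.zfs.debug="1"
```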
When I first set up the server, I used glabel to label the disks disk0,
disk1, ..., disk7, disk8 (so the full path ends up being /dev/label/disk0)
and created a RAIDZ pool with those. Then I realized that I needed boot
partitions, so I destroyed the pool and the labels and instead used gpart
to set up two partitions on every disk: one for the boot loader and one
for ZFS. Since GPT allows you to label partitions, glabel is no longer
necessary or used; instead the disks are called
/dev/gpt/disk00, ..., /dev/gpt/disk11 (yes, I added three more at that
point). The machine has worked fine with that configuration for a bit more
than a year and has lived through a couple of system updates.
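Roughly what I ran per disk back then, from memory (shown for the first disk only; the device name and sizes here are illustrative):

```shell
# Per disk: GPT scheme, a small freebsd-boot partition for gptzfsboot,
# and the rest as freebsd-zfs with a GPT label.
gpart create -s gpt da0
gpart add -t freebsd-boot -s 512k da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
gpart add -t freebsd-zfs -l disk00 da0
```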
Imagine my surprise, then, when the old disk names suddenly showed up like
the ghost of Christmas past.
Now, I can boot off the 9.2 installer USB stick and import the pool just
fine. Using that, I've reinstalled the kernel and base system, but the
problem persists.
I did some poking around with zdb, and strangely enough, zdb finds *two*
pools both called tank, one which references the old disk names, and one
which references the new ones. Please find zdb's full output below.
Can you tell me how to resolve the situation, i.e. how to make the ghost
pool go away? I'd rather not recreate the pool or move the data to another
system, since it's around 16TB and would take forever.
I've had a couple of ideas myself:
- Move the root fs back to a UFS partition, since using zpool(8)/zfs(8) to
manipulate the pool, instead of the kernel's mountroot code, seems to work
just fine.
- Since zpool(8) only sees the new, proper pool, renaming it might work.
But ideally there's a way to exorcize the ghost instead of just ignoring
it.
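My guess at an explanation, for what it's worth: ZFS keeps four copies of its vdev label, two near the start of the device and two in the last 512 KiB, so when I repartitioned, the old pool's end-of-disk labels may simply have survived outside the new partitions. If that's right, a read-only check like this (device names are from my current setup and assumed correct) should show which whole-disk devices still carry labels with the old pool_guid from the zdb output below:

```shell
# Read-only: dump labels from each whole-disk device and look for the
# old pool's guid (4570073208211798611) or unpack failures.
for n in $(seq 0 11); do
  echo "== /dev/da$n =="
  zdb -l /dev/da$n | egrep 'pool_guid|failed to unpack'
done
```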
Cheers,
Benjamin
# zdb -e tank
tank
vdev_children: 1
version: 28
pool_guid: 4570073208211798611
name: 'tank'
state: 2
hostid: 1638041647
hostname: 'blackhole'
vdev_tree:
type: 'root'
id: 0
guid: 4570073208211798611
children[0]:
type: 'raidz'
id: 0
guid: 5554077360160676751
nparity: 3
metaslab_array: 30
metaslab_shift: 37
ashift: 12
asize: 16003153002496
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 7103686668495146668
phys_path: '/dev/label/disk0'
whole_disk: 1
create_txg: 4
path: '/dev/da3'
children[1]:
type: 'disk'
id: 1
guid: 11488943812765429059
phys_path: '/dev/label/disk1'
whole_disk: 1
create_txg: 4
path: '/dev/da1'
children[2]:
type: 'disk'
id: 2
guid: 2240980772490601588
phys_path: '/dev/label/disk2'
whole_disk: 1
create_txg: 4
path: '/dev/da2'
children[3]:
type: 'disk'
id: 3
guid: 7712444707588256364
phys_path: '/dev/label/disk3'
whole_disk: 1
create_txg: 4
path: '/dev/da6'
children[4]:
type: 'disk'
id: 4
guid: 7829288003258469012
phys_path: '/dev/label/disk4'
whole_disk: 1
create_txg: 4
path: '/dev/da5'
children[5]:
type: 'disk'
id: 5
guid: 9120531484255382572
phys_path: '/dev/label/disk5'
whole_disk: 1
create_txg: 4
path: '/dev/da4'
children[6]:
type: 'disk'
id: 6
guid: 7514906893097480706
phys_path: '/dev/label/disk6'
whole_disk: 1
create_txg: 4
path: '/dev/da7'
children[7]:
type: 'disk'
id: 7
guid: 4415230843798627292
path: '/dev/label/disk7'
phys_path: '/dev/label/disk7'
whole_disk: 1
create_txg: 4
tank
vdev_children: 1
version: 28
pool_guid: 4271606601895493352
name: 'tank'
state: 0
vdev_tree:
type: 'root'
id: 0
guid: 4271606601895493352
children[0]:
type: 'raidz'
id: 0
guid: 5259196639191116590
nparity: 3
metaslab_array: 30
metaslab_shift: 36
ashift: 12
asize: 23746817556480
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2163203387272462113
phys_path: '/dev/gpt/disk00'
whole_disk: 1
DTL: 35
create_txg: 4
path: '/dev/gpt/disk00'
children[1]:
type: 'disk'
id: 1
guid: 1705985029979435838
phys_path: '/dev/gpt/disk01'
whole_disk: 1
DTL: 47
create_txg: 4
path: '/dev/gpt/disk01'
children[2]:
type: 'disk'
id: 2
guid: 1954596596797161476
phys_path: '/dev/gpt/disk02'
whole_disk: 1
DTL: 46
create_txg: 4
path: '/dev/gpt/disk02'
children[3]:
type: 'disk'
id: 3
guid: 2938549304351001523
phys_path: '/dev/gpt/disk03'
whole_disk: 1
DTL: 45
create_txg: 4
path: '/dev/gpt/disk03'
children[4]:
type: 'disk'
id: 4
guid: 5829232400893566256
phys_path: '/dev/gpt/disk04'
whole_disk: 1
DTL: 44
create_txg: 4
path: '/dev/gpt/disk04'
children[5]:
type: 'disk'
id: 5
guid: 33801690562498165
phys_path: '/dev/gpt/disk05'
whole_disk: 1
DTL: 43
create_txg: 4
path: '/dev/gpt/disk05'
children[6]:
type: 'disk'
id: 6
guid: 8909758428297665941
phys_path: '/dev/gpt/disk06'
whole_disk: 1
DTL: 42
create_txg: 4
path: '/dev/gpt/disk06'
children[7]:
type: 'disk'
id: 7
guid: 8327488498400702594
path: '/dev/gpt/disk07'
phys_path: '/dev/gpt/disk07'
whole_disk: 1
DTL: 562740
create_txg: 4
offline: 1
children[8]:
type: 'disk'
id: 8
guid: 3781829895852494854
phys_path: '/dev/gpt/disk08'
whole_disk: 1
DTL: 40
create_txg: 4
path: '/dev/gpt/disk08'
children[9]:
type: 'disk'
id: 9
guid: 12478617399065078660
phys_path: '/dev/gpt/disk09'
whole_disk: 1
DTL: 37
create_txg: 4
path: '/dev/gpt/disk09'
children[10]:
type: 'disk'
id: 10
guid: 10513667545487916410
phys_path: '/dev/gpt/disk10'
whole_disk: 1
DTL: 66
create_txg: 4
path: '/dev/gpt/disk10'
children[11]:
type: 'disk'
id: 11
guid: 16680570826821909684
phys_path: '/dev/gpt/disk11'
whole_disk: 1
DTL: 109
create_txg: 4
path: '/dev/gpt/disk11'
root@:~ # zdb -l /dev/da1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
version: 28
name: 'tank'
state: 2
txg: 61
pool_guid: 4570073208211798611
hostid: 1638041647
hostname: 'blackhole'
top_guid: 5554077360160676751
guid: 11488943812765429059
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 5554077360160676751
nparity: 3
metaslab_array: 30
metaslab_shift: 37
ashift: 12
asize: 16003153002496
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 7103686668495146668
path: '/dev/label/disk0'
phys_path: '/dev/label/disk0'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 11488943812765429059
path: '/dev/label/disk1'
phys_path: '/dev/label/disk1'
whole_disk: 1
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 2240980772490601588
path: '/dev/label/disk2'
phys_path: '/dev/label/disk2'
whole_disk: 1
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 7712444707588256364
path: '/dev/label/disk3'
phys_path: '/dev/label/disk3'
whole_disk: 1
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 7829288003258469012
path: '/dev/label/disk4'
phys_path: '/dev/label/disk4'
whole_disk: 1
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 9120531484255382572
path: '/dev/label/disk5'
phys_path: '/dev/label/disk5'
whole_disk: 1
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 7514906893097480706
path: '/dev/label/disk6'
phys_path: '/dev/label/disk6'
whole_disk: 1
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 4415230843798627292
path: '/dev/label/disk7'
phys_path: '/dev/label/disk7'
whole_disk: 1
create_txg: 4
--------------------------------------------
LABEL 3
--------------------------------------------
version: 28
name: 'tank'
state: 2
txg: 61
pool_guid: 4570073208211798611
hostid: 1638041647
hostname: 'blackhole'
top_guid: 5554077360160676751
guid: 11488943812765429059
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 5554077360160676751
nparity: 3
metaslab_array: 30
metaslab_shift: 37
ashift: 12
asize: 16003153002496
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 7103686668495146668
path: '/dev/label/disk0'
phys_path: '/dev/label/disk0'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 11488943812765429059
path: '/dev/label/disk1'
phys_path: '/dev/label/disk1'
whole_disk: 1
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 2240980772490601588
path: '/dev/label/disk2'
phys_path: '/dev/label/disk2'
whole_disk: 1
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 7712444707588256364
path: '/dev/label/disk3'
phys_path: '/dev/label/disk3'
whole_disk: 1
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 7829288003258469012
path: '/dev/label/disk4'
phys_path: '/dev/label/disk4'
whole_disk: 1
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 9120531484255382572
path: '/dev/label/disk5'
phys_path: '/dev/label/disk5'
whole_disk: 1
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 7514906893097480706
path: '/dev/label/disk6'
phys_path: '/dev/label/disk6'
whole_disk: 1
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 4415230843798627292
path: '/dev/label/disk7'
phys_path: '/dev/label/disk7'
whole_disk: 1
create_txg: 4
--
Benjamin Lutz | Software Engineer | BIOLAB Technology AG
Dufourstr. 80 | CH-8008 Zurich | www.biolab.ch | benjamin.lutz at biolab.ch
PHONE +41 44 295 97 13 | MOBILE +41 79 558 57 13 | FAX +41 44 295 97 19