mountroot prompt with error 2 when trying to boot from a single drive in a 2-way mirror

yudi v yudi.tux at gmail.com
Wed Apr 15 05:18:50 UTC 2015


Hi Matthew,

It's not the BIOS settings; I checked. The machine picks up the other HDD in
the mirror and runs through the boot code, but then fails to mount the ZFS
root pool.
The error is:

Trying to mount root from zfs:osysPool/ROOT/default []...

Mounting from zfs:osysPool/ROOT/default failed with error 6.
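
For what it's worth, error 6 appears to be ENXIO ("Device not configured").
As far as I know, at the mountroot> prompt you can type ? to list the disk
devices the kernel can see and then retry the mount by hand, roughly like
this (typed from memory, so the exact output will differ):

    mountroot> ?
    List of GEOM managed disk devices:
      ada2 ada2p1 ada2p2 ada2p3
    mountroot> zfs:osysPool/ROOT/default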

It seems to be something to do with a GUID mismatch between ada2p3 and
ada3p3, though I'm not sure why it is even comparing them, as they are the
two partitions in the mirror.
Please see the images below for the relevant console messages.
screen1:
https://drive.google.com/file/d/1Q-F-8kF-Nevn5ijvFXLNuvtJOuRn7ztO2Q/view?usp=sharing
screen2:
https://drive.google.com/file/d/1ZGseshS0Uk0cc6Gli_-tywHNXO7sLQ_aVw/view?usp=sharing

I think for this to work /dev/ada2p3 (which has GUID 2114803205502328891)
should be attached, but instead /dev/ada2 ends up being attached with GUID
15791103587254396721 (which is the GUID of ada3p3). ada3 is the drive I am
disconnecting to test this.
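
As a sanity check, the labels on the partitions themselves (rather than the
raw disks dumped below) should show which GUID lives where; I would expect
to compare the top-level 'guid' field of each against the children[] GUIDs,
e.g.:

    # zdb -l /dev/ada2p3
    # zdb -l /dev/ada3p3

My understanding is that labels 2 and 3 sit at the end of a vdev, so running
zdb -l against the whole disk (as below) can pick up the trailing labels of
the last partition, which would explain the "failed to unpack label 0/1"
lines.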

output from: # zdb -l /dev/ada2
=================================================================================

--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'osysPool'
    state: 0
    txg: 30644
    pool_guid: 3008044207603099329
    hostid: 1990654128
    hostname: ''
    top_guid: 16302517322241353808
    guid: 2114803205502328891
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 16302517322241353808
        metaslab_array: 33
        metaslab_shift: 29
        ashift: 9
        asize: 70355779584
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 2114803205502328891
            path: '/dev/ada2p3'
            phys_path: '/dev/ada2p3'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 15791103587254396721
            path: '/dev/ada3p3'
            phys_path: '/dev/ada3p3'
            whole_disk: 1
            create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'osysPool'
    state: 0
    txg: 30644
    pool_guid: 3008044207603099329
    hostid: 1990654128
    hostname: ''
    top_guid: 16302517322241353808
    guid: 2114803205502328891
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 16302517322241353808
        metaslab_array: 33
        metaslab_shift: 29
        ashift: 9
        asize: 70355779584
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 2114803205502328891
            path: '/dev/ada2p3'
            phys_path: '/dev/ada2p3'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 15791103587254396721
            path: '/dev/ada3p3'
            phys_path: '/dev/ada3p3'
            whole_disk: 1
            create_txg: 4
    features_for_read:


output from: # zdb -l /dev/ada3
==============================================================================

--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'osysPool'
    state: 0
    txg: 30644
    pool_guid: 3008044207603099329
    hostid: 1990654128
    hostname: ''
    top_guid: 16302517322241353808
    guid: 15791103587254396721
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 16302517322241353808
        metaslab_array: 33
        metaslab_shift: 29
        ashift: 9
        asize: 70355779584
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 2114803205502328891
            path: '/dev/ada2p3'
            phys_path: '/dev/ada2p3'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 15791103587254396721
            path: '/dev/ada3p3'
            phys_path: '/dev/ada3p3'
            whole_disk: 1
            create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'osysPool'
    state: 0
    txg: 30644
    pool_guid: 3008044207603099329
    hostid: 1990654128
    hostname: ''
    top_guid: 16302517322241353808
    guid: 15791103587254396721
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 16302517322241353808
        metaslab_array: 33
        metaslab_shift: 29
        ashift: 9
        asize: 70355779584
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 2114803205502328891
            path: '/dev/ada2p3'
            phys_path: '/dev/ada2p3'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 15791103587254396721
            path: '/dev/ada3p3'
            phys_path: '/dev/ada3p3'
            whole_disk: 1
            create_txg: 4
    features_for_read:
====================================================================================
Is anything amiss in the label info above for these two drives?
I have used these two drives for testing before and have since reinstalled
the OS and recreated the pools.
Any suggestions on how to fix this?
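
One more data point I can gather on the next attempt: at the loader's OK
prompt, lsdev should list the disk devices and ZFS pools the boot loader
itself can see, which might narrow down where the wrong GUID is coming from:

    OK lsdev -v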

Thanks
Yudi

On Tue, Apr 14, 2015 at 11:17 PM, Matthew Seaman <matthew at freebsd.org>
wrote:

> On 2015/04/14 13:35, yudi v wrote:
> > I was testing recovery scenario by removing one of the drives in a 2-way
> > mirror, but the system fails to boot and comes up with the mountroot
> prompt
> > with error 2. When I reconnect the second drive, it boots fine again.
> >
> > Any suggestions on what the problem might be?
> >
> > it's a simple root-on-ZFS setup (9.1 upgraded to 10.1 recently) with two
> > disks in mirror config.
> > each disk has 3 partitions, first one has the boot code, second has the
> > swap, third has the OS.
> >
> > and the zfs pool is setup on the 3rd partition of the two disks.
>
> Check the BIOS settings -- there will be a list giving the order of
> preference for devices to boot from.  Frequently you'll find there is
> one slot for 'Harddrive' and you get to select just one of the drives
> attached to the system to boot from.  In this case, simply telling it to
> use the other disk should allow you to boot.  Otherwise, if your bios
> allows you to specify several hard drives, then reordering the drives in
> the preference list might make it work.  This last really shouldn't be
> necessary, but not all BIOSes are created equal.
>
>         Cheers,
>
>         Matthew
>
>
>
>


-- 
Kind regards,
Yudi

