gpart labels - why aren't some showing up in /dev/gpt/?

Peter Maloney peter.maloney at brockmann-consult.de
Fri May 4 07:25:19 UTC 2012


On 05/04/2012 01:22 AM, Andrew Reilly wrote:
> On Wed, May 02, 2012 at 11:29:38AM +0200, Gustau Pérez i Querol wrote:
>> On 02/05/2012 08:46, Peter Maloney wrote:
>>> I have the same problem. Any time you boot off a CD/DVD and use import
>>> -f (and then don't export), or I guess use import -f a pool from
>>> anywhere, it does that. I don't know any non-zfs causes for the problem.
>>    When doing the import -f, use -d /dev/gpt to force zpool to search
>> for devices in /dev/gpt. That way the import will be done by gpt name,
>> instead of by device name.
> I've just read the manpage on that option again, and I don't
> think that it would help, even if it was available.
Let's test that then, shall we?

Here is an old VM I have, where one slice lost its gpt label.

==================
part 1: previously lost label on non-root disk
==================

# zpool status test2
   pool: test2
  state: ONLINE
   scan: none requested
config:

         NAME                                          STATE     READ WRITE CKSUM
         test2                                         ONLINE       0     0     0
           gptid/44b52f4d-5d75-11e1-b476-080027e5bb66  ONLINE       0     0     0

# zdb
...
test2:
     version: 28
     name: 'test2'
     state: 0
     txg: 4
     pool_guid: 16644836222594068864
     hostid: 871222403
     hostname: 'bczfsvm1test.bc.local'
     vdev_children: 1
     vdev_tree:
         type: 'root'
         id: 0
         guid: 16644836222594068864
         create_txg: 4
         children[0]:
             type: 'disk'
             id: 0
             guid: 1497402725988130066
             path: '/dev/da3p1'
             phys_path: '/dev/da3p1'
             whole_disk: 1
             metaslab_array: 30
             metaslab_shift: 22
             ashift: 9
             asize: 729284608
             is_log: 0
             create_txg: 4

...


da3 is wrong (another pool uses da3, and gpart show da3 shows "no such geom")... so now I try to figure out which disk it really is:

# dd if=/dev/gptid/44b52f4d-5d75-11e1-b476-080027e5bb66 of=/dev/null bs=1M count=5000 >/dev/null 2>&1 &
# gstat

here are the high load ones:
  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
     0    600    600  76798    1.0      0      0    0.0   60.1| da4
     0    601    601  76924    1.1      0      0    0.0   66.4| da4p1
     0    601    601  76924    1.2      0      0    0.0   70.6| gptid/44b52f4d-5d75-11e1-b476-080027e5bb66

# gpart show da4
=>      34  41942973  da4  GPT  (20G)
         34   1433600    1  freebsd-zfs  (700M)
    1433634  40509373       - free -  (19G)

# gpart show -l da4
=>      34  41942973  da4  GPT  (20G)
         34   1433600    1  (null)  (700M)
    1433634  40509373       - free -  (19G)

(Strange... I thought usually when this happens, the label still shows 
in gpart)

# ls /dev/gpt
root0   root1   swap0   swap1

# shutdown -r now

# zpool import -f -d /dev/gpt test2
cannot import 'test2': no such pool available

# gpart modify -i 1 -l test2d1 da4
da4p1 modified
# ls /dev/gpt (expecting not to see it here... it never shows up right 
away this way, but maybe a rescan or reboot will work; I don't know how 
to make it 'retaste' the partitions other than gpart delete and create)
root0   root1   swap0   swap1
# camcontrol rescan 0
Re-scan of bus 0 was successful
# ls /dev/gpt (not sure what to expect here)
root0   root1   swap0   swap1
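(Side note, untested here and purely an assumption on my part: opening a GEOM provider for writing and then closing it is supposed to trigger a retaste, which might recreate the /dev/gpt/ entry without a reboot. Something like:)

```shell
# ASSUMPTION, not verified in this test: a write-open followed by a
# close should make GEOM retaste the provider and its labels.
true > /dev/da4p1
ls /dev/gpt
```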

# shutdown -r now

# ls /dev/gpt/
root0   root1   swap0   swap1   test2d1

# zpool import test2
# zpool status test2
   pool: test2
  state: ONLINE
   scan: none requested
config:

         NAME                                          STATE     READ WRITE CKSUM
         test2                                         ONLINE       0     0     0
           gptid/44b52f4d-5d75-11e1-b476-080027e5bb66  ONLINE       0     0     0

# shutdown -r now

# zpool import -d /dev/gpt test2
# zpool status test2
config:

         NAME           STATE     READ WRITE CKSUM
         test2          ONLINE       0     0     0
           gpt/test2d1  ONLINE       0     0     0

# zdb
...
test2:
     version: 28
     name: 'test2'
     state: 0
     txg: 24635
     pool_guid: 16644836222594068864
     hostid: 871222403
     hostname: 'bczfsvm1test.bc.local'
     vdev_children: 1
     vdev_tree:
         type: 'root'
         id: 0
         guid: 16644836222594068864
         children[0]:
             type: 'disk'
             id: 0
             guid: 1497402725988130066
             path: '/dev/gpt/test2d1'
             phys_path: '/dev/gpt/test2d1'
             whole_disk: 1
             metaslab_array: 30
             metaslab_shift: 22
             ashift: 9
             asize: 729284608
             is_log: 0
             create_txg: 4
...


# zpool export test2
# zpool import test2
# zpool status test2
config:

         NAME           STATE     READ WRITE CKSUM
         test2          ONLINE       0     0     0
           gpt/test2d1  ONLINE       0     0     0

==================
part 2: root disk
==================

# zpool status zroot
config:

         NAME           STATE     READ WRITE CKSUM
         zroot          ONLINE       0     0     0
           mirror-0     ONLINE       0     0     0
             gpt/root0  ONLINE       0     0     0
             gpt/root1  ONLINE       0     0     0

# shutdown -r now

boot on DVD (to break it, just to prove that the fix works)
# kldload /mnt2/boot/kernel/opensolaris.ko
# kldload /mnt2/boot/kernel/zfs.ko
# ls /dev/gpt
root0 root1 swap0 swap1 test2d1
# zpool import -f zroot
# zpool status
zpool: not found
(oops... forgot to use altroot, so the booted fixit system is broken 
now... oh well, just remove DVD and reset)

# zpool status zroot
config:

         NAME        STATE     READ WRITE CKSUM
         zroot       ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             da0p3   ONLINE       0     0     0
             da2p3   ONLINE       0     0     0

(strange... usually you get gptid stuff instead of device names)

# ls /dev/gpt
swap0   swap1   test2d1

# shutdown -r now

boot DVD again (to fix it)
# kldload /mnt2/boot/kernel/opensolaris.ko
# kldload /mnt2/boot/kernel/zfs.ko
# ls /dev/gpt
root0 root1 swap0 swap1 test2d1
# zpool import -f -d /dev/gpt -o altroot=/z zroot
# zpool status
config:

         NAME           STATE     READ WRITE CKSUM
         zroot          ONLINE       0     0     0
           mirror-0     ONLINE       0     0     0
             gpt/root0  ONLINE       0     0     0
             gpt/root1  ONLINE       0     0     0

Remove DVD and boot

# zpool status zroot
   pool: zroot
  state: ONLINE
   scan: none requested
config:

         NAME           STATE     READ WRITE CKSUM
         zroot          ONLINE       0     0     0
           mirror-0     ONLINE       0     0     0
             gpt/root0  ONLINE       0     0     0
             gpt/root1  ONLINE       0     0     0


So, it seems to work... but not if you do it in the wrong order. You 
need to set the label again, then boot with the pool not imported so 
that the /dev/gpt/ entry appears. Then you need to import with -d 
/dev/gpt.
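As a recap, the order that worked in the test above, sketched as commands (the device and label names are from that example, so adjust them for your own system):

```shell
# 1. Set the lost label again (partition 1 of da4 in the example):
gpart modify -i 1 -l test2d1 da4
# 2. Reboot with the pool NOT imported, so /dev/gpt/test2d1 is created:
shutdown -r now
# 3. After the reboot, import using the label directory, so ZFS
#    records the gpt/ path instead of the raw device or gptid:
zpool import -d /dev/gpt test2
zpool status test2
```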

And thank you Gustau Pérez i Querol! I didn't know about -d.

> I had
> previously been able to refer to gpt entries as paths from /dev,
> without it.  I.e., zpool create raidz tank gpt/zraid1 gpt/zraid2
> ... etc.  My problem at the moment is that the /dev/gpt/zraid1
> etc entries aren't there at all, and zpool create complains
> about exactly that problem.  It's not a question of using -d
> /dev/gpt to short-cut the path name.
>
> Cheers,
>


-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney at brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------


