Fwd: ZFS unable to import pool

Gena Guchin ggulchin at icloud.com
Tue Apr 22 01:06:38 UTC 2014



Begin forwarded message:

> From: Gena Guchin <ggulchin at icloud.com>
> Subject: Re: ZFS unable to import pool
> Date: April 21, 2014 at 4:25:14 PM PDT
> To: Hakisho Nukama <nukama at gmail.com>
> 
> Hakisho, 
> 
> I did try it.
> 
> 
> # zpool import -F -o readonly=on storage
> cannot import 'storage': one or more devices is currently unavailable
> 
> 
> # gpart list
> Geom name: ada0
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada0p1
>   Mediasize: 524288 (512K)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r0w0e0
>   rawuuid: e621bb07-a4a4-11e3-98fc-001d7d090860
>   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>   label: gptboot0
>   length: 524288
>   offset: 20480
>   type: freebsd-boot
>   index: 1
>   end: 1063
>   start: 40
> 2. Name: ada0p2
>   Mediasize: 4294967296 (4.0G)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r1w1e1
>   rawuuid: e6633c97-a4a4-11e3-98fc-001d7d090860
>   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>   label: swap0
>   length: 4294967296
>   offset: 544768
>   type: freebsd-swap
>   index: 2
>   end: 8389671
>   start: 1064
> 3. Name: ada0p3
>   Mediasize: 995909353472 (928G)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r1w1e2
>   rawuuid: e6953f31-a4a4-11e3-98fc-001d7d090860
>   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>   label: zfs0
>   length: 995909353472
>   offset: 4295512064
>   type: freebsd-zfs
>   index: 3
>   end: 1953525127
>   start: 8389672
> Consumers:
> 1. Name: ada0
>   Mediasize: 1000204886016 (932G)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r2w2e5
> 
> Geom name: ada1
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: ada1s1
>   Mediasize: 16105775616 (15G)
>   Sectorsize: 512
>   Stripesize: 0
>   Stripeoffset: 32256
>   Mode: r0w0e0
>   attrib: active
>   rawtype: 165
>   length: 16105775616
>   offset: 32256
>   type: freebsd
>   index: 1
>   end: 31456655
>   start: 63
> 2. Name: ada1s2
>   Mediasize: 15893692416 (15G)
>   Sectorsize: 512
>   Stripesize: 0
>   Stripeoffset: 3220905984
>   Mode: r0w0e0
>   attrib: active
>   rawtype: 165
>   length: 15893692416
>   offset: 16105807872
>   type: freebsd
>   index: 2
>   end: 62499023
>   start: 31456656
> Consumers:
> 1. Name: ada1
>   Mediasize: 32000000000 (30G)
>   Sectorsize: 512
>   Mode: r0w0e0
> 
> Geom name: diskid/DISK-CVEM852600N5032HGN
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: diskid/DISK-CVEM852600N5032HGNs1
>   Mediasize: 16105775616 (15G)
>   Sectorsize: 512
>   Stripesize: 0
>   Stripeoffset: 32256
>   Mode: r0w0e0
>   attrib: active
>   rawtype: 165
>   length: 16105775616
>   offset: 32256
>   type: freebsd
>   index: 1
>   end: 31456655
>   start: 63
> 2. Name: diskid/DISK-CVEM852600N5032HGNs2
>   Mediasize: 15893692416 (15G)
>   Sectorsize: 512
>   Stripesize: 0
>   Stripeoffset: 3220905984
>   Mode: r0w0e0
>   attrib: active
>   rawtype: 165
>   length: 15893692416
>   offset: 16105807872
>   type: freebsd
>   index: 2
>   end: 62499023
>   start: 31456656
> Consumers:
> 1. Name: diskid/DISK-CVEM852600N5032HGN
>   Mediasize: 32000000000 (30G)
>   Sectorsize: 512
>   Mode: r0w0e0
> 
> Geom name: ada2
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada2p1
>   Mediasize: 524288 (512K)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r0w0e0
>   rawuuid: e73e1154-a4a4-11e3-98fc-001d7d090860
>   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>   label: gptboot1
>   length: 524288
>   offset: 20480
>   type: freebsd-boot
>   index: 1
>   end: 1063
>   start: 40
> 2. Name: ada2p2
>   Mediasize: 4294967296 (4.0G)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r1w1e1
>   rawuuid: e77bd5dd-a4a4-11e3-98fc-001d7d090860
>   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>   label: swap1
>   length: 4294967296
>   offset: 544768
>   type: freebsd-swap
>   index: 2
>   end: 8389671
>   start: 1064
> 3. Name: ada2p3
>   Mediasize: 995909353472 (928G)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r1w1e2
>   rawuuid: e7ad15ae-a4a4-11e3-98fc-001d7d090860
>   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>   label: zfs1
>   length: 995909353472
>   offset: 4295512064
>   type: freebsd-zfs
>   index: 3
>   end: 1953525127
>   start: 8389672
> Consumers:
> 1. Name: ada2
>   Mediasize: 1000204886016 (932G)
>   Sectorsize: 512
>   Stripesize: 4096
>   Stripeoffset: 0
>   Mode: r2w2e5
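
[Editor's note: as a cross-check of the gpart listing above, the partition geometry is internally consistent: for each GPT partition, length = (end - start + 1) * sectorsize and offset = start * sectorsize. A minimal Python sketch, with the values copied from the ada0 listing:]

```python
# Cross-check the gpart numbers for ada0: for each GPT partition,
# length should equal (end - start + 1) * sectorsize and
# offset should equal start * sectorsize.

SECTOR = 512  # Sectorsize reported by gpart

# (name, start, end, length, offset) copied from the listing above
partitions = [
    ("ada0p1", 40, 1063, 524288, 20480),
    ("ada0p2", 1064, 8389671, 4294967296, 544768),
    ("ada0p3", 8389672, 1953525127, 995909353472, 4295512064),
]

for name, start, end, length, offset in partitions:
    assert length == (end - start + 1) * SECTOR, name
    assert offset == start * SECTOR, name
    print(f"{name}: geometry consistent")
```

[So the on-disk layout itself looks sane; the import failure is not a partition-table arithmetic problem.]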
> 
> 
> 
> thanks for your help!
> 
> 
> On Apr 21, 2014, at 4:17 PM, Hakisho Nukama <nukama at gmail.com> wrote:
> 
>> Hi Gena,
>> 
>> a missing cache device shouldn't be a problem.
>> There was a bug some years ago where a pool could be lost
>> along with a missing cache device, but that seems to be
>> ancient history (pool version 19 or thereabouts changed it).
>> Otherwise I wouldn't use a cache device myself.
>> 
>> You may also try importing your pool with another ZFS
>> implementation, such as ZFSonLinux or Illumos.
>> https://github.com/zfsonlinux/zfs/issues/1863
>> https://groups.google.com/forum/#!topic/zfs-fuse/TaOCLPQ8mp0
>> https://forums.freebsd.org/viewtopic.php?&t=18221
>> 
>> Have you tried the -o readonly=on option for zpool import?
>> Can you show your gpart list output?
>> 
>> Best Regards,
>> Nukama
>> 
>> On Mon, Apr 21, 2014 at 10:18 PM, Gena Guchin <ggulchin at icloud.com> wrote:
>>> Hakisho,
>>> 
>>> this is weird: while I do not see ONLINE next to the cache device ada1s2, it is on the same physical device as the log ada1s1, just a different slice.
>>> I compared the ZFS labels on the two slices of that device.
>>> 
>>> 
>>> [gena at ggulchin]-pts/0:57# zdb -l /dev/ada1s2
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>   version: 5000
>>>   state: 4
>>>   guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>   version: 5000
>>>   state: 4
>>>   guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>   version: 5000
>>>   state: 4
>>>   guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>   version: 5000
>>>   state: 4
>>>   guid: 7108193965515577889
>>> [gena at ggulchin]-pts/0:58# zdb -l /dev/ada1s1
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>   version: 5000
>>>   name: 'storage'
>>>   state: 1
>>>   txg: 14792113
>>>   pool_guid: 11699153865862401654
>>>   hostid: 3089874380
>>>   hostname: 'ggulchin.homeunix.com'
>>>   top_guid: 15354816574459194272
>>>   guid: 15354816574459194272
>>>   is_log: 1
>>>   vdev_children: 3
>>>   vdev_tree:
>>>       type: 'disk'
>>>       id: 1
>>>       guid: 15354816574459194272
>>>       path: '/dev/ada1s1'
>>>       phys_path: '/dev/ada1s1'
>>>       whole_disk: 1
>>>       metaslab_array: 125
>>>       metaslab_shift: 27
>>>       ashift: 9
>>>       asize: 16100884480
>>>       is_log: 1
>>>       DTL: 137
>>>       create_txg: 10478480
>>>   features_for_read:
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>   version: 5000
>>>   name: 'storage'
>>>   state: 1
>>>   txg: 14792113
>>>   pool_guid: 11699153865862401654
>>>   hostid: 3089874380
>>>   hostname: 'ggulchin.homeunix.com'
>>>   top_guid: 15354816574459194272
>>>   guid: 15354816574459194272
>>>   is_log: 1
>>>   vdev_children: 3
>>>   vdev_tree:
>>>       type: 'disk'
>>>       id: 1
>>>       guid: 15354816574459194272
>>>       path: '/dev/ada1s1'
>>>       phys_path: '/dev/ada1s1'
>>>       whole_disk: 1
>>>       metaslab_array: 125
>>>       metaslab_shift: 27
>>>       ashift: 9
>>>       asize: 16100884480
>>>       is_log: 1
>>>       DTL: 137
>>>       create_txg: 10478480
>>>   features_for_read:
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>   version: 5000
>>>   name: 'storage'
>>>   state: 1
>>>   txg: 14792113
>>>   pool_guid: 11699153865862401654
>>>   hostid: 3089874380
>>>   hostname: 'ggulchin.homeunix.com'
>>>   top_guid: 15354816574459194272
>>>   guid: 15354816574459194272
>>>   is_log: 1
>>>   vdev_children: 3
>>>   vdev_tree:
>>>       type: 'disk'
>>>       id: 1
>>>       guid: 15354816574459194272
>>>       path: '/dev/ada1s1'
>>>       phys_path: '/dev/ada1s1'
>>>       whole_disk: 1
>>>       metaslab_array: 125
>>>       metaslab_shift: 27
>>>       ashift: 9
>>>       asize: 16100884480
>>>       is_log: 1
>>>       DTL: 137
>>>       create_txg: 10478480
>>>   features_for_read:
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>   version: 5000
>>>   name: 'storage'
>>>   state: 1
>>>   txg: 14792113
>>>   pool_guid: 11699153865862401654
>>>   hostid: 3089874380
>>>   hostname: 'ggulchin.homeunix.com'
>>>   top_guid: 15354816574459194272
>>>   guid: 15354816574459194272
>>>   is_log: 1
>>>   vdev_children: 3
>>>   vdev_tree:
>>>       type: 'disk'
>>>       id: 1
>>>       guid: 15354816574459194272
>>>       path: '/dev/ada1s1'
>>>       phys_path: '/dev/ada1s1'
>>>       whole_disk: 1
>>>       metaslab_array: 125
>>>       metaslab_shift: 27
>>>       ashift: 9
>>>       asize: 16100884480
>>>       is_log: 1
>>>       DTL: 137
>>>       create_txg: 10478480
>>>   features_for_read:
>>> 
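
[Editor's note: the labels on the two slices can be compared mechanically. The sketch below is an illustrative Python helper (`parse_labels` is not a real tool, just an assumption for this example) that pulls the per-label `state` and `guid` fields out of `zdb -l` output; the sample text is an excerpt of the ada1s2 output above. Note that the ada1s2 labels carry only version/state/guid, while the ada1s1 labels carry a full vdev tree.]

```python
import re

# Excerpt of the `zdb -l /dev/ada1s2` output shown above; the cache
# slice's labels carry only version/state/guid, unlike the log slice.
sample = """\
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    state: 4
    guid: 7108193965515577889
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 5000
    state: 4
    guid: 7108193965515577889
"""

def parse_labels(text):
    """Return {label_number: {field: value}} from zdb -l output."""
    labels = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"LABEL (\d+)$", line.strip())
        if m:
            current = int(m.group(1))
            labels[current] = {}
        elif current is not None and ":" in line:
            key, _, value = line.strip().partition(":")
            # keep the first occurrence of each field (top-level values)
            labels[current].setdefault(key, value.strip())
    return labels

for n, fields in sorted(parse_labels(sample).items()):
    print(f"LABEL {n}: state={fields['state']} guid={fields['guid']}")
```

[All four ada1s2 labels agree with each other, which is at least consistent rather than random corruption.]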
>>> 
>>> Does this mean the SSD drive is corrupted?
>>> Is my pool lost forever?
>>> 
>>> thanks!
>>> 
>>> 
>>> On Apr 21, 2014, at 2:24 PM, Hakisho Nukama <nukama at gmail.com> wrote:
>>> 
>>>> Hi Gena,
>>>> 
>>>> there are several options for importing a pool, and one of them might work.
>>>> It looks like only one device is missing from the raidz1, so the pool
>>>> could be importable if the cache device is also available.
>>>> Try connecting it back; a missing device can leave the pool non-importable.
>>>> 
>>>> Read the zpool(8) man page and investigate the following flags:
>>>> zpool import -F -o readonly=on
>>>> 
>>>> Best Regards,
>>>> Nukama
>>>> 
>>>> On Mon, Apr 21, 2014 at 7:29 PM, Gena Guchin <ggulchin at icloud.com> wrote:
>>>>> Hello FreeBSD users,
>>>>> 
>>>>> my apologies for reposting, but I really need your help!
>>>>> 
>>>>> 
>>>>> I have a huge problem with my ZFS server: I accidentally formatted one of the drives in an exported ZFS pool, and now I can't import the pool back. This is an extremely important pool for me. The device that is missing is still attached to the system. Any help would be greatly appreciated.
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> #uname -a
>>>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014     root at snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
>>>>> 
>>>>> #zpool import
>>>>> pool: storage
>>>>>  id: 11699153865862401654
>>>>> state: UNAVAIL
>>>>> status: One or more devices are missing from the system.
>>>>> action: The pool cannot be imported. Attach the missing
>>>>>     devices and try again.
>>>>> see: http://illumos.org/msg/ZFS-8000-6X
>>>>> config:
>>>>> 
>>>>>     storage                 UNAVAIL  missing device
>>>>>       raidz1-0              DEGRADED
>>>>>         ada3                ONLINE
>>>>>         ada4                ONLINE
>>>>>         ada5                ONLINE
>>>>>         ada6                ONLINE
>>>>>         248348789931078390  UNAVAIL  cannot open
>>>>>     cache
>>>>>       ada1s2
>>>>>     logs
>>>>>       ada1s1                ONLINE
>>>>> 
>>>>>     Additional devices are known to be part of this pool, though their
>>>>>     exact configuration cannot be determined.
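
[Editor's note: the device states in the config above can be extracted mechanically. This is a minimal Python sketch (the `parse_config` helper is illustrative, not a real utility) that scans the indented config tree for name/state pairs, making it easy to see that exactly one raidz1 member is unavailable while the log device is online.]

```python
# Scan the indented `zpool import` config tree for name/state pairs.
sample = """\
    storage                 UNAVAIL  missing device
      raidz1-0              DEGRADED
        ada3                ONLINE
        ada4                ONLINE
        ada5                ONLINE
        ada6                ONLINE
        248348789931078390  UNAVAIL  cannot open
    cache
      ada1s2
    logs
      ada1s1                ONLINE
"""

STATES = {"ONLINE", "DEGRADED", "UNAVAIL", "OFFLINE", "FAULTED", "REMOVED"}

def parse_config(text):
    """Return [(vdev_name, state)] for every line that reports a state."""
    entries = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] in STATES:
            entries.append((parts[0], parts[1]))
    return entries

for name, state in parse_config(sample):
    print(name, state)
```

[Lines with no state column, like the bare `cache` / `ada1s2` entries, are skipped, which mirrors how the original output omits a state for the cache device.]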
>>>>> 
>>>>> 
>>>>> # zpool list
>>>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>>>> zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -
>>>>> 
>>>>> # zpool upgrade
>>>>> This system supports ZFS pool feature flags.
>>>>> 
>>>>> All pools are formatted using feature flags.
>>>>> 
>>>>> Every feature flags pool has all supported features enabled.
>>>>> 
>>>>> # zfs upgrade
>>>>> This system is currently running ZFS filesystem version 5.
>>>>> 
>>>>> All filesystems are formatted with the current version.
>>>>> 
>>>>> 
>>>>> Thanks a lot!
>>>>> 
>>>>> — Gena
>>>>> _______________________________________________
>>>>> freebsd-fs at freebsd.org mailing list
>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
>>> 
> 


