Crashed ZFS

Александр Поволоцкий tarkhil at webmail.sub.ru
Wed May 29 14:56:42 UTC 2019


It worked!

On 29.05.2019 16:22, Mike Tancsa wrote:
> I would wait for a few more people to chime in with what to do, but I had a
> similar issue (same error, IIRC) last week after physically moving the
> disks to a new controller.  I did:
> zpool clear -F <pool name>
> zpool export <pool name>
> zpool import <pool name>
>
> The clear gave an error, but after the export / import the pool came back
> online.  A scrub was then run and showed no errors. Good luck!
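Spelled out against this pool, the sequence that worked here was roughly the
following (a sketch of what was run, not a guaranteed recipe; the -F rewind
irreversibly discards the last few seconds of transactions):

zpool clear -F big_fast_one    # rewind the pool to its last consistent state
zpool export big_fast_one      # release the pool so its labels are re-read
zpool import big_fast_one      # reassemble the pool from the on-disk labels
zpool scrub big_fast_one       # verify all block checksums after the rewind
zpool status -v big_fast_one   # confirm the scrub finished with no errors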
>
>      ---Mike
>
>
>
> On 5/29/2019 7:28 AM, Александр Поволоцкий wrote:
>> Hello
>>
>> After power surge, one of my zpools yields errors
>>
>> root@stor:/home/tarkhil # zpool status -v big_fast_one
>>   pool: big_fast_one
>>  state: FAULTED
>> status: The pool metadata is corrupted and the pool cannot be opened.
>> action: Recovery is possible, but will result in some data loss.
>>         Returning the pool to its state as of Tue May 28 02:00:35 2019
>>         should correct the problem.  Approximately 5 seconds of data
>>         must be discarded, irreversibly.  Recovery can be attempted
>>         by executing 'zpool clear -F big_fast_one'. A scrub of the pool
>>         is strongly recommended after recovery.
>>    see: http://illumos.org/msg/ZFS-8000-72
>>   scan: none requested
>> config:
>>
>>         NAME              STATE     READ WRITE CKSUM
>>         big_fast_one      FAULTED      0     0     1
>>           raidz1-0        ONLINE       0     0     7
>>             gpt/ZA21TJA7  ONLINE       0     0     0
>>             gpt/ZA21P6JQ  ONLINE       0     0     0
>>             gpt/ZA21PJZY  ONLINE       0     0     0
>>             gpt/ZA21T6L6  ONLINE       0     0     0
>>             gpt/ZA21TN3R  ONLINE       0     0     0
>>
>> root@stor:/home/tarkhil # zpool clear -Fn big_fast_one
>> internal error: out of memory
>>
>> while there is plenty of RAM (96 GB).
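>>
>> As an aside, the "out of memory" message may come from a failed
>> allocation in the zpool process itself, so it can fire even with plenty
>> of free physical RAM, for instance if a login-class or shell resource
>> limit is in the way. A quick, hypothetical check from the same shell:
>>
>> ulimit -a    # sh: per-process resource limits for this shell
>> limits       # FreeBSD: effective login-class resource limits
>> swapinfo -h  # configured swap, in case allocations spill there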
>>
>> gpart shows everything OK.
>>
>> root@stor:/home/tarkhil # zdb -AAA -L -e big_fast_one
>>
>> Configuration for import:
>>          vdev_children: 1
>>          version: 5000
>>          pool_guid: 4972776226197917949
>>          name: 'big_fast_one'
>>          state: 0
>>          hostid: 773241384
>>          hostname: 'stor.inf.sudo.su'
>>          vdev_tree:
>>              type: 'root'
>>              id: 0
>>              guid: 4972776226197917949
>>              children[0]:
>>                  type: 'raidz'
>>                  id: 0
>>                  guid: 58821498572043303
>>                  nparity: 1
>>                  metaslab_array: 41
>>                  metaslab_shift: 38
>>                  ashift: 12
>>                  asize: 50004131840000
>>                  is_log: 0
>>                  create_txg: 4
>>                  children[0]:
>>                      type: 'disk'
>>                      id: 0
>>                      guid: 13318923208485210326
>>                      phys_path: 'id1,enc@n50030480005d387f/type@0/slot@e/elmdesc@013/p1'
>>                      whole_disk: 1
>>                      DTL: 57
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21TJA7'
>>                  children[1]:
>>                      type: 'disk'
>>                      id: 1
>>                      guid: 5421240647062683539
>>                      phys_path: 'id1,enc@n50030480005d387f/type@0/slot@1/elmdesc@000/p1'
>>                      whole_disk: 1
>>                      DTL: 56
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21P6JQ'
>>                  children[2]:
>>                      type: 'disk'
>>                      id: 2
>>                      guid: 17788210514601115893
>>                      phys_path: 'id1,enc@n50030480005d387f/type@0/slot@5/elmdesc@004/p1'
>>                      whole_disk: 1
>>                      DTL: 55
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21PJZY'
>>                  children[3]:
>>                      type: 'disk'
>>                      id: 3
>>                      guid: 11411950711187621765
>>                      phys_path: 'id1,enc@n50030480005d387f/type@0/slot@9/elmdesc@008/p1'
>>                      whole_disk: 1
>>                      DTL: 54
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21T6L6'
>>                  children[4]:
>>                      type: 'disk'
>>                      id: 4
>>                      guid: 6486033012937503138
>>                      phys_path: 'id1,enc@n50030480005d387f/type@0/slot@d/elmdesc@012/p1'
>>                      whole_disk: 1
>>                      DTL: 52
>>                      create_txg: 4
>>                      path: '/dev/gpt/ZA21TN3R'
>> zdb: can't open 'big_fast_one': File exists
>>
>> ZFS_DBGMSG(zdb):
>>
>> root@stor:/home/tarkhil # zdb -AAA -L -u -e big_fast_one
>> zdb: can't open 'big_fast_one': File exists
>> root@stor:/home/tarkhil # zdb -AAA -L -d -e big_fast_one
>> zdb: can't open 'big_fast_one': File exists
>> root@stor:/home/tarkhil # zdb -AAA -L -h -e big_fast_one
>> zdb: can't open 'big_fast_one': File exists
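>>
>> The "File exists" (EEXIST) error is itself a clue: -e tells zdb to
>> treat the pool as exported and discover it by scanning devices, so a
>> stale entry for the same pool in the cache file may be what it collides
>> with. Two hypothetical variations worth trying (flag meanings per
>> zdb(8); untested here):
>>
>> zdb -AAA -L -U /boot/zfs/zpool.cache big_fast_one   # use the cache file instead of device scanning
>> zdb -AAA -L -e -p /dev/gpt big_fast_one             # keep -e but limit the device search path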
>>
>> What should I do? Export and import? Rename zpool.cache and import?
>> (It's a remote box, and I can't afford another three hours getting to
>> and from it.) Something else?
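>>
>> For completeness, the rename-and-import route written out as a sketch
>> (stock FreeBSD paths assumed; importing read-only first is an
>> extra-cautious option, not a requirement):
>>
>> mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak   # make ZFS forget its cached pool list
>> zpool import                                         # scan devices and list importable pools
>> zpool import -F -o readonly=on big_fast_one          # rewind-import without writing anything
>> zpool export big_fast_one                            # then export and re-import read-write
>> zpool import -F big_fast_one
>> zpool scrub big_fast_one                             # scrub, as the status output advises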
>>
>> --
>>
>> Alex
>>
>>
>>
>>

