ZFS - RAIDZ1 Recovery
BlackCat
blackcatzilla at gmail.com
Fri May 27 07:36:31 UTC 2016
Hello Evgeny,
2016-05-27 1:26 GMT+03:00 esamorokov <esamorokov at gmail.com>:
> I have 3 x 3TB drives in RAIDZ1, where one drive is gone and I accidentally
> screwed up the other two. The data should be fine; I just need to revert
> the uberblock to the point in time where I started making changes.
>
Try the following command; it just checks whether it is possible to
import your pool by discarding some of the most recent writes:
# zpool import -fFn 2918670121059000644 zh_vol_old
Because you have already created a new pool with the same name as the
old one, this command imports the pool by its ID (2918670121059000644)
under a new name (zh_vol_old).
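For reference: -f forces the import, -F attempts a recovery-mode rewind
to an earlier transaction group, and -n makes it a dry run that only
reports whether the rewind would succeed. If the dry run looks good, a
sketch of the follow-up (same pool ID; -X and the read-only import are
options worth checking in zpool(8) on your system before relying on
them):

# zpool import -fF 2918670121059000644 zh_vol_old
# zpool import -o readonly=on -fF 2918670121059000644 zh_vol_old
# zpool import -fFX 2918670121059000644 zh_vol_old

The read-only variant avoids writing anything further to the pool while
you copy data off; -X is the "extreme" rewind that discards even more
recent transactions, so treat it as a last resort.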
> History:
> I was using the web GUI of FreeNAS and it reported a failed drive.
> I shut down the computer and replaced the drive, but I did not notice
> that I had accidentally disconnected the power of another drive.
> I powered on the server and expanded the pool while only one drive
> of the pool was active.
As far as I understand the attached log, ZFS assumes that the disk data
is corrupted. But this is quite strange, since ZFS normally survives if
you forget to attach a disk during a bad-disk replacement.
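To see what ZFS itself finds on each disk, you can dump the four vdev
labels directly; for example, using one of the gptid paths from your
zdb output below (repeat for each member):

# zdb -l /dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587

If the labels are intact, you should see the pool guid and the last txg
each label knows about, which tells you how far apart the surviving
disks are.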
> Then I began to really learn ZFS and mess around with the bits.
> At some point I created backup bit-for-bit images of the two drives
> from the pool (using R-Studio).
>
A question out of curiosity: are you experimenting now with the copies
or with the original disks?
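If you have the images, it is safer to experiment on them. On FreeBSD
you can attach raw images as memory disks (the paths below are just
placeholders for wherever R-Studio wrote the files):

# mdconfig -a -t vnode -f /path/to/drive1.img
md0
# mdconfig -a -t vnode -f /path/to/drive2.img
md1

zpool import should then see the pool on the md devices, and you can
try the -fFn dry run against the copies first.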
>
> Specs:
> OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
> 12:48:50 PST 2013
> RAID: [root at juicy] ~# camcontrol devlist
> <ST3000DM001-1CH166 CC29> at scbus1 target 0 lun 0 (pass1,ada1)
> <ST3000DM001-1CH166 CC29> at scbus2 target 0 lun 0 (ada2,pass2)
> <ST3000DM001-9YN166 CC4H> at scbus3 target 0 lun 0 (pass3,ada3)
> [root at juicy] ~# zdb
> zh_vol:
> version: 5000
> name: 'zh_vol'
> state: 0
> txg: 14106447
> pool_guid: 2918670121059000644
> hostid: 1802987710
> hostname: ''
> vdev_children: 1
> vdev_tree:
> type: 'root'
> id: 0
> guid: 2918670121059000644
> create_txg: 4
> children[0]:
> type: 'raidz'
> id: 0
> guid: 14123440993587991088
> nparity: 1
> metaslab_array: 34
> metaslab_shift: 36
> ashift: 12
> asize: 8995321675776
> is_log: 0
> create_txg: 4
> children[0]:
> type: 'disk'
> id: 0
> guid: 17624020450804741401
> path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
> whole_disk: 1
> DTL: 137
> create_txg: 4
> children[1]:
> type: 'disk'
> id: 1
> guid: 3253299067537287428
> path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
> whole_disk: 1
> DTL: 133
> create_txg: 4
> children[2]:
> type: 'disk'
> id: 2
> guid: 17999524418015963258
> path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
> whole_disk: 1
> DTL: 134
> create_txg: 4
> features_for_read:
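Note that pool_guid in this output is exactly the ID used in the import
command above, and txg: 14106447 is the last transaction group this
cached configuration knows about; a -F rewind import rolls back to an
earlier txg than the newest one found on disk. Running zpool import
with no arguments should also list the old pool together with its
numeric ID, since it is no longer imported:

# zpool import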
--
BR BC