ZFS - RAIDZ1 Recovery
InterNetX - Juergen Gotteswinter
jg at internetx.com
Fri May 27 07:37:19 UTC 2016
Hi,
after scrolling through the "History", I would not be surprised if the pool
is completely messed up by now. Less is more in such situations.
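
A cautious first step, sketched here only under assumptions: using the pool
name zh_vol and one of the gptid device paths from the zdb output quoted
below, and working on the bit-for-bit copies rather than the original disks,
you could inspect the surviving labels and uberblocks before attempting
anything that writes:

```shell
# Dump the four ZFS labels of one surviving disk (read-only operation).
zdb -l /dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587

# Dump the uberblocks too, to see which txg values are still on disk.
zdb -ul /dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587

# If a txg from before the accidental changes is still present, a
# read-only rewind import avoids writing anything to the disks.
#   -N: import without mounting datasets
#   -F: rewind to an earlier consistent txg
#   -T <txg>: rewind to a specific txg (largely undocumented; use with care)
zpool import -o readonly=on -f -N -F -T <txg> zh_vol
```

With a read-only import in place, the data can be copied off (zfs send or
plain file copies) before any repair of the pool itself is attempted.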
Juergen
Am 5/27/2016 um 12:26 AM schrieb esamorokov:
> Hello All,
>
> My name is Evgeny and I have 3 x 3 TB drives in RAIDZ1, where one drive
> is gone and I accidentally screwed up the other two. The data should be
> fine; I just need to revert the uberblock to the point in time where I
> started making changes.
>
> I AM KINDLY ASKING FOR HELP! The pool had all of the family memories
> for many years :( Thanks in advance!
>
> I am not a FreeBSD guru and have only been using ZFS for a couple of
> years, but I know Linux and do some programming/scripting.
> Since the incident I have started learning ZFS in depth,
> but I definitely need help with it at this point.
> Please don't ask me why I did not have backups; I was building a
> backup server in my garage when it happened.
>
> History:
> I was using the web GUI of FreeNAS and it reported a failed drive
> I shut down the computer and replaced the drive, but I did not
> notice that I had accidentally disconnected the power of another drive
> I powered on the server and expanded the pool while only one
> drive of the pool was active
> Then I began to really learn ZFS and started messing with the on-disk bits
> At some point I created bit-for-bit backup images of the two drives
> from the pool (using R-Studio)
>
>
> Specs:
> OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
> 12:48:50 PST 2013
> RAID: [root at juicy] ~# camcontrol devlist
> <ST3000DM001-1CH166 CC29>  at scbus1 target 0 lun 0 (pass1,ada1)
> <ST3000DM001-1CH166 CC29>  at scbus2 target 0 lun 0 (ada2,pass2)
> <ST3000DM001-9YN166 CC4H>  at scbus3 target 0 lun 0 (pass3,ada3)
> [root at juicy] ~# zdb
> zh_vol:
>     version: 5000
>     name: 'zh_vol'
>     state: 0
>     txg: 14106447
>     pool_guid: 2918670121059000644
>     hostid: 1802987710
>     hostname: ''
>     vdev_children: 1
>     vdev_tree:
>         type: 'root'
>         id: 0
>         guid: 2918670121059000644
>         create_txg: 4
>         children[0]:
>             type: 'raidz'
>             id: 0
>             guid: 14123440993587991088
>             nparity: 1
>             metaslab_array: 34
>             metaslab_shift: 36
>             ashift: 12
>             asize: 8995321675776
>             is_log: 0
>             create_txg: 4
>             children[0]:
>                 type: 'disk'
>                 id: 0
>                 guid: 17624020450804741401
>                 path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
>                 whole_disk: 1
>                 DTL: 137
>                 create_txg: 4
>             children[1]:
>                 type: 'disk'
>                 id: 1
>                 guid: 3253299067537287428
>                 path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
>                 whole_disk: 1
>                 DTL: 133
>                 create_txg: 4
>             children[2]:
>                 type: 'disk'
>                 id: 2
>                 guid: 17999524418015963258
>                 path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
>                 whole_disk: 1
>                 DTL: 134
>                 create_txg: 4
>     features_for_read:
>
>
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
>