ZFS - RAIDZ1 Recovery (Evgeny Sam)

Evgeny Sam esamorokov at gmail.com
Sat May 28 23:08:01 UTC 2016


BlackCat,
     I ran the command "zpool import -fFn 2918670121059000644 zh_vol_old"
and it did not work.

[root at juicy] ~# zpool import -fFn 2918670121059000644 zh_vol_old
[root at juicy] ~# zpool status
no pools available

 I think it did not work because I am running it on the cloned drives,
which have different GPTIDs; please correct me if I am wrong. I can switch
to the original drives if you suggest so.
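
     For reference, here is the sequence I am planning to try next on the
clones. This is only a sketch based on my reading of the zpool(8) man page;
the device directory (/dev) and the read-only option are my own guesses, so
please correct me if any step is wrong.

List whatever pools ZFS can find on the attached disks; -d points the search
at a device directory, which should pick up the clones even though their
gptid labels differ from the originals:

# zpool import -d /dev

Dry-run the rewind import (-F discards the last few transactions to get back
to an importable state, -n only reports whether that would succeed), selecting
the pool by its GUID and renaming it to zh_vol_old:

# zpool import -d /dev -fFn 2918670121059000644 zh_vol_old

If the dry run looks sane, repeat without -n, read-only so nothing more gets
written while I copy the data off:

# zpool import -d /dev -o readonly=on -fF 2918670121059000644 zh_vol_old

If -d /dev does not turn up anything, I will try pointing it at /dev/gptid
instead.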

Kevin,
     At this moment the third drive is connected, and it is the one that
failed. The rest of the drives are clones of the originals.

Thank you,

EVGENY.


On Fri, May 27, 2016 at 5:00 AM, <freebsd-fs-request at freebsd.org> wrote:

>
> Today's Topics:
>
>    1. Re: ZFS - RAIDZ1 Recovery (Kevin P. Neal)
>    2. Re: ZFS - RAIDZ1 Recovery (BlackCat)
>    3. Re: ZFS - RAIDZ1 Recovery (InterNetX - Juergen Gotteswinter)
>    4. Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 26 May 2016 20:47:10 -0400
> From: "Kevin P. Neal" <kpn at neutralgood.org>
> To: esamorokov <esamorokov at gmail.com>
> Cc: freebsd-fs at freebsd.org, BlackCat <blackcatzilla at gmail.com>
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID: <20160527004710.GA47195 at neutralgood.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, May 26, 2016 at 03:26:18PM -0700, esamorokov wrote:
> > Hello All,
> >
> >      My name is Evgeny and I have 3 x 3TB drives in RAIDZ1, where one
> > drive is gone and I accidentally screwed up the other two. The data should
> > be fine; I just need to revert the uberblock to the point in time where I
> > started making changes.
>
> You may need to ask on a ZFS or OpenZFS specific list. I'm not aware of
> many deep ZFS experts who hang out on this list.
>
> > History:
> >      I was using the web GUI of FreeNAS and it reported a failed drive
> >      I shut down the computer and replaced the drive, but I did not notice
> > that I had accidentally disconnected the power of another drive
>
> What happened to the third drive, the one you pulled? Did it fail
> in a way that may make it viable for an attempt to revive the pool?
> Or is it just a brick at this point, in which case it is useless?
>
> If the third drive is perhaps usable then make sure all three are
> connected and powered up.
> --
> Kevin P. Neal                                http://www.pobox.com/~kpn/
>
> "Nonbelievers found it difficult to defend their position in \
>     the presense of a working computer." -- a DEC Jensen paper
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 27 May 2016 10:36:11 +0300
> From: BlackCat <blackcatzilla at gmail.com>
> To: esamorokov <esamorokov at gmail.com>
> Cc: freebsd-fs at freebsd.org
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID:
>         <
> CAD-rSeea_7TzxREVAsn8tKxLbtth62m3j8opsb2FoA3qc_ZrsQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hello Evgeny,
>
> 2016-05-27 1:26 GMT+03:00 esamorokov <esamorokov at gmail.com>:
> > I have 3 x 3TB drives in RAIDZ1, where one drive is gone and I accidentally
> > screwed up the other two. The data should be fine; I just need to revert the
> > uberblock to the point in time where I started making changes.
> >
> try the following command; it just checks whether it is possible to
> import your pool by discarding some of the most recent writes:
>
> # zpool import -fFn 2918670121059000644 zh_vol_old
>
> Because you have already created a new pool with the same name as the old
> one, this command imports the pool by its ID (2918670121059000644) under a
> new name (zh_vol_old).
>
> > History:
> >     I was using the web GUI of FreeNAS and it reported a failed drive
> >     I shut down the computer and replaced the drive, but I did not notice
> > that I had accidentally disconnected the power of another drive
> >     I powered on the server and expanded the pool while only one drive
> > of the pool was active
>
> As far as I understand the attached log, ZFS assumes that the disk data is
> corrupted. But this is quite strange, since ZFS normally survives if you
> forget to attach a disk during a bad-disk replacement.
>
> >     Then I began to really learn ZFS and started messing with the bits
> >     At some point I created backup bit-for-bit images of the two drives
> > from the pool (using R-Studio)
> >
> Out of curiosity: are you experimenting now with the copies or with the
> original disks?
>
> >
> > Specs:
> >     OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
> > 12:48:50 PST 2013
> >     RAID:   [root at juicy] ~# camcontrol devlist
> >     <ST3000DM001-1CH166 CC29>          at scbus1 target 0 lun 0 (pass1,ada1)
> >     <ST3000DM001-1CH166 CC29>          at scbus2 target 0 lun 0 (ada2,pass2)
> >     <ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0 (pass3,ada3)
> >     [root at juicy] ~# zdb
> > zh_vol:
> >     version: 5000
> >     name: 'zh_vol'
> >     state: 0
> >     txg: 14106447
> >     pool_guid: 2918670121059000644
> >     hostid: 1802987710
> >     hostname: ''
> >     vdev_children: 1
> >     vdev_tree:
> >         type: 'root'
> >         id: 0
> >         guid: 2918670121059000644
> >         create_txg: 4
> >         children[0]:
> >             type: 'raidz'
> >             id: 0
> >             guid: 14123440993587991088
> >             nparity: 1
> >             metaslab_array: 34
> >             metaslab_shift: 36
> >             ashift: 12
> >             asize: 8995321675776
> >             is_log: 0
> >             create_txg: 4
> >             children[0]:
> >                 type: 'disk'
> >                 id: 0
> >                 guid: 17624020450804741401
> >                 path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
> >                 whole_disk: 1
> >                 DTL: 137
> >                 create_txg: 4
> >             children[1]:
> >                 type: 'disk'
> >                 id: 1
> >                 guid: 3253299067537287428
> >                 path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
> >                 whole_disk: 1
> >                 DTL: 133
> >                 create_txg: 4
> >             children[2]:
> >                 type: 'disk'
> >                 id: 2
> >                 guid: 17999524418015963258
> >                 path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
> >                 whole_disk: 1
> >                 DTL: 134
> >                 create_txg: 4
> >     features_for_read:
>
> --
> BR BC
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 27 May 2016 09:30:30 +0200
> From: InterNetX - Juergen Gotteswinter <jg at internetx.com>
> To: esamorokov <esamorokov at gmail.com>, freebsd-fs at freebsd.org,
>         BlackCat <blackcatzilla at gmail.com>
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID: <3af5eba4-4e04-abc4-9fa7-d0a1ce47747e at internetx.com>
> Content-Type: text/plain; charset=windows-1252
>
> Hi,
>
> after scrolling through the "History" I would wonder whether it is not
> completely messed up by now. Less is more in such situations.
>
> Juergen
>
> Am 5/27/2016 um 12:26 AM schrieb esamorokov:
> > Hello All,
> >
> >     My name is Evgeny and I have 3 x 3TB drives in RAIDZ1, where one
> > drive is gone and I accidentally screwed up the other two. The data should
> > be fine; I just need to revert the uberblock to the point in time where I
> > started making changes.
> >
> >     I AM KINDLY ASKING FOR HELP! The pool held all of the family memories
> > for many years :( Thanks in advance!
> >
> >     I am not a FreeBSD guru and have been using ZFS for a couple of
> > years, but I know Linux and do some programming/scripting.
> >     Since that incident I have started learning the depths of ZFS,
> > but I definitely need help with it at this point.
> >     Please don't ask me why I did not have backups; I was building a
> > backup server in my garage when it happened.
> >
> > History:
> >     I was using the web GUI of FreeNAS and it reported a failed drive
> >     I shut down the computer and replaced the drive, but I did not notice
> > that I had accidentally disconnected the power of another drive
> >     I powered on the server and expanded the pool while only one drive
> > of the pool was active
> >     Then I began to really learn ZFS and started messing with the bits
> >     At some point I created backup bit-for-bit images of the two drives
> > from the pool (using R-Studio)
> >
> > [Specs and zdb output identical to the listing quoted earlier in this
> > digest -- trimmed]
> >
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 27 May 2016 00:38:56 -0700
> From: Evgeny Sam <esamorokov at gmail.com>
> To: jg at internetx.com
> Cc: BlackCat <blackcatzilla at gmail.com>, freebsd-fs at freebsd.org
> Subject: Re: ZFS - RAIDZ1 Recovery
> Message-ID:
>         <CABDVK=
> 4XKK7qiOTtYBka_gHzkVNyXh78ecvhOwqxpMZLdcsupw at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hi,
>     I don't know if it helps, but right after I recreated the pool with the
> absolute device paths of the drives (adaX), I made bit-for-bit image copies
> of the drives. Now I am restoring those images to the NEW DRIVES (similar
> models).
>
> Thank you,
> Evgeny.
> On May 27, 2016 12:30 AM, "InterNetX - Juergen Gotteswinter" <
> jg at internetx.com> wrote:
>
> > Hi,
> >
> > after scrolling through the "History" I would wonder whether it is not
> > completely messed up by now. Less is more in such situations.
> >
> > Juergen
> >
> > Am 5/27/2016 um 12:26 AM schrieb esamorokov:
> > > [Original message quoted in full -- trimmed; see the copy quoted above]
> >
>
>

