ZFS - RAIDZ1 Recovery (Evgeny Sam)

Evgeny Sam esamorokov at gmail.com
Sat May 28 23:16:40 UTC 2016


Here is the current state of the drives:

zh_vol:
    version: 5000
    name: 'zh_vol'
    state: 0
    txg: 1491
    pool_guid: 10149654347507244742
    hostid: 1802987710
    hostname: 'juicy.zhelana.local'
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 10149654347507244742
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 5892508334691495384
            path: '/dev/ada0s2'
            whole_disk: 1
            metaslab_array: 33
            metaslab_shift: 23
            ashift: 12
            asize: 983564288
            is_log: 0
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 296669430778697937
            path: '/dev/ada2p2'
            whole_disk: 1
            metaslab_array: 37
            metaslab_shift: 34
            ashift: 12
            asize: 2997366816768
            is_log: 0
            create_txg: 1489
    features_for_read:
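
(The above is plain zdb output for the newly created pool. To look at the
on-disk ZFS labels of the old raidz members directly, I believe something like
the following would work; the partition names here are my assumption, taken
from the gpart output below:)

[root at juicy] ~# zdb -l /dev/ada1p2
[root at juicy] ~# zdb -l /dev/ada2p2
[root at juicy] ~# zdb -l /dev/ada3p2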

[root at juicy] ~# camcontrol devlist
<Patriot Pyro SE 332ABBF0>         at scbus0 target 0 lun 0 (ada0,pass0)
<ST3000DM001-1ER166 CC25>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST3000DM001-1ER166 CC25>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0 (ada3,pass3)

[root at juicy] ~# gpart show
=>       63  117231345  ada0  MBR  (55G)
         63    1930257     1  freebsd  [active]  (942M)
    1930320         63        - free -  (31k)
    1930383    1930257     2  freebsd  (942M)
    3860640       3024     3  freebsd  (1.5M)
    3863664      41328     4  freebsd  (20M)
    3904992  113326416        - free -  (54G)

=>      0  1930257  ada0s1  BSD  (942M)
        0       16          - free -  (8.0k)
       16  1930241       1  !0  (942M)

=>        34  5860533101  ada1  GPT  (2.7T)
          34          94        - free -  (47k)
         128     6291456     1  freebsd-swap  (3.0G)
     6291584  5854241544     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada2  GPT  (2.7T)
          34          94        - free -  (47k)
         128     6291456     1  freebsd-swap  (3.0G)
     6291584  5854241544     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada3  GPT  (2.7T)
          34          94        - free -  (47k)
         128     6291456     1  freebsd-swap  (3.0G)
     6291584  5854241544     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)
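
(The per-disk detail that follows looks like gpart list output; assuming that
is what was run, it can be regenerated with:)

[root at juicy] ~# gpart list ada1 ada2 ada3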

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 3221225472 (3.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: 5d985baa-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3221225472
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 6291583
   start: 128
2. Name: ada1p2
   Mediasize: 2997371670528 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 5dacd737-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2997371670528
   offset: 3221291008
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 6291584
Consumers:
1. Name: ada1
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
   Mediasize: 3221225472 (3.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: 5e164720-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3221225472
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 6291583
   start: 128
2. Name: ada2p2
   Mediasize: 2997371670528 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 5e2ab04c-18ac-11e6-9c25-001b7859b93e
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2997371670528
   offset: 3221291008
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 6291584
Consumers:
1. Name: ada2
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
   Mediasize: 3221225472 (3.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 2b570bb9-8e40-11e3-aa1c-d43d7ed5b587
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3221225472
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 6291583
   start: 128
2. Name: ada3p2
   Mediasize: 2997371670528 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2997371670528
   offset: 3221291008
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 6291584
Consumers:
1. Name: ada3
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

On Sat, May 28, 2016 at 4:07 PM, Evgeny Sam <esamorokov at gmail.com> wrote:

> BlackCat,
>      I ran the command "zpool import -fFn 2918670121059000644 zh_vol_old"
> and it did not work.
>
> [root at juicy] ~# zpool import -fFn 2918670121059000644 zh_vol_old
> [root at juicy] ~# zpool status
> no pools available
>
>  I think it did not work because I am running it on the cloned drives,
> which have different GPT IDs; please correct me if I am wrong. I can switch
> to the original drives if you suggest so.
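>
> (Side note: if I read zpool(8) correctly, -n combined with -F is only a
> dry run and never actually imports anything, so "zpool status" showing no
> pools afterwards is expected either way. A rough sketch of the sequence,
> assuming the dry run reports the pool as recoverable:
>
> [root at juicy] ~# zpool import
> [root at juicy] ~# zpool import -fFn 2918670121059000644 zh_vol_old
> [root at juicy] ~# zpool import -fF 2918670121059000644 zh_vol_old
>
> The first command just lists the pools that are visible for import; the
> last one would perform the real recovery import, without -n.)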
>
> Kevin,
>      At this moment the third drive is connected, and it is/was the faulty one.
> Also, the rest of the drives are clones of the originals.
>
> Thank you,
>
> EVGENY.
>
>
> On Fri, May 27, 2016 at 5:00 AM, <freebsd-fs-request at freebsd.org> wrote:
>
>>
>> Today's Topics:
>>
>>    1. Re: ZFS - RAIDZ1 Recovery (Kevin P. Neal)
>>    2. Re: ZFS - RAIDZ1 Recovery (BlackCat)
>>    3. Re: ZFS - RAIDZ1 Recovery (InterNetX - Juergen Gotteswinter)
>>    4. Re: ZFS - RAIDZ1 Recovery (Evgeny Sam)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Thu, 26 May 2016 20:47:10 -0400
>> From: "Kevin P. Neal" <kpn at neutralgood.org>
>> To: esamorokov <esamorokov at gmail.com>
>> Cc: freebsd-fs at freebsd.org, BlackCat <blackcatzilla at gmail.com>
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID: <20160527004710.GA47195 at neutralgood.org>
>> Content-Type: text/plain; charset=us-ascii
>>
>> On Thu, May 26, 2016 at 03:26:18PM -0700, esamorokov wrote:
>> > Hello All,
>> >
>> > My name is Evgeny and I have 3 x 3TB drives in RAIDZ1, where one drive
>> > is gone and I accidentally
>> >      screwed up the other two. The data should be fine; I just need to
>> >      revert the uberblock to the point in time where I started making changes.
>>
>> You may need to ask on a ZFS or OpenZFS specific list. I'm not aware of
>> many deep ZFS experts who hang out on this list.
>>
>> > History:
>> >      I was using the web GUI of FreeNAS and it reported a failed drive
>> >      I shut down the computer and replaced the drive, but I did not
>> > notice that I had accidentally disconnected the power of another drive
>>
>> What happened to the third drive, the one you pulled? Did it fail
>> in a way that may make it viable for an attempt to revive the pool?
>> Or is it just a brick at this point, in which case it is useless?
>>
>> If the third drive is perhaps usable then make sure all three are
>> connected and powered up.
>> --
>> Kevin P. Neal                                http://www.pobox.com/~kpn/
>>
>> "Nonbelievers found it difficult to defend their position in \
>>     the presense of a working computer." -- a DEC Jensen paper
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Fri, 27 May 2016 10:36:11 +0300
>> From: BlackCat <blackcatzilla at gmail.com>
>> To: esamorokov <esamorokov at gmail.com>
>> Cc: freebsd-fs at freebsd.org
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID:
>>         <
>> CAD-rSeea_7TzxREVAsn8tKxLbtth62m3j8opsb2FoA3qc_ZrsQ at mail.gmail.com>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Hello Evgeny,
>>
>> 2016-05-27 1:26 GMT+03:00 esamorokov <esamorokov at gmail.com>:
>> > I have 3 x 3TB drives in RAIDZ1, where one drive is gone and I accidentally
>> > screwed up the other two. The data should be fine; I just need to revert
>> > the uberblock to the point in time where I started making changes.
>> >
>> try the following command; it just checks whether it is possible to
>> import your pool by discarding some of the most recent writes:
>>
>> # zpool import -fFn 2918670121059000644 zh_vol_old
>>
>> Because you have already created a new pool with the same name as the old
>> one, this command imports the old pool by its ID (2918670121059000644) under
>> a new name (zh_vol_old).
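>>
>> (If the dry run reports that the pool can be returned to an importable
>> state, the same command without -n should perform the actual rewind,
>> roughly:
>>
>> # zpool import -fF 2918670121059000644 zh_vol_old
>>
>> Keep in mind that this discards the most recent transactions for real, so
>> it is safest to try it on the copies first.)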
>>
>> > History:
>> >     I was using the web GUI of FreeNAS and it reported a failed drive
>> >     I shut down the computer and replaced the drive, but I did not notice
>> > that I had accidentally disconnected the power of another drive
>> >     I powered on the server and expanded the pool while only one drive
>> > of the pool was active
>>
>> As far as I understand the attached log, ZFS assumes that the disk data is
>> corrupted. But this is quite strange, since ZFS normally survives if
>> you forget to attach a disk during a bad-disk replacement.
>>
>> >     Then I began to really learn ZFS and to mess around with the bits
>> >     At some point I created backup bit-to-bit images of the two drives
>> > from the pool (using R-Studio)
>> >
>> A question out of curiosity: are you experimenting now with the copies or
>> with the original disks?
>>
>> >
>> > Specs:
>> >     OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
>> > 12:48:50 PST 2013
>> >     RAID:   [root at juicy] ~# camcontrol devlist
>> >     <ST3000DM001-1CH166 CC29>          at scbus1 target 0 lun 0
>> (pass1,ada1)
>> >     <ST3000DM001-1CH166 CC29>          at scbus2 target 0 lun 0
>> (ada2,pass2)
>> >     <ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0
>> (pass3,ada3)
>> >     [root at juicy] ~# zdb
>> > zh_vol:
>> >     version: 5000
>> >     name: 'zh_vol'
>> >     state: 0
>> >     txg: 14106447
>> >     pool_guid: 2918670121059000644
>> >     hostid: 1802987710
>> >     hostname: ''
>> >     vdev_children: 1
>> >     vdev_tree:
>> >         type: 'root'
>> >         id: 0
>> >         guid: 2918670121059000644
>> >         create_txg: 4
>> >         children[0]:
>> >             type: 'raidz'
>> >             id: 0
>> >             guid: 14123440993587991088
>> >             nparity: 1
>> >             metaslab_array: 34
>> >             metaslab_shift: 36
>> >             ashift: 12
>> >             asize: 8995321675776
>> >             is_log: 0
>> >             create_txg: 4
>> >             children[0]:
>> >                 type: 'disk'
>> >                 id: 0
>> >                 guid: 17624020450804741401
>> >                 path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
>> >                 whole_disk: 1
>> >                 DTL: 137
>> >                 create_txg: 4
>> >             children[1]:
>> >                 type: 'disk'
>> >                 id: 1
>> >                 guid: 3253299067537287428
>> >                 path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
>> >                 whole_disk: 1
>> >                 DTL: 133
>> >                 create_txg: 4
>> >             children[2]:
>> >                 type: 'disk'
>> >                 id: 2
>> >                 guid: 17999524418015963258
>> >                 path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
>> >                 whole_disk: 1
>> >                 DTL: 134
>> >                 create_txg: 4
>> >     features_for_read:
>>
>> --
>> BR BC
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Fri, 27 May 2016 09:30:30 +0200
>> From: InterNetX - Juergen Gotteswinter <jg at internetx.com>
>> To: esamorokov <esamorokov at gmail.com>, freebsd-fs at freebsd.org,
>>         BlackCat <blackcatzilla at gmail.com>
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID: <3af5eba4-4e04-abc4-9fa7-d0a1ce47747e at internetx.com>
>> Content-Type: text/plain; charset=windows-1252
>>
>> Hi,
>>
>> after scrolling through the "History" I would wonder if it's not
>> completely messed up by now. Less is more in such situations...
>>
>> Juergen
>>
>> On 5/27/2016 at 12:26 AM, esamorokov wrote:
>> > Hello All,
>> >
>> >     My name is Evgeny and I have 3 x 3TB drives in RAIDZ1, where one drive
>> > is gone and I accidentally
>> >     screwed up the other two. The data should be fine; I just need to revert
>> >     the uberblock to the point in time where I started making changes.
>> >
>> >     I AM KINDLY ASKING FOR HELP! The pool had all of the family memories
>> > for many years :( Thanks in advance!
>> >
>> >     I am not a FreeBSD guru and have been using ZFS for a couple of
>> > years, but I know Linux and do some programming/scripting.
>> >     Since this incident I have started learning the depths of ZFS,
>> > but I definitely need help with it at this point.
>> >     Please don't ask me why I did not have backups; I was building a
>> > backup server in my garage when it happened.
>> >
>> > History:
>> >     I was using the web GUI of FreeNAS and it reported a failed drive
>> >     I shut down the computer and replaced the drive, but I did not
>> > notice that I had accidentally disconnected the power of another drive
>> >     I powered on the server and expanded the pool while only one
>> > drive of the pool was active
>> >     Then I began to really learn ZFS and to mess around with the bits
>> >     At some point I created backup bit-to-bit images of the two drives
>> > from the pool (using R-Studio)
>> >
>> >
>> > Specs:
>> >     OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
>> > 12:48:50 PST 2013
>> >     RAID:   [root at juicy] ~# camcontrol devlist
>> >     <ST3000DM001-1CH166 CC29>          at scbus1 target 0 lun 0
>> > (pass1,ada1)
>> >     <ST3000DM001-1CH166 CC29>          at scbus2 target 0 lun 0
>> > (ada2,pass2)
>> >     <ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0
>> > (pass3,ada3)
>> >     [root at juicy] ~# zdb
>> > zh_vol:
>> >     version: 5000
>> >     name: 'zh_vol'
>> >     state: 0
>> >     txg: 14106447
>> >     pool_guid: 2918670121059000644
>> >     hostid: 1802987710
>> >     hostname: ''
>> >     vdev_children: 1
>> >     vdev_tree:
>> >         type: 'root'
>> >         id: 0
>> >         guid: 2918670121059000644
>> >         create_txg: 4
>> >         children[0]:
>> >             type: 'raidz'
>> >             id: 0
>> >             guid: 14123440993587991088
>> >             nparity: 1
>> >             metaslab_array: 34
>> >             metaslab_shift: 36
>> >             ashift: 12
>> >             asize: 8995321675776
>> >             is_log: 0
>> >             create_txg: 4
>> >             children[0]:
>> >                 type: 'disk'
>> >                 id: 0
>> >                 guid: 17624020450804741401
>> >                 path: '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
>> >                 whole_disk: 1
>> >                 DTL: 137
>> >                 create_txg: 4
>> >             children[1]:
>> >                 type: 'disk'
>> >                 id: 1
>> >                 guid: 3253299067537287428
>> >                 path: '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
>> >                 whole_disk: 1
>> >                 DTL: 133
>> >                 create_txg: 4
>> >             children[2]:
>> >                 type: 'disk'
>> >                 id: 2
>> >                 guid: 17999524418015963258
>> >                 path: '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
>> >                 whole_disk: 1
>> >                 DTL: 134
>> >                 create_txg: 4
>> >     features_for_read:
>> >
>> >
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Fri, 27 May 2016 00:38:56 -0700
>> From: Evgeny Sam <esamorokov at gmail.com>
>> To: jg at internetx.com
>> Cc: BlackCat <blackcatzilla at gmail.com>, freebsd-fs at freebsd.org
>> Subject: Re: ZFS - RAIDZ1 Recovery
>> Message-ID:
>>         <CABDVK=
>> 4XKK7qiOTtYBka_gHzkVNyXh78ecvhOwqxpMZLdcsupw at mail.gmail.com>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Hi,
>>     I don't know if it helps, but right after I recreated the pool with the
>> absolute paths of the drives (adaX), I made bit-to-bit image copies of the
>> drives. Now I am restoring those images to the NEW DRIVES (similar
>> models).
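>>
>> (For what it is worth, a bit-for-bit copy like that could also be made on
>> FreeBSD itself with dd, for example:
>>
>> # dd if=/dev/ada1 of=/backup/ada1.img bs=1m conv=noerror,sync
>>
>> where /backup/ada1.img is only a placeholder path; R-Studio is what was
>> actually used here.)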
>>
>> Thank you,
>> Evgeny.
>> On May 27, 2016 12:30 AM, "InterNetX - Juergen Gotteswinter" <
>> jg at internetx.com> wrote:
>>
>> > Hi,
>> >
>> > after scrolling through the "History" I would wonder if it's not
>> > completely messed up by now. Less is more in such situations...
>> >
>> > Juergen
>> >
>> > On 5/27/2016 at 12:26 AM, esamorokov wrote:
>> > > Hello All,
>> > >
>> > >     My name is Evgeny and I have 3 x 3TB drives in RAIDZ1, where one
>> > > drive is gone and I accidentally
>> > >     screwed up the other two. The data should be fine; I just need to
>> > >     revert the uberblock to the point in time where I started making changes.
>> > >
>> > >     I AM KINDLY ASKING FOR HELP! The pool had all of the family memories
>> > > for many years :( Thanks in advance!
>> > >
>> > >     I am not a FreeBSD guru and have been using ZFS for a couple of
>> > > years, but I know Linux and do some programming/scripting.
>> > >     Since this incident I have started learning the depths of ZFS,
>> > > but I definitely need help with it at this point.
>> > >     Please don't ask me why I did not have backups; I was building a
>> > > backup server in my garage when it happened.
>> > >
>> > > History:
>> > >     I was using the web GUI of FreeNAS and it reported a failed drive
>> > >     I shut down the computer and replaced the drive, but I did not
>> > > notice that I had accidentally disconnected the power of another drive
>> > >     I powered on the server and expanded the pool while only one
>> > > drive of the pool was active
>> > >     Then I began to really learn ZFS and to mess around with the bits
>> > >     At some point I created backup bit-to-bit images of the two drives
>> > > from the pool (using R-Studio)
>> > >
>> > >
>> > > Specs:
>> > >     OS: FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20
>> > > 12:48:50 PST 2013
>> > >     RAID:   [root at juicy] ~# camcontrol devlist
>> > >     <ST3000DM001-1CH166 CC29>          at scbus1 target 0 lun 0
>> > > (pass1,ada1)
>> > >     <ST3000DM001-1CH166 CC29>          at scbus2 target 0 lun 0
>> > > (ada2,pass2)
>> > >     <ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0
>> > > (pass3,ada3)
>> > >     [root at juicy] ~# zdb
>> > > zh_vol:
>> > >     version: 5000
>> > >     name: 'zh_vol'
>> > >     state: 0
>> > >     txg: 14106447
>> > >     pool_guid: 2918670121059000644
>> > >     hostid: 1802987710
>> > >     hostname: ''
>> > >     vdev_children: 1
>> > >     vdev_tree:
>> > >         type: 'root'
>> > >         id: 0
>> > >         guid: 2918670121059000644
>> > >         create_txg: 4
>> > >         children[0]:
>> > >             type: 'raidz'
>> > >             id: 0
>> > >             guid: 14123440993587991088
>> > >             nparity: 1
>> > >             metaslab_array: 34
>> > >             metaslab_shift: 36
>> > >             ashift: 12
>> > >             asize: 8995321675776
>> > >             is_log: 0
>> > >             create_txg: 4
>> > >             children[0]:
>> > >                 type: 'disk'
>> > >                 id: 0
>> > >                 guid: 17624020450804741401
>> > >                 path:
>> '/dev/gptid/6e5cea27-7f52-11e3-9cd8-d43d7ed5b587'
>> > >                 whole_disk: 1
>> > >                 DTL: 137
>> > >                 create_txg: 4
>> > >             children[1]:
>> > >                 type: 'disk'
>> > >                 id: 1
>> > >                 guid: 3253299067537287428
>> > >                 path:
>> '/dev/gptid/2b70d9c0-8e40-11e3-aa1c-d43d7ed5b587'
>> > >                 whole_disk: 1
>> > >                 DTL: 133
>> > >                 create_txg: 4
>> > >             children[2]:
>> > >                 type: 'disk'
>> > >                 id: 2
>> > >                 guid: 17999524418015963258
>> > >                 path:
>> '/dev/gptid/1e898758-9488-11e3-a86e-d43d7ed5b587'
>> > >                 whole_disk: 1
>> > >                 DTL: 134
>> > >                 create_txg: 4
>> > >     features_for_read:
>> > >
>> > >
>> >
>>
>>
>
>

