Zpool surgery

Ulrich Spörlein <uqs@FreeBSD.org>
Mon Jan 28 08:58:25 UTC 2013


On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
> On 2013-Jan-27 14:31:56 -0000, Steven Hartland <killing@multiplay.co.uk> wrote:
> >----- Original Message ----- 
> >From: "Ulrich Spörlein" <uqs@FreeBSD.org>
> >> I want to transplant my old zpool tank from a 1TB drive to a new 2TB
> >> drive, but *not* use dd(1) or any other cloning mechanism, as the pool
> >> was very full very often and is surely severely fragmented.
> >
> >Can't you just drop the disk in the original machine, set it up as a
> >mirror, then once the resilvering has completed, break the mirror and
> >remove the 1TB disk?
> 
> That will replicate any fragmentation as well.  "zfs send | zfs recv"
> is the only (current) way to defragment a ZFS pool.
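
For the archives, the two routes suggested above would look roughly
like this; the pool name "tank2" and the @migrate snapshot are made up,
the .eli device names match my setup below:

    # mirror route -- copies the blocks as-is, fragmentation included:
    zpool attach tank da0.eli ada1.eli   # then wait for the resilver
    zpool detach tank da0.eli

    # send/recv route -- rewrites all the data, so it also defragments:
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -Fdu tank2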

But are you then also supposed to be able to send incremental snapshots
to a third pool from the pool that you just cloned?
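
That is, something like the following (the snapshot names and the
"backup" pool are made up):

    # tank2 was cloned from tank; can backup still take increments from it?
    zfs send -R -i tank2@2013-01-17 tank2@2013-01-24 | zfs recv -du backup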

I ran the zpool replace overnight (roughly the command sketched after
the status output below), and it has not removed the old device yet, as
it found checksum errors on the pool:

root@coyote:~# zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 873G in 11h33m with 24 errors on Mon Jan 28 09:45:32 2013
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0    27
          replacing-0  ONLINE       0     0    61
            da0.eli    ONLINE       0     0    61
            ada1.eli   ONLINE       0     0    61

errors: Permanent errors have been detected in the following files:

        tank/src@2013-01-17:/.svn/pristine/8e/8ed35772a38e0fec00bc1cbc2f05480f4fd4759b.svn-base
        tank/src@2013-01-17:/.svn/pristine/4f/4febd82f50bd408f958d4412ceea50cef48fe8f7.svn-base
        tank/src@2013-01-17:/sys/dev/mvs/mvs_soc.c
        tank/src@2013-01-17:/secure/usr.bin/openssl/man/pkcs8.1
        tank/src@2013-01-17:/.svn/pristine/ab/ab1efecf2c0a8f67162b2ed760772337017c5a64.svn-base
        tank/src@2013-01-17:/.svn/pristine/90/907580a473b00f09b01815a52251fbdc3e34e8f6.svn-base
        tank/src@2013-01-17:/sys/dev/agp/agpreg.h
        tank/src@2013-01-17:/sys/dev/isci/scil/scic_sds_remote_node_context.h
        tank/src@2013-01-17:/.svn/pristine/a8/a8dfc65edca368c5d2af3d655859f25150795bc5.svn-base
        tank/src@2013-01-17:/contrib/llvm/utils/TableGen/DAGISelMatcher.cpp
        tank/src@2013-01-17:/contrib/tcpdump/print-babel.c
        tank/src@2013-01-17:/.svn/pristine/30/30ef0f53aa09a5185f55f4ecac842dbc13dab8fd.svn-base
        tank/src@2013-01-17:/.svn/pristine/cb/cb32411a6873621a449b24d9127305b2ee6630e9.svn-base
        tank/src@2013-01-17:/.svn/pristine/03/030d211b1e95f703f9a61201eed63efdbb8e41c0.svn-base
        tank/src@2013-01-17:/.svn/pristine/27/27f1181d33434a72308de165c04202b6159d6ac2.svn-base
        tank/src@2013-01-17:/lib/libpam/modules/pam_exec/pam_exec.c
        tank/src@2013-01-17:/contrib/llvm/include/llvm/PassSupport.h
        tank/src@2013-01-17:/.svn/pristine/90/90f818b5f897f26c7b301c1ac2d0ce0d3eaef28d.svn-base
        tank/src@2013-01-17:/sys/vm/vm_pager.c
        tank/src@2013-01-17:/.svn/pristine/5e/5e9331052e8c2e0fa5fd8c74c4edb04058e3b95f.svn-base
        tank/src@2013-01-17:/.svn/pristine/1d/1d5d6e75cfb77e48e4711ddd10148986392c4fae.svn-base
        tank/src@2013-01-17:/.svn/pristine/c5/c55e964c62ed759089c4bf5e49adf6e49eb59108.svn-base
        tank/src@2013-01-17:/crypto/openssl/crypto/cms/cms_lcl.h
        tank/ncvs@2013-01-17:/ports/textproc/uncrustify/distinfo,v
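
The replace itself had been started along these lines, with the device
names as shown in the status above:

    zpool replace tank da0.eli ada1.eli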

Interestingly, these errors only seem to affect the snapshot, and I'm
now wondering whether that is why the backup pool did not accept the
next incremental snapshot from the new pool.

How does the receiving pool know that it has the correct snapshot to
base an incremental one on, anyway? Is there a top-level checksum, like
for git commits? How can I display and compare that?
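
My guess is that each snapshot carries a pool-wide unique guid property
that is preserved by send/recv and matched on the receiving side; if
so, something like this should let me compare both sides (the "backup"
pool name is made up again):

    zfs get -H -o value guid tank/src@2013-01-17
    zfs get -H -o value guid backup/src@2013-01-17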

Cheers,
Uli

