zfs send -R | zfs recv aborted
frank2 at fjl.co.uk
Fri Jul 21 00:03:49 UTC 2017
On 19/07/2017 11:21, Derek (freebsd lists) wrote:
> On 17-07-18 05:19 PM, Frank Leonhardt wrote:
>> I'm not 100% sure that datasets that appear to be good on a failed
>> send will be safe; I presume you've checked!
>> So your problem is that you need to free up broken dataset snapshots
>> on the receiver. I don't understand why this is a problem - why not
>> just "destroy" them?
> And here, you've gotten to the heart of the matter. Perhaps the
> questions I mean to be asking are:
> - How can I tell which datasets/snapshots were received intact, and
> which are only partial transfers? (I *presume* some are intact, and
> they superficially appear to be so.)
> - Can this be done using only properties/metadata of the zfs dataset +
> pool? (like a receive completed flag)
I can't come up with an answer that would convince me if I were faced
with this, but I think there's a reasonable chance that a scrub would
pick up any problems. Any block that was out of place would fail its
checksum as soon as it was read, and a scrub reads every block - so if
the snapshot shows up, all the blocks below it must be linked, and each
is either present or the branch is visibly broken.
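A minimal sketch of that check on the receiver (the pool name `tank2` is a placeholder; a scrub touches every allocated block, so wait for it to finish before trusting the result):

```shell
# Start a full scrub of the receiving pool (pool name is hypothetical).
zpool scrub tank2

# Poll until the scrub completes.
while zpool status tank2 | grep -q "scrub in progress"; do
    sleep 60
done

# "errors: No known data errors" and zero CKSUM counts suggest every
# received block verifies against its checksum.
zpool status -v tank2
```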
>> I've a dim idea that zxfer might be of some help here, but as you
>> say, the OpenZFS from 10.3 onwards has exactly the option you need.
> That's a good point. I'll look there for some inspiration - and see
> how deep it goes.
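For reference, the resumable-receive machinery in newer OpenZFS is the option alluded to: receiving with -s keeps partial state if the stream is cut off, and an interrupted receive leaves a receive_resume_token property on the target - which also answers the "receive completed flag" question, since a completed receive shows "-" there. A hedged sketch (dataset names are placeholders, and sender/receiver would normally be different machines):

```shell
# On the receiver: -s preserves state if the stream is interrupted.
zfs send -R srcpool/data@snap | zfs recv -s dstpool/data

# Check for an interrupted receive: a token means the transfer is
# incomplete; "-" means nothing is pending.
zfs get -H -o value receive_resume_token dstpool/data

# On the sender: resume from that token instead of starting over.
token=$(zfs get -H -o value receive_resume_token dstpool/data)
zfs send -t "$token" | zfs recv -s dstpool/data

# Or discard the partial state and free its space instead:
zfs recv -A dstpool/data
```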
>> Am I right in thinking these two machines are colocated? Why not just
>> export the pool on one and import on the other? (Lack of drive bays
>> being one obvious reason - just get a load of USB->SATA cables and a
>> hub). Just a thought.
> The source machine is active in service.
Okay, so desperate then. Crazy thoughts I'd have when desperate....
Are these in a SAS expander/enclosure? Could you mount the new drives on
the old box rather than using the network?
How much has changed in the snapshots that you want to take with you to
the new system (zfs list -t snapshot)? Roll back the new zpool and then
do a differential send up to the latest snapshot. This might be fairly
painless if the original snapshot contains the bulk of the data anyway.
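That incremental approach might look like this (pool, dataset, and snapshot names are placeholders; -R -i sends only the changes between the common snapshot and the latest one):

```shell
# On both sides: find the newest snapshot they have in common.
zfs list -t snapshot -o name,used,creation

# On the receiver: discard anything after that common snapshot.
zfs rollback -r dstpool/data@common

# On the sender: take a fresh snapshot and send just the delta.
zfs snapshot -r srcpool/data@latest
zfs send -R -i @common srcpool/data@latest | \
    ssh newbox zfs recv -F dstpool/data
```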
And now for completely crazy....
Could you mirror the existing vdevs onto the new disks 1:1, let them
resilver, take a snapshot, and then detach the new disks from the
mirrors, leaving you with a perfect copy? This couldn't possibly work,
of course, but I can't actually think why not at 1am...
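For what it's worth, OpenZFS does provide a command aimed at exactly this trick: attach the new disks as mirrors of the existing top-level devices, wait for the resilver, then zpool split detaches one side of every mirror as a new, importable pool. A hedged sketch (pool and device names are placeholders, and split only works on pools built entirely of mirrors):

```shell
# Attach a new disk as a mirror of each existing top-level device.
zpool attach oldpool da0 da4
zpool attach oldpool da1 da5

# Wait until status reports "resilvered ... with 0 errors".
zpool status oldpool

# Split one half of every mirror off as a brand-new pool...
zpool split oldpool newpool

# ...which can then be imported here or moved to the new machine.
zpool import newpool
```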
More information about the freebsd-questions mailing list