iscsi + restoring zfs snapshot
Ruben
mail@osfux.nl
Tue Jun 9 11:54:12 UTC 2020
Hi David,
As it turns out, the problem was with my iSCSI configuration. Somehow
the device mapping (/dev/sd*) on the Linux clients had changed, so I was
constantly accessing the wrong target. I'll be refining my ctl.conf to
prevent this from happening in the future.
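
For anyone hitting the same thing: one way to make this robust is to
give each LUN a stable identity in ctl.conf and mount by that identity
on the Linux side instead of by the unstable /dev/sd* name. A sketch
(the serial number and addresses are illustrative, not my actual config):

  lun 3 {
          path /dev/zvol/data/Docker/torrent
          serial TORRENT0001
          size 30G
  }

The Linux clients can then mount via the persistent udev links, e.g.
/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-<target-iqn>-lun-3, rather
than /dev/sdd.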
Thanks again for your feedback (I didn't know about "cmp" for instance :) )!
Kind regards,
Ruben
On 4/17/20 11:53 PM, David Christensen wrote:
> On 2020-04-17 06:06, Ruben via freebsd-questions wrote:
>> Hi,
>>
>> Still having trouble understanding this...
>>
>> Any pointers?
>>
>> Regards,
>>
>> Ruben
>>
>> On 4/12/20 11:10 AM, Ruben via freebsd-questions wrote:
>>>
>>> Hi,
>>>
>>> I have a couple of Linux clients that mount an iSCSI target provided
>>> by a ZFS filesystem on a FreeBSD host.
>
> Looking below, I infer that you created the volume with:
>
> # zfs create -V 30g data/Docker/torrent
>
>
>>> Yesterday I messed things up and I am trying to restore a snapshot to
>>> revert the changes.
>>>
>>> I seem to be able to do so, but since the result is somewhat
>>> unexpected I'm probably going about it the wrong way. The strange
>>> thing is that the snapshot restored from 2 weeks ago contains changes
>>> from last night :S
>
> See my comments "It is unclear ...", below.
>
>
>>> This is the FS:
>
> I assume you mean "volume".
>
>
>>> zfs get all data/Docker/torrent
>>> NAME PROPERTY VALUE SOURCE
>>> data/Docker/torrent type volume -
>>> data/Docker/torrent creation Sun Dec 1 21:04 2019 -
>>> data/Docker/torrent used 56.8G -
>>> data/Docker/torrent available 93.0G -
>>> data/Docker/torrent referenced 19.7G -
>>> data/Docker/torrent compressratio 1.00x -
>>> data/Docker/torrent reservation none default
>>> data/Docker/torrent volsize 30G local
>>> data/Docker/torrent volblocksize 8K default
>>> data/Docker/torrent checksum on default
>>> data/Docker/torrent compression off default
>>> data/Docker/torrent readonly off default
>>> data/Docker/torrent createtxg 22810439 -
>>> data/Docker/torrent copies 1 default
>>> data/Docker/torrent refreservation 30.9G local
>>> data/Docker/torrent guid 15050313927458195147 -
>>> data/Docker/torrent primarycache all default
>>> data/Docker/torrent secondarycache all default
>>> data/Docker/torrent usedbysnapshots 6.12G -
>>> data/Docker/torrent usedbydataset 19.7G -
>>> data/Docker/torrent usedbychildren 0 -
>>> data/Docker/torrent usedbyrefreservation 30.9G -
>>> data/Docker/torrent logbias latency default
>>> data/Docker/torrent dedup off default
>>> data/Docker/torrent mlslabel -
>>> data/Docker/torrent sync standard default
>>> data/Docker/torrent refcompressratio 1.00x -
>>> data/Docker/torrent written 17.1K -
>>> data/Docker/torrent logicalused 12.1G -
>>> data/Docker/torrent logicalreferenced 9.25G -
>>> data/Docker/torrent volmode dev local
>>> data/Docker/torrent snapshot_limit none default
>>> data/Docker/torrent snapshot_count none default
>>> data/Docker/torrent redundant_metadata all default
>
> That looks okay.
>
>
> I find the output of 'zfs get all ...' easier to read if I pipe the
> output to sort(1).
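>
> For example:
>
> # zfs get all data/Docker/torrent | sort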
>
>
>>> These are its snapshots:
>>>
>>> zfs list -t snapshot -r data/Docker/torrent
>>> NAME USED AVAIL REFER MOUNTPOINT
>>> data/Docker/torrent@2020-02-15_09.05.00--90d 677M - 6.30G -
>>> data/Docker/torrent@2020-02-22_09.05.00--90d 783M - 6.57G -
>>> data/Docker/torrent@2020-02-29_09.05.00--90d 798M - 6.65G -
>>> data/Docker/torrent@2020-03-07_09.05.00--90d 693M - 8.71G -
>>> data/Docker/torrent@2020-03-14_09.05.00--90d 684M - 11.2G -
>>> data/Docker/torrent@2020-03-21_09.05.00--90d 611M - 13.9G -
>>> data/Docker/torrent@2020-03-28_09.05.00--90d 864M - 18.1G -
>>> data/Docker/torrent@2020-04-04_09.05.00--90d 17.1K - 19.7G -
>>> [root@gneisenau:/usr/home/fux]#
>
> That looks okay.
>
>
>>> This is my restore attempt:
>>>
>>> zfs send data/Docker/torrent@2020-03-14_09.05.00--90d | zfs receive
>>> data/restoredfromsnapshot
>
> I normally do full replication.
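>
> With a replication stream (-R) the snapshots and properties come
> along as well; something like (a sketch, untested here):
>
> # zfs send -R data/Docker/torrent@2020-03-14_09.05.00--90d | \
>       zfs receive data/restoredfromsnapshot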
>
>
>>> If I unmount the FS from the client, and export this new FS instead as:
>>>
>>> lun 3 {
>>> path /dev/zvol/data/restoredfromsnapshot
>>> size 30G
>>> }
>
> I assume the above is in /etc/ctl.conf.
>
>
>>> restart ctld, and mount it on the same Linux client (but with the
>>> "ro" option):
>>>
>>> /dev/sdd on /mnt/restored_data type ext4 (ro,noatime,stripe=256,_netdev)
>>>
>>> it contains :
>>>
>>> root@torrent:/mnt/restored_data# ls -laht
>>> total 44K
>>> drwxr-xr-x 5 root root 4.0K Apr 12 10:23 ..
>>> drwx--x--x 14 root root 4.0K Apr 11 21:06 docker
>>> drwxrwxr-x 8 root root 4.0K Apr 11 20:57 .
>>> drwxr-xr-x 3 root root 4.0K Apr 11 20:57 deluge_config
>>> drwxr-xr-x 3 root root 4.0K Apr 11 20:57 docker_volumes
>>> drwxr-xr-x 2 root root 4.0K May 21 2019 downloads
>>> drwxr-xr-x 2 root root 4.0K May 21 2019 sickrage
>>> drwx------ 2 root root 16K May 21 2019 lost+found
>>> root@torrent:/mnt/restored_data#
>>>
>>> changes from well after 2020-03-14, including those from last night.
>>>
>>> Huh? I'm using zfSnap for creating the snapshots, like this:
>>>
>>> /usr/local/sbin/zfSnap -s -z -a 90d -r data/Docker
>>>
>>> My first attempt to roll back yesterday's changes involved using the
>>> rollback option (zfs rollback -r
>>> data/Docker/torrent@2020-04-04_09.05.00--90d), but that did not work
>>> either (yesterday's changes were not reverted).
>
> It is unclear if your steps were complete or in a proper order. Services
> must be stopped before the restore and re-started after the restore. I
> do not see any use of the ZFS "readonly" property. I see verification
> at the very end, but not verification immediately after the restore.
>
>
> I would proceed as follows:
>
> 1. Disconnect, stop, etc., all services on FreeBSD and Linux that
> access the volume.
>
> 2. Destroy the failed restore volume.
>
> 3. Enable the ZFS "readonly" property on the volume.
>
> 4. Scrub the pool containing the volume.
>
> 5. Do a ZFS rollback on the volume, as before:
>
> # zfs rollback -r data/Docker/torrent@2020-04-04_09.05.00--90d
>
> 6. Figure out how to mount the live volume read-only and how to mount
> the snapshot (which will be read-only by definition). Verify they are
> identical with cmp(1).
>
> 7. Disable the ZFS "readonly" property on the volume.
>
> 8. Enable services, as required, on FreeBSD.
>
> 9. Mount the volume on Linux. Run fsck(8).
>
> 10. Enable services on Linux.
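>
> For the comparison in step 6, cmp(1) reads both byte streams, is
> silent when they match, and exits non-zero at the first difference.
> The paths below are placeholders for however you end up exposing the
> two devices:
>
> # cmp /path/to/live-volume-device /path/to/snapshot-device \
>       && echo identical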
>
>
> David
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to
> "freebsd-questions-unsubscribe at freebsd.org"