zfs send, recv questions

Matt Churchyard matt.churchyard at userve.net
Tue Feb 9 16:03:16 UTC 2021


ZFS send/recv works on entire datasets. Whether it's a full or incremental send, you are effectively creating a complete replica of the source dataset. You cannot use it to merge data at the file level.
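
For reference, a minimal send/recv round trip looks something like this (the dataset names here are illustrative, not taken from the original message):

    # snapshot the source, then replicate it as a new dataset
    zfs snapshot tank/src@snap1
    zfs send tank/src@snap1 | zfs recv tank/dst

    # an incremental send carries only the changes between snapshots,
    # but the target still ends up as a complete replica of the source
    zfs snapshot tank/src@snap2
    zfs send -i tank/src@snap1 tank/src@snap2 | zfs recv tank/dst

Note that zfs recv will refuse to write into an existing dataset that has been modified since the last received snapshot unless you force a rollback with -F.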

If you have a dataset of test data and want a copy of it on the same pool (you mention using mv across directories), you can simply clone a snapshot rather than send it. The process is nearly identical to sending a snapshot, but you get a new writable dataset based on the snapshot with no data copied. (Obviously a clone depends on the original dataset, so you can't remove the source snapshot without first destroying all clones of it.)
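
A minimal sketch of the clone route, assuming a dataset tank/testdata (names made up for illustration):

    # snapshot the source, then clone it into a new writable dataset
    zfs snapshot tank/testdata@base
    zfs clone tank/testdata@base tank/testcopy

The clone mounts immediately and shares all its blocks with the snapshot, so it costs almost no space or time up front; new blocks are only written as either side diverges. If you later need the clone to outlive the original, zfs promote reverses the parent/child dependency.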

And yes, your comparison isn't like for like. Moving files between two UFS mount points would also require the data to actually be copied, and would be just as slow; likewise, moving files within a single ZFS dataset is a fast rename, just as it is within a single UFS filesystem.
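
On the non-root recv question further down: the usual route is ZFS permission delegation via zfs allow. A rough sketch, assuming an unprivileged user "joe" and a target dataset tank/testdata (both names invented for illustration):

    # as root: delegate create/mount/receive under the target dataset
    zfs allow joe create,mount,receive tank/testdata

    # on FreeBSD, non-root mounting also requires this sysctl
    sysctl vfs.usermount=1

    # then, as user joe:
    zfs recv tank/testdata/sample < sample.zfsstream

See the zfs allow documentation for the full list of delegatable permissions.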

Matt

-----Original Message-----
From: owner-freebsd-fs at freebsd.org <owner-freebsd-fs at freebsd.org> On Behalf Of joe mcguckin
Sent: 08 February 2021 22:08
To: freebsd-fs at freebsd.org
Subject: zfs send, recv questions



I'm using zfs send to populate the test box with some sample throwaway files. zfs recv wants the name of a non-existent dataset/mountpoint that it will create with all the new files. Is there a way to have zfs add the files to an existing directory? I tried simply 'mv'ing the files to another directory on the same pool (trying to add the files to an existing directory). On UFS this is usually very quick, just an update to the directory entries, but on ZFS it's recopying all the files; yet another 30-minute wait… I guess since the move crosses a mount point, FreeBSD has to make a copy.

Is there a better way to achieve this?

I'm cheating by doing all of this as root; how do I do zfs recv as non-root? The Lucas book did a lot of hand-waving without a concrete example.

Thanks,

Joe


Joe McGuckin
ViaNet Communications

joe at via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax



> On Feb 8, 2021, at 1:06 PM, Freddie Cash <fjwcash at gmail.com> wrote:
> 
> On Mon., Feb. 8, 2021, 12:27 p.m. joe mcguckin <joe at via.net> wrote:
> df -h reports 66T available
> 
> zpool list says 102T
> 
> Why the discrepancy?
> 
> This is on a system with 7 x 16 TB drives configured as raidz2.
> 
> Thanks,
> 
> Joe
> 
> 
> Joe McGuckin
> ViaNet Communications
> 
> joe at via.net
> 650-207-0372 cell
> 650-213-1302 office
> 650-969-2124 fax
> 
> "zpool list" shows the raw storage available on the pool, across all the disks in the pool, minus some internal reserved storage.
> 
> "zfs list" shows the usable storage space after all the parity drives are removed from the calculation.
> 
> "df" output can be misleading as it doesn't take into account compression and reservations and things like that. It can give you an approximation of available space, but it won't be as accurate as "zfs list".
> 
> For example, if you have 6x 2 TB drives configured as a single raidz2 vdev, then:
> 
> zpool list: around 12 TB (6 drives x 2 TB)
> zfs list: around 8 TB (4 data drives x 2 TB)
> df: should be around 8 TB
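> 
> Applying the same arithmetic to the system above (7 x 16 TB drives in
> raidz2), as a rough sketch: ZFS tools report binary (TiB) units, so
> each 16 TB drive shows up as roughly 14.5T, and:
> 
> zpool list: around 102T (7 drives x 14.5 TiB raw)
> df / zfs list: around 73T (5 data drives x 14.5 TiB), and after raidz
> allocation overhead and the pool's reserved slop space, the reported
> 66T is in the right ballpark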
> 
> Cheers,
> Freddie
> 
> Typos due to smartphone keyboard.
