Re: speeding up zfs send | recv (update)

From: mike tancsa <mike_at_sentex.net>
Date: Wed, 22 Feb 2023 20:13:55 UTC
On 2/22/2023 1:06 PM, Miroslav Lachman wrote:
>
> I am facing a similar problem with low performance of zfs send over 
> the network. I have 2 machines in two different datacenters, both have 
> 1Gbps NICs, and I would like to saturate the network, but it seems 
> impossible even though "everything" seems to have enough unused resources.
> The sending side is a very old HP ML110 G5 with a bge0 NIC; the receiving 
> side is a VM with enough CPU, RAM, 22TB of storage and a vtnet0 NIC.
> The sender has about 25% CPU idle during sending, and the disks are not 
> saturated according to iostat -w -x, but I still cannot see more than 
> 52MiB/s. Everything about zfs snapshot, send, receive etc. is handled 
> by syncoid from the sanoid package (ssh + mbuffer + pv, no lzop).
>
> I thought it was slow because of the ssh cipher, so I tried changing it 
> with --sshcipher, but it was 10MiB/s slower when I switched from the 
> default chacha20-poly1305@openssh.com to aes128-ctr.
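
(Context for the numbers below: per the ssh + mbuffer + pv setup described 
above, syncoid is driving a pipeline along roughly these lines. The dataset, 
snapshot, and host names here are placeholders, and the mbuffer sizes are 
just illustrative values, not a recommendation.)

# rough manual equivalent of the syncoid transfer, with an explicit cipher
zfs send -i tank/data@prev tank/data@now | pv | mbuffer -s 128k -m 1G | \
    ssh -c aes128-gcm@openssh.com user@remotehost \
    "mbuffer -s 128k -m 1G | zfs receive -F backup/data"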


aes128-gcm@openssh.com is what I settled on for the cipher.  If you 
just blast dd if=/dev/zero | ssh, are you able to get close to wire 
speed?  As (I think) I mentioned in this thread, different zfs 
datasets transmit at different speeds.  Ones with tens of thousands of 
small files are much slower than those with a few multi-gigabyte files.  
The disk seems to be the limiting factor for me: graphing "time spent 
in IO" via telegraf shows the disks pretty well 100% busy.

e.g.

dd if=/dev/zero count=20000 bs=1m status=progress | ssh -c aes128-gcm@openssh.com mdtancsa@coldstore "cat - > /dev/null"
   20650655744 bytes (21 GB, 19 GiB) transferred 18.001s, 1147 MB/s
20000+0 records in
19982+36 records out
20971520000 bytes transferred in 18.259553 secs (1148523199 bytes/sec)

vs

dd if=/dev/zero count=20000 bs=1m status=progress | ssh -c chacha20-poly1305@openssh.com mdtancsa@coldstore "cat - > /dev/null"
   20481835008 bytes (20 GB, 19 GiB) transferred 43.000s, 476 MB/s
20000+0 records in
19961+78 records out
20971520000 bytes transferred in 43.947239 secs (477197671 bytes/sec)

dd if=/dev/zero count=20000 bs=1m status=progress | ssh -c aes128-ctr mdtancsa@coldstore "cat - > /dev/null"
   20781727744 bytes (21 GB, 19 GiB) transferred 29.001s, 717 MB/s
20000+0 records in
19973+54 records out
20971520000 bytes transferred in 29.263111 secs (716653818 bytes/sec)
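
(If you want to spot-check the disks without the telegraf graphs, the same 
%busy figure is visible directly with the stock FreeBSD tools; nothing 
syncoid-specific here.)

# live per-provider view (%busy column), physical devices only
gstat -p
# or a few one-second extended samples
iostat -x -w 1 -c 5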


     ---Mike