zfs send/receive: is this slow?
Dan Langille
dan at langille.org
Mon Oct 4 01:11:06 UTC 2010
On 10/1/2010 9:32 PM, Dan Langille wrote:
> On 10/1/2010 7:00 PM, Artem Belevich wrote:
>> On Fri, Oct 1, 2010 at 3:49 PM, Dan Langille <dan at langille.org> wrote:
>>> FYI: this is all on the same box.
>>
>> In one of the previous emails, you used this command line:
>>> # mbuffer -s 128k -m 1G -I 9090 | zfs receive
>>
>> You've used mbuffer in network client mode. I assumed that you were
>> doing your transfer over the network.
>>
>> If you're running send/receive locally, just pipe the data through
>> mbuffer: zfs send | mbuffer | zfs receive
>
> As soon as I opened this email I knew what it would say.
>
>
> # time zfs send storage/bacula@transfer | mbuffer | zfs receive storage/compressed/bacula-mbuffer
> in @ 197 MB/s, out @ 205 MB/s, 1749 MB total, buffer 0% full
>
>
> $ zpool iostat 10 10
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     9.78T  2.91T  1.11K    336  92.0M  17.3M
> storage     9.78T  2.91T    769    436  95.5M  30.5M
> storage     9.78T  2.91T    797    853  98.9M  78.5M
> storage     9.78T  2.91T    865    962   107M  78.0M
> storage     9.78T  2.91T    828    881   103M  82.6M
> storage     9.78T  2.90T   1023  1.12K   127M  91.0M
> storage     9.78T  2.90T  1.01K  1.01K   128M  89.3M
> storage     9.79T  2.90T    962  1.08K   119M  89.1M
> storage     9.79T  2.90T  1.09K  1.25K   139M  67.8M
>
>
> Big difference. :)
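For reference, the full buffered pipeline, with the -s 128k / -m 1G
values carried over from the earlier network test (not tuned for this
box), looks something like this:

# zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G | \
    zfs receive storage/compressed/bacula-mbuffer

If the copy were going over the network instead, the receiver quoted
above (mbuffer ... -I 9090 | zfs receive) would presumably be fed by
mbuffer's TCP output mode on the sending side, along the lines of:

# zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G -O otherhost:9090

where otherhost stands in for the receiving machine.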
I'm rerunning my test after having a drive go offline[1], but I'm not
seeing anything like the throughput of the previous run:

# time zfs send storage/bacula@transfer | mbuffer | zfs receive storage/compressed/bacula-buffer
$ zpool iostat 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     6.83T  5.86T      8     31  1.00M  2.11M
storage     6.83T  5.86T    207    481  25.7M  17.8M
storage     6.83T  5.86T    220    516  27.4M  17.2M
storage     6.83T  5.86T    221    523  27.5M  21.0M
storage     6.83T  5.86T    198    430  24.5M  20.4M
storage     6.83T  5.86T    248    528  30.8M  26.7M
storage     6.83T  5.86T    273    508  33.9M  22.6M
storage     6.83T  5.86T    331    499  41.1M  22.7M
storage     6.83T  5.86T    424    662  52.6M  34.7M
storage     6.83T  5.86T    413    605  51.3M  36.7M
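My guess is the difference is fallout from that drive going offline: if
the pool is still degraded or resilvering, the resilver competes with the
send/receive for I/O. Something along the lines of:

# zpool status -v storage

should show whether a resilver is still running.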
[1] - http://docs.freebsd.org/cgi/mid.cgi?4CA73702.5080203
--
Dan Langille - http://langille.org/