zfs send/receive: is this slow?

Dan Langille dan at langille.org
Sat Oct 2 01:32:48 UTC 2010


On 10/1/2010 7:00 PM, Artem Belevich wrote:
> On Fri, Oct 1, 2010 at 3:49 PM, Dan Langille<dan at langille.org>  wrote:
>> FYI: this is all on the same box.
>
> In one of the previous emails you've used this command line:
>> # mbuffer -s 128k -m 1G -I 9090 | zfs receive
>
> You used mbuffer in network mode, so I assumed you were doing the
> transfer over the network.
>
> If you're running send/receive locally, just pipe the data through
> mbuffer: zfs send | mbuffer | zfs receive

As soon as I opened this email I knew what it would say.


# time zfs send storage/bacula@transfer | mbuffer | zfs receive storage/compressed/bacula-mbuffer
in @  197 MB/s, out @  205 MB/s, 1749 MB total, buffer   0% full
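
For reference, the same local pipeline with the buffer options from the
earlier network attempt spelled out would be something along these lines
(the -s/-m values are simply the ones I had been using; other settings
may work just as well):

# zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G | zfs receive storage/compressed/bacula-mbuffer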


$ zpool iostat 10 10
                capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     9.78T  2.91T  1.11K    336  92.0M  17.3M
storage     9.78T  2.91T    769    436  95.5M  30.5M
storage     9.78T  2.91T    797    853  98.9M  78.5M
storage     9.78T  2.91T    865    962   107M  78.0M
storage     9.78T  2.91T    828    881   103M  82.6M
storage     9.78T  2.90T   1023  1.12K   127M  91.0M
storage     9.78T  2.90T  1.01K  1.01K   128M  89.3M
storage     9.79T  2.90T    962  1.08K   119M  89.1M
storage     9.79T  2.90T  1.09K  1.25K   139M  67.8M


Big difference.  :)
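
For completeness, when the stream really does go over the wire, the two
ends would look roughly like this (receiver-host, the port, and the
target dataset are placeholders; start the receiving side first so the
listener is up):

On the receiving box:
# mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/compressed/bacula

On the sending box:
# zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G -O receiver-host:9090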


>
> --Artem
>
>>
>> --
>> Dan Langille
>> http://langille.org/
>>
>>
>> On Oct 1, 2010, at 5:56 PM, Artem Belevich<fbsdlist at src.cx>  wrote:
>>
>>> Hmm. It did help me a lot when I was replicating ~2TB worth of data
>>> over GigE. Without mbuffer, things were roughly in the ballpark of
>>> your numbers. With mbuffer, I got around 100MB/s.
>>>
>>> Assuming you have two boxes connected via ethernet, it would be good
>>> to check that nothing is generating PAUSE frames. Some time back I
>>> discovered that the el-cheapo switch I was using could not, for some
>>> reason, keep up with traffic bursts and generated tons of PAUSE
>>> frames that severely limited throughput.
>>>
>>> If you're using Intel adapters, check the xon/xoff counters in
>>> "sysctl dev.em.0.mac_stats". If you see them increasing, that may
>>> explain the slow speed.
>>> If you have a switch between your boxes, try bypassing it and
>>> connecting the boxes directly.
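
(For anyone following along: on this box, checking those counters would
be something like the command below; em0 is an assumption, so substitute
your actual interface, and watch whether the numbers grow between runs.)

$ sysctl dev.em.0.mac_stats | grep -Ei 'xon|xoff'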
>>>
>>> --Artem
>>>
>>>
>>>
>>> On Fri, Oct 1, 2010 at 11:51 AM, Dan Langille<dan at langille.org>  wrote:
>>>>
>>>> On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
>>>>> $ zpool iostat 10
>>>>>                 capacity     operations    bandwidth
>>>>> pool         used  avail   read  write   read  write
>>>>> ----------  -----  -----  -----  -----  -----  -----
>>>>> storage     7.67T  5.02T    358     38  43.1M  1.96M
>>>>> storage     7.67T  5.02T    317    475  39.4M  30.9M
>>>>> storage     7.67T  5.02T    357    533  44.3M  34.4M
>>>>> storage     7.67T  5.02T    371    556  46.0M  35.8M
>>>>> storage     7.67T  5.02T    313    521  38.9M  28.7M
>>>>> storage     7.67T  5.02T    309    457  38.4M  30.4M
>>>>> storage     7.67T  5.02T    388    589  48.2M  37.8M
>>>>> storage     7.67T  5.02T    377    581  46.8M  36.5M
>>>>> storage     7.67T  5.02T    310    559  38.4M  30.4M
>>>>> storage     7.67T  5.02T    430    611  53.4M  41.3M
>>>>
>>>> Now that I'm using mbuffer:
>>>>
>>>> $ zpool iostat 10
>>>>                capacity     operations    bandwidth
>>>> pool         used  avail   read  write   read  write
>>>> ----------  -----  -----  -----  -----  -----  -----
>>>> storage     9.96T  2.73T  2.01K    131   151M  6.72M
>>>> storage     9.96T  2.73T    615    515  76.3M  33.5M
>>>> storage     9.96T  2.73T    360    492  44.7M  33.7M
>>>> storage     9.96T  2.73T    388    554  48.3M  38.4M
>>>> storage     9.96T  2.73T    403    562  50.1M  39.6M
>>>> storage     9.96T  2.73T    313    468  38.9M  28.0M
>>>> storage     9.96T  2.73T    462    677  57.3M  22.4M
>>>> storage     9.96T  2.73T    383    581  47.5M  21.6M
>>>> storage     9.96T  2.72T    142    571  17.7M  15.4M
>>>> storage     9.96T  2.72T     80    598  10.0M  18.8M
>>>> storage     9.96T  2.72T    718    503  89.1M  13.6M
>>>> storage     9.96T  2.72T    594    517  73.8M  14.1M
>>>> storage     9.96T  2.72T    367    528  45.6M  15.1M
>>>> storage     9.96T  2.72T    338    520  41.9M  16.4M
>>>> storage     9.96T  2.72T    348    499  43.3M  21.5M
>>>> storage     9.96T  2.72T    398    553  49.4M  14.4M
>>>> storage     9.96T  2.72T    346    481  43.0M  6.78M
>>>>
>>>> If anything, it's slower.
>>>>
>>>> The above was without -s 128.  The following used that setting:
>>>>
>>>>   $ zpool iostat 10
>>>>                capacity     operations    bandwidth
>>>> pool         used  avail   read  write   read  write
>>>> ----------  -----  -----  -----  -----  -----  -----
>>>> storage     9.78T  2.91T  1.98K    137   149M  6.92M
>>>> storage     9.78T  2.91T    761    577  94.4M  42.6M
>>>> storage     9.78T  2.91T    462    411  57.4M  24.6M
>>>> storage     9.78T  2.91T    492    497  61.1M  27.6M
>>>> storage     9.78T  2.91T    632    446  78.5M  22.5M
>>>> storage     9.78T  2.91T    554    414  68.7M  21.8M
>>>> storage     9.78T  2.91T    459    434  57.0M  31.4M
>>>> storage     9.78T  2.91T    398    570  49.4M  32.7M
>>>> storage     9.78T  2.91T    338    495  41.9M  26.5M
>>>> storage     9.78T  2.91T    358    526  44.5M  33.3M
>>>> storage     9.78T  2.91T    385    555  47.8M  39.8M
>>>> storage     9.78T  2.91T    271    453  33.6M  23.3M
>>>> storage     9.78T  2.91T    270    456  33.5M  28.8M
>>>>
>>>>
>>>
>>
>


-- 
Dan Langille - http://langille.org/

