svn commit: r271946 - in head/sys: dev/oce dev/vmware/vmxnet3 dev/xen/netfront kern net netinet ofed/drivers/net/mlx4 sys

Adrian Chadd adrian.chadd at gmail.com
Mon Sep 14 18:35:38 UTC 2015


Hi,

So what's the actual behaviour of the new TSO logic before and after
the above change in tsomax? That is, what are the actual packet sizes
being sent down to the hardware? Is TSO or the TCP stack so fragile that
a slight change in how packets are broken up results in dramatically
lower throughput? It's only a few bytes.




-adrian


On 14 September 2015 at 08:53, Roger Pau Monné <royger at freebsd.org> wrote:
> On 14/09/15 at 13:01, Hans Petter Selasky wrote:
>> On 09/14/15 12:51, Roger Pau Monné wrote:
>>> On 14/09/15 at 11:29, Hans Petter Selasky wrote:
>>>> On 09/14/15 11:17, Roger Pau Monné wrote:
>>>>> On 22/09/14 at 10:27, Hans Petter Selasky wrote:
>>>>>> Author: hselasky
>>>>>> Date: Mon Sep 22 08:27:27 2014
>>>>>> New Revision: 271946
>>>>>> URL: http://svnweb.freebsd.org/changeset/base/271946
>>>>>>
>>>>>> Log:
>>>>>>     Improve the TCP segmentation offload (TSO) algorithm in general.
>>>>>>
>>>>>>     The current TSO limitation feature only takes the total number of
>>>>>>     bytes in an mbuf chain into account and does not limit by the
>>>>>>     number of mbufs in a chain. Some kinds of hardware are limited by
>>>>>>     two factors. One is the fragment length and the second is the
>>>>>>     fragment count. Both of these limits need to be taken into account
>>>>>>     when doing TSO, else some kinds of hardware might have to drop
>>>>>>     completely valid mbuf chains because they cannot be loaded into
>>>>>>     the given hardware's DMA engine. The new way of doing TSO
>>>>>>     limitation has been made backwards compatible based on input from
>>>>>>     other FreeBSD developers and will use defaults for values not set.
>>>>>>
>>>>>>     Reviewed by:    adrian, rmacklem
>>>>>>     Sponsored by:    Mellanox Technologies
>>>>>
>>>>> This commit makes xen-netfront TX performance drop from ~5 Gbit/s
>>>>> (with debug options enabled) to 446 Mbit/s. I'm currently looking
>>>>> into it, but if anyone has ideas they are welcome.
>>>>>
>>>>
>>>> Hi Roger,
>>>>
>>>> Looking at the netfront code, you should subtract 1 from tsomaxsegcount
>>>> prior to r287775. The reason for the slowdown might simply be that 2K
>>>> clusters are used instead of 4K clusters, causing m_defrag() to be
>>>> called.
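>>>>
>>>> To make the failure mode concrete, here is a minimal sketch of the
>>>> usual fallback when a chain carries more fragments than the ring
>>>> accepts. This is the generic driver idiom, not the actual netfront
>>>> code; xn_collapse_tx() is a made-up name and MAX_TX_REQ_FRAGS is
>>>> taken from the settings quoted below:
>>>>
>>>> static struct mbuf *
>>>> xn_collapse_tx(struct mbuf *m)
>>>> {
>>>>     struct mbuf *n;
>>>>     int nfrags = 0;
>>>>
>>>>     /* Count the mbufs (DMA fragments) in the chain. */
>>>>     for (n = m; n != NULL; n = n->m_next)
>>>>         nfrags++;
>>>>
>>>>     /* One fragment too many: copy into fewer, larger clusters. */
>>>>     if (nfrags > MAX_TX_REQ_FRAGS) {
>>>>         n = m_defrag(m, M_NOWAIT);
>>>>         if (n == NULL) {
>>>>             m_freem(m);
>>>>             return (NULL);
>>>>         }
>>>>         m = n;
>>>>     }
>>>>     return (m);
>>>> }
>>>>
>>>> m_defrag() allocates and copies every packet it touches, which is
>>>> exactly the kind of overhead that can turn ~5 Gbit/s into a few
>>>> hundred Mbit/s when the stack hands the driver one fragment too many.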
>>>>
>>>>>          ifp->if_hw_tsomax = 65536 -
>>>>>              (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);
>>>>>          ifp->if_hw_tsomaxsegcount = MAX_TX_REQ_FRAGS;
>>>>>          ifp->if_hw_tsomaxsegsize = PAGE_SIZE;
>>>>
>>>> After r287775 can you try these settings:
>>>>
>>>> ifp->if_hw_tsomax = 65536;
>>>> ifp->if_hw_tsomaxsegcount = MAX_TX_REQ_FRAGS;
>>>> ifp->if_hw_tsomaxsegsize = PAGE_SIZE;
>>>>
>>>> And see if the performance is the same as before?
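>>>>
>>>> To spell out the two regimes side by side (this is only a sketch
>>>> assembled from the settings discussed in this thread; MAX_TX_REQ_FRAGS
>>>> is netfront's per-packet fragment limit):
>>>>
>>>> /* Before r287775: the stack did not count the Ethernet header, so
>>>>  * drivers subtracted it themselves and, per the advice above, should
>>>>  * also have reserved one fragment slot for it. */
>>>> ifp->if_hw_tsomax = 65536 - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);
>>>> ifp->if_hw_tsomaxsegcount = MAX_TX_REQ_FRAGS - 1;
>>>> ifp->if_hw_tsomaxsegsize = PAGE_SIZE;
>>>>
>>>> /* From r287775 on: the stack accounts for all layers itself. */
>>>> ifp->if_hw_tsomax = 65536;
>>>> ifp->if_hw_tsomaxsegcount = MAX_TX_REQ_FRAGS;
>>>> ifp->if_hw_tsomaxsegsize = PAGE_SIZE;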
>>>
>>
>> Hi Roger,
>>
>>> Yes, performance seems to be fine after setting if_hw_tsomax to 65536.
>>> Is there some documentation about the usage of if_hw_tsomax? Does the
>>> network subsystem already take care of subtracting the space for the
>>> Ethernet header and the VLAN encapsulation, so it's no longer needed to
>>> specify them in if_hw_tsomax?
>>
>> In the past only the TCP and IP layers were accounted for by the TSO
>> parameters. At present all layers are accounted for. This might fit
>> the kind of adapter you are using better, because it appears to me it is
>> DMA'ing the entire mbuf chain. Some other network adapters only DMA the
>> TCP payload data and copy the ETH/IP/TCP headers into a special DMA'able
>> memory area.
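>>
>> As a rough illustration of that second style (every name here is
>> hypothetical, not a real driver): the headers are copied with
>> m_copydata() into a preallocated DMA'able block, and only the payload
>> goes through the scatter/gather engine:
>>
>> static void
>> nic_encap_tso(struct nic_txring *txr, struct mbuf *m, int hdrlen)
>> {
>>     /* Copy the ETH/IP/TCP headers into the ring's header block. */
>>     m_copydata(m, 0, hdrlen, txr->hdr_block);
>>
>>     /* Only the payload bytes are scatter/gather DMA'd; this helper
>>      * is a placeholder for the hardware-specific part. */
>>     nic_dma_payload(txr, m, hdrlen, m->m_pkthdr.len - hdrlen);
>> }
>>
>> For hardware like this only the payload fragments consume DMA slots,
>> which is one reason the limits have to be set per driver.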
>
> Thanks for the hint. I'm not sure where that DMA tag is coming from;
> xen-netfront doesn't define any DMA tag at all, and AFAICT none of its
> parents do:
>
> nexus0
>   [...]
>   xenpv0
>     granttable0
>     xen_et0
>     xenstore0
>       xenballoon0
>       xctrl0
>       xs_dev0
>       xenbusb_front0
>         xbd0
>         xn0
>
> So I don't see where this bouncing requirement is coming from, although
> I'm sure I'm missing something...
>
>>>
> Also, this commit was MFC'ed to stable/10, and 10.2 suffers from the same
> problem. Can we issue an EN (Errata Notice) to get this fixed in 10.2?
>>
>> When this patch has been given some time to settle, and more people have
>> tested it, I can submit a request to re@ to get that done. Please remind
>> me if I forget.
>
> No problem, will do so if needed :).
>
> Roger.
>

