[status report] RPS/RFS #week2

Takuya ASADA syuu at dokukino.com
Mon Jun 6 23:12:34 UTC 2011


Hi,

I think you mentioned to me last week that the RPS kernel was slower
than the normal kernel and showed higher CPU usage.
Was that the "net.isr.numthreads < CPU_NUM" case?

Also, at that time you told me it might be because the hash function
is too heavy. Was that wrong?

2011/6/7 Kazuya Goda <gockzy at gmail.com>:
> Hi,
>
> The goal of my project is to implement RPS/RFS on FreeBSD. RPS solves
> the problem of single-queue NICs, which cannot distribute received
> packets across multiple processors.
>
> This week status:
>
> * Implementation
> RPS works like this:
> 1. extract the IP addresses and TCP ports at the Ethernet layer
> 2. calculate a hash from the IP addresses and TCP ports
> 3. assign the hash value to m->pkthdr.flowid
> 4. set the M_FLOWID flag in m->m_flags
>
> I added this processing in ether_demux(). I used rss_hash_ip_4tuple()
> from the //depot/users/rwatson/tcp/...
> branch to calculate the hash value. I would like the hash calculation
> functions to be shared with RSS. A rough sketch of the classification
> step follows below.
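>
> The sketch below is a simplified, hypothetical illustration of steps
> 1-4 rather than the actual patch: the rps_classify() helper name and
> the rss_hash_ip_4tuple() argument order are assumptions, and it
> assumes the IPv4/TCP headers are contiguous once the Ethernet header
> has been stripped.
>
> #include <sys/param.h>
> #include <sys/mbuf.h>
> #include <netinet/in.h>
> #include <netinet/in_systm.h>
> #include <netinet/ip.h>
> #include <netinet/tcp.h>
>
> /* Classify an IPv4/TCP packet and record its flow ID in the mbuf. */
> static void
> rps_classify(struct mbuf *m)
> {
>         struct ip *ip;
>         struct tcphdr *th;
>
>         /* 1. Locate the IP addresses and TCP ports. */
>         ip = mtod(m, struct ip *);
>         if (ip->ip_p != IPPROTO_TCP)
>                 return;
>         th = (struct tcphdr *)((caddr_t)ip + (ip->ip_hl << 2));
>
>         /* 2.-3. Hash the 4-tuple and store it as the flow ID
>          * (assumed argument order). */
>         m->pkthdr.flowid = rss_hash_ip_4tuple(ip->ip_src,
>             th->th_sport, ip->ip_dst, th->th_dport);
>
>         /* 4. Mark the flow ID as valid. */
>         m->m_flags |= M_FLOWID;
> }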
>
>
> * Test
> - Confirming CPU selection
> With RPS enabled, packets are distributed to other CPUs at the IP
> layer, and packets of the same flow always go to the same CPU. To
> confirm this, I printed the following values:
>
> -- In netisr_select_cpuid(): m->pkthdr.flowid (flowid) and cpuid
>    (destination CPU)
> -- In ip_input(): m->pkthdr.flowid (flowid) and curcpu (current CPU)
>
> I confirmed that when the flowid is the same, the destination CPU
> equals the current CPU.
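>
> As a minimal illustration of the kind of check used here (hypothetical;
> the actual patch may just call printf() inline), a small debug helper
> could look like this:
>
> #include <sys/param.h>
> #include <sys/systm.h>          /* printf() */
> #include <sys/pcpu.h>           /* curcpu */
> #include <sys/mbuf.h>
>
> /* Print which CPU sees which flow, e.g. from netisr_select_cpuid()
>  * and ip_input(). */
> static void
> rps_debug_flow(const char *where, struct mbuf *m, u_int cpu)
> {
>         if (m->m_flags & M_FLOWID)
>                 printf("%s: flowid %u -> cpu %u\n", where,
>                     m->pkthdr.flowid, cpu);
> }
>
> /* Example call site: rps_debug_flow("ip_input", m, curcpu); */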
>
> - Simplified benchmark test
> I used netperf for the benchmark test. The server environment is:
>
> CPU : Xeon E5310 @ 1.6GHz x2 (8 cores total)
> NIC : e1000 (PCI interface)
>
> Below are the results of running 300 instances of the netperf TCP_RR
> test with 1-byte requests and responses.
> In both cases, net.isr.numthreads is 8.
>
> -- Result --
> Without RPS : 132 tps
> With RPS    : 230 tps
>
>
> * Known problem
> When net.isr.numthreads < CPU_NUM, connections are not closed under
> high load.
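>
> For context, the stock netisr framework spreads flows only across the
> configured worker threads; the sketch below paraphrases its default
> flow-to-CPU mapping (netisr_default_flow2cpu() in sys/net/netisr.c),
> so with net.isr.numthreads < CPU_NUM only a subset of CPUs run netisr
> workers. I have not yet confirmed whether this is related.
>
> #include <sys/param.h>
>
> /* worker_cpus[] holds the CPU IDs that run netisr threads and
>  * nworkers is net.isr.numthreads; a flow always maps to one of them. */
> static u_int
> flow_to_cpu(uint32_t flowid, u_int nworkers, const u_int *worker_cpus)
> {
>         return (worker_cpus[flowid % nworkers]);
> }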
>
>
> * Next week
> - Search for the cause of the known problem
> - Implement IPv6 and UDP support
>
>
> Regards,
>
> Kazuya Goda
>

