Multiqueue testing project - final status report
Tiwei Bie
btw at mail.ustc.edu.cn
Sun Aug 30 00:09:22 UTC 2015
On Sat, Aug 29, 2015 at 08:54:10PM +0100, George Neville-Neil wrote:
>
>
> On 27 Aug 2015, at 10:59, Tiwei Bie wrote:
>
> > Dear All!
> >
> > This is the final status report of the Multiqueue Testing Project.
> >
> > The aim of this project is to design and implement an infrastructure
> > to validate that a number of the network stack's multiqueue behaviours
> > are as expected.
> >
> > The deliverables of this project mainly consist of:
> >
> > - A general mechanism to collect the per-ring per-cpu statistics
> > which can be used by all NIC drivers, and the extended netstat(1)
> > which can report these statistics.
> >
> > - A suite of network stack behavior testing programs which consists
> > of (a) a virtual multiqueue ethernet interface (vme), (b) a UDP
> > packet generator based on vme, (c) a UDP server based on socket(2),
> > (d) a TCP client based on lwip and vme, (e) a TCP server based on
> > socket.
> >
> > At present, most parts of this project have been implemented. A few
> > parts have been committed to -HEAD, and the other parts have been
> > submitted to Phabricator.
> >
> > Now, I'm also working on improving IPv6 RSS support with the
> > help of adrian at . ^_^
> >
> > Finally, I want to thank my mentor Hiren, as well as George and
> > Robert, for their help and guidance! ^_^
> >
>
> Howdy,
>
> I've seen quite a few commits go by, which is most excellent. Quick
> question: where are the tests? Did I miss those?
>
I only sent the test to Adrian when replying to his mail. I should
have submitted the test to Phabricator together with the patch... Sorry... :-(
PS. This is the test I sent to Adrian:
> On Wed, Aug 26, 2015 at 10:38:30PM -0700, Adrian Chadd wrote:
> > ooo cool! ok, so have you tested it at all? :) If so, how'd you test it?
> >
>
> I have tested it with the tools in tools/tools/mq-testing [1]. ^_^
> I set the hash type to M_HASHTYPE_NONE for the packets injected by
> pktgen [2] via vme [3], so the packets are received with no RSS info
> set, and the RSS hash is then calculated in software by
> rss_soft_m2cpuid_v6().
>
> The command I used to inject packets into the network stack via vme:
>
> % sudo ./pktgen -i vme0 -6
>
> vme is configured to have 4 queues:
>
> % sysctl net.link.vme.num_queues
> net.link.vme.num_queues: 4
>
> I have 4 cores on my computer, and netisr is configured to use 4
> threads:
>
> % cat /boot/loader.conf
> net.isr.maxthreads=4
> net.isr.bindthreads=1
>
> And I got the following output from netstat [4]:
>
> % ./netstat -I vme0 -R
> --------------------------- vme0 ------------------------
>
> ring0:
> cpu0 cpu1 cpu2 cpu3
> ifinput : 39648737 37519099 37491641 37382462
> netisr : 112393348 0 0 0
> ether : 39648732 37519102 37491643 37382462
> ip : 0 0 0 0
> ip6 : 152041939 0 0 0
> tcp : 0 0 0 0
> udp : 151189032 0 0 0
>
> ring1:
> cpu0 cpu1 cpu2 cpu3
> ifinput : 39639711 37518364 37496728 37383776
> netisr : 0 114520209 0 0
> ether : 39639710 37518363 37496724 37383782
> ip : 0 0 0 0
> ip6 : 0 152038579 0 0
> tcp : 0 0 0 0
> udp : 0 151183691 0 0
>
> ring2:
> cpu0 cpu1 cpu2 cpu3
> ifinput : 39643992 37520994 37503514 37382789
> netisr : 0 0 114547782 0
> ether : 39643999 37520995 37503505 37382790
> ip : 0 0 0 0
> ip6 : 0 0 152051289 0
> tcp : 0 0 0 0
> udp : 0 0 151195834 0
>
> ring3:
> cpu0 cpu1 cpu2 cpu3
> ifinput : 39650698 37524393 37504604 37382778
> netisr : 0 0 0 114679712
> ether : 39650694 37524394 37504604 37382781
> ip : 0 0 0 0
> ip6 : 0 0 0 152062473
> tcp : 0 0 0 0
> udp : 0 0 0 151206219
>
> This table shows, for each ring, the number of packets received on
> that ring and processed by each layer on each CPU. For example, the
> following lines:
>
> ring0:
> cpu0 cpu1 cpu2 cpu3
> ifinput : 39648737 37519099 37491641 37382462
>
> mean that 39648737 packets received on ring0 were processed by
> ifp->if_input(), i.e. ether_input(), on CPU0.
>
> Because pktgen wasn't bound to a specific CPU, and ifp->if_input(),
> i.e. ether_input(), runs in pktgen's process context, the 'ifinput'
> row shows packets processed by ether_input() on every CPU.
>
> As the "ether" netisr handler's dispatch policy is
> NETISR_DISPATCH_DIRECT, the packets are processed by
> ether_nh_input() on the same CPU.
>
> As the "ip6" netisr handler's dispatch policy is
> NETISR_DISPATCH_HYBRID, a CPU is selected by rss_soft_m2cpuid_v6(),
> so the packets are processed by ip6_input() on the CPU selected by
> the RSS hash.
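The hash-to-CPU step can be sketched roughly as follows. This is a simplified Python illustration, assuming a power-of-two indirection table selected by the low-order hash bits; the function name and the bucket-to-CPU table are hypothetical, not FreeBSD's real API:

```python
def rss_hash2cpu(hash32: int, bucket_cpus: list[int]) -> int:
    # Pick an indirection-table bucket from the low-order bits of the
    # 32-bit RSS hash, then return the CPU bound to that bucket.
    # bucket_cpus is a hypothetical bucket -> CPU table whose length
    # must be a power of two.
    nbuckets = len(bucket_cpus)
    bucket = hash32 & (nbuckets - 1)
    return bucket_cpus[bucket]
```

With 4 buckets bound one-to-one to 4 CPUs, every packet sharing a hash (i.e. a flow) always lands on the same CPU, which is the property the test below verifies.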
>
> The packets injected into vme are assigned to a ring based on the
> RSS hash, so the packets received on each ring are processed by
> ip6/udp on the corresponding CPU. This matches what the table shows,
> so rss_soft_m2cpuid_v6() works.
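In other words, because the ring choice and the CPU choice both derive from the same hash, the per-ring ip6/udp counters should concentrate on exactly one CPU per ring, giving the diagonal pattern in the table above. A toy Python simulation of that effect, assuming 4 rings/CPUs and low-order-bit selection (purely illustrative):

```python
import random

def simulate(npkts: int = 10000, n: int = 4):
    # counts[ring][cpu]: packets received on `ring`, processed on `cpu`.
    counts = [[0] * n for _ in range(n)]
    for _ in range(npkts):
        h = random.getrandbits(32)
        # The same hash drives both the vme ring choice and the netisr
        # CPU choice, so the two indices always coincide.
        counts[h & (n - 1)][h & (n - 1)] += 1
    return counts
```

Running this produces a table whose off-diagonal entries are all zero, matching the shape of the netstat output above.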
>
> Sorry for my broken English... :-(
>
> Best regards,
> Tiwei Bie
>
> [1] https://svnweb.freebsd.org/socsvn/soc2015/btw/head/tools/tools/mq-testing
> [2] https://svnweb.freebsd.org/socsvn/soc2015/btw/head/tools/tools/mq-testing/udp
> [3] https://svnweb.freebsd.org/socsvn/soc2015/btw/head/tools/tools/mq-testing/vme
> [4] https://svnweb.freebsd.org/socsvn/soc2015/btw/head/usr.bin/netstat
Best regards,
Tiwei Bie
More information about the soc-status
mailing list