nmbclusters: how do we want to fix this for 8.3 ?
jmallett at FreeBSD.org
Thu Feb 23 16:33:01 UTC 2012
On Thu, Feb 23, 2012 at 07:19, Ivan Voras <ivoras at freebsd.org> wrote:
> On 23/02/2012 09:19, Fabien Thomas wrote:
>> I think it is more reasonable to set up the interface with one queue.
> Unfortunately, the moment you do that, two things will happen:
> 1) users will start complaining again how FreeBSD is slow
> 2) the setting will become a "sacred cow" and nobody will change this
> default for the next 10 years.
Is this any better than making queue-per-core a sacred cow? Even very
small systems with comparatively little memory these days have an
increasing number of cores. They also usually have more RAM to go
with those cores, but not always. Queue-per-core isn't even optimal
for some kinds of workloads, and is harmful to overall performance at
higher levels. It also assumes that every core should be utilized for
the exciting task of receiving packets. This makes sense on some
systems, but not all.
Plus, more queues don't necessarily mean better performance even on
systems where you have the memory and cores to spare. On systems with
non-uniform memory architectures, routinely processing queues on
different physical packages can make networking performance worse.
More queues are not a magic wand; they can be roughly the equivalent
of go-faster stripes. Queue-per-core has a sort of logic to it, but it
is not necessarily sensible, rather like the funroll-all-loops school
of system administration.
That sounds slightly off-topic, except that dedicating loads of mbufs
to receive queues that will sit empty on the vast majority of systems,
and receive a few packets per second, in the service of some kind of
magical thinking or excitement about multiqueue reception may be a
little ill-advised. On my desktop with hardware supporting multiple
queues, do I really want to use the maximum number of them just to
handle a few thousand packets per second? One core can do that just
fine.
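To put rough numbers on that worry, here is a back-of-the-envelope
sketch. The core count, ring size, and cluster size below are
assumptions chosen as typical values, not measurements of any
particular driver:

```shell
# Rough cost of queue-per-core receive rings. Assumed (hypothetical
# but typical) numbers: 8 cores, one RX queue per core, 1024
# descriptors per ring, one 2 KB mbuf cluster kept per descriptor.
cores=8
ring_size=1024
cluster_bytes=2048

# Every descriptor in every ring holds a cluster even when the
# queue never sees traffic, so this memory is pinned at all times.
clusters=$(( cores * ring_size ))
echo "clusters pinned: ${clusters}"
echo "memory pinned: $(( clusters * cluster_bytes / 1024 / 1024 )) MB"
```

The pinned total grows linearly with cores and ring size, which is
exactly the kind of pressure on nmbclusters this thread is about.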
FreeBSD is great to drop in on forwarding systems that will see
moderate load, but the best justification for this default seems to be
that users need fewer reboots to get more queues, spreading what is
assumed to be an evenly-distributed load over more cores. In
practice, isn't the real problem that we have no facility for changing
the number of queues at runtime?
If the number of queues weren't fixed at boot, users could actually
find the number that suits them best with a plausible amount of work,
and the point about FreeBSD being "slow" goes away, since it's perhaps
one more sysctl (global or per-interface) to set, or one ifconfig line
per interface to run, along with enabling forwarding, etc.
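If such a facility existed, the workflow might look something like
this. To be clear, neither knob below exists today; both names are
invented purely to illustrate the shape of the interface being argued
for:

```shell
# Hypothetical runtime tuning -- these tunables do NOT exist in
# FreeBSD; the names are invented for illustration only.
sysctl dev.igb.0.num_queues=2    # hypothetical per-device sysctl
ifconfig igb0 rxqueues 2         # hypothetical ifconfig verb
```

Either form would let a user iterate toward a good queue count without
rebooting, instead of committing the maximum at boot.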
The big commitment that multi-queue drivers ask for when they use the
maximum number of queues on boot and then demand to fill those queues
up with mbufs is unreasonable, even if it can be met on a growing
number of systems without much in the way of pain. It's unreasonable,
but perhaps it feels good to see all those interrupts bouncing around,
all those threads running from time to time in top. Maybe it makes
FreeBSD seem more serious, or perhaps it's something that gets people
excited. It gives the appearance of doing quite a bit behind the
scenes, and perhaps that's beneficial in and of itself, and will keep
users from imagining that FreeBSD is slow, to your point. We should
be clear, though, whether we are motivated by technical or
psychological constraints and benefits.
More information about the freebsd-stable mailing list