Dual Core Or Dual CPU - What's the real difference in performance?

Cy Schubert Cy.Schubert at komquats.com
Thu Feb 8 15:18:44 UTC 2007


In message <17866.47828.219523.71972 at bhuda.mired.org>, Mike Meyer writes:
> Generally, more processors means things will go faster until you run
> out of threads. However, if there's some shared resource that is the
> bottleneck for your load, and the resource doesn't support
> simultaneous access by all the cores, more cores can slow things
> down.
> 
> Of course, it's not really that simple. Some shared resources can be
> managed so as to make things improve under most loads, even if they
> don't support simultaneous access.

Generally speaking, the performance increase is not linear. At some point 
there is no benefit to adding more processors. In a former life, when I was 
an MVS systems programmer, the limit was seven processors in a System/370. 
Today we can use 16, 32, even 64 processors with a standard operating 
system and current hardware, unless one of the massively parallel 
architectures is used.

To answer the original poster's question, there are architectural 
differences, as mentioned here, e.g. shared cache, I/O channel, etc., but 
the reason the chip manufacturers make them is that they're more cost 
effective than two separate CPUs.

The AMD X2 series of chips (I have one) are not truly dual-processor 
chips. They're analogous in concept to a single-processor System/370 with 
an AP (attached processor). What this means is that both processors can 
execute all instructions and are just as capable in every way, except that 
external interrupts, e.g. I/O interrupts, are handled by processor 0, as 
only that processor is "wired" to be interrupted in case of an external 
interrupt. I can't comment on Intel's Dual Core CPUs as I don't know their 
architecture, but I'd suspect the same is true. In packages where there 
are two dual-core CPUs on the same die, I believe one core in each of the 
dual-core CPUs can handle external interrupts.

From an operating system perspective, an AP means that processor 0 will 
receive the interrupt and put it on its queue. Then either processor 0 or 
processor 1 will take the interrupt off the queue and do something with it.

To add another dimension to this discussion, hyperthreading uses spare 
cycles in a single processor to pretend there are two processors, 
increasing performance for some apps and reducing it for others. For 
example, Sun T2000 systems have multiple CPUs, each with multiple cores, 
and each core capable of hyperthreading, presenting 32 processors to 
Solaris where in fact there are only two CPU chips (I may have the 
numbers wrong, as I spend most of my time in "management" mode at work, 
and you know managers don't have brains).

Generally speaking, dual core is an inexpensive way to get SMP into the 
hands of people who could not normally afford SMP technology as it was. I 
have a mortgage, so spending money on computers is not a high priority by 
comparison, but dual core does give me an opportunity to enter the market 
relatively inexpensively and get good value for the money I spend on the 
technology. That's really what it's all about: how much performance you 
get for the money you spend.


-- 
Cheers,
Cy Schubert <Cy.Schubert at komquats.com>
FreeBSD UNIX:  <cy at FreeBSD.org>   Web:  http://www.FreeBSD.org

			e**(i*pi)+1=0
