These days, computer CPUs have multiple cores. This simply means there's more than one processor core in a single physical CPU chip. Years ago, when extremely high performance was required, the answer was a computer with two separate CPUs. These days, the two processors are combined in one chip and called a multi-core CPU.
So why has the multi-core CPU gone mainstream? The same reason many things happen: economics. For decades, the way to increase computing power was to make faster processors. Faster processors draw more electricity, and electricity generates heat. At the same time, computers were getting smaller, and more heat in a smaller space means trouble. Cooling the CPU with fans and heatsinks wouldn't be sufficient to keep it from melting itself.
Instead of moving away from air cooling, manufacturers simply put two processors into one CPU. Does this give you twice the computing power? In theory, but certainly not in practice. A program not written for a multi-core processor will simply run on one of the two cores. Granted, a second program can run on the other, but if you want to get one job done as quickly as possible, simply buying a multi-core processor isn't the way to do it. Instead, the program needs to be rewritten to divide the work up into discrete parts that can be handed out to multiple cores. That does bring a natural gain in efficiency; many hands make light work, and all that. Even so, there's overhead in coordinating multiple cores that cuts into the gain. A 2 GHz dual-core processor does not get the same work done as a 4 GHz single-core processor. But the reason multi-core processors have overrun the industry is that a dual-core processor does get more done than a single-core processor of the same speed, and despite the added complexity, it's a heck of a lot cheaper than a single-core processor of double the speed.
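As a rough illustration of that divide-and-distribute idea, here's a minimal Python sketch. It's an assumed example, not anything from the article: the job (summing squares over a range) is arbitrary and just stands in for work worth parallelizing. It splits the job into one chunk per core and hands the chunks to a pool of worker processes.

```python
# A minimal sketch (hypothetical example) of splitting one job into discrete
# chunks and handing them out to multiple cores with Python's standard
# multiprocessing module.
from multiprocessing import Pool, cpu_count

def sum_squares(bounds):
    # One discrete piece of the work: sum of squares over [start, stop).
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    cores = cpu_count()

    # Divide the work into one chunk per core.
    step = n // cores
    chunks = [(i * step, n if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]

    # Hand the chunks to a pool of worker processes (one per core),
    # then combine the partial results.
    with Pool(processes=cores) as pool:
        total = sum(pool.map(sum_squares, chunks))

    print(f"{cores} cores, total = {total}")
```

Even in a toy like this, the overhead of starting the workers and collecting the partial results is real, which is exactly why two 2 GHz cores don't add up to one 4 GHz core.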
Dual-core processors are the standard, and quad-core processors are available for workstation and server needs. Is this the way things will go? How many cores will we have on our desks in the future? Sixteen? Sixty-four? Six hundred and forty?
Not any time soon, according to “Analysis: more than 16 cores may well be pointless,” an Ars Technica article by Jon Stokes. He points out that the ability of memory to supply data to the CPU has not been keeping pace with the increasing number of cores and their increasing efficiency. At some point, processing power will go unused because cores will not be able to get data to process. He calls this the ‘memory wall.’
Stokes cites a study by Sandia National Laboratories, reported in an IEEE Spectrum article, when he says,
8 cores is the point where the memory wall causes a fall-off in performance on certain types of science and engineering workloads (informatics, to be specific). At the 16-core mark, the performance is the same as it is for dual-core, and it drops off rapidly after that as you approach 64 cores.
Granted, these figures are for certain, and presumably specialized, kinds of scientific and engineering work, but surely the same limit will apply to more general tasks at some point as well.
So we've hit the speed wall and attacked the problem by moving to a parallel architecture. When we hit the memory wall, how will we tackle it? Will the effort shift to memory architectures that can move data much faster? Surely that would only push the memory wall further out rather than destroy it forever. What then?