CPU architecture after Moore’s Law: What’s next?

By Lamont Wood

Contributing Writer, Computerworld | Jul 24, 2017 3:00 AM PT

When considering the future of CPU architecture, some industry watchers predict excitement, and some predict boredom. But no one predicts a return to the old days, when speed doubled at least every other year.

The upbeat prognosticators include David Patterson, a professor at the University of California, Berkeley, who literally wrote the textbook (with John Hennessy) on computer architecture. “This will be a renaissance era for computer architecture; these will be exciting times,” he says.

Not so much, says microprocessor consultant Jim Turley, founder of Silicon Insider. “In five years we will be 10% ahead of where we are now,” he predicts. “Every few years there is a university research project that thinks it is about to overturn the tried-and-true architecture that John von Neumann and Alan Turing would recognize, and unicorns will dance and butterflies will sing. It never really happens; we just make the same computers go faster and everyone is satisfied. In terms of commercial value, steady, incremental improvement is the way to go.”

They are both reacting to the same thing: the increasing irrelevance of Moore’s Law, which observed that the number of transistors that could be put on a chip at the same price doubled every 18 to 24 months. To fit more transistors, each one had to get smaller, which let it run faster, albeit hotter, so performance rose over the years, and so did expectations. Today, those expectations remain, but processor performance has plateaued.
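
To see what that doubling rule implies, here is a minimal Python sketch; the 18- and 24-month doubling periods come from the article, while the starting count (roughly 2,300 transistors, on the order of an early-1970s chip) is an illustrative assumption.

```python
# Illustrative projection of Moore's Law doubling. The doubling periods
# are from the article; the 2,300-transistor starting point is an
# assumption used only to give the numbers some scale.
def transistors(start_count, months_elapsed, doubling_period_months):
    return start_count * 2 ** (months_elapsed / doubling_period_months)

for years in (10, 20, 40):
    months = years * 12
    slow = transistors(2_300, months, 24)  # doubling every 24 months
    fast = transistors(2_300, months, 18)  # doubling every 18 months
    print(f"{years:>2} years out: {slow:,.0f} to {fast:,.0f} transistors")
```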

“Power dissipation is the whole deal,” says Tom Conte, a professor at the Georgia Institute of Technology and past president of the IEEE Computer Society. “Removing 150 watts per square centimeter is the best we can do without resorting to exotic cooling, which costs more. Since power is related to frequency, we can’t increase the frequency, as the chip would get hotter. So we put in more cores and clock them at about the same speed. They can accelerate your computer when it has multiple programs running, but no one has more than a few trying to run at the same time.”
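
Conte’s point that “power is related to frequency” is usually framed with the standard CMOS dynamic-power relation, P ≈ C·V²·f, which the article does not spell out; because higher clocks generally demand higher voltage as well, the heat to remove grows much faster than linearly. A minimal sketch with made-up capacitance, voltage, and frequency values:

```python
# Standard CMOS dynamic-power relation: P ~ C * V^2 * f.
# All numeric values below are illustrative assumptions, not figures
# from the article.
def dynamic_power_watts(capacitance_f, voltage_v, frequency_hz):
    return capacitance_f * voltage_v ** 2 * frequency_hz

base = dynamic_power_watts(1e-9, 1.0, 3e9)    # ~3 GHz at 1.0 V
# Raising the clock usually requires raising the voltage too, so the
# heat to be removed climbs roughly with the cube of the frequency.
fast = dynamic_power_watts(1e-9, 1.2, 4e9)    # ~4 GHz at 1.2 V
print(f"baseline {base:.1f} W, overclocked {fast:.1f} W "
      f"({fast / base:.1f}x the heat to dissipate)")
```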

The approach reaches the point of diminishing returns at about eight cores, says Linley Gwennap, an analyst at The Linley Group. “Eight things in parallel is about the limit, and hardly any programs use more than three or four cores. So we have run into a wall on getting speed from cores. The cores themselves are not getting much wider than 64 bits. Intel-style cores can do about five instructions at a time, and ARM cores are up to three, but beyond five is the point of diminishing returns, and we need new architecture to get beyond that. The bottom line is traditional software will not get much faster.”
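
Gwennap’s eight-core ceiling matches what Amdahl’s Law, the usual way to frame this (the article does not name it), predicts: if only part of a program can run in parallel, each added core buys less than the one before. A short sketch, assuming an illustrative 75% parallel fraction:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the program that can run in parallel and n is the core count.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 75% of the work parallelizable (an assumed figure), going from
# 8 to 16 cores adds very little.
for cores in (1, 2, 4, 8, 16):
    print(f"{cores:>2} cores: {amdahl_speedup(0.75, cores):.2f}x speedup")
```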

“Actually, we hit the wall back in the ’90s,” Conte adds. “Even though transistors were getting faster, CPU circuits were getting slower as wire length dominated the computation. We hid that fact using superscalar architecture [i.e., internal parallelism]. That gave us a speedup of 2x or 3x. Then we hit the power wall and had to stop playing that game.”
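
The superscalar trick Conte describes issues independent instructions in the same cycle, while a chain of dependent instructions cannot be overlapped. The toy scheduler below is an illustrative simplification (not a model of any real CPU) showing why a two-wide core roughly doubles throughput on independent work but gains nothing on a dependency chain.

```python
# Toy two-wide in-order issue model (an illustrative simplification,
# not a description of real hardware). Up to `width` instructions can
# issue per cycle, and an instruction waits until its inputs are ready.
def cycles_to_issue(instructions, width):
    """instructions: list of (dest_register, set_of_source_registers)."""
    ready_at = {}                       # register -> cycle value is ready
    cycle, issued_this_cycle = 0, 0
    for dest, sources in instructions:
        earliest = max((ready_at.get(s, 0) for s in sources), default=0)
        if earliest > cycle or issued_this_cycle == width:
            cycle = max(cycle + 1, earliest)
            issued_this_cycle = 0
        ready_at[dest] = cycle + 1      # result available next cycle
        issued_this_cycle += 1
    return cycle + 1

independent = [("r1", set()), ("r2", set()), ("r3", set()), ("r4", set())]
dependent = [("r1", set()), ("r2", {"r1"}), ("r3", {"r2"}), ("r4", {"r3"})]
print(cycles_to_issue(independent, width=2))  # 2 cycles: pairs issue together
print(cycles_to_issue(dependent, width=2))    # 4 cycles: the chain serializes
```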
