Who Wants Parallel Computers?
An announcement of a DIMACS conference on parallel computation
Gordon Bell is of course not a theorist, but he is one of the great computer designers of all time. He helped design machines that were used by millions, especially the early PDP family and the VAX family, during his years at Digital Equipment Corporation (DEC). His 1971 book “Computer Structures: Readings and Examples,” written with Allen Newell, was an early classic on computer architecture. When I was a student it was “the” book we all studied.
Today I would like to help announce an event that will happen next year—an event that is related to Gordon’s work. I usually do not announce events, but there is always room for exceptions, so here goes.
Gordon likes to get straight to the point; he is quite strong in his opinions, and he is usually right. He is famous for listening to talks that predict some technology X will soon change the world, and then interrupting the speaker with:
“I am willing to bet dollars that X will not be in use in five years.”
He has won almost all of these bets, lost one or two, and had a few people fail to pay up after they lost. Of course, behind the personal bet there were often millions of dollars of research, venture capital, and engineering effort essentially betting on which direction the industry would take. This post is about just such an issue: will raw speed give way to multi-core parallel processing, and will that drive up the use of fault-tolerance technology?
I recall attending a meeting called Nextgens Technologies in December 2006. It is a yearly meeting held by the TTI/Vanguard group on various new technologies, as you might guess from its title. One of the invited speakers was an expert on processor design from UCLA, Eli Yablonovitch. He was giving a very technical talk—talks at this meeting range from very technical to very general—on why extremely fast processors were impossible to make. Essentially he was explaining the physics behind the collapse of Moore’s Law and the rise of many-core systems. The short answer is power: if chips are clocked much faster than today’s rates, they will burn up. No amount of cooling could stop them from melting.
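A rough sketch of the physics behind this claim (my gloss, not Yablonovitch’s exact argument): the dynamic power dissipated by a CMOS chip grows roughly as

\[ P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f, \]

where \(\alpha\) is the switching activity, \(C\) the switched capacitance, \(V\) the supply voltage, and \(f\) the clock frequency. Since raising \(f\) generally requires raising \(V\) as well, power grows much faster than linearly in clock rate, and all of that power ends up as heat.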
Gordon did not challenge Yablonovitch with a bet, nor did he challenge his arguments. But he did say,
I do not want parallel computers. I want faster uni-processors.
Yablonovitch listened, nodded to Gordon, and answered: you may want uni-processors that are very fast, but many-core is what you are going to get.
For just a small example of what Gordon meant, suppose you are developing a chess program, where speed is paramount. Which would you rather have: two cores, or a uniprocessor that is twice as fast? The world champion Rybka chess program advertises honestly that it gets only 71% of the benefit of doubling the number of cores. The other 29% is lost to the inherent difficulty of parallelizing chess search, to communication between cores, and to system management. The predictions are that this will only get much worse as the number of cores increases from two to thousands.
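To get a feel for the numbers, here is a small back-of-the-envelope sketch in Python. It is not based on Rybka’s actual measurements; it simply assumes, as one reading of the 71% figure, that each doubling of cores multiplies effective speed by about 1.71 rather than the ideal 2, and shows how far from perfect scaling that leaves a machine as the core count grows.

```python
# Back-of-the-envelope sketch: effective speedup when each doubling of
# cores gives only about 71% of the ideal benefit (an assumed reading of
# the Rybka figure, not actual measurements).

IDEAL_FACTOR = 2.00      # perfect scaling: doubling cores doubles speed
OBSERVED_FACTOR = 1.71   # assumed: "71% of the benefit of doubling"

for doublings in range(11):              # 1 core up to 1024 cores
    cores = 2 ** doublings
    ideal = IDEAL_FACTOR ** doublings    # ideal speedup equals core count
    observed = OBSERVED_FACTOR ** doublings
    print(f"{cores:5d} cores: ideal {ideal:7.1f}x, "
          f"observed ~{observed:6.1f}x ({observed / ideal:5.1%} of ideal)")
```

Under this toy model a thousand-core machine delivers only about a fifth of its nominal speed, which is exactly the kind of gap Gordon was complaining about.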
Let’s turn to the announcement that is all about the rise of many-core.
Many-Core: The Future
Phil Gibbons, Howard Karloff, and Sergei Vassilvitskii are organizing a workshop entitled Parallelism: A 2020 Vision, to be held March 14-16 at DIMACS. Howard says the goal of the workshop is to bring together both users and researchers to discuss:
- how parallel computing in its various forms is used today;
- what new uses and programming abstractions will arise by 2020;
- what parallel computers will look like in 2020; and
- how to model parallelism theoretically.
I think the workshop looks interesting, and I have no doubt that it will be fun to attend. If you are interested, get in touch with DIMACS or Howard; as usual, all are welcome.
I would like to make one prediction about the future of computers. The rise of many-core and the construction of systems with huge numbers of chips will lead to the following: fault-tolerant computation will become more important, both in practice and in theory. This is a safe prediction and a dangerous one. It is safe because 2020 is a long way from now. I could predict we will all be driving space cars like on the Jetsons and still be safe, since who will remember what I predict today?
It is dangerous because fault-tolerant computing has been around for a long time, has been predicted to be “just around the corner” many times, and yet has gone largely unused in most systems.
In the early days of computers, when they used vacuum tubes, which were very unreliable, John von Neumann wrote a famous 1952 paper on fault-tolerance methods, titled “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components.” This work is justly famous, and fault-tolerant computing methods are in vogue again. However, one of the reasons that von Neumann’s work did not have much practical impact is that the technology quickly changed: tubes became transistors, which became microchips. Each of these changes yielded huge increases in reliability. Perhaps that will happen again, or perhaps it will not and fault-tolerant methods will be needed in the future.
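To illustrate the basic redundancy idea, here is a small Python sketch. It captures only the flavor of von Neumann’s approach (his actual multiplexing construction is more involved): run several independent copies of an unreliable gate and take a majority vote, which drives the error rate down sharply.

```python
import random

def unreliable_and(a, b, error_rate=0.05):
    """An AND gate that flips its output with probability error_rate."""
    result = a and b
    return (not result) if random.random() < error_rate else result

def voted_and(a, b, copies=5, error_rate=0.05):
    """Run several independent copies of the gate and take a majority vote."""
    votes = sum(unreliable_and(a, b, error_rate) for _ in range(copies))
    return votes > copies // 2

# Estimate how often each version gets AND(True, True) wrong.
trials = 100_000
single_errors = sum(not unreliable_and(True, True) for _ in range(trials))
voted_errors = sum(not voted_and(True, True) for _ in range(trials))
print(f"single gate error rate : {single_errors / trials:.4f}")
print(f"5-way voted error rate : {voted_errors / trials:.4f}")
```

With a 5% gate error, the five-way vote errs roughly once in a thousand trials; the catch, which was precisely von Neumann’s concern, is that reliability is bought with many more components.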
Many-Core: The Future?
I wonder if many-core is the final answer. I do not doubt that we will see more and more processors on a single chip. But I do wonder whether predictions of the type made by Yablonovitch are absolute. I have talked before about how hard it is to prove that something is impossible. This is true in mathematics and in complexity theory, but it is even harder for physical systems. What about some breakthrough that allows chips to run cooler? Or allows them to use totally different methods to implement gates? I wonder.
Perhaps one of the topics at the planned workshop should be how to make faster uni-processors. I hope that you attend the workshop if you are interested in this area. Maybe we can do a post on the workshop in the spring.