
NSF Workshop On Design Automation and Theory

July 8, 2009


A report from an NSF workshop on design automation and theory, and a video


Carver Mead is famous for creating the “Mead-Conway” revolution with Lynn Conway. In 1980 they published the best-selling textbook “Introduction to VLSI Systems,” which allowed almost anyone to design a VLSI chip. Carver has won many awards, including the National Medal of Technology, for this and related work.

Today I want to talk about the NSF workshop that is going on right now: Electronic Design Automation–Past, Present, and Future. One goal of the workshop is to get theorists working again on various aspects of design automation. Back in the 1980s theory played a major part in design automation; somehow that stopped years ago.

In the 1980s it really was possible to design chips with their book as a guide. Really. My students at the time designed chips that actually worked. Dan Lopresti’s Princeton Ph.D. thesis was based, in part, on building a chip that solved the edit distance problem via a parallel algorithm. I have mentioned his work earlier, in a discussion of this problem.

Talk Highlights: Avant Lunch

Jeannette Wing, the head of CISE, started the workshop off and said as clearly as possible:

“NSF is interested in long term, high risk, research that will change the world.”

I love her statement, and I believe that there are great opportunities for theory to help make this happen.


So far there have been a number of interesting talks:

• Bill Joyner, the keynote speaker, who is from industry, had some neat statistics on FOCS/STOC papers. In the 1980s a major fraction, about 30%, of those papers were on VLSI and algorithms for design. At the most recent FOCS/STOC he counted less than 1%. Quite a dramatic change.

• Andreas Kuehlmann, also from industry, made two interesting points. First, he repeated a rule that I learned as a graduate student. Suppose someone says that X is true. Here is a simple test to see whether the statement has any meaning: think of not-X. If not-X is obviously false or obviously true, then X is meaningless. Pretty good rule.

Second, he reported on a really interesting experiment. He took a circuit C and fed it to a tool that makes a chip out of the circuit. The size of the chip that the tool yields is a measure of the tool’s quality. He got some value. Then he changed the circuit in a completely trivial manner: he just renamed the circuit’s nodes. That’s all. Next, he ran the tool again. Then he renamed the nodes again, and ran the tool again. And so on. He plotted all the values. The surprise was that the area values changed dramatically–even though the circuit was really the same. The whole audience laughed, except for me.

I thought he had discovered something positive, while he considered the behavior of the tool a problem. My hypothesis is that the tool is deterministic, but randomly renaming the nodes essentially converts it into a randomized algorithm. Andreas’s experiments show that the minimum over several random runs of the tool is better than a single run. Clearly, we should think about adding randomness to other design tools.
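To make this concrete, here is a minimal sketch of the best-of-k idea in Python. Everything in it is hypothetical: the circuit is just an adjacency map, and `run_tool` stands in for the real layout tool, returning the chip area.

```python
import random

def relabel(circuit, rng):
    """Return the same circuit with node names randomly permuted.
    Nothing structural changes; only the labels differ."""
    names = list(circuit)
    shuffled = names[:]
    rng.shuffle(shuffled)
    rename = dict(zip(names, shuffled))
    return {rename[u]: [rename[v] for v in nbrs]
            for u, nbrs in circuit.items()}

def best_of_k(circuit, run_tool, k=10, seed=0):
    """Feed k random relabelings to the deterministic tool and keep
    the smallest area: the tool becomes a randomized algorithm."""
    rng = random.Random(seed)
    return min(run_tool(relabel(circuit, rng)) for _ in range(k))
```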

• Carl Seger, of Intel, had a nice number–a big number. He said that the just-announced Intel processor has 2.6 billion transistors. In 1980, when the Mead-Conway revolution started, the number of transistors was 20,000. Carl says that in ten years Intel chips will have over one trillion transistors. For theory research we want n as large as possible. If we work hard on a problem and improve the running time by just a logarithmic factor, then for n’s of this size that really matters. Even better, it would be terrific if we could reduce the running time of a quadratic algorithm to near-linear time.
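A quick back-of-envelope, assuming n = 10^12 and a machine doing 10^9 operations per second, shows why this matters:

```python
from math import log2

n = 10**12                     # a trillion transistors
ops_per_sec = 10**9            # assumed machine speed

quadratic = n * n              # ~1e24 operations
near_linear = n * log2(n)      # ~4e13 operations

print(f"n^2:     {quadratic / ops_per_sec / 3.15e7:.1e} years")
print(f"n log n: {near_linear / ops_per_sec / 3600:.1f} hours")
```

Roughly thirty million years versus half a day.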

• Edmund Clarke, Turing Award winner, talked about statistical checking of certain kinds of systems. It was unclear from his short talk exactly what he has in mind, but I believe his idea is this: imagine that you have a Markov-type system. You do not want to know the exact probability p of some event. Instead you want to tell whether p ≤ θ₁ or p ≥ θ₂. He says he can use Bayesian analysis to handle this type of question with relatively few simulations. One area of application is not digital logic, but biological systems. This seems to be the kind of question that theorists could make progress on.
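Clarke’s method, as I understand it, is Bayesian; as a sketch of the flavor, here is the closely related Wald sequential probability ratio test, which decides between p ≤ θ₁ and p ≥ θ₂ from simulation outcomes alone. The `simulate` callback is a stand-in for one run of the system.

```python
import math, random

def sprt(simulate, theta1, theta2, alpha=0.01, beta=0.01):
    """Wald's SPRT: decide H0: p <= theta1 vs H1: p >= theta2 with
    error rates alpha, beta, using as few simulation runs as possible."""
    lo = math.log(beta / (1 - alpha))   # accept H0 at or below this
    hi = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr, runs = 0.0, 0                  # log-likelihood ratio, H1 vs H0
    while lo < llr < hi:
        runs += 1
        if simulate():                  # event occurred in this run
            llr += math.log(theta2 / theta1)
        else:
            llr += math.log((1 - theta2) / (1 - theta1))
    return ("p >= theta2" if llr >= hi else "p <= theta1"), runs

# Toy usage: the true event probability 0.005 is below theta1 = 0.01.
random.seed(0)
decision, runs = sprt(lambda: random.random() < 0.005, 0.01, 0.05)
print(decision, "after", runs, "runs")
```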

• Shaz Qadeer, of Microsoft Research, spoke on Heisenbugs–a term due to the late Jim Gray. These are bugs that go away when you try to find them. The name, clearly, comes from the Heisenberg uncertainty principle, due to Werner Heisenberg. The reason these bugs “go away” is that your test code tends to change the order in which the program executes.

Shaz has a system called CHESS that lets the scheduler of the system help you find Heisenbugs. In distributed systems he can find certain “race conditions.” The main problem is the usual state-explosion problem: with t threads and n steps there are up to t^n ways to execute the code, since at each step the scheduler picks one of the t threads; a toy enumeration after this item shows the blow-up. I think there are some neat theory-type questions here.

In answering a question he said: “at some level I am lying.” Is this a paradox? Or is it just that it is near lunch and I am distracted?
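As promised above, a toy illustration of the explosion. This brute-force enumeration is nothing like CHESS itself, which tames the search by bounding preemptions; it just counts schedules.

```python
from itertools import permutations

def schedules(threads):
    """Enumerate every interleaving of the per-thread step sequences;
    each thread's own program order is preserved."""
    ids = [i for i, steps in enumerate(threads) for _ in steps]
    traces = set()
    for order in set(permutations(ids)):   # distinct scheduler choices
        counters = [0] * len(threads)
        trace = []
        for tid in order:
            trace.append(threads[tid][counters[tid]])
            counters[tid] += 1
        traces.add(tuple(trace))
    return traces

# Two threads with two steps each already give 6 schedules; with t
# threads and n total steps the count is at most t^n, and it grows
# exponentially.
print(len(schedules([["a1", "a2"], ["b1", "b2"]])))   # -> 6
```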

Talk Highlights: Après Lunch

It’s after lunch, a pretty nice buffet, and here are some highlights of the talks:

• Jaijeet Roychowdhury gave a talk on analog systems. The highlight of his talk was a real-time demonstration of how biological systems can become synchronized. He took three metronomes and started them ticking on a simple platform on the podium, balanced on two soda cans. Initially they were all running at the same rate, but they were not in sync. In less than a minute they were all running completely in sync. Very cool demo. Clearly, the serious question is how to use local behavior to create global behavior.
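The textbook model for this kind of demo is the Kuramoto model of coupled oscillators; here is a minimal sketch (the talk may well have used a different model). Three oscillators with slightly different natural frequencies lock together once the coupling is strong enough.

```python
import math, random

def kuramoto(n=3, coupling=2.0, dt=0.01, steps=5000, seed=1):
    """Kuramoto model: n oscillators with random phases pull each
    other toward a common rhythm, like metronomes on a loose board."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(2 * math.pi, 0.1) for _ in range(n)]  # ~1 tick/sec
    for _ in range(steps):
        # d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
        theta = [t + dt * (w + coupling / n *
                           sum(math.sin(s - t) for s in theta))
                 for t, w in zip(theta, omega)]
    # Order parameter r: 0 means incoherent, 1 means perfectly in sync.
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

print(f"r = {kuramoto():.3f}")   # close to 1: the oscillators locked
```

With the coupling set well above the critical value, the order parameter comes out near 1, which is the synchrony the metronomes showed.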

• Jason Hibbeler, of IBM, talked about rule-driven design: if you follow the rules, then we can make your design work. Traditionally, the rules were always monotone. For example, if two wires are too close, they will not work; if they are far apart, they will always be okay. Now the fancy lithography used to make chips–I do not follow this–causes the rules to become non-monotone. This raises all sorts of new geometric algorithmic problems.
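To illustrate what monotone means here, a small sketch with made-up numbers: the classical rule is monotone in the spacing, while a “forbidden pitch” band, of the kind sub-wavelength lithography can create, is not.

```python
def monotone_rule(spacing, min_spacing=3.0):
    """Classical rule: once wires are far enough apart, they stay
    legal. Legality is monotone in the spacing."""
    return spacing >= min_spacing

def non_monotone_rule(spacing, min_spacing=3.0, bad_band=(5.0, 6.0)):
    """Illustrative lithography-era rule: a forbidden band can make a
    once-legal spacing illegal again. All numbers are made up."""
    lo, hi = bad_band
    return spacing >= min_spacing and not (lo <= spacing <= hi)

for s in (2.0, 4.0, 5.5, 8.0):
    print(s, monotone_rule(s), non_monotone_rule(s))
```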

• Igor Markov, of the University of Michigan, spoke on design automation and its connection to physics. In short, he is interested in the power and simulation of non-standard physical systems. When he talked about quantum computing he referred to someone named Scott Aaronson: “Closed Timelike Curves Make Quantum and Classical Computing Equivalent,” a paper with John Watrous.

Igor’s main idea is to try to create efficient simulations of new non-standard physical systems. Clearly, a very theory-based talk with lots of connections to things theorists are interested in.

• Mary Jane Irwin, of Penn State, stated that as systems get built out of very small components, you need to be able to reconfigure the system–to get around faults, for example. The system must be able to monitor itself and also control itself. She likes to call such a system a “compute fabric.” Where do the sensors go? This is a design automation issue. She also raised the issue of 3-dimensional fabrics, which will further stress existing design tools. This clearly raises many interesting geometric algorithmic questions.

• David Pan, of the University of Texas at Austin, talked about Moore’s Law. There have been many “lower bound proofs” claimed over the years for lithography limits. Computer chips are made via an optical process; since the ultraviolet light used has a wavelength of 193nm, there were proofs that chip features could not get below 193nm. Now the processes are at 22nm or better. How it is done is very clever, and of course it gets around the assumptions that were used in the lower bound proofs. This, I think, says something interesting about lower bound “proofs” in general.

Talk Highlights: Biology

It’s the end of the day and I am tired, so I cannot say that anything from here on will make sense.

• Jim Heath, of Caltech, discussed how to manage personal biological data, and how to use the data for medical diagnosis and treatment. An evolving area, and one that is ripe for algorithmic applications, he claims. The major data are DNA, mRNA, and proteins. A complete personal DNA sequence will soon cost $1,000; an mRNA measurement costs $1 now; a protein measurement costs about $50 now. Interesting questions concern how to build network models of biological processes. A key point is that most interactions are pairwise, between proteins. He said that there is only one known example of three proteins that interact together. This raises the hope that powerful theory ideas could play a role here.

• Chris Myers, of the University of Utah, spoke about synthetic biological circuits. There is a catalogue of biological “parts” that people have discovered; they seem to work by themselves inside certain bacteria. The next step is to try to use them to make more complex biological circuits. The parts tend to be extremely error-prone, which raises interesting questions of how to design biological circuits: it sounds like a fault-tolerance problem. The first Bio-Design conference is this July 27th.
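On the fault-tolerance angle, a back-of-envelope calculation in the von Neumann spirit: triplicate an unreliable part and take a majority vote. Assuming independent failures and a perfect voter, the voted error beats the raw error whenever p < 1/2.

```python
def tmr_error(p):
    """Probability a 2-of-3 majority vote is wrong, given each copy
    independently fails with probability p (perfect voter assumed)."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.2, 0.1, 0.01):
    print(f"part error {p:.2f} -> voted error {tmr_error(p):.4f}")
```

Triplication only helps when the part is right more often than not; for really noisy parts one needs deeper redundancy, which is exactly the design question.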

• Lou Scheffer, of industry, spoke on trying to figure out how the brain works. A modest goal. His approach is to take it apart and see how it is made. Basically, he is interested in reverse engineering, traditionally a method used by a company to figure out how another company’s chips work. Lou’s idea is to apply reverse-engineering methods from electronics to biological reverse engineering.

Open Problems

The main goal is to get theory more involved in the area of design automation. There seem to be many possible ways that theory could help.

Tomorrow is another day and as a group we will try to make progress on what to do next. I will report on that another time.
