NSF Workshop Summary

July 9, 2009


The second day of the NSF workshop on the role of theory in design automation

Lynn Conway, with Carver Mead, also played a major role in the VLSI revolution; she is the other half of the Mead-Conway revolution. This workshop would not be happening if it were not for her wonderful work.

Today I want to summarize the last half day of the NSF workshop. We had multiple breakout groups to discuss what NSF should do in this area.

I once stayed at the Hilton next to NSF for another panel. As many hotels like to do, this Hilton had a computer monitor with the events of the day displayed. I looked up and saw the following on the monitor:

Bureau of Prisons: Breakout Sessions 10am-11am.

I cracked up and pointed this out to my colleagues. I think they smiled, but did not find it as funny as I did. Oh well.

Our Group Discussions

There was a discussion on one of the big puzzles in theory: Why do SAT solvers of various kinds work so well in practice? It was pointed out that real-world problems seem to be easy to solve with various SAT solvers. So what does this mean for P=NP?

There was a long discussion on stochastic systems. Perhaps this can be linked with the theorists who work on Rapidly Mixing Markov Processes. Many of the future design problems are stochastic for a variety of reasons. How do we check the properties of such systems? Applications include reasoning about power consumption of chips or biological systems, since such systems are not deterministic.
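
To make the kind of question concrete, here is a minimal sketch (mine, not anything presented at the workshop) of the simplest way to check a probabilistic property of a stochastic model: straightforward Monte Carlo estimation of the chance that a block's power draw exceeds its budget. The lognormal power model and every parameter in it are invented purely for illustration.

    import random

    def sample_power_draw(rng):
        # Toy stochastic model of one block's power draw, in watts.
        # The lognormal form and its parameters are made up for illustration.
        return 2.0 * rng.lognormvariate(mu=0.0, sigma=0.25)

    def estimate_violation_probability(budget_watts, trials=100000, seed=0):
        """Estimate Pr[power draw > budget] by plain Monte Carlo sampling."""
        rng = random.Random(seed)
        violations = sum(sample_power_draw(rng) > budget_watts for _ in range(trials))
        return violations / trials

    if __name__ == "__main__":
        p = estimate_violation_probability(budget_watts=3.0)
        print("Estimated probability of exceeding the power budget:", p)

Real verification of stochastic systems needs far more than this, of course; the point is only that the properties of interest are probabilities and expectations rather than yes/no assertions.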

Theory Report

Vijaya Ramachandran and Suresh Venkatasubramanian and I were charged with putting together a half hour report on what role theory can play in design automation. We divided the report like a bad version of Charles Dickens’ Christmas Carol: I did the past, Suresh the present, and Vijaya the future.

• I spoke on why theory played a major role in design automation in the 1980’s. I claimed that it was because theorists actually designed chips, had them fabricated, and tested them. I said that we need to do something like this today, if we want to get theorists re-involved. There was at best mixed support. The major question was: but chips are very complex… This was the same comment that we got in the 1980’s. As Yogi Berra says, “This is like déjà vu all over again.”

• Suresh spoke on the present state of theory. He pointed out that theory now has many ideas that could possibly help, including randomness, approximation algorithms, streaming, sub-linear methods, and others. I think the group liked his presentation much more than mine.

• Vijaya spoke last on the future of theory. Well, actually she spoke mostly on her own work on multicore algorithms, which is definitely part of the future of theory. She, and others, are working hard to build a powerful theory of how to use multicore processors for real algorithms.

All Hands Discussions

Here are the reports from the four group leaders of the breakout groups. Well, not their full reports, but a high-level view of what they had to say.

• Pinaki Mazumder gave an overview of the theory issues in design automation. The theoretical questions in design automation have moved over the years from discrete mathematics to more continuous mathematics. The major areas he talked about were: verification methods based on SAT and other tools; analog design methods based on stochastic differential equations; simulation based on many different tools; and finally the new area of design of biological systems.

• Sharad Malik spoke on the key research challenges. Scalable design methods are needed, and they require new kinds of algorithms: sub-linear methods, incremental algorithms, and parallel algorithms. A surprising requirement, unusual from a theory perspective, is that they want all their algorithms to be deterministic. The reason is that if one tool is nondeterministic, then the design may change and mess up some later tool. I think this is a unique challenge for theory, since randomized algorithms are so basic to us.

A very hot topic (a bad pun) is power viruses, which use power to destroy a chip. The idea is that there could be a strange program whose execution drives power consumption far beyond what was planned for. If this can happen, then a virus can destroy your chip. Quite an interesting security question. Can you say cyber-warfare?

• Jason Cong spoke on what EDA (electronic design automation) is. The main idea is that it is a set of tools that go from a high-level description of a circuit down to an actual physical implementation.

• Bob Brayton spoke on the curriculum for design automation. The number of students interested in this area has dropped considerably in the last few years, few positions are available in either academia or industry, and the future is not clear.

Open Problems

One of the recurrent themes of the discussion is: we have SAT solvers that seem to work on all the examples we try them on. How does this fit with the conventional wisdom that P≠NP? This is a very interesting open problem that we should work on in the future.

Last post on NSF. Back to usual for next post.

26 Comments
  1. Michael Mitzenmacher
    July 9, 2009 4:13 pm

    I admit some confusion as to why “randomness” is considered such a problem? In the real world, usually randomness is pseudo-randomness (often something simple, perhaps lacking in provable properties but functional nonetheless), and you can essentially make your “randomized algorithm” deterministic by remembering the small seed. So why is randomness going to be a problem?

    (I’m not being flip — I’m really interested! If they view it as a problem, there’s probably some research to be done to make it not a problem in this context.)

    Comment cc’ed at Suresh’s blog.

    • Suresh
      July 9, 2009 6:21 pm

      We need a way to link comments across blogs!

      To answer your question, Michael, my understanding is this is part user driven and part problem driven.

      From the system perspective, there’s an entire pipeline of components that go into making a design, and they like to fiddle with these pieces repeatedly (the incremental part). Having a randomized element that could change its answer the next time the whole pipeline is compiled is a problem.

      Now this can be changed with a little tweaking: as you mention, as long as the seed can be preserved, an algorithm can be run in “randomized” or “deterministic” mode: in the latter form, it merely uses the cached seed (see the sketch after this comment).

      But here’s the user side of the story. They want these designs to be portable across different machines/systems, so that different users, running with the same “knobs”, get similar answers.

      This is because the user of such a tool is faced with a bewildering array of knobs, and over time has become accustomed to setting certain knobs a certain way. Changing things around makes them unhappy.

      Now this latter problem is sociological, and is difficult to deal with for us. But I suspect the best approach is to ignore it for now. It’s really the companies that are concerned with satisfying their user base, and it’s not clear to me why research-level tools need to live under the same constraints. My personal view is that the underlying facilitating technologies for EDA research appear to be very tied to corporate products, and this is somewhat of a problem. To me it’s as if all database researchers had to use Oracle, and couldn’t really complain about it if it wasn’t very useful.

      Cc’d over at my place.
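
    Here is a minimal sketch, in Python, of the seed-caching pattern discussed in the two comments above: the tool draws a fresh seed the first time it runs, records it with the design, and later runs replay that seed so the output is reproducible. The randomized_placement routine is a made-up stand-in for any randomized step in an EDA pipeline.

        import random

        def randomized_placement(cells, seed):
            """Stand-in for a randomized design step: here, just a random ordering.

            Given the same seed it always returns the same result.
            """
            rng = random.Random(seed)
            placed = list(cells)
            rng.shuffle(placed)
            return placed

        def run_step(cells, cached_seed=None):
            """Run in 'randomized mode' (fresh seed) or 'deterministic mode' (cached seed)."""
            seed = cached_seed if cached_seed is not None else random.randrange(2**32)
            return randomized_placement(cells, seed), seed  # the caller stores the seed with the design

        if __name__ == "__main__":
            cells = ["alu", "regfile", "decoder", "cache"]
            first, seed = run_step(cells)                   # randomized mode; seed is recorded
            replay, _ = run_step(cells, cached_seed=seed)   # deterministic mode; seed is replayed
            assert first == replay                          # later pipeline stages see identical input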

  2. Michael Mitzenmacher
    July 9, 2009 4:23 pm

    Richard — your open question regarding the solvability of SAT reminded me of my paper with Salil Vadhan on “Why Simple Hash Functions Work”, where we were tackling a similar question in that there seemed to be a gap between practice and what the theory seemed to be telling us. There the “answer” seemed to be that if we assume the sequence of items being hashed has “enough entropy” for some notion of enough (and some notion of entropy), then even very simple hash functions suffice to get near-uniform performance.

    I wonder if something similar could work here? We do have provable performance guarantees for sparse random k-SAT formulae, for instance. Perhaps real-world instances have enough entropy that they look enough like random formulae to suffice for these algorithms?

    • Suresh
      July 9, 2009 6:23 pm

      This is why I was intrigued by the offer to provide “real-world” instances of SAT that are “easy to solve” by a solver. Having a db of such instances might make such investigations feasible.

    • rjlipton
      July 9, 2009 8:36 pm

      I really like the idea of trying to prove that, as you say, if there is enough entropy then a SAT solver will work.

      We should talk about this; it would be a cool result.

      • Michael Mitzenmacher
        July 9, 2009 11:02 pm

        E-mail me and let’s get started! 🙂

        I think to start I’d have to review why the “standard SAT algorithms” happen to work. Way back when I worked on the “simplest” algorithm — the pure literal rule — but I’m not that up on my backtracking algorithms.

      • July 9, 2009 11:38 pm

        I wonder if the threshold has anything to do with this: i.e., being dense versus being close to the 4.2 ratio of clauses to variables.
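
    For anyone who wants to experiment with the questions raised in this thread, here is a minimal sketch in plain Python (no SAT library assumed) that generates a random 3-CNF formula at a chosen clauses-to-variables ratio, such as the roughly 4.2 threshold mentioned above, and prints it in DIMACS format so it can be fed to any off-the-shelf solver.

        import random

        def random_3cnf(num_vars, ratio=4.2, seed=0):
            """Generate a random 3-CNF formula with about ratio * num_vars clauses.

            Clauses are tuples of nonzero ints: positive for a variable,
            negative for its negation (the usual DIMACS convention).
            """
            rng = random.Random(seed)
            formula = []
            for _ in range(int(ratio * num_vars)):
                variables = rng.sample(range(1, num_vars + 1), 3)  # three distinct variables
                formula.append(tuple(v if rng.random() < 0.5 else -v for v in variables))
            return formula

        def to_dimacs(formula, num_vars):
            """Render the clause list in DIMACS CNF format."""
            lines = ["p cnf %d %d" % (num_vars, len(formula))]
            lines += [" ".join(map(str, clause)) + " 0" for clause in formula]
            return "\n".join(lines)

        if __name__ == "__main__":
            formula = random_3cnf(num_vars=200, ratio=4.2, seed=1)
            print(to_dimacs(formula, 200))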

  3. none
    July 9, 2009 7:48 pm

    I thought it was well known that SAT solvers fail miserably on precisely the problems where you really wish they would work well (e.g. integer factoring, cryptanalysis, etc). Is that not the situation?

    • rjlipton
      July 9, 2009 8:37 pm

      The answer is probably yes. But have people tried it? Note it also might be important exactly how the problem is encoded into SAT.

      • David Molnar
        July 13, 2009 9:26 pm

        Ilya Mironov at Microsoft Research Silicon Valley had a neat paper with his colleagues on finding hash collisions using SAT solvers.
        http://research.microsoft.com/en-us/people/mironov/sat-hash.pdf

        Despite encouraging results on MD4 and MD5, the technique has not so far progressed to finding collisions in SHA-1. It’s also worth noting that the solver in this work is used in a specific way that depends on a human first discovering a “differential path” for a particular hash function that is likely to lead to easy collisions. There is some work prior to Mironov’s that tries to brute force the collision search problem by directly encoding into SAT, but if I remember correctly such attempts failed even on MD4.

        On the other hand, 90+% of all the instances I see in my “whitebox fuzz testing” work turn out to be solvable within seconds. So there is certainly an issue of hidden structure here.
        http://www.cs.berkeley.edu/~dmolnar/metafuzz-tr-draft.pdf
        ftp://ftp.research.microsoft.com/pub/tr/TR-2007-58.pdf

    • July 9, 2009 11:38 pm

      Moreover, I don’t think the EDA folks particularly care about those examples.

      • Oink
        July 10, 2009 2:58 am

        I agree with Suresh. Their SAT instances are probably the SAT-encoded version of “given an array, is it sorted?” If it’s sorted, then it’s satisfiable, otherwise it is unsatisfiable.

  4. Chandra
    July 9, 2009 9:38 pm

    It appears that the SAT instances from EDA applications are quite structured and far from “random” because they come from circuits typically designed in a hierarchical fashion by either people or automated design tools. I wonder if looking towards entropy/random SAT analysis is the right thing to do. Just a thought; I am not familiar with the domain, but I had a conversation with Igor Markov at some point along these lines.

  5. none
    July 9, 2009 11:52 pm

    Yes, I believe cryptographers have tried SAT solvers on ciphers and hash functions, and gotten nowhere. It sounds to me like EDA (circuit routing) simply gives rise to easy problem instances. After all, you get the sorts of component layouts that humans can usually figure out with enough effort. Maybe it can also be compared with protein folding, which is NP-hard but which is “solved” by nature in cell reproduction all the time, as Scott Aaronson points out, again because the instances that arise in natural biology are easy. Heck, maybe even mathematics is like that. Serious mathematicians like Hilbert believed until the 1930’s (Gödel’s theorem) that all math problems were not only solvable, but were solvable by humans.

    Here’s an article about trying Minisat on AES:
    http://cs.swan.ac.uk/~csmg/2009_BCTCSWarwick_MG.pdf

    Actually, this one might interest you more. It’s about random formulas:
    http://www.cs.swan.ac.uk/~csoliver/Artikel/OKsolverAnalyse.html

    I found both (and others) through: http://www.google.com/search?q=“sat+solver”+aes

  6. July 10, 2009 7:53 am

    I thought the standard explanation for why SAT solvers work well “in practice” is that they are applied to instances that come from “modular” designs (for example in the case of VLSI or software verification), that this modularity translates into instances of low treewidth, and that SAT is provably easy on low-treewidth instances (and under related non-expansion measures).

    • rjlipton
      July 10, 2009 9:14 am

      I think that this may be part of the answer, but it cannot be the whole story. Crypto boxes, like AES, are very well structured and made out of modules. So why is it so hard to break them with SAT solvers?

      I think this is a really interesting challenge. I like Mike’s (Mitzenmacher’s) ideas about entropy a lot. Still a mystery to me.

  7. July 10, 2009 9:35 am

    I am curious. Is there any known SAT solver that when given a SAT instance with bounded treewidth terminates in polynomial time (it could take n^{f(k)} time where k is the treewidth)? Basically it should work without having a tree decomposition.

    • rjlipton
      July 10, 2009 4:45 pm

      Interesting question. I assume that you mean something like: take a “standard” SAT solver. Can we prove that for treewidth k, the algorithm runs in time exponential only in k?

      • Suresh
        July 10, 2009 4:51 pm

        This is actually known. Specifically, construct a graph for a SAT instance where variables are nodes and two variables are connected if they occur in the same clause. Then if this graph has treewidth k, there exists an algorithm for solving SAT that runs in time n \cdot k \cdot 2^k.

        In other words, SAT parametrized by treewidth is in FPT.

        Ref: http://courses.csail.mit.edu/6.854/06/scribe/s22-online.pdf

      • Suresh
        July 10, 2009 4:53 pm

        So I misunderstood slightly. The algorithm I mentioned above requires the tree decomposition, so it won’t work “obliviously”.

  8. July 10, 2009 7:15 pm

    About AES, I think that because of the “diffusion” steps, if you reduce the task of breaking AES to a SAT formula you would get a formula in which the clause-variable graph has noticeable expansion and, definitely, high treewidth. (So “modularity” is not sufficient for the formula to be easy, it should also be true that the modules have “small interface” and that the modules themselves recursively have a modular structure.) I don’t know, unfortunately, of any experimental study of the treewidth (or related parameters) of large SAT formulas that are solved efficiently in practice.

    I have heard (but I don’t know of references) that SAT solvers that are good in practice do not do well on random instances of density slightly below the threshold. (Very sparse random instances are easy to satisfy by fairly trivial heuristics, but the satisfiable regime close to the threshold is tricky.) It is conjectured that the survey propagation algorithms solve random instances of any density below the threshold, but Dimitris Achlioptas discussed some evidence against the conjecture at the Princeton Cryptography/Complexity workshop in June. In any case the “practical” SAT solvers don’t do well on random instances.
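
    On the question of measuring treewidth experimentally: here is a minimal sketch, assuming the networkx package, that builds the primal (variable-interaction) graph of a CNF formula, two variables adjacent when they share a clause, and computes an upper bound on its treewidth with networkx's min-degree heuristic. The result is only an upper bound and the heuristic will not scale to huge industrial formulas, but it is enough for small experiments on benchmark instances.

        from itertools import combinations

        import networkx as nx
        from networkx.algorithms.approximation import treewidth_min_degree

        def primal_graph(formula):
            """Primal graph of a CNF: variables are nodes, and two variables are
            adjacent if they appear together in some clause (literals are nonzero ints)."""
            g = nx.Graph()
            for clause in formula:
                variables = {abs(lit) for lit in clause}
                g.add_nodes_from(variables)
                g.add_edges_from(combinations(variables, 2))
            return g

        def treewidth_upper_bound(formula):
            """Upper bound on the primal treewidth via the min-degree heuristic."""
            width, _decomposition = treewidth_min_degree(primal_graph(formula))
            return width

        if __name__ == "__main__":
            # Two small "modules" of variables joined only through variable 5.
            block_a = [(1, 2, 3), (-1, 2, -3), (3, -4, 5)]
            block_b = [(5, 6, 7), (-6, 7, -8), (6, -7, 8)]
            print(treewidth_upper_bound(block_a + block_b))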

  9. Josephine
    July 11, 2009 4:30 pm

    Please see the paper

    http://www.lsi.upc.es/~atserias/papers/clauselearning/aft_sat_paper_llncs.pdf

    for results on SAT solvers and bounded treewidth.

    • Chandra
      July 12, 2009 4:38 pm

      Thanks. The paper is quite relevant to the discussion.

  10. Paul Beame
    July 12, 2009 8:07 pm

    There is a website SAT Live! that has links to a wide variety of benchmark SAT problems, many of which were contributed as part of annual SAT-solver competitions. It is a little misleading to discuss items like numbers of variables for the formulas since many are derived by standard translations that generate many variables that don’t show up outside of 2-clauses. It is true that problems involving tens of thousands of variables are solved routinely by forms of backtracking search. On the other hand such methods fail on satisfiable random 3-CNF formulas with only 500 variables and 2050 clauses (whereas a Metropolis-style local search will handle 100,000-variable formulas at the same clause-variable ratios and survey propagation will handle 1,000,000-variable formulas at the same ratios). Unfortunately, survey propagation seems useless on practical problems (it cannot handle large arity constraints well) and local search does not compete well with backtracking search for most such problems.

    A major win in moving to SAT solvers for tasks such as analyzing AES is that the solution methods are so simple and flexible that they do not get so bogged down in the structural details. On the other hand, most of the effective backtracking routines are, at heart, only efficient engines for resolution inference and as such have the inherent limitations of resolution inference.
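
    As a companion to the remarks above on local search, here is a minimal WalkSAT-style sketch in plain Python (the noise parameter and the lack of restarts are arbitrary choices, and it is not meant to compete with real solvers). It uses the same clause representation as the random-formula sketch earlier in the thread: clauses are tuples of nonzero ints, positive for a variable and negative for its negation.

        import random

        def walksat(formula, num_vars, max_flips=100000, noise=0.5, seed=0):
            """WalkSAT-style local search. Returns a satisfying assignment as a
            dict {variable: bool}, or None if max_flips is exhausted."""
            rng = random.Random(seed)
            assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}

            def is_sat(clause):
                return any((lit > 0) == assign[abs(lit)] for lit in clause)

            def breaks(var):
                # Clauses containing var that would become unsatisfied if var were flipped.
                assign[var] = not assign[var]
                broken = sum(1 for c in formula if var in map(abs, c) and not is_sat(c))
                assign[var] = not assign[var]
                return broken

            for _ in range(max_flips):
                unsat = [c for c in formula if not is_sat(c)]
                if not unsat:
                    return assign
                clause = rng.choice(unsat)
                if rng.random() < noise:
                    var = abs(rng.choice(clause))                        # random walk move
                else:
                    var = min((abs(lit) for lit in clause), key=breaks)  # greedy move
                assign[var] = not assign[var]
            return None

        if __name__ == "__main__":
            # Tiny satisfiable example: (x1 or x2), (not x1 or x3), (not x2 or not x3).
            print(walksat([(1, 2), (-1, 3), (-2, -3)], num_vars=3))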
