
Proving Cook’s Theorem

October 10, 2013

Another proof idea using finite automata


Steve Cook proved three landmark theorems with 1971 dates. The first has been called a “surprising theorem”: that any deterministic pushdown automaton with two-way input tape can be simulated in linear time by a random-access machine. This implies that string matching can be done in linear time, which inspired Donald Knuth and Vaughan Pratt to find a direct algorithm that removes a dependence on the input alphabet size. This was before they learned that James Morris had found it independently, later than but without knowledge of Cook, and their algorithm is now called KMP. Fast string matching has thousands of applications. Second was his characterization of polynomial time by log-space machines with auxiliary pushdowns, a fact which may yet help for proving lower bounds. Then there was his third result, which appeared at STOC 1971, and was given a “slif” (stronger, later, independent form) by Leonid Levin.

Today I want to present a proof of the famous Cook-Levin Theorem that {\mathsf{SAT}} is {\mathsf{NP}}-complete, and also mention one used by Ken.

I am currently giving a class on complexity theory, and thought this proof might have some advantage over the usual proofs. There are the original tableau-based proofs, the later circuit-based proofs, and variations of them.

The proof here is based on my favorite kind of objects—well one of them—finite automata. They are remarkably powerful and can be used to give a relatively clean proof. At least I believe the proof is clean for students—I would like to hear any thoughts that you all may have about it. It needs no assumption about oblivious tape access patterns, and does not use lots of complex indexing.

So let’s take a look at the proof. Here is a Wordle from STOC 1971:



My goal is to try and give a proof of Cook’s famous theorem that is easy to follow. There are many proofs already, but I thought I would try and see if I could give a slightly different one, and hope that it is clear. My thought is that sometimes the measure of clarity of a proof is the same as “ownership”: if you wrote it—own it—then it is clear. But here goes a proof that is a bit different.

The overarching idea is to do two things: (i) avoid as much detailed encoding as possible, and (ii) leverage existing concepts that you probably already know.


Let {L} be any set in {\mathsf{NP}}. Then there is always a one-tape nondeterministic Turing Machine (NTM) {M} that accepts exactly {L} and runs in time {n^{c}} for some fixed {c}. For any input {x} this means that {M} accepts {x} if and only if {x} is in {L}. An ID {U} of {M} is an encoding of the total state of {M} in the usual way. We have two special ID’s:

  • the start ID {U_{start}}, which depends on the input, and
  • the accepting ID {U_{accept}}, which can be made unique and independent of the input.

For ID’s,

\displaystyle  U \vdash V

means that {U} can reach {V} in one step of {M}. This is all standard and is stated here just for review.

Saying that {A} is an FSA means that {A} is a deterministic finite state automaton. The language that {A} accepts is {L(A)}. Also for any two strings {x=x_{1},\dots,x_{n}} and {y=y_{1},\dots,y_{n}} of the same length let {x || y} be the shuffle,

\displaystyle  x_{1}\ y_{1}, \dots, x_{n} \ y_{n}.

If one string is longer, we suppose the other is padded with a special null character to have equal length, and then we shuffle.
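In code, the shuffle with padding might look like this (a minimal sketch; the function name and the choice of `"\0"` as the null pad character are my own):

```python
def shuffle(x: str, y: str, pad: str = "\0") -> str:
    """Interleave x and y symbol by symbol; if one string is shorter,
    it is padded with a null character so both have equal length."""
    n = max(len(x), len(y))
    x = x.ljust(n, pad)
    y = y.ljust(n, pad)
    return "".join(a + b for a, b in zip(x, y))
```

So `shuffle("abc", "xyz")` yields `"axbycz"`, with each symbol of the first string immediately followed by the corresponding symbol of the second.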

You might ask why introduce FSA when our goal is to encode NTM’s? It turns out that our proof will take the following steps:

  1. Show that the behavior of an FSA can be encoded by {\mathsf{GEN}}{\mathsf{SAT}}, which is a more powerful version of {\mathsf{SAT}}. This version is easier to use and understand.

  2. Show that the behavior of a NTM can be reduced to the behavior of FSA’s, and so to {\mathsf{GEN}}{\mathsf{SAT}}.

  3. Finally, replace {\mathsf{GEN}}{\mathsf{SAT}} by regular {\mathsf{SAT}}.

A Slight Generalization of {\mathsf{SAT}}

A {k}{\mathsf{GEN}}{\mathsf{SAT}} problem {C} is of the following form:

\displaystyle  C = \exists x_{1} \exists x_{2} \cdots \exists x_{m} C_{1} \wedge \cdots \wedge C_{L}.

Here each {C_{i}} is a general clause, which means it is a Boolean function of the variables {x_{i}}‘s. We allow the general clauses to be any Boolean functions, but restrict the number of variables that each can depend on to at most {k}. The problem {C} is called satisfiable provided there exist Boolean values for the variables {x_{1},\dots,x_{m}} so that each {C_{i}} evaluates to true.

Note that {k}{\mathsf{GEN}}{\mathsf{SAT}} is closed under conjunction in the following sense: Suppose that {A} is

\displaystyle  A = \exists x_{1} \exists x_{2} \cdots \exists x_{m} A_{1} \wedge \cdots \wedge A_{K}

and {B} is

\displaystyle  B = \exists x_{1} \exists x_{2} \cdots \exists x_{m} B_{1} \wedge \cdots \wedge B_{L}.


Then

\displaystyle  C = \exists x_{1} \exists x_{2} \cdots \exists x_{m} A_{1} \wedge \cdots \wedge A_{K} \wedge B_{1} \wedge \cdots \wedge B_{L}

is the conjunction. This is satisfiable if and only if there are variables {x_{i}} so that both sets of clauses are true. This is a simple, but very useful property of {k}{\mathsf{GEN}}{\mathsf{SAT}}.
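As a sanity check, here is a tiny brute-force satisfiability tester for {k}{\mathsf{GEN}}{\mathsf{SAT}} instances (a sketch under my own representation: a clause is a pair of variable indices and an arbitrary Boolean function; it is exponential in the number of variables, so toy-sized instances only):

```python
from itertools import product

def gensat_satisfiable(m, clauses):
    """Brute-force check of a k-GEN-SAT instance.

    m is the number of Boolean variables x_0 .. x_{m-1}; each clause is
    a pair (vs, f) where vs lists the (at most k) variable indices the
    clause depends on and f is any Boolean function of those variables.
    """
    for bits in product([False, True], repeat=m):
        if all(f(*(bits[i] for i in vs)) for vs, f in clauses):
            return True
    return False
```

Closure under conjunction is then literally list concatenation: testing `gensat_satisfiable(m, A_clauses + B_clauses)` asks for one assignment making both sets of clauses true.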

There is nothing mysterious or strange about {k}{\mathsf{GEN}}{\mathsf{SAT}}. It is just a way of defining a type of search problem. Informally it asks: are there Boolean values (…) that satisfy all of the properties (…)? We have lots of problems like this in mathematics. Consider a system of linear equations over the reals:

\displaystyle  Ax = b,

where {A} is a matrix, {x} is a vector, and {b} is a vector. The LINEAR problem is: does there exist a vector {x} so that the above is true? We could have written it as

\displaystyle  \exists x Ax = b,

to make it look like the {k}{\mathsf{GEN}}{\mathsf{SAT}} problem. You should know, and probably do, what Gauss knew long ago: LINEAR is “easy.” In modern terms, LINEAR is in {\mathsf{P}}, that is, it is solvable in polynomial time.
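For concreteness, here is one way LINEAR can be decided in polynomial time by Gaussian elimination over exact rationals (a sketch; the function name and the row-reduction details are my own choices):

```python
from fractions import Fraction

def linear_solvable(A, b):
    """Decide LINEAR: does Ax = b have a solution over the rationals?
    Plain Gaussian elimination on the augmented matrix [A | b]."""
    rows = [[Fraction(v) for v in row] + [Fraction(c)]
            for row, c in zip(A, b)]
    n_cols = len(rows[0]) - 1
    r = 0
    for c in range(n_cols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                factor = rows[i][c] / rows[r][c]
                rows[i] = [a - factor * p for a, p in zip(rows[i], rows[r])]
        r += 1
    # Inconsistent iff some reduced row reads 0 = nonzero.
    return all(any(v != 0 for v in row[:-1]) or row[-1] == 0 for row in rows)
```

Using `Fraction` keeps the arithmetic exact, which sidesteps the usual caveat that the bit-size of intermediate numbers must be controlled for the claim of polynomial time.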

The difference between {k}{\mathsf{GEN}}{\mathsf{SAT}} and LINEAR is that while the latter is extremely useful and can be used to model many important problems, {k}{\mathsf{GEN}}{\mathsf{SAT}} is “universal.” By universal we mean that any problem from {\mathsf{NP}} can be reduced to solving a {k}{\mathsf{GEN}}{\mathsf{SAT}} problem.

Cook’s Theorem

Our plan is to show that we can encode {U \vdash V} by a {k}{\mathsf{GEN}}{\mathsf{SAT}} formula. Then the behavior of {M} for {n^{c}} steps will be encoded by simply using the fact that {k}{\mathsf{GEN}}{\mathsf{SAT}} is closed under conjunction.

Theorem 1 The problem {k}{\mathsf{GEN}}{\mathsf{SAT}} is {\mathsf{NP}}-complete.

Proof: Let {M} be a one-tape NTM that runs in {N=n^{c}} time and let {x} be an input of length {n}. We may assume that once {M} reaches its accepting ID it stays there. Then {M} accepts {x} if and only if there exists a sequence of ID’s

\displaystyle  S^{1},S^{2},\dots,S^{N}

of length {N} so that {S^{1}} is the initial state, {S^{N}} is a final accept state, and for each {i},

\displaystyle  S^{i} \vdash S^{i+1}.

Encode each {S^{i}} in the standard way by {O(N)} Boolean variables.

Claim: We have {k}{\mathsf{GEN}}{\mathsf{SAT}} formulas {U^{i}} over these variables so that (i) {U^{0}} is satisfied precisely when {S^{1}} is the initial state corresponding to the input {x}; (ii) {U^{N}} is satisfied precisely when {S^{N}} corresponds to the final state; (iii) and for each {1 \leq i < N}, the {U^{i}} is satisfied precisely if

\displaystyle  S^{i} \vdash S^{i+1}.

Then we claim that the conjunction of the clauses of all the

\displaystyle  U^{0},U^{1},\dots,U^{N}

is satisfiable if and only if {M} accepts. This follows directly from the definitions. \Box

Thus to complete the proof of the theorem we must show that the claim about the existence of the {U^{i}}‘s is true. But this is easy, as we will now show.

Encoding With FSA

Theorem 2 Let {M} be a one-tape NTM. Then there is an FSA {A} that depends only on {M} so that for all {U} and {V},

\displaystyle  U \vdash V \text{ iff } U||V \in L(A).

Proof: The FSA {A} just passes over {U} and {V} checking that the tape contents are the same, except in a small window around the location of the head. There it checks that a legal move of {M} has been used. Because the ID’s are shuffled together, this can be done by the automaton. \Box
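Here is a toy version of such a scan, under my own simplification: it only checks that the two shuffled strings agree in all but at most one position, which is the heart of why a finite-state device suffices. The real {A} of the theorem additionally inspects the small window around the differing position to verify a legal move of {M}. The state is just the pending symbol of {U} plus a capped mismatch count, so it is finite:

```python
def mismatch_at_most_one(shuffled):
    """Finite-state scan of U || V: accept iff U and V differ in at
    most one position.  A toy stand-in for the automaton A of
    Theorem 2; the state is (pending symbol of U, mismatch count)."""
    pending = None
    mismatches = 0
    for ch in shuffled:
        if pending is None:
            pending = ch            # a symbol of U; wait for V's symbol
        else:
            if ch != pending:       # compare with the matching symbol of V
                mismatches += 1
            pending = None
        if mismatches > 1:
            return False            # dead state
    return pending is None and mismatches <= 1
```

For example, the shuffle of `"abc"` with `"abd"` is accepted (one mismatch), while the shuffle of `"abc"` with `"xyd"` is rejected.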

Encoding With {k}{\mathsf{GEN}}{\mathsf{SAT}}

Theorem 3 Let {A} be a fixed FSA. Then for any input string {a_{1},\dots,a_{n}} there is a set {C} of general clauses over {O(n)} variables so that

\displaystyle  a_{1},\dots,a_{n} \in L(A) \text{ iff } C \text{ is satisfiable}.

Proof: The idea is to look at the sequence

\displaystyle  q_{0}\ a_{1}\ q_{1}\ a_{2}\ \dots\ q_{n-1}\ a_{n}\ q_{n},

where {q_{i}} are strings of Boolean variables that are long enough to encode the state of the FSA. Then the general-clauses are of three types:

  1. One clause that checks that {q_{0}} is the start state of {A}.
  2. For each {0 \leq i < n}, one clause that checks that {q_{i+1}} is a possible state that {q_{i}} can reach after seeing the input symbol {a_{i+1}}.
  3. One clause that checks that {q_{n}} is an accept state of {A}.

Clearly all of these can be done by general-clauses: they are just Boolean tests. \Box
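The three clause types can be made concrete as follows (a sketch; for simplicity I let each {q_i} be a single variable ranging over the DFA’s finite state set, since packing it into Boolean variables is routine, and the brute-force `satisfiable` is only for checking tiny examples):

```python
from itertools import product

def dfa_run_clauses(delta, start, accepting, word):
    """Theorem 3 made concrete: general clauses over q_0,...,q_n
    asserting that q_0 q_1 ... q_n is an accepting run of the DFA
    on `word`.  Each clause inspects at most two of the q_i."""
    n = len(word)
    clauses = [lambda q: q[0] == start]                           # type 1
    for i in range(n):                                            # type 2
        clauses.append(lambda q, i=i: delta.get((q[i], word[i])) == q[i + 1])
    clauses.append(lambda q: q[n] in accepting)                   # type 3
    return clauses

def satisfiable(clauses, states, n):
    """Brute force over all assignments to q_0..q_n (toy sizes only)."""
    return any(all(c(q) for c in clauses)
               for q in product(states, repeat=n + 1))
```

With a two-state parity automaton, for instance, the clauses come out satisfiable exactly on the inputs with an even number of 1s, matching acceptance.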

Final Reduction to {\mathsf{SAT}}

A {\mathsf{3SAT}} problem {C} is a {k}{\mathsf{GEN}}{\mathsf{SAT}} problem except that the clauses are restricted to be disjunctions of at most three literals, that is, variables and their negations. Thus a clause can only be of the form

\displaystyle (x_{20} \vee x_{30} \vee \bar{x}_{101}),

for example.

The final step is to note:

Theorem 4 The problem {k}{\mathsf{GEN}}{\mathsf{SAT}} can be reduced to {\mathsf{SAT}}.

Proof: Each general clause depends on at most {k} variables, so it can be written as a conjunction of at most {2^{k}} ordinary clauses of width {k}, one forbidding each falsifying assignment. The standard splitting trick, which introduces fresh variables to break a long clause into clauses of width three, finishes the job. \Box

This proves Cook’s Theorem.
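The two standard steps of this reduction can be sketched as follows, under my own literal representation, where a literal is a pair of a variable name and a sign:

```python
from itertools import product

def general_clause_to_cnf(vars_, f):
    """Expand a general clause (any Boolean function f of the listed
    variables) into ordinary clauses: one clause per falsifying row of
    its truth table, so at most 2^k clauses of width k."""
    cnf = []
    for bits in product([False, True], repeat=len(vars_)):
        if not f(*bits):
            # Forbid this assignment: some variable must take the other value.
            cnf.append([(v, not b) for v, b in zip(vars_, bits)])
    return cnf

def split_to_3(cnf, fresh):
    """Standard splitting: replace (l1 v l2 v l3 v l4 v ...) by
    (l1 v l2 v y) and (not-y v l3 v l4 v ...), repeating until every
    clause has width at most 3.  `fresh` yields unused variable names."""
    out = []
    for clause in cnf:
        while len(clause) > 3:
            y = next(fresh)
            out.append(clause[:2] + [(y, True)])
            clause = [(y, False)] + clause[2:]
        out.append(clause)
    return out
```

The splitting step preserves satisfiability rather than logical equivalence, which is all a reduction needs.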

Nand Now For a Shortcut

Ken tells me that especially in Buffalo’s undergraduate theory course, he uses—and gets away with—a big handwave. He states the principle

“Software Can Be Efficiently Burned Into Hardware.”

He uses this to assert that the witness predicate {R(x,y)} for a given {\mathsf{NP}}-language {L} can be replaced by a polynomial-sized circuit {C_n} for any input length {n}. Since NAND gates are universal, every gate can be a binary NAND. Thus {x \in L} if and only if there exist a {y} and an assignment to every wire in {C_n} that is consistent with every NAND gate, such that the output wire {w_o} has value {1}. For the inputs {u,v} and every output {w} of each NAND gate, the assignment must satisfy the clauses

\displaystyle (u \vee w) \wedge (v \vee w) \wedge (\bar{u} \vee \bar{v} \vee \bar{w}).

Together with the singleton clause {(w_{o})} and clauses fixing the wires for {x_i} to the corresponding bits of the given input string {x}, this creates a {\mathsf{3SAT}} formula that is satisfiable if and only if {x \in L}.
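It is a few lines of code to confirm that these three clauses pin down exactly the NAND relation (the function name is mine):

```python
from itertools import product

def nand_clauses_hold(u, v, w):
    """The three clauses asserting w = NAND(u, v)."""
    return (u or w) and (v or w) and (not u or not v or not w)

# The clauses are satisfied exactly when w equals NAND(u, v).
assert all(nand_clauses_hold(u, v, w) == (w == (not (u and v)))
           for u, v, w in product([False, True], repeat=3))
```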

This proof has some nice features. It shows the completeness of several restricted forms of {\mathsf{3SAT}} where the variables in each clause have the same sign and (with a little more work) assignments must make at least one literal in each clause false as well. But it hides the way computations are represented behind the handwave.

Open Problems

What do you think of the proof via finite automata?

34 Comments
  1. October 10, 2013 9:30 pm

    Synchronicity Rules❢

    I just started reworking an old exposition of mine on Cook’s Theorem, where I borrowed the Parity Function example from Wilf (1986), Algorithms and Complexity, and translated it into the cactus graph syntax for propositional calculus that I developed as an extension of Peirce’s logical graphs.

    Parity Machine Example

    By way of providing a simple illustration of Cook’s Theorem, namely, that Propositional Satisfiability is NP-Complete, I will describe one way to translate finite approximations of turing machines into propositional expressions, using the cactus language syntax for propositional calculus that I will describe in more detail as we proceed.

  2. October 10, 2013 10:15 pm

    I guess I’m a little confused why you are so attached to the Turing machine as a model of computation. If you start with with the fact that Boolean circuits can do polynomial time computation with polynomial-size circuits — something that every CS major understands, since programs can be “compiled” down to machine language and thence to hardware — then we can take Circuit SAT as our root problem. The reduction from Circuit SAT to 3-SAT is extremely simple: just invent additional variables for the internal truth values of the circuit’s wires, and express in CNF form the assertions that each gate in the circuit functions properly, and that the output is true.

    Isn’t this 1) a lot simpler, 2) just as rigorous, and 3) a lot easier for modern CS students to understand?

    • October 10, 2013 10:22 pm

      Chris, That sort of method leads to less clarity on the distinction between P and P/poly.

      • October 10, 2013 10:36 pm

        I know that’s the usual argument, but I don’t agree. One can always address uniformity in circuit classes after presenting this proof. What a student needs to understand is that for any fixed instance of an NP problem (Hamiltonian Path, Graph Coloring, etc.) it is easy to produce a circuit that checks solutions, and thus reduce the problem to an instance of Circuit SAT.

        I agree that you need a uniform program to do this reduction: on the other hand, this reduction is extremely simple in most cases that we discuss. For local problems like Graph Coloring, it’s in DLOGTIME, i.e. local gadget replacement.

        Do you really think that starting with single-tape Turing machines is better pedagogically? Doesn’t it just create more formal overhead for the student to wade through, before they get a chance to appreciate what NP-completeness is all about?

    • Bhupinder Singh Anand permalink
      October 10, 2013 11:12 pm

      “… why you are so attached to the Turing machine as a model of computation.”

      Good question, since theoretically any deterministic algorithm would seem to be equally logical for distinguishing between NP and P from a foundational perspective.

    • October 11, 2013 12:00 am

      Roughly speaking, the importance of the Turing machine model is that we are factoring an infinite machination into an infinite component times a finite component, where it’s the same finite component each time. That amounts to a significant reduction in conceptual complexity.

    • October 11, 2013 8:16 am

      Cris, exactly what you say is in the last section of the post. When I lecture, I do indeed use the words “…check that every NAND gate works correctly…” and present it as a basic instance of hardware verification. But for lower bound purposes, maybe there is interest in exactly how low one goes for the verification to remain NP-complete.

      • October 11, 2013 9:02 am

        Hi Ken,

        I know, and I like your “software can be burned into hardware” line. I guess my claim is that it’s not that much of a handwave! Yes, we need to teach the idea of uniformity, and at some point we need a uniform model. But we should feel free to use whichever model makes it easier for us to get NP-completeness up and running.

        For teaching, my favorite uniform model is the program, written in the student’s favorite programming language. The fact that it doesn’t matter whether we use Java or Lisp for this is a wonderful fact, and the students need to appreciate it (and the hard work that went into proving it in the 1930s, i.e. the equivalence of partial recursive functions, lambda expressions, and Turing machines). But I prefer to start with the “software” proof of NP-completeness, and then to circle back and say “why can we get away with this?” At that point we can talk the equivalence (up to polynomial time) of different models of computation.

    • October 11, 2013 12:12 pm

      hi CM/all john savages book “models of computation” [free online! very nicely typeset] is partly a massive research project to reformulate/”reframe” most computational/complexity theory in terms of circuits, one that I generally agree with, but which few other researchers seem to have consciously acknowledged although there is much pursuit [allender comes to mind as another authority/expert who has advocated a circuit-centric view over many yrs]. also stasys jukna has contributed strongly to this agenda with his book “boolean function complexity, advances and frontiers”.

      the TM model seems to predate circuits, but circuits are arguably a more fundamental or natural way to understand computation in many ways.

      its a real ongoing paradigm shift [worthy of a blog somewhere at least?] that is somewhat unrecognized/unremarked on so far. it somewhat reminds me of the major theoretical “teams” in physics eg standard model vs strings etc.
      also note that for circuits the P vs NP problem is P/poly vs NP, a stronger assertion, and the concept/distinction of uniform vs nonuniform comes into play.

      surely [even after decades of research] one is not really provably superior to the other (circuits/TMs) in all cases, each has its own context, but they exist in a close theoretical coupling/synergy/symbiosis. two sides of the same coin. the yin & yang of complexity theory. and cooks proof is a nice example of this deep interconnection (which is more obvious four decades in retrospect thx in part to massive research in circuit-related complexity classes).

      personally I have gone in the direction of circuits esp for complexity class separations & believe the view may eventually/ultimately lead to a P/poly != NP proof. this is because of the deep ties between monotone circuits and extremal set theory….

      [CM ps enjoyed reading about your exploits in “complexification” by waldrop yrs ago.]

      • October 11, 2013 12:15 pm

        oops sorry got that link wrong! here is a NP vs P/poly proof sketch/outline using circuits/hypergraphs etc

      • October 12, 2013 12:11 am

vznvzn you are coming out as a crank. If Ken or Lipton thought your ideas were worthwhile, they would have written about them.

      • October 28, 2013 7:46 pm

sincere congratulations J on the feat of combining the converse of argumentum ad populum, genetic fallacy, and argumentum ad hominem in only two sentences! do they teach this stuff in school any more? but, don’t feel too bad, because even elite theorists are known to fall into the same basic traps of reasoning. =)

      • October 13, 2013 11:33 pm

        No, it’s not a more fundamental or natural way.

        Indeed, the very thing that makes computation a natural and interesting notion is exactly that it is independent of the particular choice of implementation. Choose any reasonable implementation, make sure it doesn’t suffer from obvious limitations in power (only finitely many machine states, inability to make an unbounded number of intermediate computations) and you end up with the same notion. So there can’t be a more natural or fundamental way to approach computation since such a way would undermine the very naturalness and interestingness of the notion of computation itself.

        I mean it’s like saying that counting pebbles is the fundamental/natural way to understand the natural numbers. It may be true that evolutionarily/historically counting fingers is neurologically easier and occurred first but what makes the numbers the numbers is that they behave the same regardless of what you describing. The best way to understand the numbers is by understanding the numbers!

        Indeed, I think all this focus on circuits is holding back the computational complexity theory from looking at more powerful results from number theory and distracts from looking more closely at what makes the difference between the relativizeable complexity results and the non-relativizeable ones.

      • November 1, 2013 12:01 pm

        ok, will bite, just have to ask

        I think all this focus on circuits is holding back the computational complexity theory from looking at more powerful results from number theory…

        what results are you referring to, did you have something specific in mind?
        ps re J, feel is also guilty of argument from ignorance

  3. October 11, 2013 12:53 am

    Dear Pip,

    Since you are proposing a new proof for the CLT, would you be slightly starting to believe that paper

  4. October 11, 2013 10:24 am

    It’s interesting you mention NAND gates and 3SAT.
    Recently I tried to come up with a method to encode logic gates (OR and NOT) on Flow Networks with costs, and tried to show how this could maybe be applied to solve 3SAT in poly-time using minimum cost flow algorithm.
    Obviously there’s probably some flaw that I don’t see (yet). I will have to implement this and test my theory.

  5. Chandra permalink
    October 11, 2013 10:34 am

I have often heard the sentiment that it should be easy to see, especially for CS students, that programs are equivalent to circuits and hence it suffices to prove Cook’s theorem by proving that Circuit-SAT is reducible to SAT. I don’t think it is anywhere near as simple as that. Typical programs are complicated and have non-local references because of the RAM model, and it is easy to get confused by the size of integers, pointer addressing etc etc. The idea of non-determinism, checking solutions vs coming up with them, and a proper understanding of when a language is in NP are, in my experience, not easy to teach.

    • October 14, 2013 2:58 pm

      If that’s so, then it gives you a good opportunity to teach a little about hardware. Can the students design a circuit that takes an n-bit input x and a log_2 n-bit input i, and returns x with the i’th bit flipped?

      If not, then I think we have a larger pedagogical problem 😉

      • Chandra permalink
        October 16, 2013 3:31 pm

        Yes, they can design such a circuit but this is precisely where the uniformity issues comes up. They can also write code on a regular computer for the same problem with x and the index i given. However the code runs on a fixed size hardware (their laptop) while the circuit designed for the problem depends on n, the size of x. Of course one can say that circuit itself can be generated by a uniform program but then one is really getting back to a uniform model of computation at some level.

  6. October 13, 2013 3:30 am

Nice share, sir, but I still have doubts about this theory

  7. October 13, 2013 1:32 pm

    Almost surely I missed something in this proof, but since it does not require a lower bound on k, the number of boolean variables in each SAT clause, I think you can just substitute “3-SAT” by “2-SAT” and the theorem would read “2-SAT is NP-Complete”, which is of course not true. Am I missing something here? Maybe the point is that the encoding of the clauses hinted by Theorem 3 requires some clause to have exactly 3 boolean variables?

    • Jonathan Buss permalink
      May 26, 2014 11:40 am

      A gate with k inputs and one output requires k+1 variables to describe; thus the CNF clauses need size k+1 (in general). Since one-input gates don’t suffice, we get k+1=3 as the smallest possible clause size in the proof.

  8. Leonid Gurvits permalink
    October 13, 2013 2:57 pm

    I was teaching the last spring my first ever undergraduate course, happened to be “the introduction to TCS”.
    Usually in this class the complexity, i.e P vs NP, is barely touched. Most of the time is spent
    on regular and context free languages and Turing machines. Anyway, I gave “informal” treatment of Cook-Levin very similar to yours. BTW, in the simple Theorem 3 you need to mention locality in order to get back to Cook-Levin. Actually, the proof from Wikipedia is quite similar. I tried to be very concrete and put the main focus on colorability and maximum clique(because of the connection, Motzkin-Straus, proved in the class, to the continuous world). Colorability is a great motivation for complexity: it is easy to convert an algorithm
    for k-colorability to the algorithm for m-colorability, m 2. The students really liked and understood this motivation.
    And it can be done without Cook-Levin as for the colorability it is very easy to write down the boolean formula. But, say for maximum clique what is needed is a “short” boolean formula describing all boolean strings with exactly k ones. The case k= 1 is simple and actually is used in Wikipedia proof.
    And I gave this formula for “large” k using FSA. Also, similarly to your examples, I talked a lot about linear
    equations: gave the algorithm for 2-colorability based on the gauss elimination.
    The cool thing about this algorithm: it counts the number of colorings as well; and some of the students were learning
    the gauss elimination at the same time in linear algebra class. The connection was a shock!

    Thanks a lot for this blog and best regards, Leonid.

  9. Leonid Gurvits permalink
    October 13, 2013 3:44 pm

    Suppose to be “Colorability is a great motivation for complexity: it is easy to convert an algorithm
    for k-colorability to the algorithm for m-colorability, m 2.
    The students really liked and understood this motivation.”

  10. Leonid Gurvits permalink
    October 13, 2013 3:49 pm

    Somehow some of my sentences in the posts above got lost, with completely kosher words, strange.

  11. Serge permalink
    October 15, 2013 4:33 pm

    How do you encode an instance of SORTING into an instance of 3-SAT? And how do you encode an instance of FACTORING into an instance of 3-SAT? I’d bet the former encoding yields much bigger instances than the latter.

    My other guess: this size difference is an explanation of why SORTING is easier than FACTORING. The intuition comes from quantum mechanics. Small objects have unpredictable behavior, but large groups of small objects are deterministic.

  12. Serge permalink
    October 26, 2013 6:52 am

    Is it possible to infer, from the mere probabilistic nature of observing a quantum particle, that a sequence of bits can only be found on a probabilistic basis? If quantum mechanics is the ultimate machine language, then that’s a likely hypothesis.

    Thus, whenever a digital data – algorithm or data structure – is observed, this is due to its low-enough Kolmogorov complexity – i.e. the length of the shortest program which outputs this data. Although uncomputable, K-complexity is a function defined on all bit sequences and it plays the role of a probability distribution – just like probability waves in quantum mechanics.

    Being undecidable, K-complexity would have been different in another Universe generated by another Big-Bang. In one of those, the Riemann Hypothesis is perhaps an easy lemma for undergraduate students – along with P!=NP – but the fundamental theorem of algebra is undecidable.

    • Serge permalink
      October 29, 2013 8:40 am

      Well, perhaps not exactly undecidable but a hard conjecture anyway. What is provably true, provably false or provably undecidable in another Universe is the same as in ours, but the hardness of the proofs might be different.

      So, whenever you come across a very hard conjecture, it may just be “true for no reason” in the words of Gregory Chaitin. My opinion is that the distribution of easiness and hardness is completely casual, depending on the quantum structure of our Universe. In another one, the same fact might have been true for some reason.

      • Serge permalink
        October 29, 2013 8:58 am

        “P is true for no reason” means “all proofs of P have zero probability of being found”.

  13. Anonymous permalink
    April 5, 2014 2:49 am

I wouldn’t use the automata proof. It is technically nice but it hides the intuition for why the theorem is correct, and students intuitively understanding the reason why a statement is correct is very important for me.

I prefer using students’ programming skills and experience rather than their knowledge of automata theory.

  14. August 2, 2020 8:24 am

    Dear Dick/Ken,

    Scott Aaronson’s recent blog post and preprint on Busy Beavers prompted me to go back and look at some of the work I did in the late 1980s when I was trying to understand Cook’s Theorem.  I had posted a link to that work above but the site where it was has fallen off the live web so here’s the new improved edition —

    🙞 Differential Analytic Turing Automata

    Most folks here may find it quicker to skip to Part 2 and refer to Part 1 only as needed.




