
Does Program Size Matter?

May 3, 2014


A problem about this forgotten complexity measure


Erik Winfree is one of the leaders in DNA computation, especially the theory of self-assembly using DNA. He won a MacArthur Fellowship in 2000, among many other awards for his terrific work. With Paul Rothemund he wrote a STOC 2000 paper titled “The Program-Size Complexity of Self-Assembled Squares,” work that contributed to their sharing the 2006 Richard Feynman Prize in Nanotechnology.

Today Ken and I want to talk about program size: about how many instructions are needed to compute something.

A DNA self-assembly system consists of a set of tile types and rules of assembly, and the theory features theorems of the form:

The number of tile types to create {X} is at most …

The rationale for reducing tile types is simple: making duplicates of a tile is like printing, which is cheap, but making many types of tiles is expensive. Each tile type requires a separate printing. So there is a natural imperative to reduce the number of tile types.

Hence a major goal of their research, using their terminology, is to reduce the “program complexity.” Thus there are parts of computer science theory where program size is important, but this area uses a non-standard notion that may say nothing about programs for conventional computers, which are the kind we want to study.

If you are interested, consider going to the next DNA Conference, which was started by one of us (Dick) years ago; the series is now in its twenty-first edition.

Let’s look at the program size of machines, not tile systems, since it came up recently in some thoughts Ken and I had on {\mathsf{P=NP}}. We do think about that question sometimes…

Size of Code, Old and New

In the real world the size of a program is probably much more important than the time it takes or the amount of memory it requires. The size of code directly correlates with the cost of writing the program and maintaining it, with its number of bugs, and with its security issues. Big programs are usually expensive in all these ways. If we are creating the next great app that will make us all a fortune, we want the program to be of manageable size.

Yet complexity theory—for the most part—is focused almost entirely on the time a program takes or the space it uses. Not on its size. Hence we study time classes like {\mathsf{DTIME}(n^{3})} and space classes like {\mathsf{L}}, logspace. And we prove theorems like hierarchy theorems. But not many theorems say anything about the size of a program. In one of the first posts on this blog, I posed this challenge:

Prove there is no algorithm for {\mathsf{SAT}} that runs in polynomial-time and can be written down on a single piece of paper. You can make the paper 8.5 by 11, if you wish. The writing has to be normal size. No tricks.

I went on to say that while “conventional wisdom” is confident in {\mathsf{P \neq NP}}, so that no such program should exist, I find it shocking that even this one-page challenge appears to be hopeless.

Well, perhaps we can prove the challenge equivalent after all, in the course of raising two questions. The first is about program size overall, and the second is about the size of the “new code” that needs to be written to solve a new problem, versus using already-written routines. To further our real-world analogy, the latter have presumably already been debugged, and no one has to pay for writing them again.

Let’s look at the main obstacle and then the two questions.

An Obstacle?

Here is the source of our questions that we have recently thought about. Imagine, try to imagine if you can, that {\mathsf{P=NP}}. This implies that every nondeterministic Turing Machine (NTM) that runs in linear time can be simulated by a deterministic TM (DTM) that runs in time {n^{c}} for some constant {c}. The constant is absolute, but it can be small or large or astronomical. Thus one reason that {\mathsf{P=NP}} could still imply nothing about practical computing is that {c} could be the obstacle. Even a modest size {c=5} would render many applications useless.

This is nothing new—we have pondered before that {c} could be huge, and have postulated that one reason no one has found such a deterministic algorithm is that most tools in algorithm design are not set up to create algorithms with running times of the form

\displaystyle  n^{2^{2^{2^{100}}}}.

Dynamic programming, divide-and-conquer, clever data structures, random sampling, linear programming, semi-definite programming, and others do not yield such galactic running times.

Ken and I wonder if the obstacle could be not that {c} is large, but that the program size hidden away in the statement

\displaystyle  \mathsf{NTIME}(n) \subset \mathsf{DTIME}(n^{c})

has a blowup. Suppose that {c=1.1}. That would sound great—too good to be true. But it could still be useless. What if, given an NTM program {N} that runs in linear time, the corresponding DTM program {M} that runs in time {n^{c}} is itself huge? If {M} is exponentially larger than {N}, then the fact that it is deterministic and runs fast is really of no value: even though the running time is small, the program itself is too big to use.

This is the obstacle that we are interested in. Our response will involve universal simulators, which can be small, even tiny. We will first wrap one inside a famous old construction by Leonid Levin, and then write a little more “new code.” But first, what questions might a complexity theorist ask about program size?

Two Simple Questions

For both questions, our general task will be taking a nondeterministic program {N} of size {s} that runs in some nondeterministic time {t(n)}, and simulating it by a deterministic program of size {s'} in time {t'(n)} that is at most polynomially slower: {t'(n) = O(t(n)^c)}. It’s enough to suppose {t(n)} is linear, so the goal is {t'(n) = O(n^c)}. We won’t care if {c} is huge, so long as it is fixed and {s'} avoids being huge. Our questions are:

  1. Could it be that any polynomial-time deterministic program must be exponentially bigger, that is {s' = \exp(s)}?

  2. Can we arrange that {s' = O(s)}, or even {s' = s + O(1)}, where the constant in the {O} is not only independent of {N} (of course), but also small in an absolute sense?

Of course if {\mathsf{P \neq NP}} these questions are generally moot. But if {\mathsf{P=NP}}, could the reason for the equality not being practical be that the program blows up in size, not that the running time blows up? Put another way, could we have {\mathsf{P=NP}} but {\mathsf{SAT}} requires a program so large that it cannot ever be used? Note, this says nothing about the running time of the {\mathsf{SAT}} algorithm.

The short answer to the first question is no. This is because of reducibility and completeness for {\mathsf{NP}}. The code for the reduction from the language of {N} to {\mathsf{SAT}} is linear in the size {s} of {N}. Call this code {R}. By {\mathsf{P=NP}} there is a fixed deterministic polynomial-time program {V} for {\mathsf{SAT}}. The program that takes an input {x} and outputs {V(R(x))} runs in poly-time and has size {s' = O(s)}. Moreover {R} itself can be viewed as a fixed piece of code that takes the code of {N} as a parameter, so the size {s'} can be regarded as {s + O(1)} or even just {O(1)} in terms of “new code.”
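
To make the composition concrete, here is a minimal sketch in Python. Both cook_levin_reduction and sat_solver are hypothetical placeholders: the former stands for the fixed reduction code {R}, which takes the description of {N} only as data, and the latter for the assumed polynomial-time program {V} for {\mathsf{SAT}}, whose size we do not control.

def cook_levin_reduction(code_of_N, x):
    # Fixed code R: builds a formula that is satisfiable iff the
    # nondeterministic machine described by code_of_N accepts x in
    # linear time.  The description of N enters only as data, so as
    # "new code" this contributes O(1), and the whole program has
    # size s + O(1).
    raise NotImplementedError("standard tableau construction, omitted")

def sat_solver(phi):
    # The assumed deterministic polynomial-time program V for SAT,
    # which exists if P = NP.  Its size is an unknown constant.
    raise NotImplementedError("unknown program V")

def simulate_N(code_of_N, x):
    # The entire simulation is the one-line composition V(R(x)).
    return sat_solver(cook_levin_reduction(code_of_N, x))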

However, the ‘O’ here includes the unknown size of the possibly-huge program {V}. Hence this doesn’t fully answer the second question. That’s our point: can we do better?

The Idea

The basic idea, Levin’s “universal search,” is well known, and we covered one aspect of it here. We have not seen it addressed regarding program size, however.

Levin’s universal machine {U} accepts a Boolean formula {\phi} only when it has found a satisfying assignment. Call this “verified acceptance.” It means that {U} can possibly err only by rejecting a {\phi} that is satisfiable after all. If {\mathsf{P = NP}} is true, then Levin’s code—which is known and small—will run in polynomial time, and accept {\mathsf{SAT}} except for finitely many errors.

What {U} does on a formula {\phi} of size {m} is first spend {m} steps trying to prove that the first yea-many small polynomial-time programs don’t recognize {\mathsf{SAT}} with verified acceptance. Let {P_k} be the least program that does not fail this test. {U} then runs {P_k(\phi)}. If it gives a satisfying assignment, {U} accepts {\phi}; else {U} rejects. Then the code {U(R(x))} has size {s' = O(s)} with {O(1)} “new code” where the constant in the {O} is small and known, not huge.
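
Here is a simplified sketch in Python of the verified-acceptance idea. The names candidate_solvers and step_budget are stand-ins for Levin's enumeration of all programs and his scheduling of their running times; the essential point is only that {U} accepts after checking a satisfying assignment itself, so it can err only by rejecting.

from itertools import product

def satisfies(phi, assignment):
    # True iff assignment (dict: variable -> bool) satisfies the CNF
    # formula phi, given as a list of clauses of nonzero signed ints.
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in phi)

def universal_search(phi, candidate_solvers, step_budget):
    # Simplified stand-in for U: try each candidate under a step budget
    # and accept only on a verified satisfying assignment.
    for solve in candidate_solvers:
        assignment = solve(phi, step_budget)          # may return None
        if assignment is not None and satisfies(phi, assignment):
            return True                               # verified acceptance
    return False    # can err only by rejecting a satisfiable phi

def brute_force(phi, step_budget):
    # One (slow) candidate program, truncated by the step budget.
    variables = sorted({abs(l) for c in phi for l in c})
    for i, bits in enumerate(product([False, True], repeat=len(variables))):
        if i >= step_budget:
            return None
        assignment = dict(zip(variables, bits))
        if satisfies(phi, assignment):
            return assignment
    return None

# Example: (x1 or x2) and (not x1 or x2) is satisfiable with x2 = True.
print(universal_search([[1, 2], [-1, 2]], [brute_force], step_budget=8))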

If {\mathsf{P=NP}}, then some {P_k} runs in polynomial time {m^c}, and (by the self-reducibility of {\mathsf{SAT}}) recognizes {\mathsf{SAT}} with verified acceptance. Let {k} be least for this. Then each of the machines {P_1,\dots,P_{k-1}} makes some mistake, and some finite time {n_0} suffices for {U} to cross off each one. Hence whenever {m \geq n_0}, {U(\phi)} simulates {P_k(\phi)} and gives the same answer in basically the same time, which becomes the overall polynomial runtime of {U}. This carries over to runtime {O(n^c)} for our code {U(R(x))} on {x} of length {n \geq n_0}.

The issue is, what happens when {|x| < n_0}? Our code may err. We could “patch” it by adjoining a lookup table for the finitely many errors. But this table becomes part of the program size {s'} and might be huge. Moreover a given table could be buggy—remember that the famous Pentium bug of 20 years ago came from a few bad lookup-table entries.
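
The patch itself is trivial code; the cost lies entirely in the table. A minimal sketch, where the contents of error_table are the hypothetical (and possibly huge) part:

def patch(small_U, error_table):
    # Wrap the small universal code with a lookup table that hard-codes
    # the answers for the finitely many short inputs on which it may err.
    # The table becomes part of the program size s', and its entries are
    # trusted without verification.
    def patched_U(x):
        if x in error_table:
            return error_table[x]
        return small_U(x)
    return patched_U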

Patchless Assembly?

Levin originally stated his {U} for factoring rather than {\mathsf{SAT}}. The point is that the language

\displaystyle  \mathsf{Fact} = \{(x,w): w \text{ is a prefix of the unique prime factorization of } x\}

belongs to {\mathsf{NP \cap co}\text{-}\mathsf{NP}}, and so can be recognized with verified rejection as well as verified acceptance. Then with reference to the above description of {U} in general, {U} rejects only if {P_k} outputs a verified rejection. If {P_k} gives no verified answer, then {U} takes as much time as it needs to get the correct answer. This {U} still settles down to polynomial time for {n \geq n_0}.
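
The reason {\mathsf{Fact}} admits verified answers in both directions is that a single witness, the full sorted prime factorization of {x}, certifies membership and non-membership alike. Here is a minimal sketch in Python, assuming factorizations are encoded as nondecreasing lists of primes; the trial-division primality test is only for illustration.

def is_prime(p):
    # Trial division -- fine for a sketch, not for real factoring.
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verified_answer(x, w, factorization):
    # factorization is a claimed full prime factorization of x, sorted;
    # w is a candidate prefix.  If the witness checks out, the answer it
    # yields is verified, whether it is "accept" or "reject"; otherwise
    # return None (no verified answer).
    prod = 1
    for p in factorization:
        prod *= p
    valid = (prod == x
             and all(is_prime(p) for p in factorization)
             and list(factorization) == sorted(factorization))
    if not valid:
        return None
    return list(factorization)[:len(w)] == list(w)

# (12, [2, 2]) is in Fact; (12, [3]) is not -- both answers are verified.
print(verified_answer(12, [2, 2], [2, 2, 3]))   # True
print(verified_answer(12, [3],    [2, 2, 3]))   # False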

Thus it is not possible that factoring is in {\mathsf{P}} yet all polynomial-time programs for it are huge. To be sure, the running time can be huge—indeed {P_k} itself may be huge and its size gets reflected in concrete running times—but the code for {U} itself is small. And it is all known code which has already been debugged.

We could try to do the same for {\mathsf{SAT}} by having {U} solve {\phi} exhaustively if {P_k} fails to find a satisfying assignment. The problem then becomes not knowing when to stop and “trust” {P_k} once the first good one has been found. The nub is whether we can use the assumption of {\mathsf{P = NP}} to “bootstrap” the verification of succinct proofs of unsatisfiability. The mere existence of succinct proofs of unsatisfiability that are verifiable without the assumption, however, would imply {\mathsf{NP = co}\text{-}\mathsf{NP}}.

Can we make the short code simulating {N} perfect, without patches? We like the metaphor of {U} as interacting and adapting in an environment of short programs, which it “assembles” into its own code before finding the one that works best.

We are not bothered about the conversion time {n_0} being exponential in the size of the {P_k} that is ultimately selected. After all, compared to the short timespan of recorded human history, the time it has taken us to evolve feels “exponential.”

Open Problems

Can we write code today, on one sheet of paper, that accepts {\mathsf{SAT}} exactly and runs in deterministic polynomial time if {\mathsf{P = NP}}?

Incidentally, Ken fit much of a 3-tape Turing machine that does universal simulation of assembly programs onto one sheet for Buffalo’s theory courses.


Comments
  1. May 3, 2014 6:08 pm

    I would guess that one page is vastly larger than the provable Kolmogorov complexity limit for ZFC. That is, it is probably impossible to prove that *anything* requires more than a page (maybe even more than a couple hundred bytes?) to represent.

  2. May 4, 2014 3:00 am

    This reminds me of one of my favorite algorithms, Hutter’s algorithm (http://www.hutter1.net/ai/pfastprg.htm), which at first glance appears to answer your one-page SAT challenge in the negative. (Actually, it doesn’t.) Hutter’s idea was to extend Levin’s universal search so that rather than blindly running an exponential (in program size) number of programs until one is verified to work, instead the algorithm blindly tests an exponential (in proof size) number of proofs to find a fast algorithm that must work, and spends most of its time running that algorithm. It’s a bit more subtle than that, but the bottom line is that if there is a *provably* polynomial time algorithm for SAT, Hutter’s short and simple algorithm is also polynomial time for SAT — and its asymptotic slowdown is less than a factor of 5. Contrast that to Levin’s universal search, for which the multiplicative slowdown factor is exponential in the program size. Gargantuan slowdown is not to be avoided though: Hutter’s algorithm just moves it to the additive constant in the upper bound.

    The trouble is that you can’t prove that Hutter’s algorithm is correct, because whatever sufficiently powerful formal system you choose for your proofs, you can’t prove that to be correct.

    Nonetheless, it’s one of my favorite algorithms.

    • June 8, 2014 8:18 am

      Why do you think that the explicit reference to a formal system is a drawback? For the purpose of such an algorithm, one would probably use a formal system equi-consistent to PA or some type theory. PA and most practically useful type theories are weaker than ZFC, so proving the correctness of the corresponding instantiation of Hutter’s algorithm in ZFC should be no problem. Proving an independent runtime bound for Hutter’s algorithm on the other hand…

      If you had a fast SAT solver, you could use it to improve the efficiency of “Algorithm A” in Hutter’s algorithm, which uses an appropriately scheduled exponential search for the SAT instead. This could reduce the constant offset in the runtime up to a factor exponential in the length of the shortest proof.

  3. anon permalink
    May 4, 2014 5:18 am

    Maybe I’m misunderstanding, but couldn’t you just tell the universal search routine to spend “constant” time (say) 10^(10^10) verifying each program and using the first one which passes? (Then I think the only issue for code size would be if the constant required is so large that it cannot be described succinctly, e.g., far beyond recursive Ackermann and into “busy beaver”-type constants.)

  4. May 4, 2014 7:43 am

    We expect our programs to have reasonable probability of being at the same time manageable, fast and easy to find. When we can’t find them, we then look for a proof of non-existence… which must also have reasonable probability of being of manageable size, understandable and not too hard to find.

    And the story doesn’t stop there, because in case of failure we may also look for a proof that none of the above is possible… which proof will in turn be subject to the same feasibility requirements – and so on, ad infinitum…

    In my opinion, P=?NP is best treated by philosophy. It is one of two things: either NP-complete problems are infeasible and a reasonably simple proof of this will be found someday – I call it the Hilbert viewpoint – or NP-complete problems are easy but it’s their proof which is infeasible – I call it the Gödel perspective.

    The disturbing novelty in complexity theory is that neither of the two viewpoints is more likely to be true than the other.

    • May 5, 2014 7:43 am

      Let’s formalize the idea that proving P≠NP is as hard as solving an NP-complete problem in polynomial time. Given a problem P, is there a Curry-Howard isomorphism Ψ that transforms every proof that P has no algorithm of time-complexity C into an algorithm of time-complexity C for some polynomial reduction of the problem P? If so, then Ψ would transform every proof of P≠NP into an algorithm that proves P=NP – and P≠NP would thus be proved undecidable.

  5. May 4, 2014 6:12 pm

    I vaguely remember that there was a reason why Levin’s “universal search” can’t be patched up today, i.e., without knowing n0. Or perhaps it was just a reason why this would be non-trivial, not why it would imply knowledge of n0. Of course, we can guess at the correct program, encoding some arbitrarily huge n0 in terms of the Ackermann function, but that’s not how the open question is meant.

  6. May 5, 2014 6:25 am

    This has been in the wikipedia for a long time:
    http://en.wikipedia.org/wiki/P_vs_NP#Polynomial-time_algorithms

    • May 5, 2014 4:06 pm

      It’s one-sided only, similar issue to our post.

  7. phomer permalink
    May 5, 2014 8:23 am

    Not sure if it applies directly, but if you place a bound on the number of variables n for SAT, you could build a program that is just one giant lookup of all the precomputed answers. You could find a way to insert those answers into the code. There are an infinite number of input strings for any number of variables, but those map to a finite number of solutions, most likely in P (indexes and hashing would help). In that way, you could create a machine that would run in P, with exponential code size, but only for a fixed n or less. From this perspective, it is easy to see that n would be dependent on our underlying technical progress (memory and disk) and that no general solution of this nature could ever work; nor would compression, since the usage is infinite if n is infinite.

    Paul.

  8. May 5, 2014 12:03 pm

    Transport theory broadly conceived — to include momentum as a conserved/transported quantity, such that transport theory encompasses (e.g.) Navier-Stokes dynamics — is particularly well-suited to the discovery of small-size/complex output simulation algorithms, as evidenced by the burgeoning creativity of the computational discipline of discrete simulation of fluid dynamics (DSFD).

  9. May 6, 2014 11:33 am

    A newbie question. If instead we pose the problem as follows:

    Is there an algorithm for {\mathsf{X}} (assuming {\mathsf{X}} is in NP) that runs in polynomial time and can be written down on K pieces of paper?

    Is this problem in NP?
