
The Shapes of Computations

May 17, 2015


Or rather, what can the shapes of proofs tell us about them?

[Photo of Juris Hartmanis; April CACM source]

Juris Hartmanis did much to lay out the landscape of computational complexity beginning in the 1960s. His seminal paper with Richard Stearns, “On the Computational Complexity of Algorithms,” was published 50 years ago this month, as observed by Lance Fortnow on the blog he writes with Bill Gasarch. It is a great achievement to open a new world, but all the more mysterious that after 50 years so much of its landscape remains unknown.

Today we ask what might determine the unseen topography and how much some recent large-data discoveries may help to map it.

The idea for this post arose from a possibly phantom memory that Juris (co-)wrote a short draft survey on “Shapes of Computations” sometime in 1986–1989 when I was at Cornell. I recall the specific phrase “long and skinny” to describe space-bounded computations. An {\ell = O(\log n)} space-bounded computation can explore all of a polynomial {p(n)}-sized undirected graph by saving just the current node and some {O(\log n)} auxiliary information by which to choose a neighbor for the next step. The trace of this computation becomes an {\exp(\ell)}-length sequence of {\ell}-bit strings. A polynomial-space computation doing an exponential search of a game tree has the same long-and-skinny “shape” even though the scale is greater with regard to the input length {n}. Polynomial time-bounded computations, however, can use polynomial space, whereupon they become “short and fat.” Breadth-first search of a graph is a canonical algorithm that hogs space for its relatively short duration.
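
To make the contrast concrete, here is a minimal Python sketch of the two shapes on an undirected graph given as an adjacency list (a dict from node to list of neighbors). The random walk merely stands in for the space-bounded exploration—the actual logspace algorithm, Reingold’s, is far subtler—while the breadth-first search keeps a visited set and frontier that can grow to hold much of the graph. The function names and representation are ours for illustration only.

    import random
    from collections import deque

    def skinny_walk(graph, start, steps):
        # "Long and skinny": store only the current node plus a step counter,
        # O(log n) bits in all; a random walk on a connected undirected graph
        # visits every node within expected polynomial time.
        v = start
        for _ in range(steps):
            v = random.choice(graph[v])   # trace is long, state stays small
        return v

    def fat_bfs(graph, start):
        # "Short and fat": linear time, but the visited set and queue can
        # come to hold a constant fraction of the whole graph.
        visited = {start}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
        return visited

On a path graph with a thousand nodes, skinny_walk carries two machine words of state from step to step, while fat_bfs finishes holding all thousand nodes.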

Which computations fall between these “Laurel and Hardy” extremes? For SAT and the other {\mathsf{NP}}-complete problems, this is the great question. The surest way to separate {\mathsf{NP}} from {\mathsf{P}} and from {\mathsf{PSPACE}} would be to characterize these problems by a third distinct shape of computation. But we have not even separated {\mathsf{P}} from {\mathsf{PSPACE}}, nor logspace from {\mathsf{NP}}, so what can we pretend to know?

Shapes of Proofs

My memory has probably been transferred from a column Juris wrote with his students Richard Chang, Desh Ranjan, and Pankaj Rohatgi for the May 1990 issue of the Bulletin of the EATCS. It has this nice diagram:

[Diagram of proof shapes from the EATCS column]

The analogy between computations and proofs has been instrumental since the early days of Kurt Gödel, Alonzo Church, and Alan Turing. Proofs do, however, give nondeterminism “for free”: {\mathsf{NP}} is treated the same as {\mathsf{P}} in their diagram, {\mathsf{NEXP}} the same as {\mathsf{EXP}}, while “nondeterministic polynomial space” equals {\mathsf{PSPACE}}. Hence I’ve regarded “proofs” as secondary to “computations” as objects for complexity. However:

  • We have a lot more examples of proofs, going back before Euclid;

  • The search for a proof is computational;

  • Interactive proofs reflect a two-player game setting where the emphasis is more on solving; and

  • We now have examples of large computer-generated proofs to analyze.

The EATCS column defines a notion of width of a proof, characterizes {\mathsf{PSPACE}} via polynomial-width proofs, and marvels at how the classic interactive protocol for {\mathsf{IP = PSPACE}} retains the “skinny” shape at much shorter length. Indeed, in cases where the verifier can directly check evaluations of the unique multilinear extension {\hat{f}} of the arithmetization {f} of a quantified Boolean formula, every proof step involves just two terms {a_i x_i + b_i} for the rounds {i = 1} to {n}. A related form of skinniness had been brought out by Jin-Yi Cai and Merrick Furst in their 1987 paper, “{\mathsf{PSPACE}} Survives Three-Bit Bottlenecks.” The column goes on to emphasize that the {\mathsf{IP}} form of the proof lends itself to quicker probabilistic verification. This aspect was soon codified in the definition of probabilistically checkable proof, which lends itself most readily to characterizing {\mathsf{NP}} and {\mathsf{NEXP}}.
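
For readers who want to see the “two terms per round” shape in miniature, here is a toy sum-check exchange in Python for a multilinear polynomial {f} over a small prime field. The prover’s message in round {i} really is a single linear polynomial {a_i x + b_i}. The modulus, the rolled-together prover and verifier, and the example polynomial are illustrative choices of ours, not the full machinery behind {\mathsf{IP = PSPACE}}.

    import itertools, random

    P = 2**31 - 1   # an illustrative prime modulus

    def sum_check(f, n):
        # Honest prover's claim: the sum of f over {0,1}^n.
        claim = sum(f(*x) for x in itertools.product((0, 1), repeat=n)) % P
        fixed = []   # the verifier's random challenges so far
        for i in range(n):
            # Prover's round-i message g(x): fix earlier variables to the
            # challenges, set variable i to x, sum the rest over {0,1}.
            # For multilinear f this g is linear: g(x) = a*x + b.
            def g(x):
                rest = itertools.product((0, 1), repeat=n - i - 1)
                return sum(f(*fixed, x, *t) for t in rest) % P
            b = g(0)
            a = (g(1) - b) % P
            if (a + 2 * b) % P != claim:      # verifier checks g(0) + g(1) = claim
                return False
            r = random.randrange(P)           # fresh challenge
            claim = (a * r + b) % P           # new claim for the next round
            fixed.append(r)
        return claim == f(*fixed) % P         # one final direct evaluation

    # Example: the multilinear polynomial x1*x2 + 3*x3.
    print(sum_check(lambda x1, x2, x3: x1 * x2 + 3 * x3, 3))   # True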

Three Kinds of Proofs

Amid all this development on “long and skinny” proofs, what can we say about “short and fat” ones? Intuitively, such proofs have lots of cases, but that is not the full story; the network of logical dependence matters too. Hence we find it most helpful to distinguish three kinds of proofs by shape:

  1. Proofs with close to one single path to the conclusion;

  2. Proofs with many cases, but the cases have a simple hierarchical tree pattern and can mostly be checked sequentially;

  3. Proofs with a more web-like pattern of logical dependence between parts.

Direct evaluations of quantified Boolean formulas in {\mathsf{PSPACE}} have type 2, while the interactive proof with polynomials gives the “feel” of type 1 to both the prover and the recipient of the proof.

Chess problems prefer type 1, or a limited form of 2, for esthetics. The website ChessBase.com recently republished the longest branching-free “Mate-In-N” problem ever created, by Walther Jörgenson in 1976. It is mate-in-203 with no alternative move allowed to the winner, and virtually no case analysis of alternate defensive tries by the loser either.


[Diagram of the mate-in-203 position]


However, a chess search often has type 3. Frequently, different starting sequences of moves come to the same position {P}. The value of {P} computed the first time is stored in a hash table, so that later sequences reaching {P} get that value immediately, cutting off any further work. This resembles breadth-first search insofar as marked nodes may be touched later along other paths. The dependencies of values become web-like. Injured values from hash collisions can cause huge ripple effects, as I covered in a post three years ago.
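
The tactic is easy to sketch in code. Below is a tiny negamax search with a transposition table in Python; the hooks moves, apply_move, and score are hypothetical stand-ins for a real engine’s machinery, and real engines hash positions with compact keys rather than storing them whole.

    table = {}   # (position, depth) -> value: the stored "hash table" of results

    def negamax(pos, depth, moves, apply_move, score):
        # Value of pos from the side-to-move's perspective, searched to `depth` plies.
        key = (pos, depth)
        if key in table:            # transposition: a different move order reached
            return table[key]       # pos, so reuse the stored value immediately
        legal = moves(pos)
        if depth == 0 or not legal:
            val = score(pos)
        else:
            val = max(-negamax(apply_move(pos, m), depth - 1,
                               moves, apply_move, score)
                      for m in legal)
        table[key] = val            # store; later lines hitting pos stop here
        return val

The stored entries play exactly the role of reusable lemmas discussed next, and a corrupted entry propagates to every line that consults it.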

The stored-hash tactic is much the same as a lemma in a proof that is used multiple times. We suspect that last year’s 6-gigabyte computer-generated proof of discrepancy > 2 in the Erdős Discrepancy Conjecture has many such lemmas, and hence is more type 3 than 2. The provers Boris Konev and Alexei Lisitsa have an updated page with further computations. They do not link the whole proof that no sequence of length 1,161 or more has discrepancy 2, but they do give some of the “DRUP” certificates of unsatisfiability. DRUP stands for reverse unit propagation with clause deletion, and that propagation strikes us as largely composed of cases and lemmas. The subcases might be codable at a high level via predicates {P(m,\cdots)} for m < 1,161 expressing the unavailability of length-{m} subsequences fulfilling some extra conditions {(\cdots)}, with such predicates being copiously re-used.
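
For concreteness, here is a small Python helper for the quantity in question: the discrepancy of a finite ±1 sequence is the largest absolute value of a sum along a homogeneous progression d, 2d, 3d, …. The function and example are ours; the hard content of the certificate is showing that no sequence of length 1,161 keeps this value at 2 or below.

    def discrepancy(x):             # x is a list of +1/-1 entries at positions 1..n
        n = len(x)
        best = 0
        for d in range(1, n + 1):
            s = 0
            for k in range(d, n + 1, d):   # positions d, 2d, 3d, ...
                s += x[k - 1]
                best = max(best, abs(s))
        return best

    # The alternating sequence already reaches discrepancy 3 at length 6,
    # via the progression of positions 2, 4, 6.
    print(discrepancy([1, -1, 1, -1, 1, -1]))   # 3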

One finds an explosion of stored sub-cases in chess endgame tables. However, a vast majority of them prove wins in positions that a chess master would see as “trivially true” in a few seconds. In other cases an alternative by the loser might simply jump to a later stage of the main trunk line, thus merely accelerating the same loss rather than posing a new case. (Similarly, an alternative by the winner might merely allow the defender to wind back to an initial stage, without much need for separate verification.) We wonder how far this pertains to the longest endgame win discovered in the new 7-piece endgame tables, a mate in 545 moves. That is, how large is the part of the proof tree that needs to be specified, so that a chess program given values for positions in the tree could verify the rest via its local search?

Computations Short-Cutting Proofs?

This last eventuality prompts our new speculation: perhaps we can rigorously develop a science of when large sections of proofs can be effectively handwaved. This would appeal to the computational style most often postulated for our human brains: shallow but broad, with great power of heuristic association, analogous to the complexity class {\mathsf{TC^0}} of poly-size constant-depth threshold circuits. Even when yoked with computers, the added value of our grey matter is not to be discounted, as attested in my joint paper last summer, whose section 6 shows how human-computer tandems performed better at chess in 2005–08 than computers alone.

We have recently twice covered a conjecture by Freeman Dyson that one feels should lend itself to this kind of treatment. Many other open conjectures in number theory are felt to be “probably” true, where “probably” has a technical sense that might be developed further into some kind of dependent structure: if that handwave is valid then all-the-more certainly so is this one. The idea could be helped by enumeration of exceptions that, once handled, enable the rest of the proof to be executed with a broad brush. As linked from an essay post by Scott Aaronson, Tim Gowers relates a relevant opinion by Don Zagier in a MathOverflow comment. We morph this opinion to say that mathematicians may need a “handwave heuristic” simply because many “obviously true” statements don’t connect to anything substantial that would give a reason for their truth.

This could push many proofs of type 3 toward types 2 or 1. Note that in the way interaction and randomness combine to move type 2 toward type 1, we are already agreeing to tolerate a chance of error. It is the nature of the kind of error involved in converting instances of type 3 into type 2 that needs further consideration. We wonder whether current developments such as homotopy type theory are embracing not just exact patterns but also heuristics for when a search is overwhelmingly likely to succeed—or to fail.

This still leaves our original question of shapes of computations. In the past both Dick and I have undertaken some exploration of conditions under which computations might be similarly “self-improved.” That idea will have to be for another time.

Open Problems

Can we assign particular “shapes” of computations canonically to specific computational problems? Can this help guide concrete attacks, or would it be tantamount to solving the big open questions to begin with?

Again we congratulate Juris and Richard on this 50th anniversary of their achievement. We also tip our hat to a comment by John Sidles in our “Case Against Cases” post which partly prompted this further essay.


A Tighter Grip on Circuit Depth

May 8, 2015


The polynomial hierarchy is infinite for a random oracle

[Photos of Rossman, Servedio, and Tan]

Benjamin Rossman, Rocco Servedio, and Li-Yang Tan have made a breakthrough in proving lower bounds on constant-depth circuits. It came from a bi-coastal collaboration, with Rossman visiting the Berkeley Simons Institute from Japan and Tan visiting Servedio at Columbia University in New York from Berkeley. Their new paper solves several 20- and 30-year-old open problems.

Today we congratulate them on their achievement and describe part of how their new result works.
Read more…

Transcomputational Problems

April 30, 2015


Problems beyond brute force search

[Photo of Hans-Joachim Bremermann; cropped from Wikipedia source]

Hans-Joachim Bremermann was a mathematician and biophysicist. He is famous for a limit on computation, Bremermann’s limit, which is the maximum computational speed of a self-contained system in the material universe.

Today Ken and I wish to talk about the limit and why it is not a limit.
Read more…

The Anti-Pigeonhole Conjecture

April 26, 2015


A conjecture about faculty behavior

[Image: “Dr. Kibzwang” source]

Colin Potts is Vice Provost for Undergraduate Education at Georgia Tech. His job includes being a member of the President’s Cabinet—our president, not the real one—and he is charged with academic policies and changes to such policies. He is also a College of Computing colleague and fellow chess fan.

Today I want to state a conjecture about the behavior of faculty that arose when Tech tried to change a policy.
Read more…

Case Against Cases

April 22, 2015


How to avoid too many cases at least some of the time


Theon of Alexandria was history’s main editor of Euclid’s Elements.

Today I want to talk about case analysis proofs.
Read more…

Kinds of Continuity

April 15, 2015


Congratulations to John Nash and Louis Nirenberg on the 2015 Abel Prize

[Photos of Nash and Nirenberg; combined from src1, src2]

John Nash and Louis Nirenberg have jointly won the 2015 Abel Prize for their work on partial differential equations (PDEs). They did not write any joint papers, but Nirenberg evidently got Nash excited about David Hilbert’s 19th problem during Nash’s frequent visits to New York University’s Courant Institute in the mid-1950s. Nash in return stimulated Nirenberg by his verbal approach of barraging a problem with off-kilter ideas. The Norwegian Academy of Science and Letters recognized their ‘great influence on each other’ in its prize announcement.

Today we congratulate both men on their joint achievement.
Read more…

A Quantum Two-Finger Exercise

April 8, 2015


More mileage than expected from a little example

[Photo of Sean Carroll at a blackboard; cropped from World Science Festival source]

Sean Carroll is a cosmologist in the Department of Physics at Caltech. He also maintains a blog, “Preposterous Universe,” and writes books promoting the public understanding of science. I have recently been enjoying his 2010 book From Eternity to Here: The Quest for the Ultimate Theory of Time.

Today—yes, Carroll would agree that there is a today—I would like to share an interpretation of a little quantum computing example that occurred to me while reading his book.
Read more…
