# Guessing

*The role of guessing in complexity theory*

Edward Vul and Harold Pashler are psychologists. They have discovered something intriguing about guessing.

Today I want to share a draft of a piece on the question that is geared toward non-specialists. It views the famous P=NP question as a question about “guessing.” I would like to hear from you about what you think of it.

As I was preparing this, I ran across Vul and Pashler’s paper on guessing, which I thought would be fun to share; perhaps you missed it initially as I did. Consider this question:

What percentage of the world’s airports are in the USA?

The idea is to guess at the answer. Then take a minute, drink a soda or a cup of coffee, and make another guess. The claim is that the *average* of the two guesses is usually more accurate than the original guess. They show by an empirical study that the increase in accuracy is about 6.5 percent. Waiting weeks ups the accuracy much more—the claim is it is now about 16 percent. Their paper includes a chart of these results.
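To see why averaging should help at all, here is a toy Python simulation of my own (not Vul and Pashler’s model): if two guesses scatter independently around the truth, their mean has smaller expected error. A person’s real second guess is correlated with the first, which is why the measured gains are far smaller than this independent-error idealization suggests.

```python
import random

def simulate(truth=30.0, noise=10.0, trials=100_000):
    """Mean absolute error of one guess vs. the average of two
    independent guesses, each normally scattered around the truth."""
    one = two = 0.0
    for _ in range(trials):
        g1 = truth + random.gauss(0, noise)
        g2 = truth + random.gauss(0, noise)
        one += abs(g1 - truth)
        two += abs((g1 + g2) / 2 - truth)
    return one / trials, two / trials

single, averaged = simulate()
print(single > averaged)  # prints: True -- the averaged guess has less error
```

For independent errors the averaged guess cuts the error by a factor of about the square root of two; correlation between guesses eats most of that in practice.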

Enough on actually guessing; let’s turn to a discussion of our problem as a question about potentially guessing.

**The Power of Guessing**

Some scientific questions are fundamental to our understanding of life, the universe, and/or everything—here by “everything” we mean the world of mathematical laws. In physics there is the question of what is beyond the event horizon of a *black hole*—does it lead to another world? In biology there is the question of what causes cells to *differentiate*: how do they know to change as they divide? In chemistry there is the question of how proteins *fold* into three-dimensional structures—how do they know how to form these complex shapes? In computer science there is the question of the power of *guessing*—why do we seem unable to solve certain important computational problems efficiently without guessing?

The latter question about the power of guessing is called the P=NP question. Forget for now what these symbols mean; view them as a place-holder for our fundamental question.

An example of the type of problem that is captured by the P=NP question is a simple combination lock. I mean the kind of lock that is used to secure anything from our lockers at the gym to classroom equipment to our bicycles. Each lock typically has three numbers that are the combination—yes, expensive ones can have more. To open the lock you must dial each number in the correct order: first turn the lock clockwise to 9, then counterclockwise to 6, then clockwise again to 46. In this case your secret combination is 9,6,46—my birthday—probably not the best choice for a secret. Anyone who wants to open the lock must know these numbers in the correct order. If they try 6,9,46 the lock will refuse to open.

The opening of a combination lock captures the essential features of the type of problems we consider in the P=NP question. There is a secret: in the combination lock it is the sequence of the three numbers. There are lots of secrets possible: if each number is between 0 and 99, then there are

100 × 100 × 100 = 1,000,000

combinations possible. If you do not know the combination lock’s secret, you will take a very long time trying one combination after the other, until you find the correct one. At one spin of the lock per second, you will succeed in about one month of trying.
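As a back-of-the-envelope check of the arithmetic above, here is a small Python sketch (the three-spins-per-try figure is my own assumption, since dialing one candidate combination takes three turns of the dial):

```python
# Each of the three numbers ranges over 0..99, so the search space is:
combinations = 100 ** 3            # 1,000,000 possible secrets

# Dialing one candidate takes about three spins; at one spin per
# second, trying every combination takes roughly:
seconds = combinations * 3
days = seconds / (60 * 60 * 24)
print(combinations, round(days))   # prints: 1000000 35
```

About 35 days of nonstop spinning—“about one month,” as claimed.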

The pivotal feature is that should you try the correct combination—9,6,46—the lock will open. You will know immediately that you have the correct secret combination to the lock. This is critical; without it there would be no hope at all of ever finding the secret.

What does this have to do with *guessing*? Everything. Suppose that you were a great guesser; then the combination lock problem would be simple. You would think, make your guess, and then try it to see if the lock opens. If you were a terrific guesser, then you would open the lock in one try. Pretty neat.
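The guess-then-check loop can be sketched in a few lines of Python (the lock model here is of course a toy of my own):

```python
SECRET = (9, 6, 46)  # the post's example combination

def lock_opens(guess):
    """Verifying a guess is instant: the lock either opens or it doesn't."""
    return guess == SECRET

def brute_force():
    """Without a good guesser, we are reduced to trying every combination."""
    for a in range(100):
        for b in range(100):
            for c in range(100):
                if lock_opens((a, b, c)):
                    return (a, b, c)

print(brute_force())  # prints: (9, 6, 46)
```

A terrific guesser would call `lock_opens` once; the brute-force loop may need up to a million calls. That gap between checking one guess and searching all of them is exactly what the question is about.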

The P=NP question asks, not for mechanical combination locks, but for digital problems: are there computational algorithms that are great guessers? Faced with a digital version of a combination lock, is there an algorithmic way to guess solutions?

One way you could be a terrific guesser for locks would be to have X-ray vision. By analyzing the structure of the hidden tumblers, you could find the combination. What makes the digital problems so surprising is that we already have the X-ray vision—everything that specifies an instance of the problem is given to us. The challenging question is whether there is something about the structure of the instance that enables us to *see* the answer without trying a multitude of combinations.

The ramifications of this question are immense, and I would argue it is at least as important to our understanding of the universe as black holes, cell differentiation, and protein folding. There are two possibilities: there is a digital way to be a terrific guesser, or there is no way to have such a guesser. The first is denoted by P=NP—there are good guessers. The second is denoted by P≠NP—there are no good guessers. The fancy symbol ≠ just means that two things are not equal.

Today we have no idea which is true: does P=NP or does P≠NP? The question is important precisely because the two different answers would place us in two very different worlds.

In the first world there would be algorithms that could guess the solutions to problems. Such algorithms would change our world. Computer security would be gone. Messages sent over the Internet, for example, could be read by anyone. There would be no convenient secrecy, no digital combination locks.

There would be good consequences too of P=NP. We could design algorithms that could solve almost any problem—not quite any, more on that later—but most. This would lead to immensely powerful algorithms. Some believe that this would lead to machines that could out-think humans. Certainly for many tasks these algorithms could improve the performance of everything that we do today by computer.

In the second world there would be no algorithms that could guess the solutions to these problems. Computer security would be safe, at least in some forms. Messages could be sent over the Internet in complete safety from being read by those without the secrets.

There would be bad consequences too of P≠NP. There would be problems that no algorithm could solve in any reasonable time. These problems abound in science and technology, and they would have no reasonable solutions. Without the power to guess correctly, these problems would be impossible. At best we would be able to “come close,” which is OK for some practical problems, but not for many others.

The question is: which world do we live in? Are there powerful guessing algorithms that can solve almost anything? Or are there problems that defeat any algorithm? We do not know. The solver of this problem, the person who shows that P=NP or shows that P≠NP, would dramatically change our understanding of the world. They would also collect a million dollars from the Clay Institute, since this question is one of their six remaining Millennium Prize Problems.

Of course the above is a simplification of the actual situation as I have discussed before here, referencing Russell Impagliazzo’s detailed description of five different algorithmic worlds. I plan, if I expand on this piece, to make this clear. There is also Scott Aaronson’s survey of Nature’s possible ability to solve NP-complete problems.

**Open Problems**

What do you think about this start? Too simple? Too hard? Just right? Let me know. What other good references are there besides the above two, and how to refine them?

By the way the answer to the airport question is about 30 percent—but I guess you knew that.

“Computer security would be gone. Messages sent over the Internet, for example, could be read by anyone.”

Yikes! Please remove this as it is not true and has led to far too much wrong guessing by slightly informed people!!! There have been good posts on this very blog about why P=NP does not imply something like this.

Markk,

I agree of course. But I tried, perhaps not successfully, to start to write up a piece that would be readable by a very general group. I thank you all for the positive and negative comments.

So… “algorithms”. What are they, then?

Turing answered this definitively in the 1930s.

Hello,

Just right imho!

In some way I also understand Markk’s comment: this introduction assumes that useful algorithms are never galactic. One could also question whether worst case is the most interesting measure of complexity, or what happens if OWFs do not exist. But I don’t think one can add this up front without losing most of the targeted readers.

Best,

PS: have you ever discussed what consequences you see if OWFs do not exist?

My reaction is that the problem of separating P from NP is vastly harder than “guessing” examples indicate. Specifically, the examples given of “guessing” programs all have decidable runtime bounds.

And yet, it is very atypical that we know runtime bounds … because both P and NP include languages that are recognized (as far as we know) only by Turing machines whose runtime is formally undecidable. Thus, the correct intuition is that generic languages in P have grammars that can never be understood by any formal system of logic.

It seems to me that any popular exposition of P versus NP should emphasize that these “mystery” languages are a major obstruction to separating P from NP. Moreover, it is very striking, that “mystery” languages are *never* encountered in practice. As Dick is fond of wondering, are we sure that we’ve been working on the right problem, in focussing upon P versus NP?

A good reference to these ideas and theorems is Jules Hartmanis’ *Feasible Computations and Provable Complexity Properties* (1978), where these ideas are discussed in depth. More recently, TCS StackExchange hosted a discussion of this topic by Emanuele Viola. To the best of my (limited) knowledge, no one has *ever* written a popular exposition of Hartmanis’ “mystery” languages and the Turing machines that recognize them … so even a couple of paragraphs on this topic would be very illuminating!

“Moreover, it is very striking, that “mystery” languages are *never* encountered in practice.”

I think Dick said something in his book about this same issue in regards to SAT solvers. In my playing around, I found that the brutally hard problems never seemed to show up from random generation. It wasn’t until I started with the DIMACS ones that I really tested my ideas. The very worst problems were all hand-crafted, and had very noticeable, intricate structures within them. I suspect that this relates to information theory: if I tried to compress the really hard problems, I wouldn’t get anywhere.

Paul.

Actually it would be the other way around: random problems are incompressible (by the usual definition of “random”), and the difficult problems may have a short description (e.g. the SAT-equivalent of factoring 5959).
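To make the “5959” aside concrete, here is a tiny trial-division sketch (my own illustration, not from the comment): verifying a guessed factor takes one division, while finding one without a good guesser means searching candidate by candidate—the same guess-versus-search gap as the combination lock.

```python
def smallest_factor(n):
    """Return the smallest nontrivial factor of n, or n itself if prime.
    Checking any single candidate d is one modulus operation; the search
    over candidates is the expensive part."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

f = smallest_factor(5959)
print(f, 5959 // f)  # prints: 59 101
```

The short description “factor 5959” hides a search problem, which is the commenter’s point about difficult instances being compressible.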

The difficulty of random instances of SAT is an interesting question itself. See for instance the paper by Selman, Mitchell, & Levesque: “Generating Hard Satisfiability Problems”.

Containing undecidable objects doesn’t mean we can’t prove anything about the class. Kolmogorov complexity, for example, is an incomputable function, and yet it still has applications!

Another argument why “mystery” languages need not be an obstacle to a P v NP proof is to change axioms. Assume that I add an axiom to Peano Arithmetic saying that P != NP. For the sake of argument, assume that P indeed is not equal to NP, so that the axioms are sound. Then there will still be “mystery” languages in the class P–even though we can prove (trivially, since it’s an axiom) that P != NP.

I like this perspective, but I see it slightly differently. One needs to guess only in the absence of information. If you have the information, then you can go straight to the answer. Some definitions bring in an Oracle who, when asked, will give you the answer. So we get the options of Guess, Ask-the-Oracle, or gather enough information to make an informed choice.

In that sense, I see P=NP as a boundary. Either the information exists in an accessible way in the formal system or it does not. From that perspective, it is all about what’s implicitly encoded into the system (and our ability to get to it). Thus it tells us something fundamental about the depth of our abstractions and our world, like Godel’s incompleteness theorems.

I sense too that it not only relates to our abstract formal systems, but also to all systematic behavior. That’s an off-hand way of saying that I think biological systems, although not adhering to rigid formality, could also have more intrinsic information than we are currently aware of.

To me the question is: Is the information there? Can we find it? It’s not unlike wondering whether or not Fermat really knew the answer (I think he was writing for himself, why would he bluff? Or was it just intuition at play?). Answering it one way or the other sets our future direction for how we continue to solve problems and grow our knowledge.

Paul.

i’d try to be careful not to overblow the importance of the question, or to claim that NP has tremendous power. without explaining the PH or PSPACE, it might be reasonable to mention that there are natural games (no, not “generalized chess”, which is silly) whose generalization provably escapes P. (so even having an NP oracle wouldn’t let a computer easily win them). moreover, the crypto connection is a bit dicey — i don’t know of any NP-complete problems being used as cryptosystems to send email, for instance. that’s not the actual connection.

i like the idea, though. and i think that you can get away with even more in a first pass, if you assume that your reader is interested in science or a scientist.

perhaps getting people excited about complexity theory can be done without trying to introduce the P and NP classes at all, at least not at first.

Vul and Pashler cite the CIA World Factbook, apparently as here:

https://www.cia.gov/library/publications/the-world-factbook/rankorder/2053rank.html

This attempts to include all paved landing strips. It counts 43,973 of them in the world; 34% in the US. Overall, it doesn’t inspire much confidence.

This wikipedia entry lists international airports by country:

http://en.wikipedia.org/wiki/List_of_international_airports_by_country

I count 150 in the US and 1,081 total (14%).

My two estimates, made mere seconds apart, were 25% and 33% (mean 29%). I assumed they were after int’l airports.

-Carl

I thought the combination lock example was going to give a description of NP, and it would seem almost optimal for that. But then P=NP would be described as: if a problem can be solved by a good guesser (like the combination lock), can it be solved by me (who is not a good guesser)?

But I think the lock is very productive: e.g. you could have a safe with 2 combination locks, one of which is a dummy (to confuse safecrackers). So now the 2-lock safe gives a decision problem: does Lock1 have an opening combination? This is in co-NP because an opening combination for Lock2 would be a No certificate for Lock1. And on to the various intriguing questions about NP ∩ co-NP.

So far, every similar attempt at popularization of the P=NP issue has had (at least) one crucial joint, a place at which the reader must make an extreme leap of faith and begin simply trusting the writer. Before the joint, everything is clear; the layman can, with complete honesty, nod along and say, “I understand every word of this.” But then comes the crux, and although the narrative after the crux remains coherent, the layman cannot vouch for anything thereafter.

It is striking to me how well-defined the location of this crux is. I can nail it down to one sentence in our host’s exposition:

“The question asks, not for mechanical combination locks, but for digital problems, are there computational algorithms that are great guessers?”

The lay reader is unlikely to have any notion of what is meant by *digital problems* and *computational algorithms*. Imagine that you were reading a similar article, identical with the post above up to the crucial sentence, which, however, had nonsense-words in place of those two phrases.

One may rebut: the lay reader *does* have some intuitive notions about “problems” and “algorithms” that will be appropriately engaged by the existing phrasing, which would not be engaged by nonsense-words like “rhybnor” and “clasmitron”. My question is: is this intuition about the notion of “problem” and “algorithm” *so* applicable that it can be parleyed into any sort of understanding of P=NP? I claim not. My position is that the very notions of “decision problem” and “algorithm” are extremely subtle, and *not* amenable to (much) lay reasoning, and therefore that any explanation of P=NP that tries to attain comprehensibility by glossing over all of that subtlety is not going to be very satisfying. Well, maybe I’m only talking about my own satisfaction.

The real crux lies in the formalization of the intuitive notions of *problem* and *solution*; we should be trying to find a layman’s explanation of *that*. Anything less, I fear, doesn’t really qualify as an “explanation”. I find, upon re-reading the comments, that Richard Elwes very pithily anticipated what is, essentially, my problem with the essay. Consider my screed an expansion on his foxy question.

Thanks, ACW.

There are some words that cause people’s eyes to glaze over, almost as a Pavlovian response, and I suspect “algorithm” is likely to be one (and “logarithm” another, though this may depend on the age of the reader).

I’d suggest either inserting a couple of lines explaining what an algorithm is, or if you could stomach it, how about using “computer program” as an alternative? After all, almost everyone has at least some basic notion of what that means.

Elwes’ question is indeed foxy! Because if we sharpen it to “What is an algorithm in P?”, then the default answer of complexity theory is “That class of computer programs whose runtime bounds are certified to be in P by an oracle!”

That P-certifying oracles have no practical physical existence—and even more bizarrely, cannot be instantiated even in principle—constitutes a significant obstruction to explaining the P versus NP problem in common-sense terms.

Because to rigorously separate P from NP, we have to exclude the possibility that even algorithms whose runtimes we cannot bound (but that the oracle would certify are in P, supposing that the oracle existed) can be good guessers.

The broader point is that it is very difficult to explain the P versus NP problem, in any way that does not severely oversimplify the mysterious aspects of the class P. And conversely, if we focus our discussion on subsets of the class P whose properties are less mysterious, then we are not really talking about the problem of separating P from NP, but rather about some simpler problem (that still might be very hard).

This point-of-view owes much to lecture remarks by Ketan Mulmuley, to the effect that the seemingly “simple” complexity class P is perhaps the most mysterious of all complexity classes.

Surely there is great merit, though, in persistently trying to explain these ideas in the simplest, least technical terms. A talk on “Mysterious members of P and NP” would be very enjoyable!

Whoops … there were too many negatives in the passage that *should* have read:

“It is striking (to me)—yet unsurprising too—that nonrigorous discussions of P versus NP generally ignore these algorithms, for the common-sense reason that it is hard to say anything concrete about them. And conversely, in the past 40 years, rigorous analyses of P versus NP have made very little progress … again because it is hard to say anything concrete about the capabilities of these algorithms (without appealing to still more oracles).”

So perhaps that logician/philosopher (and engineer!) Wittgenstein was right in the closing statement of his *Tractatus Logico-Philosophicus*: “What we cannot speak about we must pass over in silence.” Is the lack of progress in separating P from NP a concrete manifestation of Wittgenstein’s principle? Forty years ago, did we embrace a definition of the class P that was so broad as to force silence upon us?

My thoughts: Ignoring the guessing component, I think the kind of “trivialising” of terms regarding P, =, ≠, and NP is unnecessary. I think it is possible to describe these concepts in simple words, and I think it’s useful to do so.

Also, perhaps it would be a good idea to do all complexity theorists a favour and explain what is wrong with amateur proof attempts (in a general way).

The combination lock example could cause some folks to conclude:

Any combination lock can be opened in a few attempts if P=NP

This in turn might cause them to believe in the mystic powers that would be unleashed if P=NP!

I am sure this has been said in other comments (tl;dr), but still:

You try to give an interpretation of the consequences of P=NP (or its converse) that heavily uses the theoretical notion of “problem”, not the real one. Approximation algorithms and/or heuristics solve real problems blazingly fast, if not the theoretical ones. Assuming P=NP, the polynomial-time algorithms for “hard” problems might have runtimes that are polynomials of very high degree; those would solve the theoretical problem fast (in the limit!) but would practically fail to solve the real problem in any way (let alone faster). The same divergence exists if P≠NP.

My point is that if you talk to a big audience, you should use _their_ intuition about problems and solution quality, not (y)ours. As an example: my father wants to get to his customer reasonably quickly but does not care if he could have been five minutes faster — especially if he would have had to wait 15 minutes longer for a faster route to be planned!