The rule of three


Robert Southey was the Poet Laureate of Britain from 1813 until his death in 1843. He published, anonymously, “The Story of the Three Bears” in 1837.

Today Ken and I want to talk about the state of ${\mathsf{P}}$ versus ${\mathsf{NP}}$ and the relationship to this story.

The story, as I’m sure you know, is about Goldilocks. She has—no surprise—curly blond hair. She enters the home of three bears while they are away. She tries their chairs, eats some of their porridge, and falls asleep on one of their beds. When the bears return she runs away.

What you may not know is that Southey’s original story had not a young girl but an old woman. She is not innocent but furtive, self-serving, and meddlesome: she breaks the little bear’s chair and eats his breakfast. An 1849 retelling by Joseph Cundall changed her into a girl named “Silver-hair” and changed her motives to restlessness and curiosity. Her hair changed to gold around 1868 but she did not acquire the name “Goldilocks” until 1904.

Of course there is no change to our classic problem: Claims continue that there are proofs that ${\mathsf{P = NP}}$, claims continue that there are proofs that ${\mathsf{P \neq NP}}$, and claims continue that ${\mathsf{P \neq NP}}$—but without offering any proof. What connection can there be to the Goldilocks story? It is in the telling—the literary rule of three augmented with a total order.

Three Goldilocks Principles

The Goldilocks tale is really one of ${3 \times 3}$. It has her try at each stage: chairs, food, and beds. At each stage of the story, one item is too big-or-hot-or-hard, one is too small-or-cold-or-soft, and one is just right. Then the bears follow the same ${3 \times 3}$ sequence in discovering her traipsing.

The “just right” aspect has been named the Goldilocks Principle. Christopher Booker’s oft-quoted description of the “dialectical three” goes as follows:

“[T]he first is wrong in one way, the second in another or opposite way, and only the third, in the middle, is just right. … This idea that the way forward lies in finding an exact middle path between opposites is of extraordinary importance in storytelling.”

The Goldilocks Principle however leads, according to this neat history on the LetterPile website, to what it calls the “Goldilocks Syndrome”:

“We are living in consumerism, where big companies non-stop create billions of realities, where everybody … can feel ‘just right.’ … The problem starts when we can’t stop looking for perfect solutions in [this] pretty imperfect world.”

It is not clear whether they have a solution, but they go on to describe and recommend the following “Goldilocks Rule”:

“Balance between known and unknown, risky and risk-free, predictable and unpredictable.”

Our take on all this is: are we trying to be “too perfect” in our approach to ${\mathsf{P}}$ versus ${\mathsf{NP}}$? Can we profitably strike a new balance?

Who’s Been Sleeping In My Bed?

I have argued recently and before that ${\mathsf{P = NP}}$ is possible, but with an algorithm for, say, ${\mathsf{SAT}}$ that is galactic—see here for our introduction of this term—meaning an algorithm that is completely useless in practice. Here are three perspectives on the power of the two classes.

1. ${\mathsf{P}}$ is too weak to be equal to ${\mathsf{NP}}$: This is one way to prove the conjecture. A concrete point is that proving the following suffices:

Lemma 1 There is a constant ${c>0}$ such that if ${S}$ is in ${\mathsf{P}}$, then ${S \cap \{0,1\}^{n}}$ has a Boolean circuit of size at most ${cn^{c}}$.

Note that the sufficiency is easy to show: assume ${\mathsf{P = NP}}$ and that there is such a constant ${c}$. Then this contradicts Ravi Kannan’s famous theorem that for every ${k}$ the polynomial hierarchy has sets that require Boolean circuits of size ${n^{k}}$. (See this 2009 paper for more.) In terms of our theme: ${\mathsf{P}}$ is too weak to be equal to ${\mathsf{NP}}$—the bowl is too small.
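The contradiction can be sketched in a few lines. Writing ${\mathsf{SIZE}(s(n))}$ for the sets decided by Boolean circuits of size ${s(n)}$ (a notational convenience, not from the argument above):

```latex
% Sketch: Lemma 1 plus P = NP contradicts Kannan's theorem.
\begin{align*}
\mathsf{P} = \mathsf{NP}
  &\implies \mathsf{PH} = \mathsf{P}
  && \text{(the hierarchy collapses)}\\
  &\implies \mathsf{PH} \subseteq \mathsf{SIZE}(cn^{c})
  && \text{(Lemma 1 applied to } \mathsf{PH} = \mathsf{P}\text{)}\\
  &\implies \text{contradiction}
  && \text{(Kannan: some } S \in \mathsf{PH} \text{ needs size } n^{k}\text{, take } k > c\text{)}
\end{align*}
```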

2. ${\mathsf{NP}}$ is too strong to be equal to ${\mathsf{P}}$: This is the “classic” idea for proving the conjecture. If we can show that ${\mathsf{SAT}}$, say, requires super-polynomial size boolean circuits, then we have that ${\mathsf{P} \neq \mathsf{NP}}$. Note this uses the old but important insight that any set ${S}$ in ${\mathsf{P}}$ has ${S \cap \{0,1\}^{n}}$ computable in polynomial size circuits. In terms of our theme: ${\mathsf{NP}}$ is too strong to be equal to ${\mathsf{P}}$—the bowl is too large.

3. ${\mathsf{P}}$ is just right to be equal to ${\mathsf{NP}}$: Taken literally, this says that there is an algorithm for ${\mathsf{SAT}}$ that runs in polynomial time. That is a bowl that is just right. Stated this way it seems pretty unlikely. There is however a different take: ${\mathsf{P}}$ is just right to masquerade as ${\mathsf{NP}}$. How to develop this idea is our new thought.

We can start by regarding the “three barriers”—relativization, natural proofs, and algebrization—as effects of such masquerading. Then one can focus further on the extent to which ${\mathsf{NP}}$-objects can be approximated by polynomial-time ones. Many ${\mathsf{NP}}$-complete problems are easy on average under certain natural distributions. We wonder whether the theory can be structured to say that logspace objects, or ones from ${\mathsf{ACC}^0}$ (not to mention ${\mathsf{AC}^0}$), cannot approximate so well. An example of a technical issue to overcome is that languages like ${\mathsf{SAT}' = \{xy: x \in \mathsf{SAT} \vee y \notin 0^*\}}$ are ${\mathsf{NP}}$-hard but approximated ultra-simply by the language of all strings.
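As a toy illustration of how well “all strings” approximates ${\mathsf{SAT}'}$, assume (hypothetically—the split convention is ours) that each input of length ${2m}$ divides evenly as ${x y}$ with ${|x|=|y|=m}$. The two languages can then disagree only when ${y = 0^m}$, so on at most a ${2^{-m}}$ fraction of inputs. A brute-force count, which ignores the ${x \in \mathsf{SAT}}$ condition and therefore only over-counts disagreements:

```python
from itertools import product

def disagreement_upper_bound(m):
    """Fraction of strings xy with |x| = |y| = m on which the trivial
    'accept every string' decider could disagree with SAT'.
    Disagreement requires y in 0* (plus x not in SAT, which we ignore,
    so this is an upper bound)."""
    total = 0
    bad = 0
    for bits in product("01", repeat=2 * m):
        s = "".join(bits)
        y = s[m:]            # assumed even split: x = s[:m], y = s[m:]
        total += 1
        if set(y) == {"0"}:  # y is all zeros
            bad += 1
    return bad / total

for m in (2, 3, 4):
    print(m, disagreement_upper_bound(m))  # -> 0.25, 0.125, 0.0625
```

The count confirms the density calculation: the trivial decider is correct on all but a ${2^{-m}}$ fraction of inputs, which is the kind of “ultra-simple approximation” the technical issue is about.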

Of course, it would be a huge breakthrough already to separate ${\mathsf{NP}}$ from uniform ${\mathsf{ACC}^0}$, let alone from logspace or ${\mathsf{NL}}$. We’re suggesting instead to think along these lines:

1. Find some simple combination of polynomial-time primitives that does a good job of spoofing ${\mathsf{NP}}$—say, combining polynomial interpolation and (presuming ${\mathsf{RP = P}}$) polynomial identity testing.

2. Formulate analogous combinations of primitives belonging to those lesser classes—say in terms of linear equation solving—and show why those combinations don’t similarly masquerade as ${\mathsf{NP}}$.
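For the “lesser class” side, the canonical linear-algebra primitive is solving linear systems over GF(2)—a problem complete for ${\oplus\mathsf{L}}$, hence well below polynomial time in this scheme. A self-contained elimination sketch, with purely illustrative inputs:

```python
def solve_gf2(A, b):
    """Gaussian elimination over GF(2): return one solution x of A x = b
    (free variables set to 0), or None if the system is inconsistent.
    A is a list of 0/1 rows; b is a list of 0/1 values."""
    rows = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    n = len(A[0])
    pivots = []
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue                       # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):         # XOR the pivot row into the rest
            if i != r and rows[i][col]:
                rows[i] = [a ^ c for a, c in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    if any(row[-1] for row in rows[r:]):
        return None                        # a 0 = 1 row: inconsistent
    x = [0] * n
    for i, col in enumerate(pivots):
        x[col] = rows[i][-1]
    return x

# x0 ^ x1 = 1 and x1 = 1  =>  x0 = 0, x1 = 1
print(solve_gf2([[1, 1], [0, 1]], [1, 1]))  # [0, 1]
```

Step 2 would then ask why combinations built from this primitive, unlike interpolation-plus-identity-testing, fail to masquerade as ${\mathsf{NP}}$.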

3. Profit. (We had to have step 3, of course—and at least our plan has a step 2.)

So, which bowl? Which bed to lie in? Most seem to believe the second is how to prove ${\mathsf{P} \neq \mathsf{NP}}$, but who knows. At least we’re trying not to run away.

Open Problems

The famous front-and-back cover art of the venerable 1979 textbook by John Hopcroft and Jeffrey Ullman is said to depict Cinderella.

We believe Goldilocks fits better: curly hair, wearing boots not slippers, and breaking things. Both of us recall general optimism about solving ${\mathsf{P}}$ versus ${\mathsf{NP}}$ at the time the text was published. Now the artwork seems prophetic on what happens when we tug at the question. Can we get a “middle-way” approach to it up and functioning?

While on the subject of textbooks, we are happy to note that our textbook Quantum Algorithms Via Linear Algebra received a second printing from MIT Press, in which all of our previous errata have been corrected.

May 31, 2017 2:02 am

Another middle way could be P<NP<P/poly.

May 31, 2017 7:24 am

I had wondered about this a few years ago. If we write an algorithm to compute SAT, for some problems it will run in polynomial time, for some in exponential time. If we set it up to keep track of which, we could start building a library of the categorizations. Perhaps if we had many different algorithms (all optimized differently), all doing the same tracking, we would find that they either overlap or that there is a noticeable reason why the different approaches all create the same subsets.

Paul.

May 31, 2017 10:25 am

Scott Aaronson’s slides “Has There Been Progress on the P vs. NP Question?”, which are linked from here, illustrate a partial answer to your question. Heuristic algorithms often have a common locus of failure.

June 1, 2017 7:59 pm

When I was playing with SAT algorithms, some problems were definitely harder, but I always had the sense that they were relative to the algorithm. At that time I had wanted to build up a database of the different results for each algorithm and the properties which seemed to cause trouble, but since it was a hobby project I kept putting it off. When I was playing with GI, I had a similar feeling.

One could write a specific algorithm just for a given problem. The sum of all those algorithms at N would be in P if there were an O(n) test to switch. The problem I saw was that as the problems got larger, new properties seemed to emerge from combinations of smaller problems, but I kinda suspected that couldn’t happen indefinitely. I could be wrong though, my intuition has failed me quite spectacularly in the past 🙂

May 31, 2017 10:33 am

hopcroft + ullman, one of the greats, amazing how thorough the ref is and in many ways captured a lot of complexity theory nailed down “early on” (1979) that didn’t change much afterwards, before barriers were discovered and also greater “resistance” was widely acknowledged. there’s a 2nd edition but much changed. the illustration has a lot of connection to rube goldberg machines. also thought that it actually had an alice-in-wonderland feel to it myself. where the wonderland is not hallucinogenic-like but technological. (sometimes those two things blur, in other art, e.g. the Matrix etc…)

June 9, 2017 6:58 pm

Before comparing sets you have to establish their comparability, and to do so you have to establish their distinguishability.

http://vixra.org/abs/1704.0363