# When Less Is More

*Claiming less may be a good idea*

Robert Browning was an English poet of world renown in the Victorian age. His fame rests mostly on his dramatic monologues, in which characters reveal indirectly much about themselves. Here you can actually listen to a recording made in 1889 of him trying to read from his poem “How They Brought the Good News from Ghent to Aix,” which was one of Ken’s father’s favorites. After Browning’s passing on 12/12 of that year, this was said to have been the first recording preserving the voice of a deceased person.

Today I want to talk about claims of proofs that would resolve the P vs NP question, claims that may reveal more about the provers than about the problem. I also would like to make a modest suggestion.

The phrase “less is more” is first found in print in Browning’s poem “Andrea del Sarto” in 1855:

> Who strive – you don’t know how the others strive
> To paint a little thing like that you smeared
> Carelessly passing with your robes afloat, –
> Yet do much less, so much less, Someone says,
> (I know his name, no matter) – so much less!
> Well, less is more, Lucrezia.

Right now there is a slew of papers circulating that claim proofs that P = NP. We currently have fewer papers claiming to prove that the classes are unequal. Historically, of the 96 proof claims logged on Gerhard Woeginger’s P versus NP page, exactly half are for “equal,” with the other 48 divided into 42 for “not equal” (plus a variant on one of the “equal” claims) and a handful for “undecidable/other.”

## The Tilt Toward Equal

The near-parity of the numbers is striking. Ken and I see two factors at work. One is “demand-side”: philosophically many people accept the position that being able to verify a needle in a haystack does not confer the ability to find it, and this is reflected in “not equal” out-pointing “equal” by 126–12 in Bill Gasarch’s latest poll of experts. In a SlashDot poll open to the general public, the current status is approximately 5,000 for “equal,” 10,000 for “not equal,” and almost another 10,000 for “undecidable.” The demand-side favors “not equal.”

The other is “supply-side”: to show “equal” one “only” needs to present an algorithm that one claims solves some particular NP-complete problem. To claim that the classes are unequal, you need to claim a proof of a lower bound. Since we really have no idea how one would even start a proof that there is no polynomial-time algorithm at all for a problem like SAT, the lower-bound claims are harder to make. The logical sentence expressing P = NP begins with an existential (∃) quantifier, and an existential task feels more appealing for the human mind to try to carry out.
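
The verify-versus-find asymmetry behind both factors can be seen in a few lines of code: checking a proposed SAT assignment takes time linear in the formula, while the only obvious way to find one tries all 2^n assignments. A minimal sketch (the clause encoding is mine, purely for illustration):

```python
from itertools import product

# Tiny CNF instance: (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
# A literal is a pair (variable index, required truth value).
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def verify(assignment, clauses):
    """Check a candidate assignment against every clause: polynomial time."""
    return all(any(assignment[v] == want for v, want in clause) for clause in clauses)

def search(n, clauses):
    """Find a satisfying assignment by brute force: 2^n candidates."""
    for bits in product([False, True], repeat=n):
        if verify(bits, clauses):
            return bits
    return None
```

All the exponential cost hides in the outer existential loop of `search`; the inner `verify` is the easy part.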

As I have described many times before, the typical equality claim goes like this: one picks a favorite NP-complete problem, reduces its solution to some powerful subroutine, and bingo, we have a “proof.” Often, but not always, the subroutine is linear programming; sometimes it is convex programming. I once had a FOCS submission to review that reduced an NP-complete problem to the shortest-path problem.

## The End Of the World?

As I am writing this, we have dodged the first hurdle of danger: 12/12/12 at 12:12.12 pm. It was not empty numerology: a sizable asteroid passed between the Earth and the Moon two minutes before that time in New Delhi. Of course the really big day, 12/21/12, is looming, and Ken and I plan to discuss that soon. Our plan is to get that discussion out well before the end-of-days day. If things really do end then, well, we have had fun with GLL and it would be our last chance to comment. So we plan to make the most of it.

## A Modest Suggestion

My suggestion is to the claimers of big proofs like P = NP or P ≠ NP: claim less. Really. Remember “less is more.” Here is a guide, as a list:

Rather than claim P = NP, show that SAT or TSP has an algorithm that runs in time better than, say, 2^(n/10). This would be a breakthrough, and it might be read by more people.
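
For scale: the fastest known exact algorithm for TSP is still the Held–Karp dynamic program of 1962, which runs in O(n² · 2ⁿ) time, so even matching 2^(n/10) would be a dramatic improvement. A minimal sketch of Held–Karp (my own illustration, not from the post):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via dynamic programming over subsets: O(n^2 * 2^n) time.
    dist[i][j] is the cost of edge (i, j); the tour starts and ends at city 0.
    Assumes n >= 2 cities."""
    n = len(dist)
    # best[(S, j)] = cheapest path from city 0 through the set S (a bitmask
    # over cities 1..n-1) that ends at city j.
    best = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                best[(S, j)] = min(best[(S ^ (1 << j), k)] + dist[k][j]
                                   for k in subset if k != j)
    full = (1 << n) - 2  # every city except 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))
```

The subset table is what brings n! tours down to n²·2ⁿ work; nobody knows how to shrink the 2ⁿ factor for general TSP.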

Rather than claim P ≠ NP, show that logspace is not equal to NP. I would be greatly impressed by such a result, and it would again be a breakthrough. Some of the “not-equal” claims would “prove” this and more along the way. Just do this and we would be immensely impressed, and we would take the rest of your claims much more seriously.

Rather than claim that the Riemann Hypothesis is true, to bring in a problem from number theory, “just” show that there is no zero of the zeta function with real part greater than some fixed bound strictly below 1. This would be another huge advance and would again be hailed as a result of the first magnitude.
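
Stated side by side, with σ₀ as a placeholder for any fixed constant strictly between 1/2 and 1 (the post does not commit to a particular value):

```latex
\textbf{RH:}\quad \zeta(s) = 0,\ 0 < \Re(s) < 1 \;\Longrightarrow\; \Re(s) = \tfrac{1}{2}.
\qquad
\textbf{Weaker:}\quad \exists\, \sigma_0 < 1 :\ \zeta(s) = 0,\ 0 < \Re(s) < 1 \;\Longrightarrow\; \Re(s) \le \sigma_0.
```

Even the weaker statement would widen the known zero-free region, which is exactly the kind of partial progress the suggestion asks for.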

## Open Problems

The main open problem is: why do people who think they have resolved famous open problems never seem to check whether their methods can give simpler proofs of partial results? Why is it always all or nothing? Perhaps there is some basic principle at work here. What do you think?

My guess would be that the underlying motivation for tackling these types of problems drives one toward a full solution. An all-or-nothing attitude. Both problems have significant prestige attached, so perhaps it could just be the fear of being the shoulder that gets climbed upon …

… or possibly they are just completely unaware that 2^(n/10) would be a significant milestone :-) (why n/10?)

Paul.

As Ken has probably experienced, every chess player has a stuck move: an obviously losing move which a good chess player would never make in his life, but nevertheless sometimes does. If someone would help to drive the thought away, the move would probably never be made. You can recall Gelfand in this respect. So the problem starts when someone has convinced himself of the right move, where it is obvious to others that this is a losing move. Next, things build up. The claimer starts to wander around, asking random people to help. You know the result: “this is a notoriously difficult problem; if it had a solution, some smart people would already have found it.” But you still see that your move is correct. You look further for help and find the top scientist in the field related to your solution. You write to him or her. Certainly, this is a busy and polite person. You get a response: “Your approach is interesting, but this and this (neither clearly understandable) are hard.” OK, you have a positive response from the top scientist, with something left unexplained; the top scientist is too busy with his own problems to explain graduate-level questions. You are only lucky if, by some unimaginable circumstance, you meet a curious postdoc of that best scientist, who is able to explain where the move is obviously losing. Overall, if you are on the constructive side of crankery, it is very hard to find a source that explains the wrong move. It seems that the much easier way to get an explanation is actually to make the wrong move, e.g., claim big. A side aspect of this: P vs NP appears in many engineering problems, in forms not obviously recognizable as P vs NP. You may find a “solution” to your problem, then recognize the connection to P vs NP, and then you are back at the beginning. As I already said, there is no place to really discuss your problems; they start to appear in the form of Stack Exchange, Stack Overflow, etc.

There is a reference book, Wikipedia, but there is no analogous textbook/practice book.

The example with monomials: the Pips are busy, and I understand their unwillingness to spend time on it; they have their own programme. Most others just look it over. The option I am left with is to find a crank “proof” touching something interesting, put it on the arXiv, post it in blogs and other possible sites, get angry responses, and try to catch the reasonable part in those responses.

My motivation is left behind the scenes, but the crankery behavior is obvious.

2^(n/10) for TSP? Well, if P = NP (which of course I do not expect), then it may well be that the algorithm which does it builds on some mathematical trick which gives an “all-or-nothing” result, i.e., with it you get a benevolent polynomial-time algorithm, and without it you get only 2^n.

An example (close but not exact): Emo Welzl pointed out such an effect for sparse k-SAT formulas. With the Lovász Local Lemma, you can show that every instance where all clauses have length k and every variable occurs at most 2^k times (or so?) has a solution. Without this lemma, you are stuck trying to build a solution via direct combinatorial methods, which gets you max-degree k by matching arguments, maybe k^2 if you are extremely clever.
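
The constructive side of this story is the Moser–Tardos algorithm, which turns the Local Lemma into a simple resampling loop: while some clause is violated, resample the variables of one violated clause. A sketch on a toy sparse instance (the clause encoding and the instance are my own illustration):

```python
import random

def moser_tardos(n, clauses, seed=0):
    """Moser-Tardos resampling for CNF satisfiability.
    Under the Local Lemma's sparsity condition this terminates in expected
    polynomial time; here we just run it on a small satisfiable instance.
    A literal is a pair (variable index, required truth value)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n)]
    def violated():
        for clause in clauses:
            if not any(assign[v] == want for v, want in clause):
                return clause
        return None
    # Resample only the variables of a violated clause, then re-check.
    while (bad := violated()) is not None:
        for v, _ in bad:
            assign[v] = rng.random() < 0.5
    return assign

# A sparse 3-SAT instance: each variable appears in few clauses.
clauses = [[(0, True), (1, True), (2, False)],
           [(1, False), (2, True), (3, True)],
           [(0, False), (3, False), (4, True)]]
sol = moser_tardos(5, clauses)
assert all(any(sol[v] == want for v, want in c) for c in clauses)
```

The remarkable part is that this local, memoryless loop succeeds whenever the Lemma's counting condition holds, which is exactly the "with the lemma" regime described above.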

This is a very good explanation.

Essentially, results like 2^(n/10) for TSP assume that one is adopting some kind of linear progression in one’s approach and that there are intermediate results.

OK, I posted a question at http://cstheory.stackexchange.com/questions/14619/proof-of-nullstellensatz-certificate-degree and accepted the answer, in recognition of the fact that somebody, most probably me, is not understanding the core of the problem. On the other hand, I see a contradiction: propositional proof systems look at proving tautologies one by one, via derivation rules, but we have an experimental device, the human brain, that solves problems by looking at a completely different question. Humans prove theorems, i.e., classes of tautologies or refutations with usually infinitely many members. What is more, they are not only proving them, they are able to find those regularities among the noise. I see a completely different language. I do not care about reputation; if I were able to shift the attitude of technically capable people, I would be more than satisfied. If we speak about human beings, one theory says that the human visual system is tuned to represent statistical regularities of the external world, i.e., the language here is probability theory. One of the core tools in sums of squares is the moment matrix, which connects positive measures with sums of squares of polynomials. Has anyone tried to go in the opposite direction and represent measures (read: probabilities) by sums of squares, or actually as a set of polynomials representing the statistics of the external world (implying the sums of squares)? There is another problem here: it is not known which cliques one should use. The candidate here is renormalization theory, with the intuition given by the Kadanoff construction, except that here we need to go in the opposite direction with an additional ingredient: at each scale one needs a representation. Why would anyone even think about it? I would even ask whether the human brain performs classical computation. OK, I know about the microtubules theory.

I would probably be on the side of the skeptics here, but on the other hand we know that in the fractional quantum Hall effect, effective particles behave in a way normal particles could not: they exhibit fractional charge statistics. Is there any way to show that the human brain performs classical computation? We only know about Bell’s inequality, which distinguishes classical theory from quantum theory, but as far as I understand, it does not directly translate to computational properties. If you know one, I am ready to perform an experiment that would show whether human beings are plainly classical or effectively not. Tell me where is the place I can discuss these questions. Everybody talks about the energy crisis. We know experimentally that inertial confinement works. There is conservation of energy and momentum: make drums with many beating elements on the surface of the chamber, and synchronize them so that the conservation laws require them to accumulate enough energy at the right time in the right place to overcome the Coulomb barrier; beamstreaming will then work for you, transferring the energy away from the reaction place, making the device safe, and letting you collect energy on the surface of the chamber. You need a good computational device that will optimize the drumming times in real time. Would the optimal design resemble the neuronal network in the human brain? Is it a CS-theory question? The answer is simple: I am too technically incapable a crank for anyone even to talk with me about it.

The term “Obsessive-Compulsive Disorder (OCD)” comes to mind, i.e., Captain Ahab and Moby Dick.

Pilots of dive bombers called it “Target Fixation”; you get so fixated on delivering the bomb accurately that you forget to pull up in time to avoid the blast.

This is a more than helpful comment, and is probably the most useful way to decrease crankery. There are two things that are infinite, and there are some doubts about the universe.

I can’t remember which mathematician said that “the more to do or to prove, the easier the doing or the proving.” So why answer philosophical questions with numerical approximations? Moreover, a proof of a “modest” result with (2^n)/10, if found, would probably be impossible to change into a proof with n^k. So why bother?

In the same vein, you know that driving too fast is dangerous, but you don’t know exactly the speed at which you have a greater chance of crashing your car than of arriving safely. Then again, a general principle is much more obvious and useful than any numerical precision…

2^(n/10), sorry. :)

That is an observation recorded among George Pólya’s heuristics that applies to inductive proofs and recursive programs. It works because you can often get more leverage by using a stronger inductive hypothesis or recursion base.

As a counterpoint to this article, there’s also the Inventor’s Paradox:

http://en.wikipedia.org/wiki/Inventor's_paradox

It’s been over a decade since I read Pólya’s “How to Solve It,” but I think the book offered advice not only to try more specific (and hopefully easier) versions of the problem, but also to try more *general* versions of the problem.

Bohr’s aphorism, characterizing a “deep truth” as one whose opposite also contains a deep truth, apparently applies to heuristics, too.

From the perspective of someone who’s got that “Target Fixation,” it’s a puzzle. It’s something that nobody knows, something to be examined, and something that has survived examination by some of the greatest minds around. Even if it never gets solved, there is a tremendous amount of fun to be had in the attempt.

Of course, it would be a great deal more fun to solve it, but still. Examining the problem, and feeling smart when you have an idea why it’s equal or not. Looking into it, asking around, and then, when someone who has a better handle on the situation is willing to go over it with you, feeling even smarter when you understand why the idea was incorrect. Then you start over again.

I’m sure it can be frustrating to those who get the e-mails, papers, etc., but I can assure you, the time is greatly appreciated. It is beyond amazing to see people take the time to at least humor such individuals, and quite often, in my experience, go so far as to help them better understand the material.

Personally, though, my thought on P vs NP is that a person must be able to provide a proper classification for an undetermined complexity class if they can actually solve P vs NP. If they can’t do that, then they don’t have the answer. If someone wants to prove P vs NP (in its entirety), then they should first prove the “smaller” problem of an undetermined complexity class.

I have earlier here pointed out, modestly, that Tarnlund has a proof here that P is not equal to NP.

Now, since 12/21/12 is looming, I would not like to miss pointing out that Tarnlund has verified his proof here using a theorem prover that, in fact, proves that P is not equal to NP in Hilbert’s proof theory.

Reading the paper, the proof of the theorem prover is rather comforting.

I certainly know very little about lower-bound arguments, but I suspect getting even an upper bound, such as 2^(n/10), must be connected to the lower-bound kind of arguments.

On the other hand, what do we know about why the counting problem associated with a P-time search problem becomes a #P-complete problem, i.e., possibly even harder than an NP-complete problem?

Is there an assumption that a result such as NP = P must come from solving an NP-complete problem (and not a #P-complete problem) in P-time? Of course we do know that there are counting problems in [F]P.
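
A concrete instance of that jump: deciding whether a bipartite graph has a perfect matching is in P, yet counting its perfect matchings, which is the permanent of the 0/1 biadjacency matrix, is #P-complete by Valiant’s theorem. A brute-force sketch of the counting side (my own illustration):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent computed straight from its definition: n! terms.
    Unlike the determinant, no sign cancellation or Gaussian-elimination-style
    shortcut is known. For a 0/1 biadjacency matrix of a bipartite graph,
    this value counts the perfect matchings, a #P-complete quantity,
    even though deciding whether any perfect matching exists is in P."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))
```

The determinant differs from this formula only by the permutation signs, yet that sign structure is what makes it polynomial-time computable; the permanent resists every such trick.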

And yet we do have concrete ideas regarding postulates that simultaneously account for this obstruction and circumvent it. E.g., here one point is that a proof of the Hartmanisesque postulate would formally have no bearing on the separation (or not) of P from NP … and so in particular such a proof would not win any million-dollar prizes.

And yet, on the other hand, a proof of the Hartmanisesque postulate would tell us what (in practice) we really would like to know, namely, that we can be entirely confident that no algorithmist will ever astound the world with a validated code that solves SAT in polynomial time! :)

This reminds me of Andrew Wiles disguising his lectures leading up to Fermat’s Last Theorem with technical and jargon-heavy titles and presentations (at least in one graduate class, intending to weed out everyone possible except for one particular colleague). Perhaps what Dick is saying is to use technical “coded language” for communicating discoveries with a professional. Even if you prove P != NP, don’t say that in the subject line; say something more technical and limited, something the cranks wouldn’t know about, thereby using it as something of a spam-filter workaround.

You’re right. So much so that even those who have claimed P != NP in the very title of their paper have not been able to produce it as a valid conclusion so far.