Polls And Predictions And P=NP
Comments on a poll on P=NP
Bill Gasarch is doing a poll again on the question. See his post on the Computational Complexity blog for details and the rules of how to respond to the poll.
Today we (Dick and Ken) would like to talk about truth-forecasting polls in general, and Bill’s in particular.
A Polling Tall Tale
One of the things that we have discussed several times is how wrong people tend to be in guessing the truth of mathematical statements. Here are the results of a putative old poll on the question of whether polynomial-size bounded-width branching programs could compute the majority function—or more ambitiously, evaluate a given Boolean formula on a given assignment. The latter was symbolized as the question of whether NC¹ ⊆ BWBP.
Let’s say the poll were conducted at the August 1983 Fundamentals of Computation Theory (FCT’83) conference on the island of Öland off the coast of Sweden. This is where the discovery of Interactive Proof Systems was presented and hailed during lunches with Ken and other participants, although it would not be accepted for publication until STOC 1985. After storms on the last day disrupted ferry service and the bridge to the mainland, the conference participants could have been reduced to eating huge piles of fresh-caught shrimp called reker, provided on condition of answering the following poll questions:
- Do you think NC¹ ⊆ BWBP or not? You may give other answers as well—such as, can polynomial-size constant-width programs compute the majority function? Many respondents stated, “no way.” The common point was: how could a constant-width program “remember” the number of 1’s in a bit string? Impossible. Overall, 61 of 70 who gave a definite answer believed NC¹ ⊄ BWBP.
- When do you think it will be resolved? Most respondents felt that it would take decades to resolve the problem. The difficulty is that in order to solve the problem, a new method would have to be discovered for proving super-polynomial lower bounds. While constant-width poly-size branching programs are restricted and seem to be weak, lower bounds against them were believed difficult then—and have not been obtained in the 28 years since, either.
- What kinds of techniques do you think will be used? Many respondents cited graph-theory topics, such as concentrators and the concept of bounded tree-width. Others mentioned progress on lower bounds against constant-depth circuits. No one mentioned group theory, or any inkling that techniques over 150 years old would solve the problem.
- Will the problem still be relevant given advances in algorithms and in SAT Solvers? Back then people were dubious that constant-width BWBP’s would be important, but this was before the topic now better known as Binary decision diagrams really took off as a research subject.
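To make the respondents’ intuition concrete, here is a minimal sketch (the encoding and names are ours, not part of the tall tale) of a branching program as a list of layers, each reading one input bit and mapping states to states. The obvious program for majority simply counts the 1’s, which forces its width to grow with the input length—exactly the “remembering” that seemed unavoidable.

```python
# Hypothetical encoding (ours): a width-w branching program is a list of
# instructions (var, f0, f1); at each step we read input bit x[var] and
# apply the state map f1 if the bit is 1, else f0.
def evaluate(program, x, accept, start=0):
    state = start
    for var, f0, f1 in program:
        state = (f1 if x[var] else f0)[state]
    return state in accept

# The naive program for majority just counts 1's: states 0..n, so its
# width is n+1 -- not constant.  This is the "remembering" that poll
# respondents thought could not be avoided.
def naive_majority_program(n):
    prog = []
    for i in range(n):
        keep = list(range(n + 1))                     # bit is 0: keep count
        bump = [min(s + 1, n) for s in range(n + 1)]  # bit is 1: increment
        prog.append((i, keep, bump))
    return prog

n = 7
prog = naive_majority_program(n)
accept = set(range(n // 2 + 1, n + 1))        # majority means count > n/2
assert evaluate(prog, [1, 0, 1, 1, 0, 0, 1], accept)      # four 1's: yes
assert not evaluate(prog, [1, 0, 0, 1, 0, 0, 1], accept)  # three 1's: no
```

Barrington’s theorem, discussed next, overturns this intuition: width 5 suffices, at only a polynomial cost in length.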
To be sure, the famous theorem by David Barrington showing that in fact the classes are equal involves a quadratic size blowup in its original proof. Richard Cleve proved that the exponent can be pushed arbitrarily close to 1, but at the cost of other complications. This vastly improved an earlier result of Dick and Jin-Yi Cai.
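The group-theoretic trick behind Barrington’s theorem can be demonstrated in a few lines. In the sketch below (our illustration; the function names are ours), a width-5 program is a product of permutations of {0,…,4}: a program “σ-computes” f if it multiplies out to the 5-cycle σ when f(x)=1 and to the identity when f(x)=0. AND is then realized by a commutator, which keeps the width at 5 while quadrupling the length—so a depth-d formula costs length 4^d.

```python
from itertools import permutations

N = 5
IDENT = tuple(range(N))

def compose(p, q):
    """Permutation p∘q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(N))

def inverse(p):
    inv = [0] * N
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_5cycle(p):
    # follow the orbit of 0; a 5-cycle visits all 5 points
    seen, i = set(), 0
    while i not in seen:
        seen.add(i)
        i = p[i]
    return len(seen) == N

# A width-5 permutation branching program: list of (var, perm0, perm1).
def run(prog, x):
    acc = IDENT
    for var, p0, p1 in prog:
        acc = compose(p1 if x[var] else p0, acc)
    return acc

def var_prog(i, sigma):
    """Program that yields sigma if x[i] == 1, identity otherwise."""
    return [(i, IDENT, sigma)]

def inv_prog(prog):
    """Inverse program: reversed order, each permutation inverted."""
    return [(v, inverse(p0), inverse(p1)) for v, p0, p1 in reversed(prog)]

def and_prog(p, q):
    """If p sigma-computes f and q tau-computes g, the concatenation
    P Q P^-1 Q^-1 yields a commutator of sigma and tau when f AND g = 1,
    and telescopes to the identity when either is 0."""
    return p + q + inv_prog(p) + inv_prog(q)

# Barrington's key fact about S_5: there are 5-cycles whose commutator
# is again a 5-cycle.  We find a suitable tau by brute force.
sigma = (1, 2, 3, 4, 0)  # the cycle (0 1 2 3 4)
tau = next(t for t in permutations(range(N))
           if is_5cycle(t) and
           is_5cycle(run(and_prog(var_prog(0, sigma), var_prog(1, t)),
                         (1, 1))))

prog = and_prog(var_prog(0, sigma), var_prog(1, tau))
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert (run(prog, x) != IDENT) == (x[0] and x[1])
```

The brute-force search for tau works only because S₅ is non-solvable; in a commutative (or solvable) group the commutators would collapse, which is why width below 5 does not suffice for this argument.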
Thus the intuition that there is no supremely practical way for BWBP’s to solve the problem may have been correct. Theory, however, first cares what is true, and wisely leaves judgments of practicality to the future.
Our Answers To Bill’s Poll
Ken has maintained his answers from the previous poll, and Dick’s are similarly the same as his opinions ten years ago. Note that they are opposite to each other. We actually haven’t tried to convince each other to either side; it is enough that we have good reasons for our views. We tend to discuss more the intermediate problems talked about further below.
The wording of Bill’s poll questions 1.–4. is the same as in our telltale tall-tale version above, except that 1. is Do you think P=NP or not? You may give other answers as well.
- Dick: P=NP. Ken: P ≠ NP.
- Dick: December 21, 2012. Ken: 2030–2040.
- Dick: The approach based on upper bounds, as used by Ryan Williams, seems to me to be the best chance. Ken: Algebraic geometry, including deeper aspects of cohomology theory than have been used before, for which see this 2005 survey paper by Saugata Basu, Richard Pollack, and Marie-Françoise Roy, and work by Joel Friedman linked from this StackExchange item.
- Dick: Yes, relevant. Ken: Yes, relevant—I have not yet seen a treatment of the idea that SAT instances resulting from natural reductions to SAT tend to be harder than instances of the same size that show up in test suites for these solvers.
Note that as for when P vs. NP will be resolved, both of us feel “The End Is Near”—2030 is only 19 years away; now consider that the “Natural Proofs” paper by Alexander Razborov and Stephen Rudich appeared almost 19 years ago. It seems to follow that any civilization advanced enough to contact us, such as the Vegans in Carl Sagan’s novel Contact, or the Vogons in the Hitchhiker novels, would already know the answer. We differ on whether they would tell us the answer, with Dick saying yes, just before sending us off to coincide with the Mayan quinquemillennium.
Bill’s Other Question
Bill has included another question besides 4. that was not part of his original poll ten years ago:
5. Feel free to comment on anything else: Graph Isomorphism, Factoring, Derandomization, Quantum computers, and/or your own favorite problem.
Here are our comments—you are welcome to follow up in the comments section of this blog post, but then please also communicate them in direct response to Bill.
- Dick: Factoring is in . Actually let’s go all the way and guess that it is in polynomial time. As to when it will be done, I think there are two times: when it is solved and when it is made public. The first is now, and the second is 2020.
- Ken: Graph Isomorphism is in P. Am sympathetic to the view that Factoring is in P, or at least in BPP. Not sure I believe the strong hypothesis of size lower bounds on non-uniform circuits for problems in exponential time, on which major de-randomization results are based. Something tells me a upper bound may be possible. I believe that when scalability of quantum computers is understood in full light of physical theory, the standard quantum-circuit model will be found to be over-optimistic by a quadratic factor. Generally I agree with Gil Kalai’s suspicions—note the new paper in his “How Quantum Computers [Can] Fail” series, and his slides for seminars he gave last January. I believe that super-linear circuit size lower bounds for some problem in will come first.
Mind you, all of these projected answers should be treated as “quantum noise”—we aren’t even calling them predictions. To quote Anil Nerode’s remarks in Bill’s original poll:
Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required.
Open Problems
What are your guesses? Please remember to send your answers directly to Bill.
Also, what questions would you add to the poll?
[made clearer the BWBP poll is a tall tale.]
The storms on the last day of FCT’83 and the 5-foot-high piles of shrimp are real; just the connection between them and the poll is not :-).
Here is a MathOverflow item that (among other things) links cohomology to Ketan Mulmuley’s GCT programme.
Wonderful post! … and the post might also mention that GASARCH’s poll closes October 31, 2011.
And this lengthy window creates some interesting strategies.
Almost certainly, there will be no shortage of people following Strategy 1 … therefore the optimal prediction strategy (at the individual level) “obviously” is Strategy 2 … it is only necessary to be the first to post it!
Therefore it is hereby predicted:
Predictions 2 and 3 amount to the assumption that history will repeat itself … after all, didn’t Alan Turing write “On Computable Numbers, with an Application to the Entscheidungsproblem” between the ages of 21 and 23?
A key question (for the general public, and the students too) is whether a proof of undecidability would win a Clay Prize. As I read Stephen Cook’s official problem statement, the answer is “no” … the possibility of undecidability is not even mentioned (which is remarkable in itself) … and so these student(s) will have to keep eating macaroni-and-cheese! 🙂
Supposing that the proof depended upon technical attributes of P or NP that could readily be amended by minor definitional adjustments, it even could be argued that a “no-Prize” ruling would be both technically correct and just. Or should the Clay Institute revise the official problem statement, to eliminate any possibility of controversy?
Obviously the above is written in a light-hearted vein, but a serious point is intended too: there are no grounds (that I know) to regard the above scenario as less likely than any other … and I would very much enjoy cheering for the students! 🙂
Hi John,
What do you mean when you keep saying that the P vs. NP problem is undecidable? It is not a decision problem, merely an instance of a decision problem, so in that sense it is obviously “decidable”. Do you mean independent of some particular set of axioms, like ZFC?
“Yes” … independent of ZFC (for example) … though it would be surprising if the undecidability were sensitive to the axiom system.
From a practical engineering point of view, undecidability arguably would be the best possible outcome to the P versus NP quest (as it is presently formulated) in the following sense. Almost any conceivable amendment to the definitions of P and NP (that is conceivable to me, anyway!), that was undertaken with a view to preserving our intuitive sense of these classes while at the same time restoring decidability, would (if we were fortunate) act to restrict the many undecidable attributes of these classes … attributes which make complexity theory difficult to apply.
Strictly from a practical engineering point of view, this would be a good outcome … and very definitely, I do *not* foresee that we will stop working on this class of problems.
Undecidability is always sensitive to the axiom system. If P!=NP is undecidable in some system, you could always add P!=NP as an axiom and have a consistent system.
Also, I feel I should point out that PvNP, unlike set-theory questions such as the Continuum Hypothesis, is a statement about the natural numbers. And the natural numbers, unlike the universe of sets, have a “standard model”. PvNP has a definite truth value in the standard model of the natural numbers—it’s just a question of which axioms are strong enough to prove it.
1. I used to think yes, now I am unsure
2. Either this year, or way later (depending on what the proof is …)
3. Something new and previously unknown (new definition, new formal system, etc.)
4. If yes, then it will have a huge impact.
Paul.
Dick, thanks for telling this interesting story from 1983; I didn’t know it.
In fact, Barrington’s surprise has changed my entire view of branching programs. I think that the main insight of Barrington was NOT to apply group theory. As I now see it, his insight was to observe that our intuition of how branching programs compute (gradually collecting information going from the source node) was just wrong. The intuition was influenced by the way Turing machines compute. But branching programs are a non-uniform model. They do not “compute” at all! They just recognize a difference. When an input string comes, we obtain (at once, with no time effects) a subgraph of the program. It is therefore enough that the subgraphs for accepted and for rejected inputs are different. I think that the fact that David used permutations to detect this difference was just a technicality.
Thanks—actually it’s my story—and the point when I tell it is really that Interactive Proofs were talked about and (already) hailed by people there, in mid-1983. BWBP’s were not talked about, at least that I recall. (Just to repeat, that poll was not true, and the words “The poll were conducted…” are legal English grammar for a counter-factual subjunctive, as in the famous song, “If I Were a Rich Man…”)
My guess on Barrington’s insight was he thought what equation(s) would be needed to simulate a binary AND (or OR) gate with constant-factor overhead, then realized all he needed was a simple group-theory property. I’ve likely asked him once directly and forgotten what he said…
I’d strongly recommend rewording the post to clarify that the poll wasn’t real, since I certainly took it as real, and even looking back at the post it doesn’t seem clear.
There are a few clues: the question mark in the section title “A Previous Poll?”, the use of “putative” to describe the poll, the phrase “The poll were conducted”, and the phrase “our joke version above” in the next section. (Were there others I missed?)
When I first read the post, I didn’t pay any attention to the question mark in the title. I thought “putative” was an odd word to use for a story being told by a first-hand participant, since it suggests that the story is widely believed or accepted in the absence of definitive proof, but it doesn’t mean the story is actually false; I just wrote it off as a linguistic quirk. I thought “the poll were conducted” was a typo. I don’t know how lawyers use the subjunctive, but I don’t think I’ve ever heard anyone use the subjunctive “were” outside of an “if” clause. Finally, I didn’t notice the bit about the “joke version” in the next section – I think I was too distracted by the fascinating historical poll, which was (were?) such a great example of how useless scientific polls can be, and I was just skimming through the section on responses to Bill’s poll.
So I almost walked away believing that the poll was real. Maybe I’m unusually gullible, but I bet many other readers are too.
We tried to make it clear that it was a made-up poll.
But the results of the imagined poll are what I think people believed. Barrington’s result was a huge surprise—at least I believe that.
I have to agree with anonymous about the quasi-use of the subjunctive. “If I were a rich man” is absolutely fine, but “I were a rich man” is not a grammatically correct way of saying “I am not a rich man” (or “I was not a rich man”). My reactions as a reader were more or less identical to those of anonymous, not that I mind too much having been fooled.
Yeah, I was also completely taken in by the “joke” poll, and feel that could use clarification. In the same vein, I think they’re joking about “were” being acceptable grammar. 🙂
Wow, these comments were interesting for me to see, as I had inserted several more markers over the previous draft—besides what “Anonymous” noted above, the idea of being reduced to eating piles of shrimp and having to fill out a poll (with the same wording as Bill’s decades later) to get them was meant to be transparently fantastic. With a nod to Bill’s comment below also, none of this was meant to fool Bill himself.
This helps me gauge how much safeguard to put on counter-factual writing for-effect (at least on days other than 1 April). The answer seems to be that meaning necessary to truth recognition should be topologically sorted, as with papers, so I edited the post to flag the section as a “Tall Tale” in its heading.
I maintain my stance on the grammar. Try adding the word “say” before it, as in “Say the poll were conducted…” Now I think most English speakers will recognize “were” as correct, even without an “if”. I added “Let’s” before “say” above. Another legal form is the inversion, “Were the poll conducted…” (the next word could be “today” or “then”). In Italian one sees “sia” and the past form “sia stato” used without inversion or other counterfactual-indicating words, so I think in English such a pattern has become archaic/stilted/sly rather than incorrect.
A tacit condition for the problem of NP=P to be “solved” is that the problem must be solved by an established researcher, especially if the proof claims NP=P.
So what happens when an unknown person solves the problem with a claim something like #P=FP?
Turing was an unknown student when he solved the Entscheidungsproblem. He took great pains to develop his ideas rigorously, write them up clearly, present them to colleagues, and submit them to peer review. Even so, only two people responded to the article that Turing had written so laboriously (Heinrich Scholz and Richard Braithwaite).
The difficulties that Turing faced arose (perhaps) not so much because people perceived that his ideas were mistaken or insignificant, but rather because in the early years it was not evident that good further theorems could be proved by embracing Turing’s ideas. To get over this hurdle, there are no substitutes for rigor, clarity, persistence, collaboration, adaptation, and practical application … traits that Turing himself subsequently displayed in abundance. And of course, the idea itself will be strengthened, refined, and extended during these trials … if it is a good one, that is.
That is why so many essayists and historians have concluded that in mathematics, as in show business (and many other disciplines), “overnight success” generally requires decades of effort. William Thurston’s essay On Proof and Progress in Mathematics (1994) is widely acknowledged to be one of the best articles on this topic.
To put it bluntly, no, it’s not a tacit condition that the problem be solved by an established researcher.
It’s entirely possible, though unlikely, for an amateur to solve the problem and write it up to academic standards. Such a paper will eventually (quickly, I might say) receive the attention it deserves, even if it’s by an unknown author. I think more people read the amateur PvNP “proofs” than publicly admit it.
Luke, as one of those cranks contacting professional mathematicians, I think you are right (especially if P!=NP, everybody is equal – you need to guess). The problem here is to write it up to academic standards, and it is hard to do that without help. It is even harder to get pointers to the literature dealing with similar problems.
In particular, I cannot find references dealing with positivity of linear combinations of squares of quadratic forms, and their representation as a sum of squares. The problem is basic and should be addressed somewhere, so I guess I do not know the right language to ask in.
Well, I meant to say that some recognized body has to read/review the paper before it can be known that the problem has been solved.
The difficulty here is that referees will always have a natural bias: their willingness to read depends on how well the referee knows the author. When the claim lies outside the majority’s belief, the bias becomes even more skewed.
I had submitted my paper to the online journal, TOC, two years ago. After a period of 6 months when I inquired, the chief editor Babai said nobody was even willing to read my paper.
Dick,
you do not really believe it; it is a game, otherwise you would help some of us find a formal way to prove it, or find what Lance calls something useful. Of course, most of us are cranks; the rest are trisectors. Some of us lack education, some lack techniques and language. It takes too much time to teach your own students, and there is no time to teach someone else. We need to play your game, to be talked back to. I do not care. There may be a reason (I see one in the gap in the language), or maybe not. No one wants to tell the real reason (again I guess because of the gap in the language) why the sum-of-squares approach does not work. Well, there is a proof that it is not possible to trisect …, but no one wants to tell the reason why not sum of squares. Give the answer, point to the literature, if you know. A one-page summary does not work; it is ignored: there should be an error, no matter what. Happy poll.
Graph Isomorphism is in P. Recently I revised the paper: http://arxiv.org/abs/1004.1808
I COMPLETELY fell for your story about the poll on Branching Programs. BRAVO!
I can only HOPE that a similar thing really does happen with the P=?NP poll- that
people mostly think P\ne NP and it will take a while, but it turns out that P=NP and
we find it out soon.
GASARCH
While professional mathematicians overwhelmingly believe that NP!=P, and the subgroup of people I (subjectively) regard as the best complexity theorists overwhelmingly believe that NP!=P, the small number that believe that NP=P is a larger percentage within the more select group.
I therefore suspect that conventional wisdom favors NP!=P, but more of the smart money favors NP=P than the conventional wisdom money.
For what it’s worth…
1. NP=P
2. By 2020
3. Compressed proof strings for satisfying instances of 2CNF in Ternary logic will be found to be bounded from above by a polynomial function of the problem size.
4. Yes
5. There is a polynomial reduction from quantum computing into classical computing. I agree with Dick that factoring has likely been broken, but we haven’t been told about it yet.
Luke G, the integers and reals both have natural models (obviously) … and yet they both have plenty of undecidable properties. The MathOverflow question What are the most attractive Turing-undecidable problems in mathematics? provides plenty of wonderful examples.
Similarly, the sets of languages in P and in NP both have natural models (obviously) … and yet they both have plenty of undecidable properties. So it is natural to ask:
Mainly as a fun exercise — but there are practical motivations too that arise in quantum systems engineering — I am (slowly) composing an answer/essay to this question wholly from quotations from Juris Hartmanis’ 1978 monograph Feasible Computations and Provable Complexity Properties (which is a work that I admire greatly and recommend highly). It was very surprising (to me) to discover from GASARCH’s poll that most people believe that PvNP separability is decidable … and yet AFAICT no article or text (aside from Hartmanis’) considers this question in depth.
Eventually this Hartmanis-inspired answer/essay will be posted either here on Gödel’s Lost Letter, or alternatively in answer to a question that Scott Aaronson asked on Shtetl Optimized, or as a formal question on MathOverflow or TCS StackExchange … which one depends upon how satisfied I am with the essay, and whether it raises concrete questions that have verifiable answers.
If anyone cares to spare me this effort, by composing a similar answer/essay … please do so! 🙂
After some consideration, I have posted a selection of Juris Hartmanis quotations on TCS StackExchange. The quotations were selected mainly with a view to providing a tribute to some excellent and thought-provoking proof sketches provided by students Travis Service and Alex ten Brink. Any errors in the commentary upon their proof sketches are mine alone. And needless to say, the problem of the decidability of the separation remains open.
One might investigate the decidability of P vs NP in weaker theories than ZFC or PA. In fact, this is studied in proof complexity.
Any proof that tautologies do not have poly-size proofs in a (sufficiently strong) propositional proof system P implies the hardness of for P. (Here, stands for propositional formulas where D(y) is a poly-size circuit and SAT(x,y) says that the assignment encoded by x satisfies the formula encoded by y. Therefore, if P proves the implication efficiently, it can prove efficiently any tautology T using .) For more interesting statements see http://www.karlin.mff.cuni.cz/~krajicek/scan19.pdf
On the other hand, there are propositional formulas encoding circuit lower bounds, so one can try to prove the hardness of the superpolynomial circuit lower bounds on SAT. See the introduction in http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.4404&rep=rep1&type=pdf
This is all about the propositional hardness. However, many proofs in first-order theories can be rewritten into short propositional proofs. E.g. the hardness results for the extended Frege proof system can be translated into the unprovability results in a fairly strong theory (a fragment of PA with restricted induction scheme). The correspondence is studied in the field of Bounded Arithmetic.
Since no one else has asked, I will. Dick – what is your basis for claiming that factoring is already solved?
I don’t agree with his claim, but if I did, I think my basis would be an analogy with public-key cryptography – found in the black world a decade or two before it was publicly invented – or with DES (https://secure.wikimedia.org/wikipedia/en/wiki/Data_Encryption_Standard#NSA.27s_involvement_in_the_design) – where the need for the s-boxes was only understood publicly in 1990. For a prediction by 2020, Dick only needs 9 years of secrecy.
Good question.
If I had to guess, it would be based on existing precedent with the black world: how many decades was public key crypto known before it was re-discovered? How many decades did it take DES’s s-boxes to be understood? And so on.
Lord knows the black community has enough incentive to factor integers already. A prediction for 2020 means Dick only needs a minimum of 9 years of secrecy, which is not all that much. The hidden government is extremely powerful these days and quite good at keeping secrets, thanks to the War on Terror*, despite things like Wikileaks.
* I was reading one of the Congressional Research Service PDFs (a collection Wikileaks distributed a few years ago) about the CIA and covert action. It struck me as hilariously utopian – no way the CIA would permit its covert action directorate to be spun off into a separate agency or give up subversion entirely! – until I checked the date and realized it was from the ’90s after the Cold War ended.
I have yet to come across any good reason to think that factoring is fundamentally difficult, other than that no one has announced a fast algorithm. Has anyone else?
Here is a question. (I asked it over at Scott’s blog, and it is related to a question I tried to ask here about cryptography some time ago. I suspect it is well-charted territory, but I don’t know.)
Consider a theorem of the form “Linear programming is in P”. Suppose that it has a short proof. Now for general theorems we know (believing NP≠P) that having short proofs does not guarantee that we can ever find such a proof. Does this mean that for some general algorithmic question in P, finding a polynomial algorithm is, in principle, hopeless?
(I neglected any asymptotic feature of the original theorem, but still, isn’t it “morally” a consequence of NP≠P that for a general problem in P, finding the polynomial algorithm is hopeless?)
It seems that inside P we can have a sort of shadow questions which represent a large chunk of the “real” complexity/computability hierarchy.
(One sentence had so many typos that it became meaningless.) Isn’t it “morally” a consequence of NP≠P that for a general decision problem in P, finding the polynomial algorithm is hopeless?
A proof sketch given by Joel David Hamkins in answer to the MathOverflow question “Can a problem be simultaneously polynomial time and undecidable?” indicates that the answer is “yes.”
Neil Robertson and Paul Seymour referred to their forbidden-minor work as “The Loss of Innocence of P” because it yielded a whole range of cases where one can prove the existence of a polynomial-time algorithm—based on the existence of a finite obstruction set—but finding the set and thus creating the algorithm is a different story. I think they even showed a sense in which finding the obstruction set is uncomputable(?—sorry, not taking time to hunt today). Does this impact your question? (I trust I summarized your new quantum material accurately in the post itself.)
Dear Ken, Hmm, I think it is related. I will think more about it. Thanks for mentioning and linking to my quantum paper/slides, and I am pleased to be agreed with to some extent. Overall, I don’t think my work and other skeptical works should change people’s a priori belief in the feasibility of quantum computers, especially with the traditional qubit/gates/quantum fault-tolerance approach. (What I conjecture is that in any such implementation certain specific things will go wrong. But I don’t see any way around checking each suggestion on its own.)
I do think my paper (plus a certain math fact that I am not sure about) can lower a priori beliefs regarding some ways to shortcut the traditional qubit/gate approach. Especially regarding topological quantum computing,
which mathematically is among the most exciting things that QI gave us. Certain non-Abelian anyons allow universal quantum computation. I think I make an appealing case that the standard experimental method to create them will lead to a certain noisy version of these anyons which is no longer useful. (The fact that these noisy anyons (which are not what topological quantum theorists traditionally refer to as noisy anyons) are not useful for universal or superior quantum computation is the missing math part.)
Putting skepticism about QC aside, I think I can refer to a question that should be regarded as a decent QI question: We know that implementing quantum error correction allows computationally superior quantum algorithms. Does the converse also hold?
Gil, that is a fine post. What you call “shadow questions” are what our QSE group calls the “wild” languages of P and NP (by analogy to the “wild” manifolds of topology). As Scott said recently on *his* weblog
And everyone appreciates that wild languages frustratingly wrong-foot us via Wittgenstein’s dilemma:
We know so little about this wild kingdom of languages … is it any wonder that P and NP are difficult to separate?
And even in principle, there is little we will ever be able to say about wild languages … so is the separability of P and NP thus undecidable?
And if so … will we be clever enough in the 21st century to prove separability’s undecidability, and then adjust our definitions of P and NP to exclude these wild languages … while retaining in P and NP the languages that we really care about?
In tribute to both Lewis Carroll and Martin Gardner, perhaps we should informally call these wild Turing machines “boojums”:
Juris Hartmanis’ monograph Feasible Computations and Provable Complexity Properties (1978) summarizes much of what we know about the boojum class of Turing machines, and the languages that boojum machines recognize. Moreover, if in Hartmanis’ monograph one substitutes for “computation of functions” the phrase “simulation of dynamics”, one obtains a very interesting treatise on the complexity-theoretic limits to systems engineering … which is the practical reason why we engineers care about boojums and their languages.
Dear John
I agree that these are very related issues. I somehow suspect that the shadow complexity classes, e.g. questions in P where you can present and test the P algorithm for them but not efficiently the algorithm, are more relevant to computational complexity and the complexity of simulations.
Actually my comment refers to John’s remark on Joel David Hamkins’s result. (And it should be “but not efficiently find the algorithm”.) I am all for defining tamer and tamer versions of P. On CS StackExchange I suggested starting with circuits in P, moving to languages in P (i.e., adding uniformity), and going further and further by adding more uniformity (but I don’t know how).
I think this issue also has a strong taste of cryptography. (As I said, I suppose it is charted territory and I would be interested to learn about it.)
Scott Aaronson too has been commenting upon these issues (intelligently and entertainingly, as usual). As for me, I freely confess to being confused … but over and above this confusion, it has been eye-opening to appreciate (largely thanks to Hartmanis’ monograph) that this class of P-related questions is generically subtle, open, and difficult.
[Moved below by KWR.]
By mistake, my last comment is with my Son’s name Hagai who was logged on.
[I moved the content of that comment here—I approved it without seeing this discussion, and rather than erase it, I’ll blank it out. —Ken R.]
Dear all, here is an example to explain further the type of behavior I want: Suppose that you have a decision problem depending on n that admits an algorithm with running time $n^k 2^{n/k}$, but finding the algorithm (for each k) is computationally hard as k grows. (So you don’t have huge constants as in other problems mentioned here.)
In the context of complexity versus modeling and simulation: when the algorithm required to solve a problem is not efficient, nature cannot solve the problem either. When finding an efficient algorithm is utterly intractable, like what John talked about, again nature cannot solve the problem. But when the computational complexity of finding the algorithm lies in the P/NP territory, it may still be relevant. This is because in nature it seems that algorithms are the fundamental notion while decision/search problems are emergent. There can be a gap between running an algorithm that is already there and finding the algorithm when you do not know it.
So, Quantum Computer is ready?
http://www.dwavesys.com/en/pressreleases.html#lm_2011
Respected Sir
Richard J. Lipton
(Great Professor, Researcher, Computer Scientist)
I would like to ask one question: can a better search engine than Google be made?
Awaiting reply.
Have a nice day;
Yours respectfully,
Bittu Gandhi
(Author, Website Developer, Researcher, Record Holder)
(India)