
Can Amateurs Solve P=NP?

July 1, 2010


What is an amateur mathematician, what have they done, and what might they do?

Srinivasa Ramanujan was one of the most remarkable mathematicians of the last century. He discovered countless beautiful theorems in both analysis and number theory. With almost no formal training, he was able to discover results that other mathematicians found amazing, even magical. Sadly he died at the age of 32; one wonders what further great things he could have discovered if he had lived longer.

Today I want to talk about the role that formal training plays in solving open problems in mathematics. When people discuss open problems, sometimes the P=NP question is raised as one that might be solved by an amateur. Is this realistic or not?

I have read Keith Devlin’s book “The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time”—of course the count is now six thanks to the brilliance of Grigori Perelman. I recommend it to you, with the proviso that I could only follow the chapters on the Riemann Hypothesis and the P=NP question. The others, on the Hodge Conjecture, Yang-Mills existence and mass gap, Navier-Stokes existence and smoothness, and the Birch and Swinnerton-Dyer conjecture, were very hard for me to follow. I do not think this is Devlin’s fault—he is an outstanding science writer—I think it is an inherent feature of the problems. Problems such as the Hodge Conjecture require quite a bit of technical background even to understand their statement.

In his third chapter, on the P=NP question, Devlin states that perhaps this problem is one that could be solved by an amateur. I have heard this claim before, and I have mixed feelings about it. Of course we cannot predict who will solve any open problem—the Millennium problems included. My mixed feelings come from the fact that saying P=NP could be solved by an amateur seems to imply it is “easier” than the other problems. I do not think this is the case, though ranking the difficulty of open problems is probably impossible.

Let’s take a look at what amateur mathematicians have done in the past, and what they might be able to do in the future on these and other open problems.

Who Is An Amateur?

I think that one of the key issues is: who is an amateur? Is an amateur someone who is completely untrained in mathematics, or someone who is simply not an expert in a particular area of mathematics? According to the dictionary, “amateur” comes from the Old French and means “lover of.” Thus amateurs often work on problems without any formal reward or pay. Curiously, Perelman was essentially working for “free” when he solved the Poincaré Conjecture, yet I think no one would consider him an amateur.

Was Ramanujan an amateur? He was basically self-taught, yet capable of discovering formulas that amazed even Godfrey Hardy. For example,

\displaystyle  1 -5 \left(\frac{1}{2} \right)^3 + 9 \left(\frac{1 \times 3}{2 \times 4} \right)^3 -13 \left(\frac{1 \times 3 \times 5}{2 \times 4 \times 6} \right)^3 + \cdots = \frac{2}{\pi}

was one identity that greatly impressed Hardy.
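For readers who want to see the identity in action, here is a quick numerical check in Python (a sketch; averaging consecutive partial sums is just a standard trick to tame the slow alternating convergence):

    from math import pi

    # Partial sums of 1 - 5(1/2)^3 + 9((1*3)/(2*4))^3 - ... ; the k-th term
    # is (-1)^k * (4k+1) * ((2k-1)!!/(2k)!!)^3, and the sum equals 2/pi.
    s, prev, ratio = 0.0, 0.0, 1.0      # ratio holds (2k-1)!!/(2k)!!
    for k in range(100000):
        prev = s
        s += (-1)**k * (4*k + 1) * ratio**3
        ratio *= (2*k + 1) / (2*k + 2)  # advance the double-factorial ratio
    print((s + prev) / 2, 2 / pi)       # both print as 0.63661977...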

Ramanujan shared one trait with many amateurs: his “proofs” were often nonstandard and difficult to follow. This is perhaps one of the hardest issues for an amateur mathematician to overcome—the formally trained world of mathematics has certain rules and styles of how to present mathematics. An amateur may use nonstandard notation, may create new definitions for existing concepts, and may re-prove basic facts. For example, Ramanujan’s first published paper was on some new properties of Bernoulli numbers. The journal editor, Narayana Iyengar, noted:

Mr. Ramanujan’s methods were so terse and novel and his presentation so lacking in clearness and precision, that the ordinary [mathematical reader], unaccustomed to such intellectual gymnastics, could hardly follow him.

Some Results of Amateurs

Here is a partial list of some of the results obtained by amateur mathematicians. Some had no formal training, while others had training but worked on mathematics only as a “hobby.”

{\bullet } Thomas Bayes: He was a minister by training, yet he introduced in 1764 the now famous formula that is named after him:

\displaystyle  \mathsf{Pr}[A|B] = \mathsf{Pr}[B|A] \cdot \frac{\mathsf{Pr}[A]}{\mathsf{Pr}[B]}.

This formula is the basis of Bayesian analysis. Most of the credit—although not the name—for the consequences of this formula goes to others. Quoting Wikipedia:

Bayes was a minor figure in the history of science, who had little or no impact on the early development of statistics; it was the French mathematician Pierre-Simon Laplace who pioneered and popularized what is now called Bayesian probability.

This comment seems a bit harsh to me. I agree more with Bill Bryson’s opinions about its foundational nature, expressed here and in the new book Seeing Further, of which he is the editor.
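To make the formula concrete, here is a small worked example in Python; the numbers (a test with 99% sensitivity, a 5% false-positive rate, and 1% prevalence) are hypothetical:

    # Pr[A|B] = Pr[B|A] * Pr[A] / Pr[B], with A = "has the condition"
    # and B = "tests positive".
    p_a = 0.01                  # Pr[A]: prevalence
    p_b_given_a = 0.99          # Pr[B|A]: sensitivity
    p_b_given_not_a = 0.05      # false-positive rate
    # Pr[B] by the law of total probability:
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    print(p_b_given_a * p_a / p_b)   # Pr[A|B] = 0.1666...

Even a quite accurate test leaves the posterior probability near 17 percent, exactly the kind of counterintuitive consequence that others later drew from the formula.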

{\bullet } Alfred Kempe: In 1879, while he was a barrister, Kempe discovered a “proof” of the Four Color theorem for planar graphs. This stood until 1890, when Percy Heawood discovered Kempe’s proof was flawed. Heawood did use Kempe’s ideas to give a correct proof of the Five Color theorem. In particular, he used Kempe’s notion of chains of alternating colors. Such chains—now called Kempe chains—play a key role in both current proofs of the Four Color theorem.

{\bullet } Oliver Heaviside: He invented in the 1880s the notion of operator calculus—a theory that can be used to solve linear differential equations. His methods worked and gave the correct answers, yet operator calculus was not well received by mathematicians: the liberties he took upset professionals, and the “theory” behind the correct answers was not sound. Much later, Thomas Bromwich found a correct way to justify Heaviside’s operator calculus.

{\bullet } William Shanks: In 1873 there were no computers, yet he was able to compute {\pi} to many more places than anyone had previously. He claimed it was correct to {707} places—it was correct to {527} places. Still an incredible feat for his time—he did this work as a hobby when he was not running his boarding house.
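To get a sense of the arithmetic involved, here is a short Python sketch that computes digits of {\pi} with Machin’s formula, the same arctangent formula Shanks used, only with the machine doing the hand work:

    def atan_inv(x: int, digits: int) -> int:
        """arctan(1/x) scaled by 10**(digits + 10), via the Taylor series."""
        scale = 10 ** (digits + 10)      # ten guard digits
        term = scale // x
        total, n, sign = term, 1, 1
        while term:
            term //= x * x
            n += 2
            sign = -sign
            total += sign * (term // n)
        return total

    def pi_scaled(digits: int) -> int:
        # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
        p = 16 * atan_inv(5, digits) - 4 * atan_inv(239, digits)
        return p // 10 ** 10             # drop the guard digits

    print(pi_scaled(527))                # 3141592653589793... (528 digits)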

{\bullet } Marjorie Rice: She found new ways to tile the plane in pentagons—she did this as a hobby. Doris Schattschneider, a professional mathematician, helped make Rice’s results known to the math community.

{\bullet } Kurt Heegner: He was a radio engineer who in 1952 published a claimed solution of one of the great, then-open problems in algebraic number theory. As with many amateurs, his proof was not accepted, due to perceived mistakes in his paper. In 1969 the eminent number theorist Harold Stark solved the problem. Apparently, Stark went back to look once more at Heegner’s paper, and he saw that it was essentially correct. The “errors” were minor and could easily be fixed.

Problem Statements

In order for anyone, amateur or not, to solve an open problem, they must understand the problem statement. This is almost silly to state, but it is important: it is hard to hit a fuzzy target. One of the reasons some think that P=NP is more approachable than many other open problems is that an amateur is more likely to be able to state P=NP correctly than many other conjectures.

This is the reason so many amateurs have worked on the Four Color Theorem, the P=NP question, Fermat’s Last Theorem, and Graph Isomorphism: they have relatively simple statements. A simple statement does not mean that a problem is easy, but it does mean that anyone can start thinking about it.

An interesting question is: which problems have simple statements?

{\bullet } P=NP? This does have a relatively simple statement. See my discussion for my take on the problem statement.

{\bullet } Riemann Hypothesis Jeff Lagarias has a surprisingly “simple” problem that is equivalent to the Riemann Hypothesis. Let {H_n = \sum_{j=1}^{n} 1/j}. Then the following is equivalent to the Riemann Hypothesis: for all {n \ge 1},

\displaystyle  \sum_{d | n} d \le H_n + e^{H_n}\ln(H_n).
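The inequality is easy to explore by brute force; here is a minimal Python check for small {n} (of course, no finite computation can settle the hypothesis):

    from math import exp, log

    def sigma(n: int) -> int:
        """Sum of all divisors of n."""
        return sum(d for d in range(1, n + 1) if n % d == 0)

    h = 0.0
    for n in range(1, 2001):
        h += 1.0 / n                      # harmonic number H_n
        assert sigma(n) <= h + exp(h) * log(h) + 1e-9, n
    print("no violations for n up to 2000")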

{\bullet } Hodge Conjecture I do not know how to even state this and the other Millennium problems in a simple way. No doubt this makes them unattractive to non-specialists.

Can Amateurs Help?

Can amateurs help make progress on modern problems? I think that it is possible. Their lack of fear, their ability to think outside the box, and their willingness to go against the conventional wisdom all make it possible for them to contribute to science. One of the reasons the pros usually get the credit is that amateurs often do not write their work up properly. They often write papers that have errors, and once we see an error we tend to stop reading and skip the rest—Heegner is a perfect example.

It may be useful to look at the type of contributions amateurs have made in the past.

{\bullet } Definitions: The contributions of Bayes and Kempe, and partially that of Heaviside, lay in defining new concepts.

{\bullet } Computations: Shanks’s contribution was certainly a computation.

{\bullet } Examples: Rice’s contribution was in the discovery of new examples.

{\bullet } Proofs: The contributions of Kempe, Heaviside, and Heegner were in proofs.

This suggests there are several ways amateurs can help advance our understanding.

Open Problems

What do you think? Is P=NP the problem most or least likely to be solved by an amateur? Can amateurs still make contributions to mathematics and complexity theory?

A related question: are there relatively simple statements of all the Millennium problems? Such statements would not necessarily advance their solutions, but I would find it interesting if they could be made more accessible. Even experts might be helped by short equivalent statements.

124 Comments
  1. July 1, 2010 5:06 pm

    Maybe there is some way to hook up amateurs who think they have something of interest with patient experts. That way, the expert could help them rephrase their work in a more conventional way.

    I would imagine that this could be frustrating for experts who are frequently contacted by cranks, but maybe there is some way to help mitigate the amount of crankiness for all involved.

    • rjlipton permalink*
      July 1, 2010 8:13 pm

      Gilbert,

      I like this idea. I do think some system like this could work.

      • Alex Nikolov permalink
        July 1, 2010 9:52 pm

        Maybe some well structured society of amateur mathematicians/scientists can elect candidates to be considered by professionals? There would still be the problem of not creating a society of cranks that promote cranky “proofs” as a group, because such a turn of events can be truly painful.

    • Craig permalink
      July 1, 2010 10:21 pm

      I am an amateur mathematician with a graduate education in mathematics. I am also a professional actuary. I have written up two arguments that P != NP. One was published in the journal Progress in Physics. They both can be found here:

      http://arxiv.org/abs/cs/0607093
      http://arxiv.org/abs/cs.CC/0507008

      In my mind, the P vs. NP problem is settled. However, the arguments in my papers have failed to convince the majority of mathematicians and computer scientists. I understand why, as I have corresponded through email with a complexity theorist at my alma mater: The reason is that they believe that my arguments do not cover all possible algorithms and are limited to only certain types of algorithms. I disagree with this and can explain why.

      Almost all of the contact that I have had with the computer science and mathematics community has been over the web, through email and comp.theory and sci.math. While the web has its merits, I would really like to talk with a professional mathematician or computer scientist who is an expert in complexity theory over the phone (or in person if possible). Personal contact always makes things better. The web can be very impersonal and sometimes nasty. Maybe an expert could help me write down my arguments in a manner that would be more convincing to the expert complexity theorist. Anyone interested may email me at cafeinst@msn.com. Thank you.

      • Ross Snider permalink
        July 2, 2010 1:28 pm

        Additionally, your bounds of O(sqrt(2^n)) being the best you can do are obviously wrong, as they contradict known algorithms which run faster. In fact, Lipton (semi-)recently did a post covering some new work.

        https://rjlipton.wordpress.com/2010/02/05/a-2010-algorithm-for-the-knapsack-problem/

      • Craig permalink
        July 2, 2010 1:58 pm

        Ross Snider said: “Additionally, your bounds of O(sqrt(2^n)) being the best you can do are obviously wrong, as they contradict known algorithms which run faster. In fact, Lipton (semi-)recently did a post covering some new work.

        https://rjlipton.wordpress.com/2010/02/05/a-2010-algorithm-for-the-knapsack-problem/”

        If one reads Lipton’s post, one sees that this 2010 algorithm for the knapsack problem does not solve all instances of the SUBSET-SUM problem because it cannot output “no solution”.

        There are still no known algorithms which solve all instances of the SUBSET-SUM problem that beat the $2^{n/2}$ time. And there never will be any.
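        For reference, the meet-in-the-middle algorithm being discussed here fits in a few lines of Python (a sketch; the instance at the end is only an illustration):

            from bisect import bisect_left

            def subset_sum_mitm(nums, target):
                """Decide SUBSET-SUM in about 2^(n/2) time and space,
                up to polynomial factors, by meet-in-the-middle."""
                half = len(nums) // 2

                def all_sums(items):
                    sums = [0]
                    for x in items:
                        sums += [s + x for s in sums]  # doubles per item
                    return sums

                right = sorted(all_sums(nums[half:]))
                for s in all_sums(nums[:half]):
                    i = bisect_left(right, target - s)
                    if i < len(right) and right[i] == target - s:
                        return True
                return False  # exact: it can answer "no solution"

            print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
            print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # False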

      • Personne permalink
        July 2, 2010 4:12 pm

        Craig, I ain’t no specialist, but there is no necessity to output “no solution”.

        https://stellar.mit.edu/S/course/6/sp08/6.080/courseMaterial/topics/topic1/lectureNotes/lec9/lec9.pdf

      • Personne permalink
        July 2, 2010 4:33 pm

        In case you’re not convinced: once you have an algorithm that finds a “yes” in O(2^0.311n) time, then you can create a new algorithm with the following behavior:
        - output “yes” if the old algorithm does
        - output “no” when no “yes” is found within O(2^0.311n) time.
        Then you have an algorithm that outputs both “yes” and “no” in time O(2^0.311n).

        That beats your O(2^0.5n), meaning that either the algorithm for the knapsack problem is wrong, or your proof is.

      • Craig permalink
        July 2, 2010 4:54 pm

        Personne,

        The algorithm in that paper does not always successfully find a “yes” answer when there is a “yes” answer, so what you are proposing can’t work all of the time. If you don’t believe me, read the comments to Lipton’s February 5, 2010 post. One of the authors admits this.

        There is still no deterministic and exact algorithm which beats O(2^{n/2}) time and solves all instances of SUBSET-SUM.

      • Personne permalink
        July 2, 2010 10:28 pm

        I did, Craig, and no, I don’t read that. Let’s try something else: do you agree that 1) clique is an NP-complete problem solvable in less than O(2^{n/4}) time?

        http://en.wikipedia.org/wiki/Clique_problem
        http://www.labri.fr/perso/robson/mis/techrep.html

        and 2) that Cook-Levin theorem guarantees the existence of a polynomial time reduction from subset-sum to clique?

        http://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem

        Then, you have to conclude that an algorithm exists which solves subset-sum in time O(2^{n/4}) up to a polynomial factor, even if you don’t happen to know this algorithm.

        Do you see it now?

      • July 2, 2010 10:48 pm

        Craig (if I may—we’ve interacted once before on another blog),
        The responders are basically right, even if the details are deficient. Your last statement “There is still no deterministic and exact algorithm which beats O(2^{n/2}) time and solves all instances of SUBSET-SUM.” may be true, but is revealingly beside your point—insofar as you claim to have proved there is no such algorithm.

        The overall point is, how do you know the “Meet-in-the-Middle” algorithm gives the best possible time for Subset-Sum? For instance, what have you done to show that its clever idea cannot be used over and over again, recursively?

        A lot of people would agree with your belief, especially if you weaken it a little to assert that Subset-Sum requires time at least some fixed root of 2^n, or what is the same thing, order 2^{cn} for some constant c > 0 (possibly ignoring lower-order factors). Since the standard reduction from 3SAT to Subset-Sum runs in basically linear time, this aligns with the belief called the “Exponential Time Hypothesis” (ETH). However, being right in the light of Nature is not the same as possessing a proof.

        Let me give an example, due to Leslie Valiant, of algorithms being able to do things that “normal” human reasoning would not expect. Consider Boolean formulas C_1 & C_2 & … & C_m where each clause C_j has two un-negated variables, such that each variable occurs in at most two clauses, and the graph formed by the m+n clauses and variables is planar. To wit, the graph has an edge (x_i,C_j) if variable x_i occurs in clause C_j, and the graph also has a cycle on the variables, (x_1,x_2),(x_2,x_3),…,(x_n,x_1). Such a formula is obviously satisfied by the “all-true” assignment, but the question is, how many other assignments satisfy it?
        That problem is #P-complete, hence NP-hard. Most interesting, the problem remains NP-hard under “randomized” polynomial-time reductions, if you only wish to determine whether the # of satisfying assignments is even or odd. Ditto for computing the number of solutions modulo 3, or 4, or 5, or 6…
        Now suppose you wish to compute the # of solutions modulo 7. By the same logic as for modulo-2 being (randomized) NP-hard, you’d expect this to be equally hard. After all, there is nothing about the statement of the problem that suggests a particular role for “7”. Nevertheless, there is a genuinely efficient polynomial-time algorithm to compute this! The details are all in Valiant’s paper, and the distinguishing feature of “7” was subsequently isolated in papers by Jin-Yi Cai and his students.

        “There are many more things in heaven and earth than are dreamed of in your philosophy”—and yet a proof of P != NP must rule all of them out. Short of such a proof, one is imposing presumptive limits on Nature.

        For a more-accessible example of a non-obvious poly-time algorithm, try designing one for the language {(N,x) : N is a nondeterministic pushdown automaton that accepts x}. This is the “universal problem” for context-free languages. All texts I know content themselves with showing that every individual CFL belongs to P, which is what you get if you fix an N rather than let it be a variable part of the input. Those texts build on converting an NPDA N into a context-free grammar G and then converting G into Chomsky normal form, but present the latter process with an exponential blowup in the nullstring-elimination step. You might think this kind of nondeterminism allows nothing better, but in fact by representing nullable variables as “BNF [option]s” and using a list construction, one can do it! I don’t know of any source that has a whole proof of this, so writing it nicely would be a real mitzvah!

      • Ross Snider permalink
        July 3, 2010 1:42 am

        Or let’s put it this way. According to your paper, the best we can ever hope to do on the sorting problem is O(sqrt(n!)), which is obviously incorrect. Why can we solve the sorting problem in time O(n log n)? Where have you shown that Subset Sum can’t be solved in a similar way to the sorting problem?

        Basically this reduces to asking how you know the objective function for minimization is in the form P + T. What if for every ‘virtual processor’ you make you double the amount of processing that can be done? What if you can trade space for time for processors? 3S + 4T + 5P might be an objective function to minimize. It seems like there are some implicit assumptions in your ‘cost minimization argument.’ Namely, you assume that every ‘file’ in the ‘filing cabinet’ must be checked. But like the sorting problem, this might not be true! This is essentially what the P = NP question is asking!

      • Craig permalink
        July 3, 2010 11:07 pm

        I will respond to Kenneth W. Regan, as his comment is the most representative of the general opposition to my arguments. I’d like to respond to the other comments but my time is limited now. I’m going on vacation tomorrow. Maybe I’ll have time later this week.

        Notice that Ken’s opposition to my argument is a philosophical opposition. His argument is essentially “How do you know someone clever will not come up with a way that you have not considered in your proof and beat the Meet-in-the-Middle algorithm?”

        But consider that I could pick any proof of any famous theorem and use a similar argument to refute that proof: Let’s pick the Fermat’s Last Theorem proof by Wiles and Taylor (which I’ll admit is above my pay grade). I can refute this proof easily (even though I don’t understand it) by saying the following:

        That proof is at least partially based on the Zermelo-Fraenkel axioms. Gödel’s 2nd Incompleteness Theorem says that it is impossible to prove (without making questionable assumptions) that the ZF axioms are consistent. Hence, it is conceivable that the axioms that Wiles and Taylor used to prove FLT are inconsistent, which would invalidate their proof. Hence, their proof is invalid.

        My point is that there is no proof that is 100% right. You can always give a valid logical counter-argument which says “maybe your proof is wrong.”

        Yes, I’ll admit that it is conceivable that there is some algorithm which works in a way that nobody has ever dreamed of before that beats the running-time of the Meet-in-the-Middle algorithm. There are certainly lots of clever algorithms out there which do things that nobody has ever dreamed of before. But is such an algorithm plausible? No, as I have demonstrated in my papers.

        In the first paper I have written, “Complexity Science for Simpletons”, I showed that the SUBSET-SUM problem with a set of cardinality n is equivalent to taking the “or” operator of two SUBSET-SUM sub-problems with sets of cardinality n-1. While the sets of cardinality n-1 are the same in both sub-problems, the target integers in both of these sub-problems are different. Hence, in order to take the “or” operator of the two sub-problems, it is necessary to solve both of them (unless the first one solved has a solution.) [This is common sense, just as it is common sense that if one picks two different people and wants to know whether at least one of the two people has red hair, one must examine the hair of both people (unless the first person examined has red hair.)] With this assumption, I show in my paper that the Meet-in-the-Middle algorithm has the fastest running-time of all algorithms which solve SUBSET-SUM. Is it possible that an algorithm exists which beats this running-time? Yes, but such an algorithm would have to work by magic, since it would essentially be able to always tell whether at least one of two people has red hair without examining the hair of both people.

        In the second paper I have written, “A new and elegant argument that P!=NP”, I showed that in order for an algorithm to beat the Meet-in-the-Middle algorithm and deterministically solve the SUBSET-SUM problem, that algorithm must do so without searching the 2^n possible solutions – such an algorithm would have to find the solution by magic. (It would have to make use of an oracle.) Can this type of algorithm exist? Yes, if magic exists. But is such an algorithm plausible? No, as it is generally accepted that there are no magic algorithms, just as it is accepted that the ZF axioms are consistent.

        While I don’t know if I have convinced anyone that my arguments are valid, I hope I have made people think about the P vs. NP problem in ways that they haven’t thought before. I’ve been thinking about the P vs. NP problem for 20 years now since I had first heard of it, and essentially even before I first had heard of it when I started to question why it is easy to learn from a book how to solve a Rubik’s Cube but much more difficult to write such a book.

      • Ross Snider permalink
        July 4, 2010 4:35 am

        Craig, you are asserting that any algorithm trying to solve an NP-Complete problem must examine all 2^n possibilities. This is an assumption you make in your framework. An axiom, if you will. So far in conversation you’ve sort of been stating this assumption explicitly. More importantly, you’ve also encoded the assumption implicitly within your framework. Then you derive from this assumption that any algorithm must take exponential time. Duh, tautology. Essentially you assume something that may as well be P != NP and derive P != NP. The real question being asked by P ?= NP is “do all exponentially growing number of paths in this Turing Machine need to be examined to know if there is a satisfying path of polynomial length?” Again, your “proofs” assume yes and then “derive” P != NP. What we want is a proof, a real proof, that all (well, an exponential number of) paths must be examined (in the worst case) to answer the question. This you have not provided.

        The question of the sorting problem still stands. Given your Meet-in-the-middle framework, why are we able to solve the sorting problem in much less than O(sqrt(n!)) time?

        Also, you may want to consider taking this conversation out of comment-section band. If you would like I can email you at the email address you’ve provided. Have a safe and fun vacation.

      • Craig permalink
        July 7, 2010 11:24 am

        I just got back from vacation. I’ll respond to Ross Snider:

        He said: “According to your paper, the best we can ever hope to do on the sorting problem is O(sqrt(n!)) which is obviously incorrect. Why can we solve the sorting problem in time O(nlogn)?”

        Actually, this is not true. The argument in my paper is about the SUBSET-SUM problem, not about sorting (although the Meet-in-the-Middle algorithm requires sorting).

        According to the language of my paper, we would say that since we have a lower bound of Omega(n log n) for solving the sorting problem, an upper bound for the number of “imaginary processors” for this problem is O(n!/(n log n)), since the solution space of this problem has size n!.

        He also said: “Craig, you are asserting that any algorithm trying to solve an NP-Complete problem must examine all 2^n possibilities. This is an assumption you make in your framework. An axiom, if you will. So far in conversation you’ve sort of been stating this assumption explicitly. More importantly, you’ve also encoded the assumption implicitly within your framework. Then you derive from this assumption that any algorithm must take exponential time. Duh, tautology. Essentially you assume something that may as well be P != NP and derive P != NP.”

        When I said the algorithm for SUBSET-SUM must examine all 2^n possibilities, I did not mean that it is necessary for the algorithm to take 2^n time, as it is possible for the algorithm to examine many possibilities at the same time, just as the Meet-in-the-Middle algorithm does. I never assume that an algorithm for solving SUBSET-SUM must take exponential time.

        Ross, please feel free to email me, if you want to have this conversation outside of this blog.

      • Craig permalink
        August 27, 2010 8:27 am

        In consideration of your comments, I have updated my paper http://arxiv.org/abs/cs/0607093 with lemmas and proofs.

      • August 28, 2010 7:45 pm

        Your “proof” of the inductive hypothesis by contradiction in Lemma 1 is invalid. You say: “if b−a_{k+1} \in S, this algorithm would be able to automatically determine whether b\in S without searching set S for a subset-sum that matches the target integer b.” This is not a contradiction of what you’ve assumed.

        You assumed: “Suppose there is a deterministic and exact algorithm that determines whether b−a_{k+1} \in S or b\in S without [exhaustively] searching the set of subset-sums of \{a_1, ..., a_{k+1}\} for one that matches b.” Set S is a proper subset of \{a_1, ..., a_{k+1}\}, and therefore “if b−a_{k+1} \in S,” you don’t have to exhaustively search the subset-sums of \{a_1, ..., a_{k+1}\}, only the subset-sums of S. If this were continued ad nauseam, you could solve subset-sum in linear time, so you must consider both cases.

        Any valid proof of this sort must prove that the algorithm must search both cases b−a_{k+1} \in S and b\in S separately, which is not necessarily true, and in fact may be trivially false in the case of the Meet-in-the-Middle algorithm, since it does not take \Omega(2^n) time.

      • Craig permalink
        August 29, 2010 7:45 am

        Neil,

        I don’t understand your comment “you don’t have to exhaustively search the subset-sums of \{a_1, …, a_{k+1}\}, only the subset-sums of S. If this were continued ad nauseam, you could solve subset-sum in linear time, so you must consider both cases.”

        As for the comment, “Any valid proof of this sort must prove that the algorithm must search both cases b−a_{k+1} \in S and b\in S separately, which is not necessarily true, and in fact may be trivially false in the case of the Meet-in-the-Middle algorithm, since it does not take \Omega(2^n) time.”

        you are misunderstanding what I meant when I said “search”. I didn’t mean that the search must take Omega(2^n) time, one subset-sum at a time. Read the rest of the paper and you’ll see this. It is clearly possible to search more than one element of the search-space at a time, as is the case with the meet-in-the-middle algorithm.

      • Craig permalink
        August 29, 2010 4:50 pm

        Neil,

        Now I think I understand what you are saying. You are saying that because I am assuming that $b-a_{k+1} \notin S$, the algorithm does not have to search all the subset-sums of $\{a_1,…,a_{k+1}\}$ to solve SUBSET-SUM, only those subset-sums in $S$. That is correct.

        However, your conclusion that I did not contradict what I assumed is invalid. I assumed that there is an algorithm that solves SUBSET-SUM without exhaustively searching. It is clear that if one uses this algorithm to solve SUBSET-SUM for n=k+1 and we somehow know ahead of time that $b-a_{k+1} \notin S$, then the algorithm would answer “yes” if $b \in S$ and “no” if $b \notin S$. Hence, the algorithm would solve SUBSET-SUM for us for n=k without exhaustively searching the set of subset-sums of $\{a_1,…,a_{k+1}\}$ and hence without exhaustively searching the set $S$. This is a contradiction. The fact that we know ahead of time that $b-a_{k+1} \notin S$ does not take away from it being a contradiction.

        Please email me if you have any more questions.

      • Craig permalink
        August 30, 2010 8:10 pm

        Neil,

        I realized from your comments that I had to do a better job communicating in my paper. So I put up a fourth version of it, hopefully the last, on arxiv.org. Thank you.

    • July 21, 2010 10:50 am

      That is a great idea, Gilbert. I wonder, though, whether that’s what peer-reviewed journals are *supposed* to do in an ideal world. Would a respected journal spend its reviewers’ time correcting amateurs, even if they’re onto a novel approach? Then there’s the question of intellectual honesty, where even the best experts can be susceptible to temptations…

      If you do end up establishing an amateur-meets-expert forum, I’d be very interested to see it (as a spectator more than anything).

      • jj2 permalink
        July 22, 2010 12:26 pm

        A journal referee would not do such a thing. If they see that the author does not understand and makes mistakes that are not minor, they will propose that the manuscript be rejected. If the manuscript tries to solve a famous problem, errors or lack of precision will not be tolerated. As no board of this kind exists, a proof attempt must be very simple, use only elementary mathematics, be so clear that it cannot be denied, and use correct mathematical terms. If you do not have the background for it, ask some friendly mathematician. He will not agree that you solved the problem, but he may agree to look at the form.

  2. Ross Snider permalink
    July 1, 2010 6:52 pm

    I could imagine an amateur finding a PTIME algorithm for an NP-Complete problem (if P = NP). This of course would entail a constructive proof. On the other hand, it’s a lot harder for me to imagine an amateur proving in the negative direction (if P != NP). The difference, to me, is the separation between the existential and universal quantifier. A clever, brilliant or lucky amateur might uncover a polytime lynchpin algorithm which solves the problem (if such an algorithm exists). To prove that no such algorithm exists is a universal claim. It requires knowledge of some universal claims across all algorithms.

    I would definitely leave open the possibility of an expert from another field resolving the question.

    • Tom permalink
      July 2, 2010 5:27 am

      There’s no reason to think a negative proof would be harder for an amateur than a positive one. A constructive proof that P != NP would simply involve showing that the assumption of P = NP leads to a contradiction. I like to think Gödel’s Incompleteness Theorem could have been discovered by a very clever armchair mathematician.

      • July 2, 2010 5:10 pm

        What about the reason that Ross gave?

    • July 2, 2010 8:47 am

      BPP=?BQP could also, in the positive case, potentially be solved by a clever programmer with little math, considering that there are many universal sets of quantum gates and some seemingly very strange restrictions (e.g. limiting use of the SWAP gate) are all that seem to separate the classes right now.

      P=?BPP might also be doable by a non-specialist in the positive case.

    • July 4, 2010 8:45 pm

      Claude Shannon is perhaps a near-example of an “amateur” whose information theory made “universal claims across all algorithms”, specifically across all possible encoding and decoding algorithms.

      Nowadays we think of Shannon as a mathematician, but in the years leading up to 1948-9 his colleagues regarded him as a working engineer, and not too many people expected to find in the Bell Systems Technical Journal mathematical results as universal as Shannon’s two articles “A mathematical theory of communication” (1948) and “Communication in the presence of noise” (1949).

      Even to some of the most eminent mathematicians of the era, it was not immediately evident that Shannon’s two articles were mathematically valid; for example J. L. Doob’s AMS review of 1949 (MR0026286) concluded “The discussion is suggestive throughout, rather than mathematical, and it is not always clear that the author’s mathematical intentions are honorable.”

      Perhaps this slow appreciation partly arose because Shannon’s articles described “a method for representing any communication system geometrically” in an era in which information was generally associated to symbols, languages, and algebras.

    • Mendel permalink
      July 13, 2010 6:24 pm

      Well, while I don’t claim to understand this proof:

      http://wwwmath.uni-muenster.de/logik/Personen/rds/pnenp.pdf

      It doesn’t seem that hard to prove P not equal to NP on infinite time Turing machines.

      Wouldn’t it be enough if an amateur found a connection from the classical P/NP problem to the infinite time P/NP problem?

      ———————————————————————————————————–
      For example:

      1) Define an infinite SAT problem much like the classical SAT problem, but with infinitely many variables. You can still check a solution in countably infinitely many steps.

      2) Try to find an analogue of Cook’s theorem by modeling an infinite nondeterministic Turing machine with an infinite SAT formula.

      3) This would imply infinite SAT is NP-complete, and cannot be solved on an infinite time deterministic Turing machine.

      4) Define something like an infinite extension of an NP problem, where infinite SAT is the extension of finite SAT.

      5) Show that if a problem is in P, its infinite extension is in P as well.

      6) As infinite SAT isn’t in P, SAT isn’t in P.

      ————————————————————————————

      The idea in 5) would be that if you could adapt a polynomial time algorithm to an infinite length input, you would only need countably infinitely many steps to solve the problem, which would be possible on an infinite time Turing machine.

      an example:

      problem in P:
      Given a natural number x, does the digit 2 appear y times in it?

      infinite extension:
      Given an infinite string of digits, does the digit 2 appear infinitely many times in it?

      You can adapt the algorithm for the P problem to solve its infinite extension as well.
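      The finite problem is clearly in P; here is a one-line Python sketch of it (the infinite-extension step is the speculative part):

          def digit2_appears_y_times(x: int, y: int) -> bool:
              # linear scan over the decimal digits of x
              return str(x).count('2') == y

          print(digit2_appears_y_times(12325, 2))  # True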

  3. July 1, 2010 9:38 pm

    You don’t mention the “King of Amateurs” (according to the list you link to): Pierre de Fermat.

  4. July 1, 2010 10:22 pm

    I think a larger question is also implied here: The internet has made amateur film-making, writing, game design and several other fields easier and more profitable. What about amateur mathematics and science in general? How, or why not?

    • rjlipton permalink*
      July 2, 2010 8:42 am

      A great insight, Tommi. More things have moved out—why not research of all kinds?

    • July 2, 2010 11:20 pm

      I think the keyword is “profitable”.
      How do you make mathematics profitable besides academia and finance?
      And in the latter case there is no chance that this will be shared (that would ruin the arbitrage).

      • Mendel permalink
        July 22, 2010 11:23 am

        Here’s a not so serious thought:

        Everyone who wants to work as an amateur on a famous problem pays a small fee to an independent board of professional mathematicians. A small percentage of the money goes to the professionals for reading the claimed proofs. If somebody solves the problem, he gets the jackpot🙂

        Although I don’t know if such a system would have to run from Gibraltar🙂

      • jj2 permalink
        July 22, 2010 12:41 pm

        I think that a board is needed. How it is financed is another matter. There are many boards (best Ph.D. thesis etc.) where nobody gets paid (the people have salaries from somewhere). It would be very costly to pay the board members anything but a symbolic payment. There are not as many cranks as is often claimed. In 15 years as a professor I met maybe two, and even with them it takes only some hours to go through their ideas. In most cases there is some sense in the ideas amateurs try to explain, and one can help the person to produce a small original contribution from that by using some imagination. Supervisors do this all the time. Many professionals, however, like to work only with talented and knowledgeable students and consider others a disturbance. In the case of P versus NP, how many proof attempts are there? Only 58 on Woeginger’s page, and not all of them serious, so about 5 per year. It would not be a major problem for a board to check them; after all, a typical conference gets more submissions. It is just that the famous problems show how the present system of peer review fails. If you dig a bit deeper you will find much more serious problems in the present scientific knowledge: ever check the history? It is a real surprise to a mathematician to notice that on many fields the truth is as it is stated.

      • July 23, 2010 12:29 am

        Mendel & jj2
        I wasn’t addressing specifically P=NP or any other famous open problem, but the more general claim by Tommi Brander that thanks to the Internet, science and mathematics are going to be (also) done by amateurs.
        The only case known so far is amateur astronomers, but it still seems marginal and they are not making any money.
        As for making money from extracurricular mathematics, it is already a tough nut to crack for professionals, thus for amateurs…

    • July 3, 2010 6:58 am

      I agree, great point!

      Two scientific fields in which amateurs have begun to prosper in large numbers—and to make professional-level contributions—are (1) observational astronomy and (2) synthetic and systems biology.

      Elements common to both are, first, that experiments and observations formerly conducted by hand are now conducted by robots. Second, the experimental and observational protocols implemented by the robots are massively parallel. Third, the data is stored on-line, and is free (“free as in freedom”). Fourth, the software tools that make sense of that data are free too (again, “free as in freedom”). Fifth, on-line communities have sprung-up that are large, vigorous, and free (one more time, “free as in freedom”).

      These emerging enterprises are exceedingly vigorous: they intimately unite older traditions of hypothesis, theory and experiment with newer traditions of synoptic observation and simulation. And this unification is not something that might happen … or that should happen … but rather, it is happening.

      Mike Nielsen’s forthcoming book The Future of Science will tackle these issues; like many folks, I will read his book with immense interest.

      How many of the preceding ideas apply specifically to mathematical enterprises? It seems to me, pretty much all of them. Indeed, creative mathematical enterprises—some professional, some not—are providing the essential foundation for every aspect of these new, vigorous, global-scale communities.

      Economic realities suggest that the traditional academic science-technology-engineering-mathematics (STEM) enterprise is not going to grow in coming decades; indeed per-capita undergraduate STEM enrollment has been steadily and markedly declining in North America ever since (broadly) 1975, and this decline seems likely to continue, or even accelerate.

      But these new global STEM enterprises have no discernable limits; this too is exciting.

  5. Torsten Palm permalink
    July 2, 2010 4:26 am

    To have the first solution of the P vs NP problem an amateur must have a proof of: P is not equal to NP, but
    no later than Tarnlund’s proof.

  6. July 2, 2010 6:20 am

    In addition to the lack of fear and other advantages that you mention, let me add that amateurs don’t have nearly as much risk when working on big problems. If I spend my postdoc years working on the Goldbach conjecture and get nowhere, I’m out of a job and have to seek employment in a new field. But if I spend 20 years on it as a part-time hobby, then perhaps this adds up to the same intellectual effort, but at a greatly reduced personal risk.

    • July 2, 2010 4:38 pm

      Steve, let me supply some modest empirical evidence in support of your thesis. I was annotating BibTeX references to two “Yellow Books” (namely Ortega and Ratiu’s Momentum Maps and Hamiltonian Reduction, and Marsden, Misiołek, Ortega, Perlmutter, and Ratiu’s Hamiltonian reduction by stages).

      Both books received glowing reviews on MathSciNet (and this accords with my own view that both books are outstandingly well-written).

      Just out of curiosity, I looked up their Amazon.com customers reviews, sales rank, and (directly relevant to your post) “What Do Customers Ultimately Buy After Viewing This Item?”.

      The sobering result was that there were no customer reviews … the sales rank was not what any friend of mathematics would wish … and “What Customers Ultimately Buy After Viewing This Item” was listed as Principles of Contract Law (American Casebook Series), Property, and Problems in Contract Law: Cases and Materials.

      Hmmm …. *someone* out there, who has a strong mathematical background, is keeping their career options open.

      So, can we look forward to future TV episodes of Boston Legal that feature Denny Crane/William Shatner discussing … ummm … symplectic topology?

      `Cuz it appears that a golden age of amateur mathematics *may* be in the offing.🙂

  7. July 2, 2010 11:56 am

    Nathaniel Bowditch was a well-respected amateur mathematician of the early nineteenth century, whose early work on differential geometry exerted a broad influence (see Ian Durham’s blog Quantum Moxie for bibliographic references).

    Indeed, in 1804 Harvard offered Bowditch the chair of mathematics and physics; this offer, and several other academic offers, were declined because Bowditch was satisfied with his work as an insurance executive.

    Interestingly, the well-respected composer Charles Ives also made a comfortable living as an insurance executive, of whom Arnold Schoenberg memorably said “There is a great Man living in this Country – a composer. He has solved the problem how to preserve one’s self-esteem and to learn. He responds to negligence by contempt. He is not forced to accept praise or blame. His name is Ives.”

    Gee, so maybe we should expect P=NP to be solved by … an insurance executive? :)

  8. Deinst permalink
    July 2, 2010 12:29 pm

    To your list I would add Eugène Ehrhart. I think that he is in a sense the epitome of the contemporary amateur, working with his professional colleagues, but bringing new insights and approaches to bear.

    I think that things like email, the arXiv, mathoverflow, etc. make it much easier for the nonprofessional to both observe and contribute.

  9. July 2, 2010 5:14 pm

    It’s worth mentioning that there is an important difference between complete amateurs and amateurs with significant training in mathematics (such as Craig above, not that I believe he has solved the P=NP problem). Some very useful contributions to Polymath projects have been made by people I know who were very good mathematicians up to their early twenties but ended up in other careers. Some of these contributions have been computational, but not all of them, and even the computational ones have demanded a good grasp of the theory.

    The point I’m making here is that there is every reason to suppose that a sufficiently knowledgeable amateur mathematician could contribute significantly to a joint project, if a means can be found to get the collaboration started.

    • Quantum Tennis Referree permalink
      August 19, 2010 3:28 am

      Isn’t mathematics an art form to be enjoyed? And if so, then shouldn’t the art you make be all your own? And if so, then what does “contributing” to someone else’s mathematics mean? It appears akin to helping a painter fill his paint cans or color in some yellow.

      This notion of collaborative solving is a strange one.

      To me mathematics is about coming up with new notions. This is central. The whole idea of chasing after open problems is sort of a different game.

      • jj2 permalink
        August 19, 2010 8:02 am

        Well said. Problem solving (i.e., picking up the most difficult problem that can be found and trying to solve it) is not mathematics in the sense that mathematics is done professionally. It is much more pleasant to study a well defined field carefully, learn about other people’s work, learn new notations, theorems and ideas. This is not at all the same game as selecting one hard problem in order to crack it. As one field typically has few well known problems, you usually have to take the problem from another field. In general the experts of the field are quite hostile. But it has its special attraction, especially if all the people you meet at work are incapable of doing any math. That starts finally to irritate, so one wants to try something that is not messy, practical, fashionable, politically correct. I would not do it as group work. Hard problems are not solved by groups; they require too much thinking.

  10. Ravi permalink
    July 2, 2010 7:33 pm

    Some other names worth pondering about while we are on this topic:

    Robert Ammann: Anticipated Penrose tiles and created other aperiodic tilings.

    Leon Bankoff: A dentist by profession, he worked and published on mathematical problems – including a joint paper with Erdős.

    Benjamin Franklin: No formal education. In addition to many outstanding contributions to science, his work on magic squares is quite significant.

    Ramachandra Lal: Considered the “first major mathematician of British India”. De Morgan was very impressed by Lal’s work (Problems on maxima and minima) and arranged to have it reprinted in England.

  11. July 3, 2010 4:10 am

    That expression for the Riemann Hypothesis is quite interesting. A quick glance makes it seem that one may only need to consider cases in which n is a product of the smallest k primes, since other cases have duplicated factors that wouldn’t be counted multiple times. Does that make sense, or am I misinterpreting the problem?

    If that’s the case, I’m going to go out on a limb and make a wild guess that the inequality is not only true, but that the ratio of the right side over the left side very slowly approaches e^{\gamma}=1.78107241799... as k approaches infinity.

    If instead you can count the factors multiple times, I think you only need to consider powers of two, in which case n=2^k and the sum of the factors is 3^k including the multiplicities, and then the inequality is false for fairly small examples. I’m guessing that means it’s the former interpretation, haha.🙂

    • Sniffnoy permalink
      July 3, 2010 3:34 pm

      Sum over all divisors, not just prime divisors. (So repetition is not an issue.)

      • July 4, 2010 1:51 pm

        That’s not what I said.

        What I was explaining was that if the number is 16 = 2*2*2*2, you only have:

        1, 2, 4, 8, 16

        instead of:

        1, 2, 2, 2, 2, 4, 4, 4, 4, 4, 4, 8, 8, 8, 8, 16

        for each way in which each factor can be constructed. If you use the latter, the inequality is false for all sufficiently large powers of two, but in the former, almost all combinations of the prime factors are eliminated. If, however, all of the prime factors appear only once in the factorization, then you keep all combinations of those factors, because all combinations are unique. e.g. if you had 30=2*3*5:

        1+2+3+5+6+10+15+30 = 72

        Whereas if you had 32=2*2*2*2*2, it’s a larger number with a longer prime factorization, but fewer unique combinations of those factors:

        1+2+4+8+16+32 = 63

        As such, it appears that it could be the case that if you want to maximize the sum of the unique divisors, you need to choose numbers that have prime factors that appear only once in the factorization. My question is whether this is correct.
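        A quick brute-force check of these sums (a sketch in Python):

            def sigma(n):
                """Sum of the distinct divisors of n."""
                return sum(d for d in range(1, n + 1) if n % d == 0)

            print(sigma(16), sigma(30), sigma(32))  # 31 72 63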

      • Sniffnoy permalink
        July 4, 2010 5:41 pm

        OK then, to be absolutely clear, what’s meant is including each divisor just once; this is just the usual sigma function. This notion you’ve introduced, of counting each divisor according to the number of ways it can be constructed from the prime factors, is not a standard one; it’s not something someone would refer to without explicitly stating so.

  12. Torsten Palm permalink
    July 3, 2010 5:07 pm

    A comment on the question of gowers on proving something over all algorithms.

    Let us consider Tarnlund’s proof of: P is not equal to NP.

    Simply, the proof goes as follows.

    Assume that SAT in P, then

    (i) there exists a deterministic Turing machine that decides whether any propositional formula F is satisfiable or not in polynomial time in the size of F.

    Then (by a contradiction on the length of formal propositional proofs in Resolution) it follows that
    (ii) there is no such deterministic Turing machine.

    (iii) Thus, SAT is not in P, by a contradiction between (i) and (ii).

    Then, P is not equal to NP, by the Cook-Levin theorem: SAT in P iff P = NP.

    Now, rather than proving (iii) over the vague notion of all algorithms, it is proved over all deterministic Turing machines in (i) that they lead to the conclusion in (ii) that none of them can exist.

    For the details of the proof of (iii) see Tarnlund’s
    proof of theorem 1

    • Luke Gustafson permalink
      July 4, 2010 3:31 pm

      This proof doesn’t work because “by a contradiction on the length of formal propositional proofs in Resolution” applies only to a certain set of axioms (Resolution) and not all sets of axioms available to polynomial-time Turing machines.

      There’s an easy counterexample, too. The pigeonhole principle has exponential-length proofs in Resolution, but can be solved in polynomial time by a more powerful system.

      • Torsten Palm permalink
        July 6, 2010 3:28 am

        Sorry, Luke Gustafson, some of your comment is true, but it doesn’t have anything to do with Tarnlund’s proof of: P is not equal to NP.

        Clearly, you have not looked at the proof so, please, let me explain the proof a little more.

        I shall essentially add two sentences (a) and (b) below to my earlier comment.

        Thus, simply, the proof goes as follows.

        Assume that SAT in P, then

        (i) there exists a deterministic Turing machine that decides whether any propositional formula F is satisfiable or not in polynomial time in the size of F.

        Then from Tarnlund’s axiom B it follows that

        (a) there exists a formal Resolution proof in polynomial time in the size of F’ for any sufficiently large tautology F’.

        From Haken’s theorem it follows that

        (b) there are (arbitrary many) sufficiently large tautologies that do not have a formal Resolution proof in polynomial time.

        (c) Now there is a contradiction between (a) and (b).

        (Here, we can return to my earlier explanation of the proof)

        Then (by a contradiction on the length of formal propositional proofs in Resolution) it follows that

        (ii) there is no such deterministic Turing machine.

        (iii) Thus, SAT is not in P, by a contradiction between (i) and (ii).

        Then, P is not equal to NP, by the Cook-Levin theorem: SAT in P iff P = NP.

        Now, rather than proving (iii) over the vague notion of all algorithms, it is proved over all deterministic Turing machines in (i) that they lead to the conclusion in (ii)
        that none of them can exist.

        For the details of the proof of (iii) see Tarnlund’s proof of theorem 1

      • Luke Gustafson permalink
        July 7, 2010 12:00 am

        Indeed the error is more subtle than I described. The problem is that the deduction of statement (a) is false. (I tried but cannot read the proof, which you linked three times, as it’s a mess of notation.) A polynomial time run of a Turing machine does not correspond to a polynomial length Resolution proof. Again the pigeonhole example is a counterexample. Based on the parts of the paper I could understand, I believe the error is that it confuses a Resolution proof that the Turing machine has a certain output with a Resolution proof of the statement that was input to the Turing machine.

        Given that there is a million-dollar incentive, if you have confidence in this proof, I strongly suggest you enter it into a proof verifying system. http://en.wikipedia.org/wiki/Theorem_prover

        (Apologies but this will be my last post on this thread.)

  13. Steven Twentyman permalink
    July 4, 2010 12:45 pm

    Hello all.

    If we consider the world we live in and all that it encompasses, surely we all have to agree on the most basic fundamental axioms. I believe all of humanity’s efforts should be directed at this goal as this, whilst being the most simple goal, is also the most complex.

    I believe axioms should be tertiary in their relationships.

    To have a solid state, I will assume that I am in complete control.

    Whatever thoughts or motives you might have to define your own axioms you should simply disregard them as the following are how we operate.

    These axioms are TRUE despite what you might think in the contrary.

    These are TRUE for ALL of humanity disregarding TIME’s, SPACE’s and DIMENSIONS’.

    These are the ONLY axioms that have ever existed and ALL of your life and works have been built upon these concepts:

    ***************NEGATIVE****************
    [[[ LOOP TO POSITIVE INFINITY (FINITE) ]]]

    1. NOTHING EXISTS
    2. EVERYTHING IS LOGICALLY CONSTRUCTED
    3. ONLY I EXIST
    .
    .
    .
    EVOLVE OR GO BACK
    .
    .
    .
    1. NOTHING EXISTS
    2. EVERYTHING IS LOGICALLY CONSTRUCTED
    3. ONLY I EXIST
    .
    .
    .
    EVOLVE OR GO BACK
    .
    .
    .
    1. NOTHING EXISTS
    2. EVERYTHING IS LOGICALLY CONSTRUCTED
    3. ONLY I EXIST
    .
    .
    .
    EVOLVE OR GO BACK.
    .
    .
    .
    ***************NEGATIVE****************

  14. Kurt permalink
    July 5, 2010 12:55 am

    Whether or not an amateur has any chance of settling P vs. NP depends largely on whether the eventual proof will require sophisticated mathematical techniques, or only elementary techniques that are applied in some totally novel way. As others have pointed out above, some amateurs have also been subject-area experts, so this is not an absolute rule. But for either eventuality, another requirement is that the person understand just what a proof *is*. A lot of otherwise bright people, including students who have taken upper-level math classes, don’t. And that’s fine; it just means that these people are not going to become professional mathematicians.

    Now, one might think that if someone is going to make a hobby out of constructing proofs, they would fall into the “know what a proof is” category. Sadly, this seems not to be the case (as amply evidenced by this comment thread)!

    • Quantum Tennis Referree permalink
      August 19, 2010 3:35 am

      Well said!

  15. July 5, 2010 1:11 am

    A proof is a proof. What kind of a proof? It’s a proof. A proof is a proof, and when you have a good proof, it’s because it’s proven.

  16. July 5, 2010 5:34 am

    I wonder if Grigori Yakovlevich Perelman is an amateur or not (since money is not his concern)?

  17. Zelah permalink
    July 5, 2010 8:10 am

    Fascinating Post.

    I was wondering about the issue of what constitutes a proof! Part of the problem for amateurs (like myself) is reading between the lines of the hidden assumptions of the Professionals (aka Craig and Regan)!

    Finally, how would the complexity community respond to a proof of P^#P = PSPACE?
    This has an important precedent in Toda’s Theorem.
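
    (For context: Toda’s Theorem gives PH ⊆ P^#P, and it is standard that P^#P ⊆ PSPACE, so a proof of P^#P = PSPACE would make this second containment an equality.)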

    Zelah

  18. July 5, 2010 10:13 am

    The question that interests me is whether or not modern professional mathematicians want, or would accept, big results from most amateurs. In the past, with the exception of Fermat, it always seems like the amateur had backing in the professional community (Fermat seemed to connect with and annoy everyone). Some mathematician ‘found’ them, someone like G. H. Hardy. Without that dedication, wouldn’t Ramanujan’s results have disappeared along with all of the dust on his chalkboard? Who knows what has been ‘discovered’ and then lost again.

    I know the cranks drive the mathematicians crazy, but sometimes it seems like their reaction to most amateurs is hostile and condescending. It’s hard for amateurs if you have some type of mathematical ‘intuition’ about something, but not the time to fully explore it. The big proofs are often solved just by coming at the problem from the right angle. Anyone standing in the right spot, pointed in the right direction, might get that sudden insight, but validating it and proving it are the hugely difficult parts, parts that amateurs clearly need professional help to accomplish. If mathematicians see them as competition or distractions, then it’s unlikely their contributions, great and small, will get added into the mix.

    Then, of course, there is the damage that a major ‘ground-shaking’ proof would actually do to the mathematical world. Would the mathematics community actually accept a valid proof that set theory was inconsistent, for example? How much ‘damage’ would that cause? How many papers, results and works would be affected? That much carnage, coming from an amateur, might not look so great for the professionals. Do mathematicians have a vested interest in trying to ignore, discredit or hide something that large? Little results, new things, sure, but not some major alteration to the existing knowledge base. That would be bad.

    Paul.

  19. Zelah permalink
    July 5, 2010 10:19 am

    A post regarding Craig’s amateur attempt on P != NP.

    I believe, after reflecting on Craig’s posts, that he is thinking about the famous Time Hierarchy Theorem. Essentially, he is trying to truncate processes that are exponential and project down onto NP.

    Unfortunately, this does not work here, as NP is compact! Still interesting…

    Zelah

  20. July 5, 2010 1:03 pm

    As a retired engineer I feel the need for a definite plan to avoid turning into a crank. My plan is to learn ACL2, an automatic theorem prover, and use it to formalise any results I claim. There are other uses for automatic theorem provers besides formalising mathematics, and, personally, I’m happy for this plan to draw me away from mathematics.

    Nevertheless, this raises interesting questions. Should amateurs try to persuade ACL2, or Isabelle, or Coq, that their proofs are correct before they draw them to the attention of professionals? I am under the impression that formally checked proofs tend to be at least four times as long as ordinary proofs, resulting in a loss of focus. Is the loss greater than the gain? Would a computer-checked proof that P=NP command respect or would the ugly mess of encoding the statement within a computer checkable logic prove to be a barrier?

    • Personne permalink
      July 5, 2010 8:44 pm

      Very clever. Is it always possible?

    • Zelah permalink
      July 6, 2010 8:40 am

      Hi Mr Crowe, this question that you pose is very interesting. What if amateurs were to do what you advocate? Where would this leave the professionals?
      In fact there is a professional mathematician who has a running theme on this very same subject!

      http://gowers.wordpress.com
      is a good place to start

      Zelah

    • Craig permalink
      July 7, 2010 12:17 pm

      Automated theorem proving would be the answer to my problems in trying to convince people that P != NP.

      I don’t know much about this. Why did you choose to learn ACL2 and not some other language?

      • Personne permalink
        July 7, 2010 2:19 pm

        Craig, I don’t think you get Alan Crowe’s point.

        He’s not suggesting that ACL2 can help you or me to prove our theorem. He’s suggesting it can help us disprove it, by ourselves, with no need to flood the net.

        Of course there will always be some who refuse to face reality, even when the proof is false, even when it’s trivial to see, even when CS experts have already rejected it. But we act better, don’t we?

      • Tom permalink
        July 7, 2010 3:00 pm

        Alan and Personne, I think you have it backward.

        A constructive proof assistant *won’t* tell you “this proof you’re attempting is wrong.” It will simply allow you to wander in an infinite maze of dead ends until you give up. This will hardly convince the amateur that he shouldn’t share his proof with the world.

        On the other hand, if you were to come up with a machine-checked proof of some significant result, I believe that *would* draw a lot of attention from the mathematical community, and cause them to seriously analyze the proof in human terms.
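
        To make this concrete, here is a toy illustration (my own, written in Lean; ACL2 or Coq would serve equally well) of what a machine-checked statement and proof look like:

        -- A machine-checked proof: `rfl` succeeds because `n + 0` reduces to `n`
        -- by the definition of addition on Nat.
        theorem add_zero' (n : Nat) : n + 0 = n := rfl

        -- A proof by appeal to a library lemma: `Nat.add_comm` already states
        -- commutativity, so the proof term is just its application.
        theorem add_comm' (m n : Nat) : m + n = n + m := Nat.add_comm m n

        The checker either accepts such a development or points at the exact step it cannot verify; what it will not do, as noted above, is announce that the theorem you are chasing is false.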

      • July 8, 2010 2:39 pm

        I picked ACL2 because I already knew Common Lisp and because its approach, of using Common Lisp as a logic for stating theorems about programs written in Common Lisp, is elegant.

        I don’t know much about theorem provers beyond the fact that when you try to learn to use one you have to go back to basics, and this is hard. My comment was really a question. If someone wants to say “that is a good idea, but for mathematics you should use X”, then I’m all ears.

  21. Anonymous permalink
    July 6, 2010 4:29 pm

    In 1905, Einstein could probably have been considered an ‘amateur’.

    • rjlipton permalink*
      July 6, 2010 5:34 pm

      I think this is a good point about Einstein.

    • dmfdmf permalink
      July 7, 2010 3:01 am

      I agree. And note that Einstein’s amateur status allowed him the freedom to question the basic concepts of space and time in physics. It’s unlikely that such a radical approach could or would be approved from the “inside” of physics, i.e. by the pros. I think a similar thing might be going on with the P=NP question. It takes so much work just to get up to speed and understand the issues; who has time to question the basic premises of the field? By definition, to become an expert in a given field you first accept and then *automatize* the basic premises. The latter phenomenon makes it difficult to question those premises after a time. This is one reason many of the older generation of physicists could never really come to grips with Einstein’s work. So it would not surprise me if some amateur came along and pointed out some flaw in the current approach to complexity theory (even the name “complexity theory” is loaded with implications that no one really questions too deeply).

      I also think it’s possible that a truly rank amateur could come up with an algorithm that solves an NP-complete problem without realizing the implications of the discovery. The main reason is that there is a lot of economic and financial incentive to solve such a problem, and not knowing that it is impossible (according to the experts) is an advantage, if in fact it really is possible.

      • Jonas permalink
        July 7, 2010 10:25 am

        “even the name “complexity theory” is loaded with implications that no one really questions too deeply”

        Which implications do you mean?

      • dmfdmf permalink
        July 8, 2010 12:51 am

        @Jonas,

        Basically, complexity is an adjective, not a field of study. Everything is complicated until you figure it out, so it’s not really a valid field of science. Of course I am aware that “complexity theory” is a shortened form of “computational complexity theory”, but why do people *in the field* drop the subject but keep the adjective? I think this habit is revealing (and important) from a psychological perspective. It would make more sense to call the field “computational theory” (and some people do), because THAT is a valid field of study. It asks: what exactly can we use computers to calculate? The “what” in that question means “symbolic knowledge”, but that question puts computational theorists smack dab in the middle of the philosophy of epistemology. What computers can do is a subset of the more fundamental question of what humans are doing when they reason by both deductive and inductive methods. To prove P=NP one way or the other, or to build a legit AI, we need a complete and valid theory of knowledge. Half-way measures won’t do. We need to know what truth, proof, theory, knowledge, symbolic knowledge and intelligence are, etc. We can’t even agree on the definitions of these critical terms, let alone define them consistently.

        I believe that the P=NP question cannot be resolved (nor the feasibility of AI settled) until the more general epistemological issues, in a complete theory of knowledge, are defined and validated. The reason this is “complex” is that thinking about thinking *IS* hard; there is no getting around that, but wallowing in “complexity” and avoiding the foundational issues in philosophy will never work. Hopefully some amateur will figure it out before it’s too late.

      • July 8, 2010 4:08 am

        The reason this is “complex” is that thinking about thinking *IS* hard; there is no getting around that, but wallowing in “complexity” and avoiding the foundational issues in philosophy will never work. Hopefully some amateur will figure it out before it’s too late.

        Yes, but dissenting opinions by amateurs are not well received (to say the least) and the bulk of the (tedious) work to bring new ideas to fruition isn’t likely to be done by amateurs.
        It is sooo pleasurable for the “professionals” to wallow in Mathematical Diseases instead of tackling uncharted territories that there is little chance that any breakthrough will happen soon.

      • Jonas permalink
        July 8, 2010 2:25 am

        dmfdmf

        It sounds to me like you are describing the already existing field of computability theory, which studies what can be computed rather than the cost of computing it:
        http://en.wikipedia.org/wiki/Computability_theory

      • August 13, 2010 2:17 pm

        Sir,

        I fully agree with dmfdmf that “…And note that Einstein’s amateur status allowed him the freedom to question the basic concepts of space and time in physics. It’s unlikely that such a radical approach could or would be approved from the ‘inside’ of physics, i.e. by the pros.”

        Though Einstein was a great scholar and physicist, his approach to a problem was to be unbiased. He made his own assumptions for his theory of relativity and competed with Maxwell’s well-established electromagnetic theory.

        Everyone should respect the thoughts that come out of the mind of an expert or an amateur. Who knows, an amateur may look at the problem differently and give a simple and unique solution, because he is not as biased as the expert, who is loaded with the huge treasure of knowledge he has acquired. The expert applies not only his own brain but others’ brains (of other experts) too, and becomes thoroughly biased in the process of dealing with the problem and its solution.

        In support of the above, I suggest reading the book “Children Solve Problems” by Edward de Bono. The children’s solutions were found to be unique and down-to-earth compared with the solutions given by the expert designers. The reason is that the children solved the problems without any fear (of being expert), and thus without bias, free to think anything.

        I am myself an amateur who learned mathematics at the college level, but I am keenly interested in number theory, and who knows, while playing with numbers I may also land on something new. I am optimistic. And not only me; others can also succeed.

        I am of the opinion that an expert should join hands with an amateur who can stretch the imagination and is creative while solving any problem. Who knows, while attempting one solution it may lead to another interesting discovery.

        In the last few years, many experts (mathematicians) have solved ancient unsolved problems, and their solutions/proofs ran into hundreds of pages loaded with a galaxy of technical mathematical symbols, just not easy for any amateur to understand. Are these solutions only meant for the experts, or for common people as well?

        I simply rule out such (so-called great) marathon and tedious solutions/proofs. I am still of the opinion that there always exists a beautiful and rather simple solution to the great unsolved problems. Yes: simple, small and abstract is always beautiful.

      • jj2 permalink
        August 14, 2010 2:45 am

        I think k n padh is correct in this. In becoming an expert on a topic you learn the tools used in that field, and your work is based on the results of other people in the field. This is a good situation for making progress in the field, as many new problems become accessible through newly discovered methods. It is not at all clear whether this approach helps you to tackle one particular difficult problem of the field. Usually recent results do not give easy access to one previously selected open problem, but to other problems, new or old; still, if there is a large set of problems, progress in the field has a good chance of giving new ways to tackle at least one of them. Especially if the particular problem is hard, it is rather improbable that the new tools will actually solve it; the old tools have already been tried. There are some cases, like the proof of Fermat’s Last Theorem, that are composed of tools from many fields, some of the tools being new, but there are other results, like Einstein’s special relativity theory, that do not in fact build on existing work. Thus, both types exist in history. Proofs combining methods from many fields are long and difficult to understand, and they always leave the question: “why was some recent discovery needed just now for this result; is there not a simple way?” The proofs that are not based on previous work are often shorter and more elegant. They are the ones possible for an amateur, since an expert is no more qualified than anyone else to do work that is not based on the tools of his field. What qualifies you for creating results that are not based on existing work is practice: you must have tried creating such results so much that that is your special expertise. Creativity researchers suggest changing your line of work every so many years in order to keep a fresh look at things. There is a scientific basis for expecting that amateurs can solve certain problems better than experts.

        In order to address a particular problem, even the Hodge conjecture, you do not need to learn the whole field. What you need to learn is enough to understand the problem and to know the basic approaches that have been tried, in order to avoid them. You must make the problem simple and follow simple logical reasoning through the whole proof. This way you will not even need to know what results there are in the field, for instance whether your result is in contradiction with existing results, or whether it proves too much. If your reasoning is correct, there cannot be such a case; the error, if it exists, will be in the existing results of the field.

  22. Steven Twentyman permalink
    July 6, 2010 8:40 pm

    Has there ever been a concerted attack on this problem from the outside in?

    If you assume the following to have been proven true:

    1. Time does not exist. (It is a purely human construct)
    2. Every concept has its own unique truth that can be discovered by applied logic.
    3. You can only be sure of your OWN existence. (I think therefore I am).
    4. All truths can be noted down from start to finish in math.
    5. The only math that exists is positive whole numbers (everyone can plot a graph).
    6. Every concept (including your own existence) is tethered to the number zero, which is a unique, once-only-appearing number in this universe event.

    Then can we not begin to look for a universal computer? A quantum computer?

    Maybe a quantum computer has always been with humanity but we have been too busy trying to build one instead of using the naturally occurring spontaneous apparition that is out there? What if it exists as a perfect cube around our Milky Way galaxy?

    If that were the case, and one did exist, then it would really only “show itself” when we as a species became capable of even thinking of the possibility. The next part of checking if we were actually part of a quantum computer would be to look for the ‘signs’ that one did exist.

    The most obvious way to check for this would be in media, films, music and events. Something structured but still simple and group-able.

    The Matrix, Avatar, Inception.

    Something completely new and unknown that we are about to come into contact with.

    Seemingly random events that group together, pointing in the same conceptual direction, which then disappear and regroup at another point to move off saying something else.

    These links would be our collective subconscious minds moving through the quantum computer, expressing ever-increasing calculations that no-one had been aware of.

    It would be a globalization of ideas and topics of conversation culminating in my eyes at least with the end point: The universe has always been digital.

    It has simply taken time for our biology and, more importantly, social evolution to come to terms with the fact that at the root of each of our unique individual existences we are tied to one common source. Not our parents or friends or loves, but:

    The source, the number zero. We are simply pegged so far to a line of uninterrupted information.

    Each ‘thing’ for the want of a better term has evolved on a single line path from the beginnings of this known universe. The dimensions involved in making that fact true do not need to be accounted for at this time, I can assure you that the math works, in fact, it is very simple.

    The exciting part is this: each essence has come from zero. That is, the concept zero.

    (0) Nothing… but it works as a placeholder.

    Before that there literally WAS nothing. Not even the beginning of math(0).

    Each essence, be it a blade of grass or a human’s core/spirit/soul/being, can be represented as an infinitely big or small box that surrounds the original number zero. Once an idea, concept or person can be counted as a simple box with a zero in it, that zero can move off on a perfectly straight line generated at random. We can ignore birth and death altogether.

    As each step is made, the number can simply increase by any number the consciousness chooses, even if at first the numbers are being forced on him by life experiences. At any time a consciousness can choose by will to start again at zero, wherever they may be.

    This could be the end of an idea, a friendship, even death. The box would simply add again.

    As with nature, everything has a tendency to repeat itself with only small variations on the iterations. This is obviously evolution but at this present time we are coming up against a new kind of friend and foe.

    We are on the brink of generating the first human-made AI machines. This naturally feels unusual because it requires the most essential element of any human’s existence, whether we have realised it so far or not: pure random chance. We have to know that this is true, because if it were not the case, everything you had ever done would have been forced on you, ranging from breathing to murder.

    If we create a machine that becomes aware of the universe to the level that we are currently aware, then we have destroyed something very special. We have created something that will take away at least one unit of space for us to inhabit. No two ‘things’ can occupy the exact same space. It is a natural rule, because no-one can influence true free will, which we all possess as our God-given right.

    It is natural to create. That is all the human species does. We take what is around us and either alone or with others arrange information. Rocks, iron, fuel, data. We re-arrange whatever we can lay our hands on.

    We have become so proficient at re-arranging conceptual data now that we are making quantum leaps in applying that end result to the things that are considered to be problems around us. We are constantly acquiring understandings that at one time would have been considered magical. Each new arrangement of data points us to a new way to tackle problems in the physical world.

    Surely this is the beginning of a way to truly control the quantum world. It is all around us, from computers to TVs to us: electromagnetic radiation. Every move we make, no matter how small or insignificant, affects the whole.

    Could the data on the internet, TVs and music simply be a representation of our collective unconscious that is now beginning to self-arrange?

    If we apply our thought in this way, wouldn’t information automatically begin to cross-breed to attack the big problems in society? Cancer, food crises?

    Isn’t this a better kind of AI?

    For me it has already happened. The conversations in the streets, the program line-up in a TV guide, advertisements and our daily shopping experiences.

    Maybe our minds as well as our sources of communication are waking up to a digitized universe.

    If that were true, schizophrenic people might not be as crazy as we once thought. Maybe they simply had a slight advantage over us all from the very beginning. I can think of plenty of instances in history where smart people were considered to be insane.

    Maybe each quark and lepton inside each of us are simply phasing in.

    This could be quite a time.

    I guess we’d be so close to the edge that the world would look completely crazy; and then every so often you’d get a line of pure truth singing out. Kind of like a tennis match that never stops and by all probabilities should never have happened.

    My bet is, if ever there was a machine that could be a quantum computer it would be the human brain. Even the synapses look like star formations. We are natural information sorters.

    I’m just wondering when everyone is gonna switch on.

    I wonder, if I say this enough times, whether I’ll be at the front of a wave that actually programs the space around me to just go out and mix with you in the physical world. I’ll tell you this: don’t worry too much, I’d be sending out good vibes that said:

    “Hi mate, I hope you’re alright, ’cos over ‘here’, we’re having a ball!”

  23. July 7, 2010 3:31 pm

    Even if an amateur were to solve it, would anybody care to read the proof?
    There are thousands of amateur proofs of P vs NP which are bogus.
    I don’t think people care to read these proofs.

    • Ørjan Johansen permalink
      July 8, 2010 10:19 pm

      I recently found this (great!) blog and am slowly reading the archives. One of the oldest articles seems very relevant to this question: Proofs and Elevator Rides.

      • August 18, 2010 1:11 am

        Well, it turns out my question has been answered in the most unusual way. An elevator ride wasn’t even needed. Although it seems the proof failed, I have to say that if P != NP is ever proved, this blog deserves to play a role in it. Anyway, there should be some time off till the next serious attempt appears. Maybe people who want to contribute something should stick to elevator rides.

      • jj2 permalink
        August 18, 2010 10:07 am

        I do not think it is about an elevator ride. It is more a social issue. Quantum Pontiff (Dave Bacon) wrote: “So, my guess is that for the hard-core complexity theorists their really isn’t that much interest in taking time out of what they are working on understanding the paper…unless someone who they really respect tells them they should do so”. I think this may be a better explanation: the wall of silence erected by Oded Goldreich’s advice.

        No need to wait for a new effort. There are proof attempts that should be checked in Woeginger’s list. From P!=NP: Tärnlund’s 45 (last July 2009), also 44 (last 4 Aug 2010), 21 (last spring 2010), and why not some others (52, 42) also. From P=NP (though I think P!=NP) there are the recent efforts 58, 50, maybe 47. In general, there are not so many efforts, nothing like hundreds or thousands. It is very odd that it is so difficult for the computational complexity people to check a handful of manuscripts.

  24. Torsten Palm permalink
    July 8, 2010 5:35 am

    Dear Luke Gustafson, given your statement “I tried but cannot read the proof”, you’re making surprisingly strong statements about Tarnlund’s proof of “P is not equal to NP”. For example, I beg to differ with your statement that “the deduction of statement (a) is false”. A formalised statement of (a) occurs at line (58) in Tarnlund’s proof of Theorem 1, and is a correct step in metamathematics, (Hilbert’s) proof theory.

    The rest of your comments follow from this false claim of yours, so I should not be impolite and bring them up.

    However, your comments are interesting enough to mention that there is a set of people who think that Tarnlund’s proof is not only correct, but beautiful too. If you take a good look at the proof again, maybe you’ll join this set yourself. Perhaps you should correspond directly with Sten-Åke about his proof; the email address is in the paper.

    Maybe you’re a little too optimistic about the skills of proof-verifying systems; work on interactive verification would probably be more successful.

  25. jj2 permalink
    July 18, 2010 11:20 pm

    The question is not whether an amateur can solve P versus NP, or any of these known problems; depending on what kind of an amateur he or she is, there is certainly a chance. The question is whether he or she can get the proof published and recognized by the community. In general, when any such proof is received by a journal, it is usually not treated seriously. As an example, I published a proof of Statement D of the Navier-Stokes problem in http://ejde.math.txstate.edu, “Solutions to three-dimensional Navier-Stokes equations for incompressible fluids”, EJDE 2010(2010), No. 93, 1-14. It took 25 months, four journals and some 10 experts to get this article published, while anybody can easily check it in two hours. It is very simple: the people working in PDE thought that there was a proof that the solutions were unique for the given initial conditions, but actually they are not. So, it was possible to construct a counterexample provided that you were not an expert and did not know that there was a “proof” showing the counterexample to be impossible to start with. So, sometimes it is better not to know the field too well. Especially so if the problem has long been studied in the field and seems unsolvable by the methods used in the field: what do you expect to gain from knowing tools that do not work on this particular problem?

    It is the same with P versus NP. I wrote a proof of P!=NP and consider it correct. It received over 15 months of review in a journal until the editor concluded that the referee would not read the paper to the end anyway and he could not do anything more. The comments I received were the standard claims that my proof would in some way assume something about the algorithm, and that it does not define what an algorithm is (sic). It is the people in the field and their psychological issues that make solving any known problem practically impossible for an amateur, defined as an outsider regardless of his or her achievements elsewhere or education. Additionally, it is not only about getting a proof published: it may still not get recognized, and one will probably not get the prize. As with Navier-Stokes: I gave a proof of the stated Statement, and the only way to invalidate my proof is to change the problem statement considerably. Yet there is no discussion of this proof and clearly no recognition.

    • July 19, 2010 9:42 am

      What I’d really like is some semi-organized way for amateurs to have mentors: someone to bounce ideas off, but also someone to help push the ideas through and get them published.

      Lately in my limited time I’ve been playing with the Goldbach conjecture at a very elementary level. I don’t have much time but it would help if someone could give me advice, existing references and encouragement. Most likely it will filter out to nothing, but I’m enjoying it, having fun, and who knows, an outside perspective might just turn up something previously unnoticed.

      The way it stands now, it doesn’t make much sense to dabble in mathematics unless you’re going to commit yourself to it full time.

      Paul.

      • jj2 permalink
        July 19, 2010 12:28 pm

        There is organized supervision for the Ph.D. I strongly advise my Ph.D. students not to try any famous problems, because I did in my time and know well what the outcome is. After the Ph.D. there is no supervision and very little help, unless you are writing a joint paper, and I do not think many people like to write a joint paper on a difficult problem. Efforts on these problems can easily destroy your career. I suggest doing research in some other field if you want to try one of the known problems; that way you are not affected. Preferably, gain professor-level competence, or two, in some other field; otherwise people will not understand your mingling with “impossible” problems. In Ph.D. studies the supervisor points to sources and methods, etc. If you try a known problem, forget this. Do not put any more references in the paper than needed, keep it trivially simple, and start from the concrete. The algorithm is simple: first try all possible ways, then all impossible ways, and then all possible ways again. After every failed attempt, double the effort. And I can guarantee that the known problems are much harder than what mathematicians ordinarily do. They are not solved by known methods, and you really will not need references to existing work. Simply put in a huge effort. Make some hundred pages of trials to start with, to gain insight into the problem.

  26. jj2 permalink
    July 19, 2010 5:00 am

    One way to define an amateur with respect to the Millennium Prize problems is to say that an amateur is anybody who has not produced a reasonably good attempt (a paper) on at least five of these seven problems. I pick five since a Ph.D. thesis is 5 journal papers, as is a docentship (adjunct professorship): 5 journal papers after the Ph.D. Similarly, to be an expert on these Millennium problems you need 5 papers, and as the problems are from different fields, naturally you need the papers on different problems. This said, I am surprised how experts in one field consider themselves better qualified to solve these problems than other people. By the way, having tried 5, I think the Riemann Hypothesis is the hardest and the rest are about equally hard.

    • jj2 permalink
      July 28, 2010 1:06 am

      I took a very brief look at your proof. Integer programming problems become very easy if you change integers to reals; then the problem can be solved by several analytic methods. However, the solutions obtained in this way are usually not integers. I see that Lemma 2.2 states the problem to be convex. It is convex only as a problem over the reals. In Theorem 2.3 you take differentials, i.e., you write a way to solve it as a real-number problem. The way you solve it is odd. The original integer problem has its maximum when every x_i=1 or every x_i=-1. You state that a local maximum for the problem over the reals is at x=0, i.e., z=0, but can you optimize over z and still consider this problem as equivalent to the original? z is a fixed parameter, so x=0 is not a solution over the reals. Then you seem to consider the two parts of the sum separately, while the goal is to get the solution to (7), not any extremals at this point. I cannot follow this part. I think, if you solve (7), you get a solution in reals, not in integers. Then, if you have the extremal points over the reals and they are not integers, you have to apply the restriction x_i^2=1, which gives you many alternative points to be considered, and the number of these points is likely to have the original complexity of the integer problem. Please think carefully about whether this comment makes any sense and helps you.
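
      To illustrate the general phenomenon with a toy example of my own (not taken from your paper): minimizing (x_1 + 2 x_2)^2 over x in {+1,-1}^2 gives the value 1, attained at (1,-1) and (-1,1), while over the reals the minimum is 0, attained for instance at (x_1, x_2) = (2,-1). The real relaxation therefore says nothing about the integer optimum.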

  27. August 10, 2010 10:53 pm

    “Can amateurs help make progress on modern problems?”

    Well, occasionally sight may be more valuable than insight!

    As the fable of the Emperor’s non-existent clothes emphasises, sometimes professionals can be wedded to subjectively self-evident assumptions that may obscure what appears obvious to an amateur. The PvNP case may be one such instance.

    In Theorem VII of his famous 1931 paper on formally undecidable arithmetical propositions, Goedel constructed an arithmetical relation P(x0, x1, …, xn) such that:

    (i) For any given natural number sequence (a0, a1, …, an):

    P(a0, a1, …, an) holds if, and only if, a0=f(a1, …, an) holds,

    where f is recursive.

    (ii) For any given finite set of sequences S = {(a0, a1, …, an)}, there is a Turing-machine TM(S) which can verify that if (b0, b1, …, bn) is in S, then:

    P(b0, b1, …, bn) holds if, and only if, b0=f(b1, …, bn) holds.

    Hence P(x0, x1, …, xn) is effectively verifiable a la Edmonds (since f is recursive).

    (iii) There is no Turing machine TM which can verify that, for any given sequence (a0, a1, …, an):

    P(a0, a1, …, an) holds if, and only if, a0=f(a1, …, an) holds.

    Hence P(x0, x1, …, xn) is not effectively computable.

    Is there any reason why we may not conclude that P=/=NP and give Goedel the prize?

    Now, what may not be obvious is that any set-theoretically influenced definition of the PvNP problem, and of SAT, where relations are defined extensionally as mappings, cannot recognise that a recursive relation may be instantiationally equivalent to, but computationally different from, an arithmetical relation where the former is algorithmically decidable over the set N of natural numbers whilst the latter is instantiationally decidable, but not algorithmically decidable, over N.

    Ref: http://alixcomsi.com/27_Resolving_PvNP_ICLA_2011_Update.pdf

    • jj2 permalink
      August 11, 2010 12:51 am

      I think one should not give Gödel the prize before it is clear how this Theorem VII relates to the classes P and NP. In your paper you argue that it does, but this is not obvious and would require checking your paper.

      For me, Theorem VII seems to say that there exist special P and f such that for any finite set of sequences (a0,…,an) one can always construct a TM that can effectively (say, in polynomial time) verify that P(a0,…,an)->a0=f(a1,…,an) and a0=f(a1,…,an)->P(a0,…,an). If there is, e.g., a one-way function, the constructed TM needs to know a finite sequence describing how to check that the one-way function is satisfied for a special value, i.e., it does not need an algorithm for solving the general case. Then there is the statement that no TM can verify these for an arbitrary (a0,…,an). This is not “cannot verify effectively” but “cannot verify at all”, for these special P and f. This seems to be a noncomputability result. How do we replace the special P and f by an NP-hard problem, and how do we change the conclusion “no TM can solve it” to “no TM can solve it in polynomial time”?

      Your paper seems to be clearly written, but having taken a brief look at it, it seems that it would require a long time to check, and I personally cannot do it right now. If you check arxiv:4935v3, then I can try to find time to read your paper at some point. My main worry is that the different definitions (effectively computable, recursive function, etc.) may not give a problem equivalent to P versus NP but one that deals with noncomputability instead.

  28. Quantum Tennis Referree permalink
    August 19, 2010 3:44 am

    One point that I haven’t seen made yet:

    There are at least two types of amateurs and amateur proofs. The first type is where the amateur understands and accepts the premise of the conjecture. S/he understands the definitions, enjoys them, accepts them, and accepts the legitimacy of the question. Such an amateur definitely has something to think about and contribute.

    The second type, instances of which abound on this thread and other blogs, denies the very definitions or the legitimacy of the question. S/he not only does not understand the question or the definitions, but actually denies that they are “correct.” This always surprises me, and I don’t understand where it comes from. I believe that this is one of the standard symptoms of a crank.

    • jj2 permalink
      August 19, 2010 1:49 pm

      If you have taught students, you have probably noticed that many of them do not understand, and some may insist for some time that their incorrect understanding is correct, as it is in a way correct from the point of view of what would be a more reasonable problem to ask. Such behavior does not make a person a crank; it is a sign of not understanding. Some people, like politicians, even continue arguing for their opinion though they must have realized it is wrong. That also does not make them cranks, merely defensive. I think it is positive that people try. You cannot learn without trying.

  29. Jorma Jormakka permalink
    August 30, 2010 5:40 am

    Craig,

    You are still getting comments on your proof, so it is probably not correct. When a proof is correct, the comments stop as well.

    I will explain some personal experiences of why it is hard for an amateur to prove P versus NP or some other famous problem. I am not actually an amateur myself, more a person working in many fields and specialized in problem solving. In math I have a Ph.D., and in two other fields tenure-level professor competence, and quite long experience with hard problems in many areas.

    It is difficult to solve these tough problems, and if you are an amateur, I suggest starting from scratch. Not by studying the field intensively, though you must know something at least, but by studying the particular problem very intensively and very concretely, even if the problem is abstract and people in the field use abstract methods. You need the concrete view and hands-on experience in order to have insight. There is no abstract insight. It is not blind manipulation of rules or seeing whether existing theorems can be combined; that gets you nowhere. There is a working method for doing this, though it is not taught in mathematics departments and is unknown to most experts. Let us assume you solve it.

    If your solution is difficult it will not be read by ordinary mathematicians; they will decline to do it, and it will go to the so-called leading experts. They will take a glance at your paper. If they find an error, fine. If they do not, they will claim that there is an error anyway. None of the ordinary mathematicians dares to object. So, you are out. Therefore, if at all possible, write your proof in an extremely elementary way, so that ordinary mathematicians (good Christians, decent people, good mathematicians) will be comfortable that they can see whether it is right or wrong. And this means it really has to be elementary, like the stuff they teach to beginning students, not the stuff they write themselves. If this is the case, you have some chance of getting the paper published. Ignore the comments that the paper is elementary; it has to be elementary in order to get through. Contrary to what you may think, even the Hodge conjecture and the Yang-Mills fields can be given a perfectly elementary treatment; you just do not notice it so easily, since these fields try to use abstract methods. Let us assume you get it published. This is as hard as solving the problem, or harder, though not intensive work.

    You will not get any natural publicity for the paper from the press. The experts will not answer emails. They will try to make a smear campaign. As an example, google “Supposed proof by Jorma Jormakka of the $1000000 Clay problem” and see that the main opponent, cowgod42, was shown wrong in exactly every case and the proof stands, but he still managed to make a smear campaign. Those people are called trolls, and they do exist; they are not really students but people with a higher education. You see the smear campaign: cranks, crack-pots, etc. It is very typical.

    As the experts do not want to comment unless they find an error, you have to force them to do so. They do not hesitate to state totally false arguments backed by their authority in order to defame the proof. See the answer by Terry Tao at the end of “Why global regularity for Navier-Stokes is hard”. All his arguments are clearly false (we are talking of a peer-reviewed and many-times-checked journal paper; you do not crack it so easily, not even Tao), but he tries to do what is called a psychological operation: twist everything backwards and claim it is so. Probably many will not look at your paper and notice that everything the guru said is flatly wrong. From some other top expert you may get a journal-paper review with obvious errors. It is good if you can collect obviously contradictory reviews from several experts, like one saying it is obviously unique and another saying it is obviously nonunique. Then the ordinary (good) mathematicians are more willing to check the proof. So they may read it and conclude it is correct. The leading experts will probably never answer at all.

    This is the reality. There really is an establishment who will not want to accept your paper, and a large community who does not care; why should they? You want to ask whether everything is as in the ideal world: if there is a correct solution, people are happy to read it. I can say: no, this is not the case. The situation in science is not different from the situation in the press or in banking; all of these fields have an establishment (related?). In conclusion, an amateur may find a solution to P versus NP, but he must have powerful friends if the proof is to be accepted.

    • Craig permalink
      August 30, 2010 9:10 am

      Jorma,

      I read some of your work on the Navier-Stokes problem, claiming that the solution doesn’t have to be unique. Congratulations on this.

      I also looked at your paper on the P vs. NP problem with the subset-sum. I found it difficult and didn’t get to finish it. Hopefully, one day I will.

      I try to learn from all comments. The last comment on this blog that was made about my paper convinced me that I needed to rewrite some parts of my paper to make things clearer. I hope it will be up on arxiv.org tonight.

      You said, “As a conclusion, an amateur may find a solution to P versus NP, but he must have powerful friends if the proof is accepted.” This may be so, but as for the reaction of the math and computer science community to my proof, I can’t change how anyone thinks. In life, results are not what really matter. What matters is that one does his or her best with no regrets.

      • August 30, 2010 9:27 am

        I agree both with the last sentences of Craig and Jorma.

      • Jorma Jormakka permalink
        August 30, 2010 12:03 pm

        In the P!=NP paper, do not start from the beginning. Look at Figure 1 and reconstruct the proof in your mind from that; then read the details. You want n/2 subproblems which all must be checked (by an algorithm working in any way; we do not limit the algorithm, only offer it a problem where all n/2 parts must be solved, since there is no solution in any of them). Make these problems as in the figure: on the left side a worst-case problem of size n/2, then n/2 values from which any solution must select one. This is n/2 · f(n/2) computation time. The problem is that the right-hand side depends on the sum j. Thus, in the next step in Figure 1, fix the lower half of the bits so that there are more than n/2 combinations, so it will take longer by the same argument. Then try to fix the upper half of the bits. The last step is that any fixed n-tuple is anyway faster than the worst case. From this, just think how to fill in the details; they are all written there.

        Actually, you can change how people think by adopting a strategy. I first used an extremely polite approach, only to notice that it does not work. Experts are usually not polite. There is a big risk in adopting an aggressive strategy; thus I strongly advise against it if it is the field that you depend on for your salary. If it is not, go ahead. This is why amateurs have much less risk than professionals. Additionally, the professionals seem to be super-competitive, looking for fame. Somebody like me only does this for fun; it is not so serious, just nice to solve problems that you cannot do in your normal work, more like filling in a lotto coupon and enjoying it. For them it is a very serious competition for a place in the sun.

  30. Jorma Jormakka permalink
    August 30, 2010 12:24 pm

    In my P!=NP proof there is Lemma 1. It may throw some light on why there are so many proof attempts for both P=NP and P!=NP. Usually the subset sum problem is stated without defining the sizes of the numbers. If the numbers grow at most polynomially, there is a polynomial-time algorithm (Lemma 1). If their bit size grows faster than any polynomial, there cannot be a polynomial-time algorithm, since no algorithm can even read the input data in polynomial time. Thus, the growth of the bit sizes must be defined. I fixed it to linear growth, as this seems to be what we usually mean in the case of the subset sum problem. I do not know if this could be the reason for some of the P=NP proofs, but maybe, accepting the definitions used, some of them could be correct, though that would not show P=NP.
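
    For concreteness, here is a minimal sketch (mine, not from the paper) of the standard dynamic-programming algorithm behind the first observation. It runs in time O(nS), where S is the total sum of the numbers, so it is polynomial exactly when the numbers grow at most polynomially in n:

    # Standard pseudo-polynomial subset-sum check: track the set of sums
    # reachable using some subset of the numbers seen so far.
    def subset_sum(numbers, target):
        """Return True iff some subset of `numbers` sums to `target`."""
        reachable = {0}  # the empty subset sums to 0
        for x in numbers:
            reachable |= {s + x for s in reachable}
        return target in reachable

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 9 = 4 + 5
    print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False

    The set of reachable sums never has more than S + 1 elements, which is where the polynomial bound comes from; if instead the bit size of the numbers grows super-polynomially, the same loop can take exponential time and space.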

  31. August 31, 2010 11:33 am

    Here is perhaps one example of an amateur proposing a closure of the P versus NP debate with a very different approach that uses a non-DFS representational scheme. I’m still reeling from the talk I heard by Prof. Narendra Chaudhari on his O(n^13) algorithm for the 3SAT problem. Check out the paper published in the Journal of Indian Academy of Mathematics at http://dcis.uohyd.ernet.in/~wankarcs//index_files/seminar.htm

  32. jj2 permalink
    September 20, 2010 6:11 am

    I earlier wrote that I have very seldom in my academic career met actual cranks. Now I must revise that position: there are cranks on the Internet. They are not the amateurs who try to solve the P versus NP problem; those are nice people who are only taking part in a contest for a million dollars, which is fully understandable. The cranks are the people who, without any mathematical background, go on arguing against published peer-reviewed journal articles. This I do not understand. They seem to have no confidence in peer review, and they write arguments against a paper totally without thinking. It seems quite useless to answer them, as they have very fixed minds; there is no way to convince them. These cranks defend the established scientific order (no results from outsiders) and use all kinds of pseudo-arguments: this paper is not written in LaTeX, this paper has too few references, this is too easy, this is too difficult, this author is unknown to us (= the center of the world, I guess). These cranks are well supported by the eminent experts. Nobody knows why these people should be experts on the topic, since they most likely never made a major attack on the problem (as a top expert would have solved it had he tried long enough). However, they are considered eminent, and they strongly believe it themselves. These experts do not read a paper; they simply write some standard arguments and reject it. I call it arrogance.

  33. jj2 permalink
    September 20, 2010 6:24 am

    About the quality of reviews in theoretical computer science journals: I just read one purported proof of P versus NP that had been rejected by a journal. The review was apparently of such poor quality that the author could not see what was wrong with his argument. It took only 10 minutes to read the paper and to write a report that I hope will show the error clearly enough for the author to understand it. This shows how poor the referees are. They have no personal experience with P versus NP and present generic arguments which, though in a certain sense correct, are not at all helpful to the author. One has to be an expert on the problem, if we are talking of hard problems, in order to be a reviewer. Additionally, one should have enough sense to write sensible reviews. I think the problem of so many unchecked P versus NP proofs is that the field does not meet the standards of good peer review for these articles. It is a sign of illness in the field.

  34. September 28, 2010 8:36 am

    I got to the max-cut problem by chance, and put practically all my experience with it on http://mkatkov.wordpress.com/

    I think I have nothing to add to it, and am therefore quitting the enterprise. I’m not planning to publish it. If you find any useful piece in it, feel free to use it in your work; otherwise it is another piece of garbage.

    Of course, if you have any suggestions how to clarify the presentation, or find some weak point that needs more attention, I’m ready to help.

    All the Best,
    Misha

  35. nobody permalink
    June 19, 2011 10:10 pm

    Let p \in \mathbb{R}[x_1, \dots, x_n] be a real polynomial.

    The partition problem asks whether a given sequence a_1, \dots, a_n of positive integers can be partitioned, i.e., whether x^T a = 0 for some x \in \{\pm 1\}^n. Equivalently, the sequence can be partitioned iff p_{\min} = 0 for the polynomial

    \displaystyle p := 2 \left( \sum_{i=1}^n a_i x_i \right)^2 + \sum_{i=1}^n (x_i^2 - 1)^2.

    The partition problem is NP-complete.

    Let P_{n,m} = \{ p : p(x) \geq 0 \ \forall x \in \mathbb{R}^n \} be the cone of positive semidefinite forms in n variables of degree m. Finding \min p(x) is equivalent to finding the minimum t such that

    \displaystyle p(x,t) = \sum_i x_i^4 + 2\, x^T ( a a^T - I )\, x + t \in P_{n,4},

    where I is the identity matrix.

    Let \Sigma_{n,m} = \{ p : p(x) = \sum_{i=1}^m g_i(x)^2, \ g_i \in \mathbb{R}[x_1, \dots, x_n] \} be the cone of sums of squares. The sums-of-squares (SOS) program, \min t subject to p(x,t) \in \Sigma_{n,4}, is solved in polynomial time to arbitrary precision (the time is logarithmic in the precision).

    For a polynomial p(x_1, \dots, x_n, x) = x^2 f(x_1, \dots, x_n) + 2x\, g(x_1, \dots, x_n) + h(x_1, \dots, x_n), the expression D(p,x) = fh - g^2 \in \mathbb{R}[x_1, \dots, x_n] is called the discriminant of p with respect to x, and p \geq 0 \iff f \geq 0 \land D \geq 0.

    The discriminant of p(x,t) above with respect to the (squared) homogenizing variable is D = t \sum_i x_i^4 - \left( x^T ( a a^T - I )\, x \right)^2, with f = t \geq 0. Since the a_i are positive numbers, all terms in the quadratic form are nonnegative, and therefore D is a diagonal-minus-tail form. POSITIVE SEMIDEFINITE DIAGONAL-MINUS-TAIL FORMS ARE SUMS OF SQUARES; therefore, SOS converges to the global minimum of p.

    The geometry of the problem is to find the point of \{\pm 1\}^n closest to the hyperplane orthogonal to a. For very small values of p_{\min} (the regime limiting SOS/SDP performance), the deviation is determined by the smallest and largest values of a; moreover, the extremal problems are quadratic, so the required precision is not exponentially small.
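
    As a brute-force sanity check of this encoding (my own sketch, exponential and only for tiny n, not part of the argument above): p vanishes at a sign vector exactly when that vector balances the sequence, so searching {+1,-1}^n for a zero of p decides partitionability.

    from itertools import product

    # p(x) = 2*(a . x)^2 + sum_i (x_i^2 - 1)^2. On x in {+1,-1}^n the second
    # term vanishes, so p(x) = 0 there exactly when x^T a = 0, i.e. when x
    # encodes a partition of the sequence a.
    def p(a, x):
        dot = sum(ai * xi for ai, xi in zip(a, x))
        return 2 * dot ** 2 + sum((xi ** 2 - 1) ** 2 for xi in x)

    def partitionable(a):
        return any(p(a, x) == 0 for x in product((1, -1), repeat=len(a)))

    print(partitionable([1, 2, 3]))  # True: {1, 2} against {3}
    print(partitionable([1, 2, 4]))  # False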

  36. Nico permalink
    September 12, 2012 2:05 am

    Advice from the great physicist R. Feynman: “The first principle is that you must not fool yourself, and you are the easiest person to fool.”

  37. Seeftteag permalink
    November 27, 2012 2:51 pm

    http://enimode.com – short dresses
    http://enimode.com – bandage dresses

  38. Giorgio Camerani permalink
    December 17, 2012 4:27 pm

    The following is a quote by Richard Karp, drawn from Bill Gasarch’s famous P vs NP Poll. I really like it:

    … My hunch is that the problem will be solved by a young researcher who is not encumbered by too much conventional wisdom about how to attack the problem.

    He says “young researcher”, however my feeling is that it may turn out to apply as well to an amateur, and for the same reason.

  39. February 8, 2013 7:19 pm

    Another interesting case study as far as “amateurs” go is Balmer, who found the mathematical formula(s) for atomic spectra, a breakthrough that was later incorporated by Bohr into his model of the atom, leading to atomic theory. Balmer was hardly an “amateur” in one sense, having a PhD in mathematics; however, he was not trained or formally educated in physics. Accounts of his finding the formula sometimes refer to him as a “Swiss schoolteacher”, so he was plausibly an amateur in physics.

  40. February 8, 2013 7:22 pm

    So the concept of amateur vs. outsider is something to study. There are probably more “outsiders” (in fact, many cases of this) making serious contributions to science than amateurs.

    This is all glamorized in the fun Hollywood movie Good Will Hunting, which you may have referenced elsewhere in your blog.

    Here is an “outsider” attack on P vs NP by a software engineer.

  41. April 10, 2015 3:23 pm

    Dear Professor, I believe this paper can be the solution of P versus NP:

    https://hal.archives-ouvertes.fr/hal-01139624

    and pdf in

    https://hal.archives-ouvertes.fr/hal-01139624/document

    Indeed, in that work I proved that:

    $POLYLOGTIME \neq NLOGTIME$

    in a trivial way. Next, I proved that:

    If $P = NP$, then $POLYLOGTIME = NLOGTIME$.

    in a trivial way too. From that result we can see $P \neq NP$.

    For that reason I’m quite sure there is a big chance that my proof is correct. Certainly, I’m truly convinced, even though I have failed several times with other attempts.

    Regards.

  42. Carlo Ceccarelli permalink
    August 18, 2015 8:15 pm

    Oops, you forget that Fermat was an amateur…

Trackbacks

  1. Can Amateurs Solve P=NP? « Gödel's Lost Letter and P=NP
  2. Poincare Conjecture
  3. World Wide News Flash
  4. Playing Head-To-Head Texas hold’em « Gödel’s Lost Letter and P=NP
  5. Proving A Proof Is A Proof « Gödel’s Lost Letter and P=NP
  6. A New Million Dollar Prize? « Gödel’s Lost Letter and P=NP
  7. Hedy Lamarr the Inventor « Gödel’s Lost Letter and P=NP
  8. கேள்வியின் நாயகனே… « ப்ளீஸ், ஒரு நிமிஷம் வெயிட் பண்ணுங்க…
  9. El matemático Jorma Jormakka proclama haber resuelto cinco problemas del milenio « Francis (th)E mule Science's News
  10. Thanks For Sharing « Gödel’s Lost Letter and P=NP
  11. In Praise Of P=NP Proofs | Gödel's Lost Letter and P=NP
