# A New Million Dollar Prize?

*Possible new problems to add to the Clay list*

Grigori Perelman, the solver of the three-dimensional Poincaré Conjecture, has declined to accept the first awarded Clay Prize. I may not understand him, but I think he is one of the most colorful mathematicians ever. Perhaps the award should have been offered to both him and Richard Hamilton—recall Hamilton created the Ricci-flow approach used by Perelman. Perhaps together they would have accepted. Oh well.

Today I will talk about what the Clay Mathematics Institute should add to their list to get it back to the magical number *seven*. I do not know if they plan to do this, and I doubt they will ask me for advice even if they do, but let’s discuss what they should add anyway.

Seven is a magical number. One example is the work of George Miller. When I was at Princeton I was honored to meet George—one of the world’s great psychologists. George has done many things, but he is famous for the rule of seven. He discovered that most people can hold only about seven things in their “working memory.” I probably am on the lower end of this bound these days, but his paper is one of the most cited in all of science. Perhaps his rule is one reason the Clay Institute should get back to seven problems.

I once worked with George on a large project, although we never wrote a joint paper together. The large project was called the *Massive Memory Project*: our thesis was that large computer memory was as important as having many processors, sometimes even more important. It was under consideration for funding by NSF, but first we needed to “pass” a site visit. It went well until I was asked a certain hard question by a visitor X. I tried to answer X’s question, but was unable to convince them. Then, George Miller answered why memory was the answer to X’s question:

> The brain uses memory to do this, so I think we should use memory too.

This convinced X. We were soon funded by both NSF and DARPA.

## Problems

What problem should Clay add to their list if they decide to get back to seven problems? Here are seven they may wish to consider.

**Jacobian Conjecture:** I just discussed this here. I think it is one of the great open problems.

**Hilbert’s Tenth Over Rationals:** Suppose f(x_1, …, x_n) is an integer polynomial. The question is: do there exist *rational* numbers a_1, …, a_n so that f(a_1, …, a_n) = 0?

Of course if the a_i have to be integers, then this is in general an undecidable problem. This follows from the beautiful work of Martin Davis, Yuri Matiyasevich, Hilary Putnam, and Julia Robinson on Hilbert’s Tenth Problem. However, when the a_i can be rational numbers the problem is open.
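To make the statement concrete, here is a minimal Python sketch (my own naming, purely illustrative) that searches for rational zeros by brute force over fractions with bounded numerator and denominator. Such a search can only *confirm* solvability when it finds a point—it can never refute it—which is exactly why decidability over the rationals is a real question.

```python
from fractions import Fraction
from itertools import product

def rational_solution(f, nvars, bound):
    """Search for a rational zero of f among points whose coordinates have
    numerator and denominator bounded by `bound`.  This only semi-decides
    solvability: Hilbert's Tenth over Q asks whether any algorithm decides it."""
    rats = {Fraction(p, q) for p in range(-bound, bound + 1)
            for q in range(1, bound + 1)}
    for point in product(rats, repeat=nvars):
        if f(*point) == 0:
            return point
    return None

# x^2 + y^2 = 1 has rational points, e.g. (1, 0) and (3/5, 4/5):
sol = rational_solution(lambda x, y: x * x + y * y - 1, 2, 5)
print(sol)  # some rational point on the unit circle
```

Exact `Fraction` arithmetic avoids any floating-point doubt about whether a candidate is truly a zero.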

**Graph Isomorphism:** The current Clay list does not have any graph theory problems. One that I might suggest is the famous—infamous?—problem of deciding in polynomial-time whether or not two graphs are isomorphic. This problem has some beautiful partial results, but the question is unresolved.
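For contrast with the polynomial-time question, here is a hypothetical brute-force test in Python (function name mine) that tries all n! vertex bijections. The open problem asks whether this factorial search can be replaced by a polynomial-time algorithm.

```python
from itertools import permutations

def are_isomorphic(n, edges1, edges2):
    """Decide isomorphism of two graphs on vertices 0..n-1 by trying
    every vertex bijection -- exponential time, but clearly correct."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    if len(e1) != len(e2):
        return False
    for perm in permutations(range(n)):
        # Relabel the edges of the first graph and compare edge sets.
        if {frozenset((perm[u], perm[v])) for u, v in e1} == e2:
            return True
    return False

# Two differently-labelled triangles are isomorphic; a triangle and a path are not.
print(are_isomorphic(3, [(0, 1), (1, 2), (2, 0)], [(0, 2), (2, 1), (1, 0)]))  # True
print(are_isomorphic(3, [(0, 1), (1, 2), (2, 0)], [(0, 1), (1, 2)]))          # False
```

Note that a claimed bijection is checkable in polynomial time, which is why GI sits in NP.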

**Quantum Computing:** The question I would suggest is: Can quantum computers be simulated in polynomial time by classical ones? This seems to be one of the outstanding open questions in this area. An answer would have many consequences, and would shed light on both physics and computational complexity.

**Goldbach Conjecture:** Recall this simple to state conjecture asks: can every even number greater than 2 be written as the sum of two primes? I think this is one of those—“how can it be open?”—questions I have called mathematical embarrassments. Paul Homer just posted a comment here about this conjecture.
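Part of the embarrassment is how easy the conjecture is to check by machine. A small Python sketch (function names are mine):

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return a pair of primes summing to even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexample among small even numbers (machine checks go far beyond this):
assert all(goldbach_pair(n) is not None for n in range(4, 10001, 2))
print(goldbach_pair(100))  # (3, 97)
```

Of course no amount of finite checking settles the conjecture; that is the whole point.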

**An Explicit Quadratic Boolean Circuit Lower Bound:** The problem is to give an example of a boolean function family f_n so that two things are true:

- The family can be computed in polynomial time by an explicit program.
- The general boolean circuit complexity of f_n grows as n^2.

I selected quadratic growth, but of course even a super-linear bound would be a major result.

**Invariant Subspace Problem:** The invariant subspace problem is the question: “Does every bounded operator on a separable Hilbert space over the complex numbers have a non-trivial closed invariant subspace?” See Terence Tao for a recent discussion of this famous problem.

## Open Problems

Do you like any of my suggestions? What problem do you think should be added to the Clay list?

Frankly, these are nice hard open problems, they may open up new interesting research directions, and I like them, but I think they are not comparable to the Millennium ones. They are not as foundational as the Riemann Hypothesis or P ?= NP.

Actually, if someone can solve the graph isomorphism problem as stated above, then P = NP.

Dimitris,

I do not follow this comment. GI is not known to be NP-complete.

My mistake. Of course you are right.

However, the reverse is true, P=NP would solve graph isomorphism. It seems somewhat strange to have two problems on the list such that solving one can lead to an immediate solution of the other.

Ørjan, Scott Aaronson has said in the past that if one can show P=NP, then they can immediately solve all of the other Millennium Prize problems as well! Something about using a proof-searching algorithm over all reasonable-length proofs, I think? He’s often described P = NP as implying that proof finding is easy. (If I understood and recall correctly!) (I assume this relies on finding not just polynomial-time algorithms, but truly nice ones, as opposed to something like O(n^1000).)

Heck, if you can show P = NP, that should be worth at least $2 million.

Re: quantum computing… So if someone showed that the decision version of factoring is not in P, they could win two prizes?

I vote for graph isomorphism (would the prize be awarded if someone showed GI is in BQP?)

Hehe! To be fair, in this case that is an open problem as stated above!

I actually am against including GI and BQP for exactly this reason. Each problem should be a major open problem that is “representative” of a subfield.

Goldbach is probably my favorite from this list since it has great PR value.

Part of me wants to say something like the Collatz conjecture, since (I thought) the point of the prizes was to encourage interest in mathematics, and Collatz is both easily conveyed and immensely challenging. Though as far as an influential result goes, I suspect it falls flat on its face.

The upside might be that a solution may carry with it a radical new approach, applicable elsewhere… maybe?
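The "easily conveyed" part is hard to overstate; the whole Collatz iteration fits in a few lines of Python (a sketch, with the caveat baked into the comment):

```python
def collatz_steps(n):
    """Count steps for n to reach 1 under the map n -> n/2 (n even), 3n+1 (n odd).
    That this loop halts for *every* positive n is exactly the open conjecture."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# The conjecture holds at least for every starting value up to 10^4:
assert all(collatz_steps(n) >= 0 for n in range(1, 10001))
print(collatz_steps(27))  # 27 famously takes 111 steps
```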

I would like to add graceful tree conjecture.

I think Hadwiger’s conjecture is a nicer graph theory problem than the complexity of isomorphism.

Cody,

I do not think that a proof of P=NP would solve anything else. Assume P=NP, and that the proof even yields a reasonable algorithm, which is of course not required. Then you could search for all proofs of length N in a reasonable time, say N^3. But what is N? The shortest proof of Riemann might be 1,000 pages, making N very, very large.

I think this was Gödel’s argument in his famous (lost?) letter to von Neumann. The argument is that the program could find any proof of reasonable size that mathematicians would be capable of finding. The machine can simply guess the proof and check that it is a proof of the problem.

rjlipton, oops! I inadvertently omitted that caveat.

After posting that comment I actually stumbled across an instance of Scott saying that here.

So I guess he figures that a proof of RH longer than (say) 1000 pages might not be something humans would comprehend? Or want to deal with? I’m not really sure… If the shortest proof of RH in ZFC were a million pages long, would anyone ever “get” it?

I think the proof of the Graph Minor Theorem [Robertson & Seymour] requires (even today, after parts of it have been simplified) around 500 dense journal pages. So a proof a bit over twice the size of this one would be unacceptable?

I, for one, do not much appreciate the “sales pitch” used by complexity theorists saying that P=NP implies we could prove theorems as easily as we could verify them. There is no known axiom system (and probably there cannot be one) which has a single polynomial bounding the proofs of all its theorems.

It seems to me that Dick’s comment expresses an intuition that itself could (might?) be formalized as a new Seventh Problem. From practical experience, we have the intuition that there is a tight connection between algorithms and proofs. E.g., we examine some lines of code, and we conclude “This code is a subroutine that computes the permanent.” Such analysis is of course a completely ordinary task for all engineers, scientists, and mathematicians.

So let us formalize this intuition of “examine code” to become “exhibit a concrete PTIME algorithm ‘E’ whose input is a Turing tape” … and formalize the intuition of “conclude” to become “and whose output is a PSPACE proof” … and formalize “computes the permanent” to “that the computation associated to input Turing tape halts.”

So the net statement is “Exhibit a concrete and provably PTIME algorithm ‘E’ whose input is a Turing tape and whose output is a proof that the computation associated to input Turing tape halts (or else a null output).”

The phrase “or else a null output” is appended because the preceding is just a variant of the Halting Problem (as I understand it) … and so we know that no algorithm E can succeed in general, much less in PTIME.

What we gain by embracing the intuition that solving the Halting Problem is a ubiquitous STEM activity that typically succeeds is the idea that maybe the class P is too big… that maybe we can profitably define a restricted class P_E of Turing inputs that provably halt according to a concretely specified examination algorithm E.

Then we have the natural conjecture: P_E ≠ NP relative to any concrete E.

What good is such a conjecture? Well, if we define the class P’ to be the set of all {P_E} for some E, then we have the natural question: does P’ = P?

Intuitively, and with a focus on engineering problems (including cryptography), we are asking whether the set of problems that can be understandably analyzed with PTIME resources is a good-enough approximation to P. Here the point is that even if problems exist in P that are not in P’, these problems (as several comments suggest) are not obviously of practical interest to PTIME beings like ourselves, because we can neither construct this set of problems on our own, nor understand them if an oracle gives them to us.

I’d better conclude with a triple-strength apologia, in the (very likely) event that these engineering-oriented intuitions are badly stated, already known, or completely wrong. “Dammit Jim, I’m an engineer, not a complexity theorist!” Still, as Saunders Mac Lane said, mathematics “develops by the treatment of many specific problems”, and so even we engineers have a role to play, in pointing to specific problems that arise naturally in engineering (in the present case, from the mundane task of code documentation).

As always, my deep respect and thanks go to Dick, for the effort he puts into a blog that affords wonderful broad-spectrum inspiration to mathematicians, scientists, and engineers.

If we broaden Dick’s quantum computer question by changing one word, from “Can quantum computers be simulated in polynomial time by classical ones?” to “Can quantum processes be simulated in polynomial time by classical ones?” then that is a great question… comparable to the Clay Institute’s present Navier–Stokes (NS) problem. It needs a little bit of sharpening to become a prize question, and the way the NS problem is phrased suggests how to sharpen it. The Clay Institute asks for proofs of the “existence and smoothness of NS solutions” or alternatively proofs of the “breakdown of NS solutions.”

Adopting this idiom, we are led to something like:

“Show that any quantum dynamical system, in contact with a thermal reservoir and relaxing exponentially to equilibrium, in the absence of other external interactions, can be simulated in P, or alternatively, exhibit a quantum dynamical system that provably cannot be so simulated.”

Just like the NS problem, some of the proof technologies for meeting this challenge are tougher to apply than might first appear. One approach might be to prove that an error-corrected quantum computer cannot be simulated, but it might be tough to exhibit an error-correction scheme that is provably compatible with exponential thermal relaxation and with the absence of external interactions.

With regard to broader challenges, I just came back from giving a lecture on “Elements of naturality in quantum simulation” at the 3rd Nano-MRI Conference. To enliven the meeting I constructed a hierarchy of four simulation-related challenges, of which the fourth—and by far the hardest—amounted to an experimental/theoretical broadening of Dick’s challenge, as follows:

“Experimentally demonstrate any oracle device that cannot be simulated in PTIME (that is, with polynomial resources). Alternatively, provide direct experimental evidence that the geometric dynamics of nature’s quantum state-space obstructs such a demonstration, and describe that geometry.”

And here I’d better say that the first half of this challenge was directly inspired by ongoing research by Scott Aaronson and Alex Arkhipov.

Interestingly, the Nano-MRI audience was entirely sanguine about non-Hilbert state-space geometries, for much the same reason that fluid dynamicists are sanguine about non-Newtonian viscosity; the Nano-MRI attendees took it for granted that of course quantum state-spaces are linear only in a coarse-grained approximation.

What most audience members really cared about was not the linearity of Hilbert space, but the (mathematically weaker) condition that fluctuation-dissipation theorems hold, because a great many spin-transport experiments rely on these theorems in analyzing their data. Basically, as long as thermodynamics works, and the dynamics of the system can be linearized by feedback control, then these experimental physicists don’t care much about the geometry of the underlying state-space (unless that geometry is what they’re measuring, e.g., in a GPS experiment).

As for a million-dollar prize … well … the consensus at the 3rd Nano-MRI Conference was that the economic and strategic stakes associated to quantum spin biomicroscopy are very much larger than that, and medically more serious too.

Hmmm… isn’t anybody going to list favorite mathematical challenges that are not well-known classics? Here is the third challenge from the Nano-MRI conference. By construction, the challenge is both mathematically and physically natural… heck, it grew out of one of Dick Lipton’s posts…

**Challenge 3 (Lindbladian naturality):** Construct a complete set of Lindbladian stochastic potentials solely from intrinsic considerations of geometric, dynamical, and informatic naturality (that is, without reference to an embedding Hilbert space) and prove that the associated communication channels are relativistically causal.

Here the idea is to do for Markovian measurement dynamics (that is, Lindbladian processes) what Riemann did for geodesic dynamics. As the accompanying commentary says, “This challenge aims to untether quantum dynamics from linear Hilbert geometry.”

This particular challenge is rooted in a 1999 article by Ashtekar and Schilling, which provocatively asserted “The geometric formulation (of quantum dynamics) shows that the linear structure which is at the forefront in text-book treatments of quantum mechanics is, primarily, only a technical convenience.”

What makes the Lindbladian Naturality Challenge hard (perhaps impossible) is the final clause “… and prove that the associated communication channels are relativistically causal.” This amounts to requiring the construction of a causally consistent relativistic quantum field theory on non-Hilbert state-spaces.

And yet, it sounds kinda easy… why would it be difficult? Heck… almost all large-scale quantum simulation codes already run on non-Hilbert state-spaces. As with the Navier–Stokes equations, we engineers know that our simulation codes work; what’s hard for mathematicians is proving that they always work.

GI and the circuit lower bounds are in some sense subordinate problems to P vs. NP, so it probably wouldn’t make sense to add them to the same pot. (In saying that, I’m taking the stance that GI is actually not in P.) The fact that current lower bound results are so weak reinforces the point made by a commenter on an earlier post that we ought to be focusing on easier class separations than P vs. NP, but I digress…

One problem with math problems is that you might not know how important they are until AFTER people have spent a lot of time on them and found, or didn’t find, connections to other problems.

Here is my candidate (I think it’s already worth $3000, so maybe the Clay institute could just give 1,000,000 − 3000): Show that if A is any set such that sum_{x in A} 1/x DIVERGES, then A contains arbitrarily long arithmetic progressions.

NOTE: Green–Tao proved it for A = PRIMES. Szemerédi proved it for A of positive upper density.

GASARCH
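The conjectured conclusion is at least easy to explore by machine. A Python sketch (my own naming) that hunts for a k-term arithmetic progression of primes, the special case settled by Green–Tao:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, p in enumerate(sieve) if p]

def prime_ap(k, limit):
    """Find a k-term arithmetic progression of primes below `limit`, or None."""
    ps = primes_up_to(limit)
    pset = set(ps)
    for a in ps:
        for d in range(2, (limit - a) // (k - 1) + 1):
            if all(a + i * d in pset for i in range(k)):
                return [a + i * d for i in range(k)]
    return None

print(prime_ap(5, 200))  # [5, 11, 17, 23, 29]
```

Finding long progressions this way gets expensive quickly; the Green–Tao theorem guarantees they exist for every k, but is thoroughly non-constructive about where.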

Well, I will make one more comment on this (great!) question, and conclude with a question.

First the comment. On a planet with six-going-on-ten billion people, it obviously makes sense for the Clay Institute to pose challenge problems that are sufficiently difficult that only one person in a billion has a reasonable chance of solving them. And this high-risk/high-reward strategy is exemplified by the six remaining challenges.

For a new, seventh challenge, the opposite strategy would make sense: pose a mathematical problem so rich that a million mathematicians could productively work on it. Here the idea is to pose a Seventh Challenge that (by design) might productively engage the creative capacity of 1/10,000 of the global population.

The idea of global-scale mathematical engagement is suggested by a literature search for the origins of the concept of “naturality” in mathematics. Among the earliest formal descriptions seems to be that of Eilenberg and Mac Lane, “Natural isomorphisms in the calculus of relations”, J. Symb. Logic 8(1), p. 40 (1943) (if anyone knows an earlier reference explicitly defining mathematical “naturality”, please comment upon it and/or email the reference to me).

Although it is hard to conceive what a million-mathematician challenge might look like, it seems plausible that, to succeed, any such challenge would have to include a pretty substantial broadening of our present notions of mathematical naturality.

So a meta-natural question is this: What are the prevailing notions of mathematical naturality, how did these notions arise, how will they evolve in the 21st century, and what mathematical challenge(s) could/should the Clay Institute propose, that would most vigorously accelerate this evolution, and broaden it to support global-scale enterprises?

I agree that Goldbach’s conjecture is important, but it may not be on the list because of its relation to the RH. I know that the RH proves Goldbach’s weak conjecture, at least.

If I could vote, I think whether gamma (the Euler–Mascheroni constant) is irrational should be on the list.
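Amusingly, gamma is trivial to approximate even though its irrationality is wide open. A quick Python sketch using the defining limit H_n − ln n:

```python
from math import log

def euler_gamma(n):
    """Approximate the Euler-Mascheroni constant as H_n - ln(n).
    The error of this truncation is roughly 1/(2n)."""
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - log(n)

approx = euler_gamma(10 ** 6)
print(round(approx, 6))  # 0.577216
```

Billions of digits of gamma are known; not one proof that infinitely many of them fail to repeat.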

Tim,

What is the weak Goldbach? It is a theorem that all large enough odd numbers are sums of three primes. But this is a theorem. So what is the weak version?

http://en.wikipedia.org/wiki/Goldbach%27s_weak_conjecture

Yes, it is the version stating every odd number greater than 5 is the sum of three primes.

If you think of intermediate problems like graph isomorphism, there’s also a natural problem which is not known to be decidable or undecidable.

Can you decide, given a linear recurrence sequence u_n, whether there exists n such that u_n = 0?

(Sometimes called Skolem’s Problem)

We only know that the problem is decidable for linear recurrence sequences of order less than 5 (and the proof is quite hard), and that it is NP-hard in general…

Proving it’s decidable would need new insight into linear recurrence sequences from mathematicians. There is a lot of work on upper bounds for the number of zeroes of linear recurrence sequences, but not so many results on this particular problem. Proving it’s undecidable may require a terrific encoding of Turing Machines.

Read the survey “Skolem’s Problem – On the Border between Decidability and Undecidability” for more details.
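A naive semi-decision procedure is easy to write: just unroll the recurrence and look for a zero among the first few terms. Whether any procedure is guaranteed to terminate with the right answer is the open problem. A Python sketch (my own naming):

```python
def recurrence_zero(coeffs, init, bound):
    """Search the first `bound` terms of the integer linear recurrence
       u_n = coeffs[0]*u_{n-1} + ... + coeffs[k-1]*u_{n-k}
    (with initial values `init`) for a zero; return its index or None.
    This only semi-decides Skolem's Problem."""
    u = list(init)
    for n in range(bound):
        if n < len(u):
            term = u[n]
        else:
            term = sum(c * u[-i - 1] for i, c in enumerate(coeffs))
            u.append(term)
        if term == 0:
            return n
    return None

# u_n = u_{n-1} - u_{n-2} with u_0 = 1, u_1 = 2 cycles through
# 1, 2, 1, -1, -2, -1, 1, 2, ... and never hits zero:
print(recurrence_zero([1, -1], [1, 2], 100))  # None
# ...whereas starting from 1, 1 gives 1, 1, 0 and hits zero at n = 2:
print(recurrence_zero([1, -1], [1, 1], 100))  # 2
```

The first example shows the difficulty: when no zero appears within the bound, the search alone cannot tell "no zero at all" apart from "no zero yet."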

What a fun post! I think the 7 problems should come from different areas of mathematics, and P=NP already covers complexity theory, so I’d count out graph isomorphism and quantum computing, fascinating though they are. Similarly the Riemann Hypothesis has got prime number theory covered, I’d say.

How about the group extension problem? Or is that just too big? We could always limit it to finite groups.

In model theoretic algebra there’s a fascinating research program aimed at proving the Cherlin-Zilber conjecture, which claims that every abstract group which admits a very simple but robust notion of ‘dimension’ must in fact be an algebraic group over an algebraically closed field.

Or perhaps the best like-for-like replacement would be the Smooth 4-dimensional Poincaré Conjecture.

> It is a theorem that all large enough odd numbers are sums of three primes. But this is a theorem. So what is the weak version?

As I recall, ‘large enough’ is really very big indeed here. So although only finitely many cases need to be verified to prove the conjecture, it’s still considered open because of the size of the numbers involved.

My candidate: Is NL = L?

How were the 7 problems selected in the first place? If the mathematics community was asked to make a list, there must have been such a discussion before. Are there any anecdotes about the problem which made the unfortunate eighth place? :-)

A really interesting article. Thank you very much.

I think Richard is right.

What about the Collatz problem? I think that, in contrast to the Goldbach–Riemann and GI–P/NP pairs, this is something totally different.

I would like to see the Graph Isomorphism Problem added to this list. Graph theory plays an enormous role in innovation and society as everything both abstract and concrete can be represented as a graph. The applications could be endless for such a solution!

A proof that Peano Arithmetic is inconsistent should be a new million dollar problem.

I second the nomination for the Collatz conjecture to receive a large prize. FYI, there are some very striking/promising new results which measure and find a near-termination invariant/strange attractor in “bit density”.

A very interesting question, and probably impossible to answer, is whether there is a super-Turing computational model. In other words, is there any computational procedure that we could implement in the real world but that modern computers and Turing machines cannot compute? Have we reached the limits of computation? This could be stated as an open problem.