Why they can be important
Joe Bloggs is not a computer scientist, at least not one that we know. As noted by Wikipedia, Joe Bloggs is a “placeholder name” like “John Doe” or “Jane Roe” in the US, or “Ashok Kumar” in India. Sometimes real people have or acquire the placeholder names: Ashok Kumar (born Kumudlal Ganguly) was a famous actor in India, whose acting brothers also took the surname “Kumar,” and the “Roe” in Roe v. Wade is Norma McCorvey. We especially like the fact that “Joe Bloggs” came about long before there were blogs. A public restraining order is indeed called an “Ashok Kumar order” in India the way it is called a “John Doe order” in the US.
Today Ken and I consider whether there should be a “John Doe order” against submitting claimed proofs of P = NP or P ≠ NP, and instead point out some reasons that we should look at such attempts by Ashok, Jane, or Joe.
I know this is at variance with the general attitude that such proofs must be wrong, and that looking at them is a complete waste of time. I would like to explain why this is not the case. Ken and I have spent a good deal of time looking at such proofs—they do come in at the yearly rate suggested by Gerhard Woeginger’s P-versus-NP page. While there are attempts and there are attempts, we think that subject to minimal criteria, it is fruitful to look at them.
Let’s explain why.
The proof could be correct. We have discussed many times before that in mathematics people often had valid proofs that were at first rejected. Often they were only later found to be correct after another more mainstream proof was discovered. One of my favorite examples is the proof by Kurt Heegner of the beautiful theorem:
Theorem 1 Let K be an imaginary quadratic field whose ring of integers has class number one. Then K is Q(√−d), where d is one of 1, 2, 3, 7, 11, 19, 43, 67, 163.
Recall that class number one means the ring of integers has unique factorization, that is, it is a unique factorization domain.
Heegner was a radio engineer and published his “proof” in 1952. Few believed it. In 1969 the eminent number theorist Harold Stark solved the problem. To his great credit he went back and re-read Heegner’s proof and realized it was essentially the same as his. It had some minor errors that were easily fixed. See this exposition by Jeremy Booher for more details.
This speaks, however, to the first minimal requirement: The proof should have a short “elevator pitch” as to why it can be correct, quite apart from saying to read the details. What is the new idea that others have not seen or appreciated, and how is it used in the proof? How can it surmount known barriers to the conclusion? This needs to be stated up-front.
The proof could be wrong, but useful. Part of our view at GLL is that we are here to help spread the word about theory and mathematics. We do not guarantee that we will look at every attempt, so do not flood us with them. But we have seen some interesting ideas raised by some of the attempts. My favorite ones that I am unable to completely understand are those that try to use physical arguments to show that if P = NP, then there would be some strange physical consequence.
This speaks to the issue of distinguishing “true efficiency” from “galactic algorithms,” which while technically polynomial have running times with astronomically large exponents, including one seriously mentioned to us in correspondence this year. Thus a second minimal requirement for those on the “equal” side is to state the algorithm and estimate its running time. The algorithm may depend on “nonconstructive” features—as we have covered—but it should always be possible to state it once those features are itemized. Even for the “not equal” side, with some of the physics-based papers this distinction is not clear.
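To make the “galactic” point concrete, here is a small, purely illustrative Python sketch; the exponent 100 is our own choice for illustration, not the exponent mentioned in correspondence. It shows how far out an exponential bound has to go before it finally overtakes such a polynomial:

```python
# A "galactic" algorithm is technically polynomial yet practically
# hopeless. We compare a hypothetical n**100 step count against a
# 2**n step count using Python's exact big-integer arithmetic.

def crossover(poly_exp: int = 100) -> int:
    """Smallest n at which 2**n finally exceeds n**poly_exp."""
    n = 2
    while 2**n <= n**poly_exp:
        n += 1
    return n

n_star = crossover()
# The exponential only overtakes n**100 near n = 1000 -- far beyond
# any input size where the "exponential" algorithm was usable anyway.
print(n_star)
```

So on every input small enough for the exponential algorithm to finish, the “polynomial” one is slower; its polynomiality is visible only at astronomical sizes.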
The proof could be wrong, but interesting. The most important reason, I believe, is that failed proofs have in a number of cases helped create new mathematics. The best example of this is probably the work on extended formulations of polytopes. This notion was created by Mihalis Yannakakis in an attempt to show that a whole class of proof structures for P = NP must fail.
Yannakakis was motivated by the attempted proofs by Ted Swart in 1985–87 that P = NP. Swart tried to give linear programming formulations of polynomial size for the Hamiltonian cycle problem. This would have proved that P = NP, since linear programming is in P and the Hamiltonian cycle problem is NP-hard. Yannakakis showed that Swart’s approach was doomed because it created a kind of symmetric extended formulation of the feasibility polytopes for the standard formulation of the Traveling Salesman problem. He proved that all such symmetric extensions of the TSP polytope must have exponentially many constraints, thus ruling out the programs Swart was trying to create.
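For readers new to the notion, here is the standard definition in brief, as a sketch in our own notation rather than a quotation from Yannakakis’s paper:

```latex
% An extension of a polytope P in R^n is a polyhedron Q in a higher
% dimension that projects linearly onto P:
\[
P = \pi(Q), \qquad Q \subseteq \mathbb{R}^{n+k},
\quad \pi\colon \mathbb{R}^{n+k} \to \mathbb{R}^{n} \ \text{linear}.
\]
% The extension complexity of P is the least number of facets
% (inequality constraints) over all such Q. Yannakakis's theorem says
% that every *symmetric* extension Q of the TSP polytope on n cities
% needs exponentially many facets:
\[
\#\{\text{facets of } Q\} \;\geq\; 2^{\Omega(n)}.
\]
```

The point is that Q may be much simpler than P itself, which is what Swart was hoping for; the lower bound closes that door for symmetric Q.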
I am sure that Swart was greatly disappointed that he did not solve P versus NP. And I doubt he takes much consolation in the fact that his work helped create the entire field of extended formulations, but it did just that. Without Swart would we have seen this work done? Would it have been created anyway eventually, or would we be living in a world without extended formulation theorems? I would assert that at a minimum this area would have taken many more years to emerge, if it emerged at all. Thanks Ted.
Yannakakis’s work has in this decade been actively pursued by Sebastian Pokutta and his colleagues. They began by removing the “symmetric” condition on the kind of formulations ruled out, and then liberalized the kinds of extensions. Their work has greatly extended the range of “no-go” theorems, but quite apart from this negative purpose, it has created a lot of beautiful mathematics. Moreover their work links several areas: it began with some new thoughts in quantum communication complexity, and has greatly increased our understanding of the structure of polytopes. Ken and I are still struggling to grasp its exact reach.
Thus our third minimal requirement is that the proof should engage neighboring areas of theory. As we have said before, this should include deriving some lesser consequences, short of shooting the moon. If you’ve proved “equal,” is it easier to see why factoring has a sub-exponential time algorithm? If “not equal,” can you say why SAT does not have a linear time algorithm? And why must the first blush be separating P from NP rather than some easier pair of classes? If either side, what about the mechanics of separating other related complexity classes? These questions are addressed in a recent paper by Harry Buhrman and friends and a survey by Ryan Williams, both of which we aim to cover when time allows.
What do you think? Is there a possible proof out there that we will avoid reading that is correct, or useful, or contains some interesting new ideas?
Ideas from algebraic geometry and arithmetic complexity
Hyman Bass is a professor of both mathematics and mathematics education at the University of Michigan, after a long and storied career at Columbia University. He was one of the first generation of mathematicians to investigate K-theory, and gave what is now the recognized definition of the first generation of K-groups, that is, K_0(R) and K_1(R) for a ring R. He co-founded with Jean-Pierre Serre a theory of graphs of groups, that is, mappings associating a group to each vertex and edge of a graph. He is a past President of the American Mathematical Society—and exemplifies the idea that the AMS promotes mathematics at all levels: since 1996 he has worked on elementary school math education with his Michigan colleague Deborah Ball.
Can chess statistics help design multiple-choice exams?
UT Dallas source
Bruce Pandolfini is one of very few living people who have been played by an Oscar-winning actor. He was the real-life chess teacher of junior chess player Joshua Waitzkin, who went on to become the world champion—in T’ai Chi. Their story is told in the movie “Searching For Bobby Fischer” with Sir Ben Kingsley, which is still iconic after 20-plus years. Pandolfini is still doing what he loves as an active chess teacher in New York City. For much of this time he has also written a popular feature called “Solitaire Chess” for Chess Life magazine, which is published by the United States Chess Federation.
Today Dick and I wish to compare styles of multiple-choice exams, with reference to “Solitaire Chess,” and have some fun as well.
Cases where we can count objects in polynomial time
New Interim Dean source—our congrats
Mike Sipser is one of the top theorists, especially in complexity theory, and has been Head of the Mathematics Department at MIT since 2004. He is long known for some very clever proofs and his famous survey paper on the status of the P versus NP question. He is recently known for co-authoring the paper that introduced adiabatic quantum computing, which is the kind being given physical realization by the company D-Wave Systems. But he is perhaps best known nowadays for his textbook Introduction to the Theory of Computation. I have used the earlier version many times for teaching our undergraduate class; I have not used the third edition, mainly because I teach other things these days.
Today Ken and I wish to talk about a topic that is covered in his book, finite state automata, in relation to counting.
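As a warm-up, here is a minimal Python sketch of the basic fact that counting the strings a finite automaton accepts is a polynomial-time task, even though the strings themselves are exponentially many. The particular DFA is our own illustrative choice, not one from Sipser’s book:

```python
# Counting the length-n strings accepted by a DFA via dynamic
# programming: track, for each state, how many strings of each
# length drive the machine from the start state to that state.

def count_accepted(n, states, start, accepting, delta):
    """Number of length-n binary strings accepted by the given DFA."""
    # ways[q] = number of length-i strings taking the DFA from start to q
    ways = {q: 0 for q in states}
    ways[start] = 1
    for _ in range(n):
        nxt = {q: 0 for q in states}
        for q, cnt in ways.items():
            for symbol in (0, 1):
                nxt[delta[(q, symbol)]] += cnt
        ways = nxt
    return sum(ways[q] for q in accepting)

# Example DFA: state 0 = even number of 1s seen so far, state 1 = odd.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(count_accepted(4, [0, 1], 0, [0], delta))  # 8 of the 16 strings
```

The loop does O(n · |states| · |alphabet|) work, versus the 2^n strings a brute-force enumeration would have to test.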
It’s that time of year again
Altered from NPR source.
Faadosly Polir is back in contact with us. We have encountered him before, but this time he has company. He has teamed up with two well-known radio personalities to launch a quiz show about mathematics.
Today Ken and I wish to talk about material for the show that Faadosly has sent us.
A famous algorithm, a new paper, a full correctness proof
Vijay Vazirani is one of the world experts on algorithmic theory, and is especially known for his masterful book on approximation algorithms. Among many results in computational complexity, two have been Wikified: his theorem with Les Valiant on unique SAT, and the subsequent generalized Isolation Lemma with Ketan Mulmuley and his brother Umesh Vazirani. Lately his focus has been more on computational aspects of auctions, economic systems, and game theory.
Today Ken and I wish to discuss a recent paper by Vijay on matching.
On Rabin’s recent talks at Tech
Jeffrey Shallit is a computational number theorist, with many wonderful results. He is also well known for his work as an advocate for civil liberties on the Internet, and also against intelligent design (ID). Indeed he was at one point slated to testify in the 2005 Dover case until a stand-down by the ID side accomplished his purpose. Ken once used his survey with Wesley Elsberry in a graduate seminar on cellular automata and various forms of complexity, not to say anything about ID, but just as a source of well-written definitions and relevant examples.
Today I would like to talk about Michael Rabin’s talks at Tech this week and their connection to Jeff.