# Make Your Own World

*What would you do if you could do anything?*

Ian Munro is a world expert on data structures and algorithms—I have discussed some of his great results previously here.

Today Ken and I wish to examine a fantasy world where you can shape the answers to **the** questions: is P equal to NP? Is NP equal to co-NP? Is P equal to PSPACE? And so on.

Munro has a wonderful sense of humor; it has always been a delight to work with him. One of his crazy ideas was: *what* makes a problem NP-complete? In the early seventies one of the main types of papers was the kind that had theorems of the form:

Theorem: It is NP-complete to determine whether a given object has a given property.

Program committees started to tire of such results, so eventually only really *hard* theorems of this kind were accepted, and finally almost no such papers were accepted. One of the few that was accepted, at FOCS 1996, is a beautiful result due to Michael Rabin:

Theorem: Computing the number of zeroes modulo a prime p of a linear recurrence sequence over the integers modulo p is an NP-hard problem.

Note that this makes sense because the values of the recurrence are drawn from a finite set: there are only finitely many possible values.
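To make the objects in the theorem concrete, here is a toy brute-force illustration, not Rabin's hardness construction: the Fibonacci-style recurrence taken modulo a prime, with the zeros counted over one full period. The function name and parameters are mine, for illustration only.

```python
# Toy illustration of a linear recurrence over the integers mod p:
# count how many terms x_n are zero within one full period of the
# sequence x_{n+2} = a*x_{n+1} + b*x_n (mod p). This is brute force,
# purely to show the setup; the hard problem is doing this in general.

def zeros_in_period(p, a=1, b=1, x0=0, x1=1):
    """Count n with x_n == 0 (mod p) over one period of the recurrence."""
    start = (x0, x1)
    x, y = x0, x1
    zeros = 0
    length = 0
    while True:
        if x == 0:
            zeros += 1
        x, y = y, (a * y + b * x) % p
        length += 1
        if (x, y) == start:  # the pair of consecutive values repeats
            break
    return zeros, length  # (number of zeros, period length)

# Fibonacci mod 7: period 16, zeros at n = 0 and n = 8.
print(zeros_in_period(7))  # → (2, 16)
```

For the default Fibonacci parameters the period is the Pisano period of p, so the loop always terminates.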

Nevertheless, these papers have never seemed to yield a simple answer to Munro’s question: can we identify a property of the *problem* that *makes* it NP-complete? It is true that all known NP-complete sets are *polynomial-time isomorphic*, so in a sense the answer is the same for all these problems. But that doesn’t give us a meta-method for looking at a concrete given problem and telling that it is NP-complete, which would have simplified the issue for all these papers.

**A Fantasy World**

Munro’s feeling on the lack of a simple answer led him to speculate that complexity gives a convincing illusion that mathematical truth is participatory. He told me that every problem is either NP-complete or in P—well, most problems. If the first researcher who worked on the problem tried to show that it was hard, then it became NP-complete; if the first tried to find an algorithm, then it was found to be in P. A strange, but cool, theory.

Let’s modify Ian’s world just a bit: complexity is so participatory that *you* can just decide what answers you want for all of the open questions. You can design your own world: what would you do? Would you keep all the classes that we think are different actually different? Or would you collapse lots of classes? Here are two different views from Ken and myself.

**Ken**

I, Ken, would make the choices that maximize the need for a creative response by humanity. With regard to Russell Impagliazzo’s Five Worlds, I take this to be “Cryptomania.” This means not only that P is not equal to NP, but also that public-key cryptography is possible, with security in a heavy preponderance of cases. However, this would not come via the hardness of factoring, but by dint of elliptic curves or polynomial ideal theory.

The polynomial hierarchy would be distinct for the first few levels, but I find it plausible to make it collapse at level three, with that level closed under complement. There are other class relations I would fix by design as well. I could go into further detail about how *succinctness* and *computational depth* play into the hardness of problems, but suffice it now to say that enough should be solvable to “get by with a little help from our friends.” Beyond that, per ardua ad astra—to the stars through struggle.

For recent news on Impagliazzo’s Worlds, see this.

**Dick**

I, Dick, would make the choices that maximize the need for a creative response by humanity. I would make P equal to PSPACE. How can I claim that this would “maximize the need for a creative response,” as Ken’s choices also would? I will explain.

In this world complexity theory would be immensely simpler. The famous Complexity Zoo would suddenly discover that many of the “animals” it carefully keeps in different “cages” were actually exactly the same. Cryptography would be in big trouble, quantum computing would be gone, and the upheaval would be gigantic.

This unexpected collapse would cause a huge upheaval—famous results would be gone, for example. The collapse would require deep new ideas: how could we make security work? Possibly we would have to turn to physical assumptions, or perhaps use space limitations or other approaches to security.

**Open Problems**

What would you do?

Have a poly time algorithm which solves the halting problem!!

Hmmm… I can’t decide if the world would be a nicer place for ‘me’ if there were some complete underlying deterministic order to it, or not. On the one hand, setting P = PSPACE would place a nice consistency over everything. We could build computer systems to deterministically consolidate our knowledge, but I suspect that as our understanding grows it might start to get boring after a while, or at least very predictable. On the other, setting P != NP we’d have to spend forever and a day looking for heuristic ways to build up our collective knowledge. A never-ending fascinating quest with lots of ups and downs.

Still, since I’m getting older and in this response it is my choice, I think I’ll go with P = NP (but not PSPACE, in my world TIME and SPACE are orthogonal :-). Since it is also my choice, I’d give the answer to me and collect $200 (then retire to Park Place or a beach).

Paul.

“Have a poly time algorithm which solves the halting problem!!”

Argh, scooped! I wanted to write that!

Okay, in that case, I’d make it NP-hard to determine copyright ownership. In fact I’d make it NP-hard to even CLAIM copyright ownership.

I think your replacement idea was even more awesome. “Maximize creative response” indeed.

I’d make P vs NP be independent of ZFC, make P != PSPACE, BPP != P, UGC false, one-way functions possible, but outside of, say, DTIME(n^5).

🙂

s.

I’d make NC = P. My multi-core computers would immediately become more useful without precipitating the Rise of the Machines (I hope).

This is what annoys me the most: “Program committees started to tire of such results,…”

Rather than talk about “journal editors” and “journal editorial boards”, you talk about program committees. No, I am not blaming Dick or Ken; rather I blame the entire TCS community for promoting the conference mode of publication, not the journal mode.

Why not get rid of conference proceedings altogether? Let the authors attend conferences, present their work, then publish in journals? Okay, I’m happy with special issues of journals devoted to a certain conference. But let’s do away altogether with this “conference proceedings” trash. Let’s do what the mathematicians have always done.

I should mention that already in 1981, it was impressed on me at Oxford that merely proving that some natural, maybe not-yet-considered problems are NP-complete was not the stuff of a dissertation. Unless it were a problem with a really long pedigree, that is. So the feeling was more than just program committees.

In particle physics in the 1970s “just another meson” papers became boring … in biology in the 1990s “yet another gene” papers became boring … in complexity theory in the 2000s (?) “yet another oracle-dependent class separation” papers became boring … this process of refocusing and repurposing occurs in all math/science/engineering disciplines. Only in deplorably stagnant fields does any STEM professional do all their career what they learned to do as a student; program committees perform (imperfectly to be sure) the vital service of requiring this professional growth. In consequence, the inexorable trend in all STEM disciplines is toward greater universality and naturality … mainly because there seemingly is no alternative means (or is there?) to make sense of the flood of mundane workmanlike literature.

I would make L=NL. THe world would suffer a dramatic change =P

Unfortunately, we (TCS community) still feel us growing too fast to be archived by journals. See a related post by Lance and subsequent discussions. Most annoying for me is NOT that people announce their results in proceedings: it is always nice to see the “hot news” immediately. Annoying is that they (proceedings) often have no complete proofs, mostly only sketches, and that PC members (and their helpers) have physically no time to check even these sketches. We just mix “announcement” with “publishing”. And this doesn’t seem to change in the next 10-20 years.

Does Ian live in Australia?

No, just having fun

dick

I would make P=NP and the algorithm for SAT (and other NPC problems) actually usable.

(The title of the paper: A barrier to proving P \ne NP.)

The BENEFIT to humanity of fast algorithms for lots of problems we actually want to solve would, I think, far outweigh the fact that many crypto systems would be crackable.

There might be a VERY ROUGH transition period where people have to QUICKLY switch to some other kind of crypto system.

This comment confuses me.

a) I’m not aware of any NP-complete problems in use in “many crypto systems”. Factoring can be phrased as a decision problem in NP, sure, but knowing that P=NP would not make it any easier to perform, as far as I can see.

b) Assume that there is such a problem in use in “many crypto systems”. Why does the decision problem being solvable in polynomial time mean that many crypto systems are “crackable”?

P just means polynomial time. There are infinitely many problems in P that do not live in DTIME(n^k), for any fixed k, no? Once k gets even moderately large, the *practical* distinction between P and NP seems meaningless to me.

Is there some well-known algorithm to parallelize a problem in P that is linear in k, for any problem in DTIME(n^k)? If not, I’m not sure why “the world would be a totally different place” statements are so common if P=NP.

steve
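Regarding point (b) above, a standard reply is that an NP decision procedure yields a search procedure via self-reducibility. Here is a small sketch; the brute-force `sat_decide` merely stands in for the hypothetical polynomial-time SAT decider that P = NP would supply, and all names here are mine, for illustration.

```python
# Search-to-decision self-reduction for SAT: given ANY decider for
# satisfiability, a satisfying assignment can be recovered with only
# n extra decision queries, by fixing variables one at a time.

from itertools import product

def sat_decide(clauses, n, fixed):
    """True if some assignment extending `fixed` satisfies all clauses.
    Clauses are lists of nonzero ints: +i means variable i, -i its negation.
    (Brute force here; imagine a poly-time decider under P = NP.)"""
    free = [i for i in range(1, n + 1) if i not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = dict(fixed)
        assign.update(zip(free, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def sat_search(clauses, n):
    """Build a satisfying assignment using only decision queries."""
    if not sat_decide(clauses, n, {}):
        return None
    fixed = {}
    for i in range(1, n + 1):
        fixed[i] = True
        if not sat_decide(clauses, n, fixed):
            fixed[i] = False  # True was wrong, so False must still work
    return fixed

# (x1 or x2) and (not x1 or x3) and (not x2)
print(sat_search([[1, 2], [-1, 3], [-2]], 3))  # → {1: True, 2: False, 3: True}
```

So if the decision problem were in P, finding the witness (cracking the key, in the crypto reading) would be in P too.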

Actually, now I just confused myself.

Assume P=NP.

Then for some k, SAT \in DTIME(n^k).

However, for all k, there exists L \in DTIME(n^{k+1}) \ DTIME(n^k).

So there are problems in P harder than SAT.

(so far, so good)

“and the algorithm for SAT (and other NPC problems) actually usable”

but this means that there exist NPC problems in DTIME(n^j), where j is arbitrarily larger than k, violating the “actually usable” clause.

no?

steve
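For reference, the step used above, that for every k there is a language in DTIME(n^{k+1}) but not DTIME(n^k), is an instance of the deterministic time hierarchy theorem:

```latex
% Deterministic time hierarchy theorem (Hartmanis–Stearns, with the
% standard log-factor refinement), for time-constructible g:
\[
  f(n)\log f(n) = o\bigl(g(n)\bigr)
  \;\Longrightarrow\;
  \mathrm{DTIME}\bigl(f(n)\bigr) \subsetneq \mathrm{DTIME}\bigl(g(n)\bigr).
\]
% In particular, taking f(n) = n^k and g(n) = n^{k+1} gives
% DTIME(n^k) \subsetneq DTIME(n^{k+1}) for every k >= 1,
% so P contains strictly harder and harder problems.
```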

I’m not fussy, so I’m prepared to allow P=NP or P!=NP. However, there are conditions attached. If P=NP I insist that the algorithm that shows this is rather opaque, so that although we can in principle find proofs automatically, provided they are not too long, the proofs we find are unenlightening. So the activity of mathematicians continues, but with a focus on *understanding* rather than on *guaranteeing truth*. (Some would say that is already to a large extent the focus. Well, in my P=NP world it would be the entire focus.)

If on the other hand P!=NP, then the price I ask is more or less the opposite: that we come to understand far better the subclass of mathematical statements and proofs we are actually interested in, and as a result of that understanding we develop very good automatic theorem provers that can rival and eventually surpass human mathematicians.

These two worlds can be given a unified description. It is often said that if P=NP then it would show that mathematical creativity can be automated. I would like a world where exactly one of the statements “P=NP” and “mathematical creativity can be automated” is true.

How do you see this relating back to artificial intelligence? Is there a connection?

I don’t see all that strong a connection between the P versus NP problem and artificial intelligence. However, in common with many people, I think that the challenge of creating a program that can solve interesting mathematics problems is an excellent toy version of the general challenge of artificial intelligence.

“I think that the challenge of creating a program that can solve interesting mathematics problems is an excellent toy version of the general challenge of artificial intelligence.”

If the program can solve any “interesting mathematics” doesn’t this imply that mathematics ability is a subset of intelligence? Or is it that there is always some aspect of mathematics that would remain beyond this toy program?

If these two dream worlds eventually turned out to be equivalent, it would explain why it’s so hard to know in which one we’re living.

No cheekiness intended here.

“Mathematical creativity can be automated” is untrue by even very strict definitions of “mathematical creativity”. How could this relate to the question P?=NP? I don’t see it at all.

It seems as if people are pretty worked up about the potential gap between the two classes, even though P is enormous, and P contains many infeasible problems. I’m also confused about how this is related to mathematical creativity. I thought that Goedel settled this around 80 years ago.

The question of the equality of the classes is interesting and tricky, but seems essentially unrelated to the usefulness of a particular algorithm (say, the optimal time algorithm for SAT) or the ability of mathematicians to do interesting things that computers seem unable to do.

steve

I even wonder, if P was equal to NP, whether computers would still work. No one-way function means no irreversible process… and maybe no past and no future at all! It is perhaps the mere fact that P!=NP that gives rise to an arrow of time…

If P = NP you still couldn’t solve NxN chess in less than exponential time, so I think the arrow of time is safe… at least for chess computers :).

… unless the very notion of a practical computation – not the Turing machines, they’re nothing but a language – relied on the existence of irreversible processes. Your chess program would be fine, but what if no real processor could run it?

I had this very weird notion recently that one could separate ‘change’ from ‘time’. That is, spatial coordinates could change without time changing. Worm holes, teleportation, spooky action at a distance, etc. But perhaps what you’re suggesting is that there must always be some movement on the temporal side (even if it is tiny), if there is movement on the physical side?

I just can’t imagine a notion of movement without time… 🙂

My feeling is that the apparent impossibility of solving our problem is due to very strong causes from physics. It’s like the water that fish are swimming in – they just can’t see it. If our problem were related to the very notion of time, this would explain why we’re so blind when trying to solve it…

I’m convinced that Turing machines are not the final word on the definition of computing. We’ve always been good at defining what a language is, but quite poor at defining what we call a processor. As of today, complexity theory looks very much like a computer science without computers!

To answer your question more precisely: what makes us know that an event belongs to the past? Our memory. An event is short-term but our memories are (more or less) long-term: it is much easier to write something in our brain than to erase it! But this becomes no longer true if there aren’t any irreversible processes. Recall that P=NP implies that there is no one-way function…
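The claim invoked above, that P = NP leaves no one-way functions, can be sketched by noting that inverting any polynomial-time function is an NP search problem (this is the standard argument, stated informally):

```latex
% For a polynomial-time computable f, define the prefix-inversion language
\[
  L_f = \{\, (y, x') \;:\; \exists z \ \text{such that } f(x'z) = y \,\}.
\]
% L_f is in NP, since the suffix z is a witness. If P = NP, then L_f is
% decidable in polynomial time, and a preimage of y can be recovered bit
% by bit with polynomially many queries to that decider, just as a SAT
% decider yields a satisfying assignment. Hence every efficiently
% computable f can be inverted in polynomial time, and no one-way
% function exists.
```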

I think you have to be careful here because, as a fish in the bowl, it’s easy to assume that everything outside the bowl is contained in water as well. Our perception of our memories is tied to time, but that’s just because we see the world that way. That could be the way it is, or it could just be the way it is for us …

I share the opinion that trying to exploit the world where P=PSPACE would be very interesting. One thing that bothers me a little in the analysis of this and similar “worlds” is the mixture of mathematical consequences and sociological consequences. (I admit, though, that the borderline is a bit vague.) Also there is a hidden “asymptotic” approach to mathematical theorems. This “asymptotic approach” tells us that, e.g., if we build a beautiful intellectual structure based on the existence of measurable cardinals and later prove that these cardinals do not exist, then the whole theory we built was useless. (What is correct is that the large body of research based on measurable cardinals lowers, somewhat, the possibility that they can be proved not to exist. And very, very rarely do we see large mathematical theories of this magnitude collapse.)

A similar thing can be said about the question of whether factoring is in P. (I tend to think that the answer is no, but realize that Dick Lipton and Peter Sarnak tend to think that the answer is yes.) The assumption that factoring is hard is the basis for a beautiful theory and for technological advances with huge impact. The indisputable fact is that factoring is difficult *now*, and much of the fruits of assuming the hardness of factoring rely just or mainly on that.

This has been my all-time favorite *Gödel’s Lost Letter and P=NP* post. Not only was Dick’s post great, but so were the answers. IMHO, there are enough good ideas associated to Dick’s column to fill a book … even many books … and by a happy circumstance, Gil Kalai has a wonderful *MathOverflow* question titled “A Book That You Would Like To Write.”

And so, here are (fictitious) MathSciNet reviews of the three books that these fine posts and comments made me want to write … or still better, made me hope that someone *else* would write. Especially I wish to express my appreciation and high regard of Tim Gowers’ comments, which very largely inspired the themes of all three books … it’s clear that his comments were constructed with considerable care and thought.

I would accept the world as it seems to be – namely one where P!=NP – but I would try to find the physical causes that prevent us from proving this. For me, P vs NP is related to the quantity of information that may be obtained by means of a computer process and with the energy that is required to build this information.

What if the Kolmogorov complexity of the input and output data of processes was similar to some kind of mass? What if computer objects behaved just like physical objects? After all, when one speaks of the *weight* of some computer file, what do they mean? That this file seems to possess some kind of inertia…

Regarding Paul Homer’s 9/8/11 post:

My guess is that “Time” is the universal x-axis. The y-axis can be anything, including the number of dimensions in effect at any given time.