# NP too big or P too small?

*Circuit upper bounds can solve P=NP*

Leonid Levin is a computer scientist who independently proved essentially the same theorem as Stephen Cook, both in 1971. This was during the height of the Cold War, and the flow of information between Russia and the West was not very smooth, so it took a while for the theory community to understand what Levin had done. Levin is also famous, in my opinion, because his advisor was perhaps one of the most dominant mathematicians of the century: Andrey Kolmogorov. Kolmogorov solved long-standing open problems as well as created entire new fields of mathematics. This is a powerful and unique combination.

## Too Big or Too Small?

One approach to proving $P \neq NP$ is to show that SAT requires super-polynomial sized circuits. By a beautiful theorem of John Savage, every problem in P has polynomial size circuits. Therefore, if SAT had super-polynomial circuit complexity, then SAT could not lie in P, and since SAT is in NP the classes would be separated. Thus, a lower bound on circuits can yield a proof that P is not equal to NP.
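To make "polynomial size circuit" concrete, here is a minimal sketch (a toy representation of my own choosing, not Savage's actual construction): for each input length $n$, a circuit is a fixed straight-line list of AND/OR/NOT gates, and its size is simply the gate count.

```python
# Toy illustration (my own representation, not from Savage's proof):
# a Boolean circuit is a straight-line list of gates, each reading
# earlier wires. The "size" of the circuit is len(gates).

def eval_circuit(gates, inputs):
    """Evaluate a circuit on a fixed-length 0/1 input.

    Wires 0..len(inputs)-1 hold the inputs; each gate appends one wire.
    The circuit's output is the last wire.
    """
    wires = list(inputs)
    for op, i, j in gates:
        if op == "AND":
            wires.append(wires[i] & wires[j])
        elif op == "OR":
            wires.append(wires[i] | wires[j])
        elif op == "NOT":
            wires.append(1 - wires[i])  # j is ignored for NOT
        else:
            raise ValueError(f"unknown gate {op}")
    return wires[-1]

# A size-4 circuit for XOR on 2 inputs: (x OR y) AND NOT (x AND y).
# Wires: 0=x, 1=y, 2=x|y, 3=x&y, 4=NOT(wire 3), 5=output.
xor_gates = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3, None), ("AND", 2, 4)]

assert [eval_circuit(xor_gates, (x, y)) for x, y in
        [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

Note that, unlike an algorithm, the circuit is allowed to be completely different for every input length $n$; this non-uniformity is why circuit lower bounds are not automatically time lower bounds.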

This is the approach that many have tried to follow: somehow show that SAT's circuit complexity is large. So far the progress in this direction is zero. I believe that no one can even prove that SAT requires circuits of size at least $10n$, where $n$ is the number of bits in the encoding of the problem. Note, there are some artificial problems that have been shown to require circuits of size at least $cn$ for small constants $c > 1$. However, they are not SAT.

There is an alternative approach to P=NP. Recall the story of "Goldilocks and the Three Bears": she finds that one bed is too small and another is too big, but finally one is just right. Well, there is a corresponding approach to our favorite problem.

*Theorem: Suppose that all problems in P have circuit complexity at most $n^{c}$ where $c$ is a universal constant. Then, $P \neq NP$.*

I have known this theorem for a long time, but do not know who first proved it. The proof of the theorem is the following: If $P \neq NP$, then we are done. So assume that $P = NP$. Now by a pretty result of Ravi Kannan (which we will cover in another post) NP must have a problem with circuit complexity at least $n^{c+1}$. But this problem is then in P, so for large enough $n$ this contradicts the assumed $n^{c}$ bound. This completes the proof of the theorem.
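The step through Kannan's result can be spelled out slightly more; this is my sketch of how the standard argument fills the gap, since the post defers Kannan's theorem to another discussion:

```latex
\begin{proof}[Sketch]
If $P \neq NP$ we are done, so assume $P = NP$. Then the polynomial
hierarchy collapses, so a language $L$ given by Kannan's theorem, with
circuit complexity at least $n^{c+1}$, lies in $NP = P$. By hypothesis
every problem in $P$ has circuits of size at most $n^{c}$, so for all
large enough $n$,
\[
  n^{c+1} \;\le\; \text{(circuit size of $L$ at length $n$)} \;\le\; n^{c},
\]
which is absurd. Hence $P \neq NP$.
\end{proof}
```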

The cool part of this argument, I believe, is that it shows there are two ways that P and NP could be different. One is that NP is "too big", i.e. it has no circuits of size $n^{c}$ for any fixed $c$. Of course this is where the conventional wisdom (CW) is betting. Most would agree that SAT must have big circuits. Yet there is no evidence for this. None. In my opinion it is completely possible that this could be false.

The other possibility is that P is "too small". In this case P has circuits of size $n^{c}$ for a fixed $c$, and so it is much smaller than NP. Of course, the CW does not believe this is possible. I think it could be the case. Circuits are very powerful. They can do strange things that algorithms cannot. A classic example of this is the famous result of Len Adleman on the de-randomization of probabilistic algorithms by circuits. So I think it is possible that this could be the way that P versus NP gets resolved. But who knows.
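Adleman's result is that randomized polynomial-time algorithms have polynomial-size circuits ($BPP \subseteq P/\mathrm{poly}$); the heart of it is a union bound. This is the standard sketch, not spelled out in the post: amplify the algorithm $A$ by repetition until its error on any fixed input is below $2^{-n}$, and then, over the choice of random string $r$,

```latex
\[
  \Pr_{r}\bigl[\exists\, x \in \{0,1\}^{n} : A(x, r) \text{ errs}\bigr]
  \;\le\; \sum_{x \in \{0,1\}^{n}} \Pr_{r}\bigl[A(x, r) \text{ errs}\bigr]
  \;<\; 2^{n} \cdot 2^{-n} \;=\; 1.
\]
```

So some fixed $r_{n}$ errs on no input of length $n$; hardwiring $r_{n}$ into the circuit for $A$ gives a polynomial-size circuit for each $n$. The circuit gets to "cheat" by absorbing a good random string that no uniform algorithm is known to find.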

## Open Problems

While there is currently no evidence whether NP is too "big" or P is too "small", Levin surprised me one day while we chatted about this at a conference. He remarked that his advisor, Kolmogorov, believed that P had circuits of size $O(n)$, i.e. linear. (Of course Kolmogorov stated his ideas in a different way, since P, for example, was not defined explicitly back then.) Linear is as small as circuits can be, of course. Levin insists that the great man had really thought about this. While Kolmogorov had no concrete results, he was convinced that linear was the right answer. I have only Leonid's word for this, but it certainly made an impression on me.

Clearly, there are several obvious open problems here. One possibly tractable one is to prove that SAT itself has circuit complexity at least $10n$. While this has no major consequence, it seems to be open and might shed some light. Another set of questions concerns trying to improve Savage's theorem. His famous theorem proves that if a Turing machine runs in time $T(n)$, then for all $n$ there is a circuit of size $O(T(n)^{2})$ that solves the problem on inputs of length $n$. This was improved long ago to $O(T(n)\log T(n))$. Can we prove anything better? If Kolmogorov is right, then it should be possible to vastly improve this bound.
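As a worked instance of these simulation bounds (my numbers, purely for illustration): a problem solvable in time $T(n) = n^{2}$ gets, for each input length $n$, a circuit of size $O(n^{4})$ from Savage's theorem, improved to $O(n^{2}\log n)$ by the later simulation:

```latex
\[
  \mathrm{TIME}\bigl(T(n)\bigr) \subseteq \mathrm{SIZE}\bigl(O(T(n)^{2})\bigr)
  \quad\text{(Savage)}, \qquad
  \mathrm{TIME}\bigl(T(n)\bigr) \subseteq \mathrm{SIZE}\bigl(O(T(n)\log T(n))\bigr).
\]
```

Kolmogorov's belief would amount to the far stronger $\mathrm{TIME}(n^{k}) \subseteq \mathrm{SIZE}(O(n))$ for every fixed $k$, which no known simulation comes close to.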


[He remarked that his advisor, Kolmogorov, believed that P had circuits of size $O(n)$, i.e. linear.]

I don’t understand – if you could have a linear circuit for problems in P, couldn’t you simulate that linear circuit with a Turing machine in $O(n)$ time, which would break the time hierarchy theorem?

No.

How do you find the circuit?

Right, I understand now. Thanks!

However, if you can’t efficiently create the circuit, it should be pretty hard to prove its existence.

Are there any methods that could work here beside the probabilistic method/the pigeonhole principle? Has the former actually been cleverly used in complexity theory?

Are there, maybe, even bright ideas to make diagonalization work to prove the existence of something?

$NC \neq P$ seems to imply that circuits of size $n^{c}$ for some fixed $c \in \mathbb{N}$ may not be possible!

So if every problem $f$ in P has circuit complexity bounded by $Cn$, then P is not equal to NP.

But what happens if the complexity grows linearly, but not uniformly among the problems in P? In other words, what happens if every problem $f$ has circuit complexity bounded by $C(f)\,n$, where the constant now depends on the problem? (I reckon this is a version of the Kolmogorov conjecture that I have seen in several places, for example in a paper of Williams.) I am wondering if this version of linear complexity also implies $P \neq NP$.