
Things We Did Not Know How to Compute

December 31, 2023


Artificial Intelligence and P=NP

Enio Moraes is the Product and Engineering Director of Semantix in Sao Paulo, Brazil. Semantix provides AI platforms for businesses. He recently blogged on LinkedIn on what P=NP would say about AI.

Today we pose the converse question: how do this past year’s advances in AI influence our thinking about P versus NP?

Among numerous advances in both computer science theory and artificial intelligence this year, examples combining both pique our interest most. In a result we could have included in a roundup last year, Google DeepMind’s AlphaTensor network discovered a straight-line program to multiply two {4 \times 4} matrices using 47 multiplications, two better than the 49 achieved by Volker Strassen’s 1969 algorithm. Last June, DeepMind’s new AlphaDev code-generation platform discovered concrete time-savers in sorting code. There was also news of ChatGPT creativity in building chemical reactions. We mentioned AlphaFold three years ago.
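To see what AlphaTensor improved on, here is a minimal sketch (ours, not AlphaTensor's tensor decomposition) of Strassen's base case: multiplying {2 \times 2} matrices with 7 multiplications instead of the naive 8. Applied recursively to the four {2 \times 2} blocks of a {4 \times 4} matrix, it uses {7 \times 7 = 49} multiplications, the count AlphaTensor reduced to 47.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (nested lists) using Strassen's
    seven products m1..m7 instead of the naive eight."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Reassemble the product from the seven terms
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Since the seven products involve no commutativity of the entries, the same identities apply when the entries are themselves matrix blocks, which is what makes the recursion (and AlphaTensor's search over such identities) work.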

There have also been hints of AI discovering new mathematical results. We were underwhelmed by a July 2 New York Times article titled “A.I. Is Coming For Mathematics, Too.” Last week, Scientific American noted that another DeepMind product called FunSearch, which has a similar code-writing approach to AlphaDev, proved some concrete cases of extremal-combinatorics formulas in higher instances of the card game Set.

Think Different

What we really crave are cases where AI formulates and proves a mathematical statement that was not thought of before. Perhaps the analogous task for algorithms is easier to discuss:

When will a large AI system develop a conceptually new algorithm for something that we did not previously know how to compute—at least not so well?

The algorithm has to be more than better base cases for recursions or code-rewriting tricks.

A further contrast enables us to mention another 2023 news item: Oded Regev designed a new tradeoff between quantum circuit size and number of samples in Peter Shor’s factoring algorithm. He reduces the circuit size for factoring {n}-digit numbers from {O(n^2)} to {O(n^{3/2})}, on pain of using order-{n^{3/2}} rather than {O(n)} qubits overall. The average number of samples needed climbs by an offsetting factor of {\sqrt{n}} and the time for the remaining classical phases is undisturbed. Really neat.

Our point of contrast applies to Shor’s original algorithm: Although the power of quantum methods was stupendously novel, the strategy of factoring {N} via finding the period of {a^x \pmod{N}} was something we already knew how to execute. We just needed an efficient plug-in for inferring the period.
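That classical strategy can be sketched in a few lines. In this illustration of ours, the period is found by brute force where Shor's algorithm would plug in quantum phase estimation; the small inputs in the usage are our own example.

```python
from math import gcd

def factor_from_period(N, a):
    """Classical shell of Shor's strategy: find the period r of
    a^x mod N (by brute force here, where the quantum plug-in would
    go), then recover a nontrivial factor of N. Assumes gcd(a, N) = 1."""
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    # The period is only useful when r is even and a^(r/2) != -1 mod N
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # unlucky choice of a; try another
    return gcd(pow(a, r // 2) - 1, N)
```

For example, with N = 15 and a = 7 the period is r = 4, and gcd(7^2 - 1, 15) = 3 gives a factor. Everything except the period-finding loop runs in classical polynomial time.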

P=NP Implies What About AI?

This kind of plug-in could become legion if P=NP. Most theorists believe that P=NP is false. But it would have a huge effect on AI. Simply put, it would mean that—in principle—we could solve almost any difficult problem whose answers are verifiable. Thus AI would be immensely powerful.
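"Verifiable" here has the precise NP meaning: a claimed solution can be checked in time polynomial in the input size, even when finding one seems hard. A minimal sketch for SAT, with the DIMACS-style clause encoding and the example formula being our own illustration:

```python
def verify_sat(clauses, assignment):
    """Check a proposed satisfying assignment for a CNF formula.
    Clauses are lists of nonzero ints (DIMACS style): literal v means
    variable v is true, -v means variable v is false. The check runs
    in time linear in the formula size -- which is what puts SAT in NP."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)
```

If P=NP, then any problem whose solutions pass such a polynomial-time check could also be solved in polynomial time, which is the engine behind Moraes's list of consequences quoted below.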

Here is how Moraes tells it from his AI perspective:

The resolution of the P=NP problem could have far-reaching implications for the field of AI. If P equals NP, it would imply that every problem with a verified solution could be solved efficiently. This would revolutionize AI by enabling faster computations, enhanced optimization algorithms, and breakthroughs in pattern recognition, machine learning, and data analysis. Many currently intractable challenges could be addressed effectively, accelerating progress in various AI applications.

To be sure, this is the forward implication P=NP {\implies \dots} about AI, where most of us believe the antecedent is false. We want implications of the form AI {\implies} what about P versus NP. Of course, we can just contrapose the first implication. But Ken and I think there is more to it.

AI Implies What About P=NP?

The great problem—does P=NP or not—might be determined by CS’s new big idea—which is true general AI. Ken and I believe that they are related in ways that may be surprising—that go beyond powerful plug-ins for algorithmic ideas that are already humanly conceived. Here is our main hypothetical question looking ahead:

Suppose that sometime in the next five years, a large general AI finds an algorithmic breakthrough. It could be, for instance, an {O(n^2 m)} time algorithm for isomorphism of graphs of {n} nodes and {m} edges, either classical or quantum. Should we take that as evidence supporting P=NP?

Because graph isomorphism is an intermediate problem, the fact of such an algorithm does not imply P=NP. P=NP does imply that graph isomorphism is in P, but need not yield such an algorithm with low polynomial running time (and reasonable constant under the {O}).
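What puts graph isomorphism in NP is just the ease of checking a certificate: given a candidate vertex mapping, verifying it takes time linear in the graph size, even though no polynomial-time method for finding the mapping is known. A minimal sketch, with the edge-list encoding and example graphs being our own illustration:

```python
def verify_isomorphism(edges_G, edges_H, pi):
    """Check that pi (a dict mapping vertices of G to vertices of H)
    is a graph isomorphism: a bijection carrying the edge set of G
    exactly onto the edge set of H. Runs in O(n + m) time."""
    if len(set(pi.values())) != len(pi):
        return False  # pi must be injective on the vertex set
    mapped = {frozenset((pi[u], pi[v])) for u, v in edges_G}
    return mapped == {frozenset(e) for e in edges_H}
```

The gap between this trivial verification and the best known isomorphism-finding algorithms is exactly the kind of gap an AI-discovered {O(n^2 m)} algorithm would close.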

The corresponding idea regarding AI discovering new math results is that we can imagine a proof that is hard to find but feasible to understand conceptually after the fact of its creation. Combining the “algorithm” and “proof” strands of what Ken and I imagine leads to a refinement of our question above:

Should we regard such an AI advance as a further indication—besides the known barriers—that P{<}NP will be practically impossible to prove?

The issue is that no methods seem available to prove that some problems in NP have no polynomial-time algorithm. On the other hand, we might be able to show that a problem in NP can be solved in polynomial time. The bound on the time could be gigantic and hopeless from a practical viewpoint. But even a high-order polynomial algorithm might evade any foundational obstacle that a lower-bound proof would have to defeat.

Simply put, the fact of P{<}NP is a limit of imagination. A display of unprecedented imagination by general AI should make the prospect of proving such a limit recede.

Do the “Five AI Worlds” Collide with the “Five Complexity Worlds”?

Earlier this year, Scott Aaronson and Boaz Barak proclaimed “Five Worlds of AI.” This is an express reference to Russell Impagliazzo’s long-famous five worlds of how P=NP? and cryptographically-relevant principles stronger than P{<}NP might shake out.

Their article, however, makes no reference to P=NP once the worlds are introduced. Well, the first world they consider, which they call “AI Fizzle,” seems to entail P{<}NP by the contraposed principle above.

The other four worlds break according to whether AI enhances or upheaves present human institutions and whether the outcome is for good or ill. In all of them, AI is powerful. Our reprised questions then become:

How strongly do the other four worlds correlate with P=NP, or at least with cratered likelihood of being able to prove P{<}NP?

AI may or may not come to work better than we can foresee. But whatever power it gains seems bound to favor better algorithms rather than proofs of lower bounds.

Open Problems

Thus, the key issue is: Say we agree that AI is going to change the way that systems are able to operate in the future. Could it be that AI in the future will allow us to find algorithms for NP problems that are beyond all of us humans? Is that a possible future?

Update 12/31/23: Hat-tip to commenter Jim Hefferon for drawing attention to a highly detailed assessment of AlphaFold by Michael Nielsen, penned last May. Nielsen also compares to AlphaZero’s advances in chess and other games.


[formatting improved and some word typos fixed after premature posting click; added mention of AlphaFold and link to Nielsen survey.]

9 Comments
  1. James O'Brian permalink
    December 31, 2023 5:33 am

    AI proving anything is a very unlikely prospect. The reason for this is that it lacks consciousness. For example, a dog or horse has consciousness, but lacks intelligence, if intelligence is defined as the ability to do abstract reasoning and follow (complicated) chains of logic. (In fact this is also a dubious claim, since if dogs and horses could speak, maybe they could amaze us by demonstrating this ability also). Arguably though, AI lacks true intelligence as well, because consciousness is also a prerequisite for this. So really, what is AI doing then, if anything at all? Well, Elon Musk had it right at the recently held AI conference in Britain when he said that AI is basically a large csv (comma separated values) file (of sorts). Essentially, AI is a complicated program acting as a model, or simulacrum, for the outputs of the human brain we typically associate with intelligence, like essay-writing, holding a conversation, or doing some basic calculations. Think of it like this – a human being has consciousness and intelligence; AI has neither. It provides a simulacrum of the latter, without ever having the former. There is more prospect of dogs learning to speak than there ever is of AI becoming “conscious”. This whole field in essence all goes back to the particle/wave duality, the double-slit experiment, and the fact of a computer lacking, for want of a better word, an animating spirit – which let’s say is a prerequisite for consciousness, not the other way round. Human life is truly special; the double-slit experiment proves that our act of conscious observation of the universe changes it in some magnificent way. Hence, in truth we live in a moral universe – only spirited beings gifted with consciousness and self-awareness possess true intelligence.
    No amount of ML-this or AI-that will ever change that basic fact, and the truth is, it took billions of years to give the illusion to human beings and all living creatures that they acquired consciousness of their own accord – so effortless and joyful is the act of living for all creatures – nothing short of at least several millennia will ever create something conscious – in ‘raw’ intelligence yes, arguably AI has already surpassed us, but intelligence without consciousness is like bread without dough.

  2. December 31, 2023 6:06 am

    Thank you for the very interesting post. A wonderful New Year’s read.

    Could it be that AI in the future will allow us to find algorithms for NP problems that are beyond all of us humans?

    I thought it a worthwhile addition to point to Michael Nielsen’s post https://michaelnotebook.com/mc2023/index.html. He says, “The biggest success of AI in science so far is the AlphaFold 2 system. This is a deep learning system which has made large strides on a fundamental scientific problem: how to predict the 3-dimensional structure of a protein from the sequence of amino acids making up that protein.”

    I have seen this problem cited as an example of an NP-complete problem that would have tremendous consequences if we came up with an algorithm. What he writes about is not a solution in the same sense but sometimes that’s how things work, that progress takes a path a little bit around instead of straight through.

    Perhaps this is another example, like SAT solvers, where as you folks have asked, “Has the Specter of NP-Hardness Faded”?

    • December 31, 2023 8:41 pm

      Thanks very much—especially for the Nielsen link! I thought of AlphaFold, forgot it, then maybe thought it was over the one-year horizon. Nielsen seems to go only thru 2020, so answering the rest of your questions with respect to AlphaFold may best await the next development in 2024.

  3. Frank Vega permalink
    December 31, 2023 6:44 am

    I think an easier question would be: “Is AI capable of verifying math proofs?”. I think that is an easier task than solving any NP-complete problem. However, we are maybe far from this threshold. What could it take to reach this barrier (months or a century?): your answer could be as good as mine. Right now, what comes to mind while I think about it: the Incompleteness Theorem, or maybe the Halting Problem.

    For example, during these holiday days I tested (with the help of others) some of these well-known and current programs, and they weren’t able to detect any flaw in my current two drafts, which both purport to solve the Riemann hypothesis:

    https://www.researchgate.net/publication/376607859_New_Criterion_for_the_Riemann_Hypothesis

    and

    https://www.researchgate.net/publication/376956490_On_Dedekind_Function_for_the_Riemann_Hypothesis

    No flaw or mistake was found. Actually it was worse, since they could not even describe my attempts well (there were math flaws in their descriptions). However, they can easily describe a recently published math article like mine:

    https://link.springer.com/article/10.1007/s11139-022-00574-4

    Now, as far as I know, it was useful (at least for me) for finding math references about well-known theories in published papers. Personally, I think this is the beginning of a long journey, and maybe it is too soon and risky to predict whether AI can solve the P vs NP question or other crucial problems!

    • 1stein permalink
      January 2, 2024 8:16 pm

      Have you ever thought of using Coq or Lean for the verification of your papers?

      On Terry Tao’s blog he discusses the use of Lean

  4. Frank Vega permalink
    December 31, 2023 5:12 pm

    I tried to say that the AI didn’t recognize any flaw in the following two papers:
    New Criterion for the Riemann Hypothesis (Proof of Riemann hypothesis):
    https://www.researchgate.net/publication/376607859_New_Criterion_for_the_Riemann_Hypothesis
    NP on Logarithmic Space (Possible Proof of P versus NP):
    https://www.researchgate.net/publication/370426542_NP_on_Logarithmic_Space
    Actually I suggest AI should focus on checking new math proofs as a first step before trying to produce new ones itself! 🙂

  5. Zachary Yezek permalink
    January 8, 2024 12:33 pm

    If AIs start finding P algorithms for intermediate problems like Graph Isomorphism, then I’d interpret it as growing statistical evidence for P=NP.

    It’s akin to how checking more and more zeros of the Riemann Zeta function made mathematicians confident that the Riemann Hypothesis is ‘probably’ true. Sure, without a hard proof you can always get things like Skewes’ number, where the first counterexample is some gigantic number, but those cases stand out precisely because they’re relatively rare. If AIs start shifting more and more problems like GI to P, then it should be seen as empirical evidence for the collapse of the hierarchy. Something that conceptually isn’t expected to occur UNLESS P=NP.

    I do think that AIs will only solve P vs. NP directly if P=NP. Mostly because P=NP allows for a constructive proof of the kind a computer could find: The AI finds a potential polynomial time algorithm for 3-SAT (or a similar NP-complete problem). Proving that P!=NP is a qualitatively harder job, and might even be impossible in principle (an undecidable problem). Because you’re fundamentally trying to prove a negative (a P algorithm for NP-complete problems can’t exist), over sets and spaces of possible algorithms that are not very friendly to reason about and possibly infinitely large. About the simplest proof of P!=NP that could exist in principle is a proof by contradiction, and those are still substantially more complex pieces of logic than simple constructive proofs. Which is likely to require a correspondingly more advanced AI to find.

    So, if P=NP we should see (in roughly chronological order):
    1) A couple longstanding problems like Graph Isomorphism get reduced to P
    2) An AI-assisted proof that removes or bypasses a longstanding lower time bound on an NP problem, OR
    3) A new AI-discovered algorithm that solves nearly all instances of an NP problem in P time, with the implication that the remaining outliers form a finite set (e.g. can’t prove for n=8, or cases where the graph has a special finite subgraph)
    4) Building on 2 and 3, somebody covers the remaining cases and defines a P algorithm for an NP problem

    Whereas I’d expect, for P!=NP, the AIs to run into the same brick walls the humans have:
    1) AIs attack problems like Graph Isomorphism but make little improvement on existing results, even after several generations of AI improvement
    2) AIs find NEW lower bounds on NP problems that prove equally hard over time as the existing ones.

    And so on.

  6. January 10, 2024 11:40 pm

    Our work on the Entangled Gauge calculator https://gaugecalculator.com/ indicates that AGI needs a new mathematical framework that should be equivalent to the Yang-Mills theory, and then it can solve the 3SAT problem in polynomial time. The entanglement of the natural numbers in space is a hidden variable that has not been paid enough attention.

Trackbacks

  1. Does Pi Conspire? | Gödel's Lost Letter and P=NP
