# Transcomputational Problems

*Problems beyond brute force search*

Cropped from Wikipedia source

Hans-Joachim Bremermann was a mathematician and biophysicist. He is famous for a limit on computation, Bremermann’s limit, which is the maximum computational speed of a self-contained system in the material universe.

Today Ken and I wish to talk about the limit and why it is not a limit.

A transcomputational problem is a problem that requires processing more than 10^93 bits of information. The exponent comes from Earth-scale considerations, but adding fewer than 30 to it breaks the scale of the known universe. Our friends at Wikipedia say:

> This number is, according to Bremermann, the total number of bits processed by a hypothetical computer the size of the Earth within a time period equal to the estimated age of the Earth. The term transcomputational was coined by Bremermann.
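As a back-of-the-envelope check (round figures and my own arithmetic, not Bremermann's exact derivation), his limit of about 1.36×10^50 bits per second per kilogram, applied to an Earth-mass computer running for roughly the age of the Earth, lands within rounding of the oft-quoted 10^93:

```python
import math

# Rough check of the transcomputational threshold (round figures;
# this is a sketch, not Bremermann's exact derivation).
BITS_PER_SEC_PER_KG = 1.36e50     # Bremermann's limit, derived from mc^2/h
EARTH_MASS_KG = 5.97e24           # approximate mass of the Earth
EARTH_AGE_SEC = 1e10 * 3.15e7     # ~10^10 years, in seconds

total_bits = BITS_PER_SEC_PER_KG * EARTH_MASS_KG * EARTH_AGE_SEC
# A bit above 10^92, conventionally rounded up to 10^93.
print(f"about 10^{math.log10(total_bits):.1f} bits")
```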

## Pro

What is interesting is that he thought about “transcomputational” problems in 1962, almost a decade before the P=NP problem was stated. See his paper for details.

He noted back then that certain problems were beyond any reasonable brute-force search. In his own words:

> The experiences of various groups who work on problem solving, theorem proving and pattern recognition all seem to point in the same direction: These problems are tough. There does not seem to be a royal road or a simple method which at one stroke will solve all our problems. My discussion of ultimate limitations on the speed and amount of data processing may be summarized like this: Problems involving vast numbers of possibilities will not be solved by sheer data processing quantity. We must look for quality, for refinements, for tricks, for every ingenuity that we can think of. Computers faster than those of today will be a great help. We will need them. However, when we are concerned with problems in principle, present day computers are about as fast as they ever will be. We may expect that the technology of data processing will proceed step by step—just as ordinary technology has done. There is an unlimited challenge for ingenuity applied to specific problems. There is also an unending need for general notions and theories to organize the myriad details.

Quite insightful for a paper that dates a decade before Cook-Karp-Levin on P=NP. It also predates the limits associated with Jacob Bekenstein’s bound on the information capacity of a finite region of space and with Rolf Landauer’s principle.

One wonders what might have happened if Bremermann’s paper had been better known in our theory community. Ken notes that the Russian theory community in the 1960s highlighted the question of *perebor*—brute-force search. But he senses the emphasis was on problems for which it could be *necessary* in the abstract, rather than tied to Earth-scale considerations like Bremermann’s.

## Con

Of course there are several eventualities that he missed. One is quantum computation—I believe all his calculations assume a classical view of computation. There are several other points that we can raise to attempt to beat his “limit.”

*Change the algorithms*: His limit could be applied to computing primality, for example. The brute-force method is hopeless for even modest-sized numbers. Yet we know methods that are much better than brute force, and so we can easily beat his limit.
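A minimal sketch of the contrast (my own illustration, not from the post): trial division costs on the order of the square root of the number, while a randomized Miller–Rabin test runs in time polynomial in the number of *digits*, so numbers far beyond any brute-force reach are tested instantly.

```python
import random

def is_prime_trial(n):
    # Brute force: try every possible divisor up to sqrt(n).
    # Cost grows like sqrt(n) -- hopeless for numbers of hundreds of digits.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_miller_rabin(n, rounds=20):
    # Randomized Miller-Rabin test: polynomial in the number of digits of n.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation: the fast step
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True

# A 27-digit Mersenne prime: instant here, infeasible for trial division.
print(is_prime_miller_rabin(2**89 - 1))  # True
```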

Steve Wozniak visited Buffalo yesterday as a UB Distinguished Speaker. In a small group attended by Ken, he told his standard anecdote about the first program he ever wrote. This was to solve the Knight’s Tour problem on a standard 8×8 chessboard. He first coded a brute-force solution trying all knight moves at each step, but realized before he hit “run” that it would take an astronomical number of years. This awakened him, he said, to the fact that good algorithms have to go hand-in-hand with good hardware.
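The arithmetic behind the anecdote is easy to redo (these are my own illustrative figures, not Wozniak's): a knight has at most 8 moves from any square, a full tour is 63 moves, and even a billion positions per second barely dents the naive search tree.

```python
# Rough upper-bound arithmetic for a naive knight's-tour search
# (illustrative figures, not Wozniak's own numbers).
BRANCHING = 8                    # a knight has at most 8 moves from any square
TOUR_LENGTH = 63                 # 64 squares -> 63 moves
positions = BRANCHING ** TOUR_LENGTH   # naive search-tree size, ~10^56

OPS_PER_SEC = 1e9                # a generous modern machine
SECONDS_PER_YEAR = 3.15e7
years = positions / OPS_PER_SEC / SECONDS_PER_YEAR
print(f"~10^{len(str(positions)) - 1} positions, ~{years:.0e} years")
```

Real brute-force solvers prune far better than this bound suggests, but the point stands: the raw tree dwarfs any hardware budget.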

*Change the answers*: Another method is to change what we consider as an answer. Approximation algorithms of course are one important example: allow the answer to be near the optimal one. This has opened the floodgates to increase the class of problems that we can solve.
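A classic illustration of this idea (my example, not one from the post) is the textbook 2-approximation for minimum vertex cover: repeatedly take any uncovered edge and add *both* its endpoints.

```python
def vertex_cover_2approx(edges):
    # Matching-based 2-approximation: repeatedly pick an uncovered edge
    # and take BOTH endpoints. The chosen edges form a matching, and any
    # optimal cover must contain at least one endpoint of each matched
    # edge, so this cover is at most twice the optimal size.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 5-cycle: the optimal cover has 3 vertices; we accept up to 6.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # it is a cover
print(sorted(cover))
```

Finding the exact optimum is NP-hard, but settling for "within a factor of 2" makes the problem linear-time.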

*Change the problems*: Another method is to change the problems that we attack. In many cases we can avoid general problems and exploit special structure of a problem. Examples that come to mind include: replace dense matrices by sparse ones; replace arbitrary graphs by planar ones or those with restricted minors; and replace data analysis of arbitrary data sets by analysis of data that is generated with specific noise, like Gaussian.
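The sparse-matrix case can be sketched in a few lines (my own illustration): storing only the nonzeros makes a matrix-vector product cost proportional to the nonzero count rather than to n².

```python
def sparse_matvec(entries, x):
    # entries: {(row, col): value} holding only the nonzero cells of a
    # square matrix. Cost is O(number of nonzeros), not O(n^2) as for
    # a dense representation.
    y = [0.0] * len(x)
    for (i, j), a in entries.items():
        y[i] += a * x[j]
    return y

# A 4x4 matrix stored with 6 nonzeros instead of 16 cells.
A = {(0, 0): 2.0, (0, 1): -1.0, (1, 1): 2.0,
     (2, 2): 2.0, (2, 3): -1.0, (3, 3): 2.0}
print(sparse_matvec(A, [1.0, 1.0, 1.0, 1.0]))  # [1.0, 2.0, 1.0, 2.0]
```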

*Change the world*: We have posted about the idea of a world without true randomness, presenting Leonid Levin’s proof that SAT is nearly polynomially solvable in such a world. That post offered the weaker idea that every efficient generator of SAT instances might be solved by Levin’s algorithm on all but finitely many instances. The finite bound might be huge, but the fact of Levin’s algorithm would carry weight: it would solve everything else based solely on the principle that “nothing succeeds like success.” We can put this idea in concrete terms like Bremermann’s:

Could we live in a world where the act of creating an instance that requires processing more than 10^93 bits of information itself requires processing more than 10^93 bits of information?

We note that Larry Stockmeyer proved that every Boolean circuit capable of deciding a certain class of logical formulas that fit into 8 lines of 80-column ASCII text must have more gates than atoms in the observable universe. But this does not rule out a little algorithm solving every such formula that we could generate—unless we spend the time to cycle through every legal formula that fits into 640 characters.
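Just how hopeless that cycling would be is quick arithmetic (my own rough count, ignoring which strings are legal formulas): with roughly 95 printable ASCII characters and 640 positions, the raw string space alone is astronomically beyond the transcomputational threshold.

```python
import math

# How many candidate strings fit in 8 lines of 80-column ASCII?
# Rough count: 95 printable ASCII characters, 640 positions.
# (Only a tiny fraction are legal formulas, but even that fraction
# of 10^1266 is far beyond any exhaustive cycle.)
CHARS = 95
LENGTH = 640
count_digits = LENGTH * math.log10(CHARS)
print(f"~10^{count_digits:.0f} strings")
```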

## Open Problems

Are there realistic limits on computation of the type that Bremermann postulated? What are the right limits in light of today’s insights into computation?

This fine *Gödel’s Lost Letter* essay encompasses themes of complexity-theoretic mystery and madness that are inherent in two great intellectual pursuits: chess and statistical mechanics. To appreciate this, we reflect as follows.

Modern computer-chess and statistical mechanics alike are grounded in large databases of micro-theorems (e.g., in chess position X, move Y is legal; in system state X, increment Y is Hamiltonian). Each micro-theorem is in itself entirely rigorous, and even a cell-phone nowadays can crank out billions of them.

For scientists, chess-players, and entrepreneurs alike, the chief opportunity to create value is to distill large numbers of micro-theorems into a small number of high-reliability macro-postulates (e.g., white to play and win, drug-design Z will bind strongly, macroscopic entropy never decreases, scalable quantum computing is/isn’t feasible).

In recent decades humanity’s *empirical* capabilities in extracting high-reliability macro-postulates have been improving exponentially; this ongoing improvement is Moore’s Law broadly conceived, and it is among the dominant features of the 21st-century STEAM enterprise. Rigor has been harder to achieve than empirical success, and the obstructions to rigor are maddening, as David Goodstein famously observed.

*Conclusion*: If the 20th-century study of chess-play and statistical mechanics has proved to be maddening yet hugely productive — especially the vexing transition from micro-theorems to macro-postulates — then how much *more* productively maddening will be the 21st-century study of computational complexity and quantum computing/simulation?

“One wonders what might have happened if Bremermann’s paper had been better known in our theory community”

The theory community would have realized that the P=NP identity is the Maxwell’s demon paradox.

Surely, Maxwell’s demon was clever enough to turn any thermal disorder into a quantum computer… 🙂

Exactly, Serge: Maxwell’s demon is a quantum computer.

Maxwell’s demon is an involution of controlled-NOT gates.

On the quantum scale, there’s no outstanding time variable (according to loop quantum gravity). A time variable can only emerge as an average on thermal disorder. As an argument, recall that the only known irreversible phenomena must involve heat in one way or another. So, on our macroscopic scale, complexity is nothing but a struggle against the flow of time. But Maxwell’s demon lives on the microscopic scale, which allows it to run a quantum process (contrary to us poor humans).

Thank you very much Alexandre for giving me this idea. I really think it settles the P=NP puzzle!

Mathematics ignores time, and P=NP does hold in this timeless science. Nothing in the Peano axioms stipulates that time is irreversible. But in a world like ours, with its irreversible processes, everything behaves as though P≠NP. Unfortunately, the second law of thermodynamics prevents us from proving it, and also prevents mathematicians from proving P=NP. Finally, the existence of one-way functions seems to be the one missing axiom in Peano arithmetic that’s required for doing computer science.

Thanks, Serge.

The NOT gate (bijection) is the core of irreversibility!

Perhaps a world without true randomness is possible. Howard Wiseman wrote:

“Parallel universes – worlds where the dinosaur-killing asteroid never hit, or where Australia was colonised by the Portuguese – are a staple of science fiction. But are they real?

In a radical paper published this week in Physical Review X, we (Dr Michael Hall and I from Griffith University and Dr Dirk-André Deckert from the University of California) propose not only that parallel universes are real, but that they are not quite parallel – they can “collide”.

In our theory, the interaction between nearby worlds is the source of all of the bizarre features of quantum mechanics that are revealed by experiment.”

http://app.griffith.edu.au/sciencesimpact/when-parallel-worlds-collide/

Thanks for pointing me to Bremermann’s limit — I hadn’t heard this concept before!

There’s a somewhat similar idea in Paul Davies’ book, “The Last Three Minutes,” about what we know and conjecture about the “end” of the universe. One of the topics discussed at length is whether there is a universal limit on the total amount of computation, since computation fundamentally must take up some energy.

Are there realistic limits on computation of the type that Bremermann postulated? What are the right limits in light of today’s insights into computation?

The limits on computation, if any, seem to be the same as the limits on proving that there are limits on computation. I think this vicious circle can only be broken by some physical hardness principle. Indeed, the class NPC is a sort of computational black hole, with NP as its boundary. Now let’s postulate that the reduction of a black hole to its boundary is always computationally inefficient; P≠NP might thus be a mere relativistic effect, a special case of the so-called holographic principle. Examples of that principle in physics are:

a black hole vs. its boundary

quantum gravity vs. quantum field theory (the AdS/CFT correspondence from string theory)

I suggest extending it to computer science:

non-deterministic polynomial time vs. deterministic polynomial time (NP/P)

integer programming vs. linear programming (IP/LP)

quantum computing vs. classical computing

Computational complexity would then turn into a chapter of general relativity. Moreover, I believe that computability theory is already, to some extent, a chapter of special relativity. Indeed, should extraterrestrials who live outside our light cone develop an exotic kind of mathematics, their theorems might contradict ours without any problem… provided they were self-consistent and that any potential contradiction between their results and ours could never get computed – thanks to the speed of light!

So, if computability is a consequence of special relativity, why not consider complexity to be a consequence of general relativity?

Serge, our very mathematics is contradictory!!!

See: https://unitambo.wordpress.com/2015/04/01/one-way-bijection-over-gf2/

Consider GF(2), the smallest finite field (the binary system): a two-element Galois field with Z2 = {0,1}. Let P1 = x²+1 and P2 = x. The following are equivalent representations of the same values in a characteristic-2 finite field: polynomial P1 = x²+1, binary {101}; polynomial P2 = x, binary {010}. Remembering that a sum in GF(2) corresponds to the bitwise XOR operation, P1 ⊕ P2 = x²+x+1. See:

x²+1 ⊕ x = x²+x+1

{1 0 1} XOR {0 1 0} = {1 1 1}
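The commenter's sum can be mechanized in a couple of lines (this just checks the XOR arithmetic above; it takes no position on the surrounding argument): represent a GF(2) polynomial as a bitmask where bit k holds the coefficient of x^k, so addition is bitwise XOR.

```python
# GF(2) polynomials as bitmasks: bit k is the coefficient of x^k.
# Addition in GF(2) is coefficient-wise XOR.
P1 = 0b101  # x^2 + 1
P2 = 0b010  # x
assert P1 ^ P2 == 0b111  # x^2 + x + 1, matching the sum above
print(bin(P1 ^ P2))  # 0b111
```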

Since the XOR operation is used to form the sum P1⊕P2, now check the truth table of the XOR gate. Recall that the polynomial x²+x+1 is irreducible in the finite field GF(2), because x²+x+1 = 0 has no solution (try plugging in 0 or 1 to see). In other words, x²+x+1 is a polynomial that may not be factored into the product of two non-constant polynomials:

|              | x²+1 | x | x²+x+1 |
|--------------|:----:|:-:|:------:|
| Irreducible? |  T   | T |   F    |
| Irreducible? |  T   | F |   T    |
| Irreducible? |  F   | T |   T    |
| Irreducible? |  F   | F |   F    |

Now, note that x²+1 (= x+1) corresponds to the NOT gate in GF(2), thus being a bijection (reversible). Note also that x cannot be irreducible, then, so that x²+x+1 is irreducible in the truth table of its XOR gate, x²+1 must be irreducible (one-way).

Therefore, the NOT gate is both reversible and irreducible in GF(2) at once. This true contradiction is the core of Gödel’s P vs.NP problem!!!

NOT gate is a one-way function in binary system???

In the light of 53 subsequent years of research in complexity theory and quantum computing, and with the experience gained in 53 years of Moore’s law progress in computing machinery and simulation algorithms, it is natural to amend Bremermann’s question to read:

The just-announced, transformationally large and ambitious collaboration of the Swiss-based pharmaceutical corporation Sanofi with the Portland-based quantum simulation corporation Schrödinger ($120M/5 years) amounts to a vote of confidence that the answer is “yes” … as does this week’s *xkcd* cartoon (#1520), titled “Degree-Off”.

One of the great lessons of the GLL-sponsored Kalai/Harrow debate “Is quantum computing feasible?” (as it seems to me) is that the answer to Bremermann’s Question (as amended) *cannot* be yes in any universe in which scalable quantum computing and/or scalable BosonSampling is feasible. From this perspective, the immense scale and vast ambition of the Sanofi/Schrödinger program amounts to a vote of confidence that:

*Conclusion*: Enterprise investors are placing increasingly large bets that the answer to Bremermann’s Question is “yes”.

*Question*: Is the quantum dynamics of the world — both its small-scale low-energy QED limit and its large-scale high-energy (string theory?) limit — *determined* by the requirement that the answer to Bremermann’s Question (amended) is “yes”?

Don’t ask me (or anyone) for the answer to this tough question … but we can all be grateful to

*Gödel’s Lost Letter* for inspiring us to ask ourselves these foundational questions, and to read with sympathy the marvelously hopeful literature that tries to answer them.

*Further Readings*: Several of the themes raised in recent essays here on *Gödel’s Lost Letter* have also been raised in recent essays on *Shtetl-Optimized* (“Is There Something Mysterious About Math?”); these themes include large-F (Fundamental) foundations that guarantee mathematical rigor, small-f (practical) foundations for *doing* mathematics, and even Gil Kalai’s recent interest in theological foundations (per Gil’s *SO* comment #87).

It may interest *GLL/SO* readers that these *same* themes are being vigorously debated, at a high mathematical level, in recent essays and comments on (Fields Medalist) Michael Harris’ weblog *Mathematics Without Apologies*. Particularly commended (by me anyway) is Harris’ essay “Pantheism and homotopy theory, Part 2” (April 30, 2015).

Please let me add that (as it seems to me) it’s no bad idea for scientists, engineers, and historians in general, and *GLL/SO* readers in particular, to read Voevodsky’s article too. ’Cuz when Fields Medalists trade candid views regarding large-F and small-f STEAM foundations, then, to borrow Feynman’s phrase, *it’s terrific!* These Harris/Voevodsky essays will make clear to *Shtetl-Optimized* readers that the ongoing debate among mathematicians in regard to univalent foundations is as heated and enlightening as the Harrow/Kalai debate regarding the feasibility of quantum computation. Both debates are thrilling and even share common F/foundations … regarding which Michael Harris’ weblog *Mathematics Without Apologies* is an outstanding resource for background information.

As further student-friendly/rabble-rousing mathematical reading, it’s tough to better Michael Harris’ own textbook, titled (like his weblog) *Mathematics Without Apologies* (2015). A delightful historical extension — and worthy counterpoint to both Jonathan Israel’s ongoing multivolume history of the Enlightenment and Gil Kalai’s comment #87 on *SO* — is Leszek Kolakowski’s “Dutch seventeenth-century non-denominationalism and *Religio Rationalis*: Mennonites, Collegiants and the Spinoza connection” (2004). And finally, the title alone of Leszek Kolakowski’s 2005 essay collection, *My Correct Views on Everything*, will bring a smile to regular readers of *Gödel’s Lost Letter* and *Shtetl-Optimized*, and more broadly, to all fans of enlightened friendly discourse.

*Summary and Conclusion*: The Enlightenment that was born in the seventeenth century’s friendly ferment of Spinozists, Collegiants, and Anabaptists is today being reborn and renewed — in full radical vigor and public view — as a friendly ferment of quantum information theorists (as Spinozists), category/type/univalent theorists (as Collegiants), and medical researchers (as Anabaptists). As eventuated in the 17th century, our 21st century’s friendly ferment may turn out to be wonderfully nourishing (as it seems to me) of the entire STEAM community … full of wonderful opportunities for young researchers in particular … *especially* because (just as in the 17th century) opinions in this regard are so marvelously various and vigorous.

Is it possible to get at last an answer: what or who is “Pip”? This is unclear and annoying.

Please see the “About Us” tab at top. WordPress doesn’t (didn’t?) allow multiple author names.

❗ thx for playing a pioneering role in recognizing the significance of what might be called “source code complexity” in the study of TCS. alas its kind of controversial & a fringe area still. intend to blog on this at some pt. this recent cs question on complexity shows some of the controversy on the downvoted answer that mentions source code complexity & there was some ensuing discussion in the cs chat room also on the topic. it was handy to have some of your blogs on the subj to cite.

Dick, in 1979 you wrote about the social processes which output commonly acceptable proofs and programs. I contend that this notion of social process can be formalized. A social network is a network of neural networks — mathematical language being its communication protocol — and a social process evolves by rules analogous to those of Conway’s Game of Life, though more complex. Thanks to Church’s thesis, social processes are Turing-complete — that is, everything which can be done by a social process could also be done, in principle, by a Turing machine.

Now, you might wish to apply all the modern results of complexity theory to this natural super-computing model. It would be nice indeed to prove that Levin’s universal one-way function has probability zero of getting inverted by a social process that’s executed within a closed system such as the Earth’s light-cone. I wonder if it’s a reasonable consequence of the second law of thermodynamics…