
Transcomputational Problems

April 30, 2015


Problems beyond brute force search

Cropped from Wikipedia source

Hans-Joachim Bremermann was a mathematician and biophysicist. He is famous for a limit on computation, Bremermann’s limit, which is the maximum computational speed of a self-contained system in the material universe.

Today Ken and I wish to talk about the limit and why it is not a limit.

A transcomputational problem is a problem that requires processing of more than {10^{93}} bits of information. The number comes from Earth-scale considerations, but adding less than 30 to the exponent breaks the scale of the known universe. Our friends at Wikipedia say:

This number is, according to Bremermann, the total number of bits processed by a hypothetical computer the size of the Earth within a time period equal to the estimated age of the Earth. The term transcomputational was coined by Bremermann.

Pro

What is interesting is that he thought about “transcomputational” problems in 1962. Yes, almost a decade before the P=NP problem was stated. See his paper for details.

He noted back then that certain problems were beyond any reasonable brute-force search. In his own words:

The experiences of various groups who work on problem solving, theorem proving and pattern recognition all seem to point in the same direction: These problems are tough. There does not seem to be a royal road or a simple method which at one stroke will solve all our problems. My discussion of ultimate limitations on the speed and amount of data processing may be summarized like this: Problems involving vast numbers of possibilities will not be solved by sheer data processing quantity. We must look for quality, for refinements, for tricks, for every ingenuity that we can think of. Computers faster than those of today will be a great help. We will need them. However, when we are concerned with problems in principle, present day computers are about as fast as they ever will be. We may expect that the technology of data processing will proceed step by step—just as ordinary technology has done. There is an unlimited challenge for ingenuity applied to specific problems. There is also an unending need for general notions and theories to organize the myriad details.

Quite insightful for a paper that dates a decade before Cook-Karp-Levin on P=NP. It also predates the limits associated with Jacob Bekenstein’s bound on the information capacity of finite space and/or with Rolf Landauer’s principle.

One wonders what might have happened if Bremermann’s paper had been better known in our theory community. Ken notes that the Russian theory community in the 1960s highlighted the question of perebor—brute-force search. But he senses the emphasis was on problems for which it could be necessary in the abstract, rather than tied to Earth-scale considerations like Bremermann’s.

Con

Of course there are several eventualities that were missed. One is quantum computation—I believe all his calculations depend on a classical view of computation. There are several other points that we can raise to attempt to beat his “limit.”


{\bullet } Change the algorithms: Of course his limit could be applied to computing primality, for example. The brute-force method is hopeless for even modest-sized numbers. Yet we know methods that are much better than brute force, and so we can easily beat his limit.
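As a concrete illustration—ours, not Bremermann’s—here is a minimal Python sketch contrasting trial division, which takes time exponential in the bit-length of {n}, with the Miller-Rabin test, which is polynomial in the bit-length. The fixed witness set below is known to make the test deterministic for {n < 3.3 \times 10^{24}}.

```python
# A minimal sketch (ours, not from the post): brute force vs. Miller-Rabin.

def is_prime_brute(n):
    """Trial division: roughly sqrt(n) steps -- hopeless for large n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_miller_rabin(n):
    """Miller-Rabin; deterministic for n < 3.3 * 10**24 with these witnesses."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):
        return False
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

# The Mersenne prime 2**61 - 1: instant here, astronomically slow by trial division.
print(is_prime_miller_rabin(2**61 - 1))  # True
```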

Steve Wozniak visited Buffalo yesterday as a UB Distinguished Speaker. In a small group attended by Ken, he told his standard anecdote about the first program he ever wrote. This was to solve the Knight’s Tour problem on an {8 \times 8} chessboard. He first coded a brute-force solution trying all Knight moves at each step but realized before he hit “run” that it would take about {10^{25}} years. This awakened him, he said, to the fact that good algorithms have to go hand-in-hand with good hardware.
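To make the contrast concrete, here is a small Python sketch—ours, not Wozniak’s program—of the standard Warnsdorff heuristic: always move the knight to the square with the fewest onward moves. It finds an {8 \times 8} tour in milliseconds, where blind enumeration of all move sequences is hopeless.

```python
# A toy sketch of Warnsdorff's heuristic for the Knight's Tour -- not
# Wozniak's code. A greedy "fewest onward moves" rule replaces blind search.

MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def onward(x, y, visited, n=8):
    """Legal, unvisited knight moves from (x, y) on an n x n board."""
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < n and 0 <= y + dy < n
            and (x + dx, y + dy) not in visited]

def warnsdorff_tour(start=(0, 0), n=8):
    visited, tour = {start}, [start]
    x, y = start
    while len(tour) < n * n:
        choices = onward(x, y, visited, n)
        if not choices:
            return None  # heuristic dead end (rare on the 8 x 8 board)
        # Warnsdorff's rule: pick the square with the fewest onward moves.
        x, y = min(choices, key=lambda s: len(onward(s[0], s[1], visited, n)))
        visited.add((x, y))
        tour.append((x, y))
    return tour

tour = warnsdorff_tour()
print(len(tour) if tour else "dead end")  # expect 64
```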


{\bullet } Change the answers: Another method is to change what we consider as an answer. Approximation algorithms are of course one important example: allow the answer to be near the optimal one. This has opened the floodgates, greatly increasing the class of problems that we can solve.
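A minimal example of this trade, sketched by us in Python: minimum vertex cover is NP-hard, but the classic maximal-matching algorithm returns, in linear time, a cover at most twice the size of the optimum.

```python
# A minimal sketch (ours): the textbook 2-approximation for vertex cover.
# Repeatedly take both endpoints of any edge not yet covered; the chosen
# edges form a matching, so the cover is at most twice the optimum.

def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 5-cycle: the optimum cover has 3 vertices; we get 4, within the factor of 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(sorted(vertex_cover_2approx(edges)))  # [0, 1, 2, 3]
```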


{\bullet } Change the problems: Another method is to change the problems that we attack. In many cases we can avoid general problems and exploit the special structure of a problem. Examples that come to mind include: replace dense matrices by sparse ones; replace arbitrary graphs by planar ones or those with restricted minors; and replace data analysis of arbitrary data sets by analysis of data generated with a specific noise model, such as Gaussian noise.
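The sparse-matrix case is the simplest to make concrete. In this little Python sketch of ours, a matrix-vector product over a matrix stored as {(i, j, a_{ij})} triples costs time proportional to the number of nonzeros rather than {n^2}.

```python
# A minimal sketch (ours) of exploiting structure: multiply by a sparse
# matrix in O(nnz) operations instead of O(n^2) for the dense layout.

def sparse_matvec(triples, x, n):
    """y = A x, where A is given as a list of (i, j, a_ij) nonzeros."""
    y = [0.0] * n
    for i, j, a in triples:
        y[i] += a * x[j]
    return y

# A 4 x 4 matrix with only 3 nonzeros.
A = [(0, 1, 2.0), (2, 3, -1.0), (3, 0, 5.0)]
print(sparse_matvec(A, [1.0, 1.0, 1.0, 1.0], 4))  # [2.0, 0.0, -1.0, 5.0]
```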


{\bullet } Change the world: We have posted about the idea of a world without true randomness, presenting Leonid Levin’s proof that SAT is nearly polynomially solvable in such a world. That post offered the weaker idea that every efficient generator of SAT instances might be solved by Levin’s algorithm on all but finitely many instances. The finite bound might be huge, but the fact of Levin’s algorithm would carry weight: it would solve everything else based solely on the principle that “nothing succeeds like success.” We can put this idea in concrete terms like Bremermann’s:

Could we live in a world where the act of creating an instance that requires processing more than {10^{93}} bits of information requires processing more than {10^{93}} bits of information?
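For readers who want the flavor of Levin’s algorithm, here is a toy Python sketch of Levin-style universal search, not Levin’s actual construction. Programs are run in interleaved fashion, a program {p} receiving a time share proportional to {2^{-|p|}}, and the first verified answer wins; `programs` and `verify` below are hypothetical stand-ins for an enumeration of machines and an efficient checker.

```python
# A toy sketch (ours) of Levin-style universal search. Each "program" is a
# callable taking (instance, steps) and returning a candidate answer or
# None; its attribute `size` stands for its length in bits.

def levin_search(programs, verify, instance, max_phase=40):
    """Return the first verified answer, or None if the budget runs out."""
    pool = list(programs)
    for phase in range(max_phase):
        budget = 2 ** phase
        for p in pool:
            steps = budget >> p.size        # time share ~ 2^(-|p|)
            if steps < 1:
                continue
            answer = p(instance, steps)     # simulate p for `steps` steps
            if answer is not None and verify(instance, answer):
                return answer
    return None

# Toy demo: one 1-bit "program" that guesses the all-true assignment.
class GuessAllTrue:
    size = 1
    def __call__(self, n_vars, steps):
        return [True] * n_vars

print(levin_search([GuessAllTrue()], lambda n, a: all(a), 3))  # [True, True, True]
```

The point of the time shares is that the total work is within a constant factor, depending only on the best program’s length, of that program’s own running time—“nothing succeeds like success” in action.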

We note that Larry Stockmeyer proved that every Boolean circuit capable of deciding a certain class of logical formulas {f} that fit into 8 lines of 80-column ASCII text must have more gates than atoms in the observable universe. But this does not rule out a little algorithm {A} solving every {f} that we could generate—unless we spend the time to cycle through every legal formula that fits into 640 characters.

Open Problems

Are there realistic limits on computation of the type that Bremermann postulated? What are the right limits in light of today’s insights into computation?

The Anti-Pigeonhole Conjecture

April 26, 2015


A conjecture about faculty behavior

“Dr. Kibzwang” source

Colin Potts is Vice Provost for Undergraduate Education at Georgia Tech. His job includes being a member of the President’s Cabinet—our president, not the real one—and he is charged with academic policies and changes to such policies. He is also a College of Computing colleague and fellow chess fan.

Today I want to state a conjecture about the behavior of faculty that arose when Tech tried to change a policy.
Read more…

Case Against Cases

April 22, 2015


How to avoid too many cases at least some of the time


Theon of Alexandria was history’s main editor of Euclid’s Elements.

Today I want to talk about case analysis proofs.
Read more…

Kinds of Continuity

April 15, 2015


Congratulations to John Nash and Louis Nirenberg on the 2015 Abel Prize

Combined from src1, src2.

John Nash and Louis Nirenberg have jointly won the 2015 Abel Prize for their work on partial differential equations (PDEs). They did not write any joint papers, but Nirenberg evidently got Nash excited about David Hilbert’s 19th problem during Nash’s frequent visits to New York University’s Courant Institute in the mid-1950s. Nash in return stimulated Nirenberg by his verbal approach of barraging a problem with off-kilter ideas. The Norwegian Academy of Sciences and Letters recognized their ‘great influence on each other’ in its prize announcement.

Today we congratulate both men on their joint achievement.
Read more…

A Quantum Two-Finger Exercise

April 8, 2015


More mileage than expected from a little example

Cropped from World Science Festival source

Sean Carroll is a cosmologist in the Department of Physics at Caltech. He also maintains a blog, “Preposterous Universe,” and writes books promoting the public understanding of science. I have recently been enjoying his 2010 book From Eternity to Here: The Quest for the Ultimate Theory of Time.

Today—yes, Carroll would agree that there is a today—I would like to share an interpretation of a little quantum computing example that occurred to me while reading his book.
Read more…

April Fool

April 1, 2015


Non-jokes on April Fool’s Day

Nun’s Priest’s Tale source

Faadosly Polir is the older brother of Lofa Polir. You may recall he invented new ways to apply powerful mathematical techniques to prove trivial theorems, and she once claimed a great result on integer factoring. We have heard from both since, but they haven’t given us any new April Fool’s Day material, mainly because they weren’t fooling to begin with.

Today Ken and I wished to help you enjoy April Fool’s day.
Read more…

Is It New?

March 24, 2015


How to tell algorithms apart


Edgar Daylight was trained both as a computer scientist and as a historian. He writes a historical blog themed for his near-namesake Edsger Dijkstra, titled “Dijkstra’s Rallying Cry for Generalization.” He is a co-author with Don Knuth of the 2014 book Algorithmic Barriers Failing: P=NP?, which consists of a series of interviews with Knuth, extending their first book from 2013.

Today I wish to talk about this book, focusing on one aspect.
Read more…
