The Busy Beaver Problem
Can the impossible become practical?
Tibor Radó was a mathematician who was the PhD advisor to a number of computer scientists, including Shen Lin, Paul Young, and several others, all his students while he was at Ohio State University. He is not the Rado of the Erdős-Ko-Rado theorem—that is Richard Rado, without an accent. Tibor Radó had an Erdős Number of 3, through Lin and then Ron Graham.
Today we want to continue talking about the famous “Busy Beaver” problem, which we mentioned in our discussion on the Microsoft SVC lab closing.
The problem is quite neat, is still studied more than fifty years after Radó created it, and has interesting links to industrial labs. For starters we note that Radó’s paper, “On Non-Computable Functions,” was published in the Bell System Technical Journal in 1962.
The problem, which Radó called a game, is quite simple: Consider a Turing machine with one two-way infinite tape, a finite deterministic state control of n states, and alphabet {0,1}. There is no separate “blank” character—the tape initially holds all 0s (or 0 is regarded as the blank). How long can the machine run? We do not allow machines to run forever; we insist that our machines halt. The question is: can such a simple device run for a long time and still stop?
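As a concrete reference point, here is a minimal sketch of such a machine in Python (our own illustration, not code from any of the papers discussed below; the transition table shown is the standard 2-state champion machine, which halts after 6 steps leaving 4 ones):

```python
# A minimal sketch of the machine in Rado's game: tape alphabet {0,1},
# a two-way infinite tape of all 0s (modeled by a dict), and a table
# mapping (state, scanned symbol) -> (symbol to write, head move,
# next state).  State -1 stands for halting.
def run(prog, limit):
    """Simulate; return (steps, ones on tape) or None if limit is hit."""
    tape, pos, state, steps = {}, 0, 0, 0
    while state != -1 and steps < limit:
        write, move, nxt = prog[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move                  # move is -1 (left) or +1 (right)
        state, steps = nxt, steps + 1
    return None if state != -1 else (steps, sum(tape.values()))

# The standard 2-state champion: halts after 6 steps with 4 ones.
bb2 = {(0, 0): (1, 1, 1), (0, 1): (1, -1, 1),
       (1, 0): (1, -1, 0), (1, 1): (1, 1, -1)}
print(run(bb2, 100))   # -> (6, 4)
```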
Clearly with enough states the machine could compute f(m) for some very fast-growing function f and a modest m and then halt. This could easily take a huge amount of time. The reason the problem is interesting is that the number of states is quite small. So the “game” is to try and use just a few states to encode some long-running computation. This is not easy to do even for tiny n, which is why the Busy Beaver problem is still interesting even today.
The function S(n) is the maximum number of steps that such a machine with n finite control states can make and still halt. It is known exactly only for n ≤ 4. The current 5-state lower bound is

S(5) ≥ 47,176,870.

This was discovered by Heiner Marxen and Jürgen Buntrock in 1989. The busy-beaver function Σ(n) itself is reckoned as the number of 1s on the tape at the time of halting; the same machine gives Σ(5) ≥ 4,098. There are still about 40 five-state programs whose status is unresolved. Marxen’s BB site has this and much other info. The number of different programs to consider grows hugely with n, but still we find it amazing that small programs can run for so long.
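How hugely? Under one common convention (our back-of-envelope count, before any symmetry reductions), each of the 2n table entries independently picks a symbol to write (2 choices), a head direction (2), and a next state including halt (n + 1), giving (4(n+1))^(2n) raw machines:

```python
# Raw count of n-state, 2-symbol Turing machines: each of the 2n
# (state, symbol) table entries chooses write symbol (2) x direction
# (2) x next state including halt (n + 1), so (4(n+1))^(2n) in all.
def machine_count(n):
    return (4 * (n + 1)) ** (2 * n)

for n in range(1, 7):
    print(n, machine_count(n))
# already for n = 5 this is about 6.3 x 10^13 machines
```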
A Six—or Three—State Beaver
What we also find amazing is that for some of the busiest machines, their few states can be made to seem even fewer with a simple prose description. Let’s describe one using an eco-friendly beaver who plants trees as well as chops them down. We figure the trees grow quickly in beaver-time to give shade, and we give our beaver some traits of a groundhog. He starts on a ridge that has been completely deforested, bare to his right and left as far as an eagle can see.
At any time, including the start, our beaver can decide to Plant Two Trees. This means finding the first bare spot at or rightward of his position and planting a tree there. If he can then plant in the spot immediately to its right, then he does so but stays under the first tree he planted, else he finds the first bare spot to his left, plants there, and steps left. Either way, he then pauses to Take Stock. Thus our beaver’s first action is to plant two adjacent trees and Take Stock under the first, with the second to his right.
In the Take Stock state, our beaver looks at the spots left and right, and acts according to these four rules:
- If he finds himself in sun with no tree on his right, he decides to Plant Two Trees right there.
- If he finds himself in sun with a tree to his right, he chops it down and Takes Stock again in the place where he was.
- If he is in shade with no tree on his left, he plants there, goes back where he was, and decides to Plant Two Trees.
- If he is in shade with a tree on his left, he chops it down, steps left again, and Looks For His Shadow.
When Looking For His Shadow, if he sees it, he gets frightened and halts by planting a tree and burrowing under it. Else, he steps left again. Then if in sun he steps left to Plant Two Trees, while if under a tree he chops it down, moves left yet again, and Takes Stock.
That’s it. The first two Take Stock rules can even be combined to say that if he’s in sun, he plants there, ensures the place to his right is planted, and then Takes Stock again where he was. In the third rule, the “goes back” clause is unnecessary to say.
Returning to our beaver taking stock for the first time, he is in shade with no tree on his left, and so he immediately plants one tree there and two more on the far right, making 5 trees in all. He is under the 4th tree, chops down the 3rd (middle) tree, and next looks for his shadow when under the 2nd tree.
Of course he doesn’t see it, so he goes on beavering—until he has 95,524,079 trees in all, after over 8.69 quadrillion steps.
These rules can be coded by six Turing machine states, using two to plant trees, three to take stock, and one to check the shadow plus re-using one of the take-stock states. The take-stock state is reminiscent of a simple one-dimensional cellular automaton that depends only on its one-cell environment, but the planting step is decidedly non-local. This machine was discovered by Marxen and can be viewed in notes by Jeffrey Shallit. It is far from the record, however. The current busiest 6-state beaver, found by Pavel Kropitz and verified, is known to leave somewhat more than 3.5 × 10^18267 trees in over 7.4 × 10^36534 steps total.
The Current Approach
How do people prove such gargantuan bounds on S(n)? For starters one can try to create clever programs that take a long time. This will of course only yield lower bounds, but these are still quite interesting. For higher n some machines have been constructed for which S(n) cannot conveniently be expressed by conventional exponential notation, but requires something like Donald Knuth’s arrow notation to estimate.
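Knuth’s arrow notation is easy to sketch; a minimal recursive version (our own illustration):

```python
# Knuth's up-arrow notation: one arrow is exponentiation, and each
# extra arrow iterates the previous level.  Values explode quickly.
def arrow(a, k, b):
    """Compute a (up-arrow)^k b for positive integers a, k and b >= 0."""
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, k - 1, arrow(a, k, b - 1))

print(arrow(2, 2, 4))   # 2^^4 = 2^(2^(2^2)) = 65536
print(arrow(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987
```

Already 3↑↑4 = 3^7625597484987 has trillions of digits, which is why such notation is needed even to estimate the higher bounds.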
For upper bounds, this strategy must incorporate “determining” that certain machines will never halt, as well as those that halt. To quote from Marxen and Buntrock’s 1990 online paper “Attacking the Busy Beaver 5,” here is how they proceed: Enumerate all n-state Turing machines M, and for each of them do the following.
- Simulate the behavior of M, starting with an empty tape.
- If M halts within a given bound, then record its number of steps and mark it determined.
- If M uses more than a certain amount of tape without having halted, mark it undetermined.
- If the number of simulation steps exceeds a certain predefined limit, mark M undetermined.
- Use rules that predict a machine will never halt. If they prove non-halting for M, then mark M as determined.
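For the smallest case the whole recipe fits in a few lines. Here is a sketch of the exhaustive search for n = 2 (our own code, not Marxen and Buntrock’s; the step limit is safe because every halting 2-state machine stops within S(2) = 6 steps):

```python
# Brute-force Busy Beaver search for n = 2: enumerate every machine,
# simulate it with a step limit, and record the longest halting run
# and the most 1s left on the tape.
from itertools import product

def simulate(prog, limit=50):
    """Run one machine; return (steps, ones) if it halts, else None."""
    tape, pos, state, steps = {}, 0, 0, 0
    while state != -1 and steps < limit:
        write, move, nxt = prog[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state, steps = nxt, steps + 1
    if state != -1:
        return None                # limit exceeded: mark undetermined
    return steps, sum(tape.values())

n = 2
keys = [(q, s) for q in range(n) for s in (0, 1)]
# each table entry: write symbol x direction x next state (-1 = halt)
entries = list(product((0, 1), (-1, 1), list(range(n)) + [-1]))
best_steps = best_ones = 0
for choice in product(entries, repeat=len(keys)):
    result = simulate(dict(zip(keys, choice)))
    if result is not None:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])
print(best_steps, best_ones)       # recovers S(2) = 6 and Sigma(2) = 4
```

Note that even at n = 2 a crude step limit settles everything; the art described next is needed precisely because for n = 5 no fixed limit can classify the stubborn holdout machines.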
The key to making this huge search possible, even for small n, is the ability to find and analyze “macro phases” of the computations. In the above example we were able to identify macro phases with subsets of the machine itself, but generally this requires hierarchically analyzing the trace of the computation. The art of this work is finding clever patterns that correctly predict non-halting yet are easy to apply. Ground-level patterns of the kind pictured on the left in the figure below can be abstracted and then organized in a hierarchical nested manner. Sometimes this is used to prove explosive growth before termination, and sometimes to prove non-termination.
[Figure: composite of src1, src2 (p. 78)]
Indeed, the drawing on the right comes from the 2005 MS thesis of Owen Kellett, which was part of a large project at RPI, and which includes and extends the specification by Rona Machlin and Quentin Stout of a “Christmas Tree” pattern that implies non-termination.
Thus the Busy Beaver problem becomes less about Turing machines and more about finding convergent and provably recurrent or divergent patterns in the kinds of bit-fields and word-fields in which machine-language programs operate.
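One concrete ground-level abstraction, chosen here purely for illustration (it is not the specific mechanism of any of the searches above), is to compress the tape into run-length blocks, so that a repetitive sweep across k equal cells can be treated as a single macro step rather than k micro steps:

```python
# Compress a tape into (symbol, run-length) blocks.  Accelerated
# simulators work on such blocks: crossing a run of k identical
# cells becomes one macro step, and periodic block patterns are
# the raw material for non-termination proofs.
from itertools import groupby

def blocks(tape):
    return [(sym, len(list(group))) for sym, group in groupby(tape)]

print(blocks([1, 1, 1, 0, 0, 1]))   # -> [(1, 3), (0, 2), (1, 1)]
```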
Halting and the Impossible
Busy Beavers in any form represent the most extreme test cases of the Halting Problem in the sense of their raw step counts. They look like they will never halt but do. Unless there is some way a machine can look like it is going to halt but run forever, they are the only extreme in that sense. Of course, the functions S(n) and Σ(n) are uncomputable—for much the same reason that the Halting Problem is undecidable. They outgrow every computable function, and for any effective formal system of logic F, there are only finitely many values S(n) or Σ(n) that F can verify.
So why should we try to compute them? The wider but commensurate question is: why should we try to solve the Halting Problem? After all, the Halting Problem is ‘impossible’ to solve—everyone proves this in a typical theory course. With NP-hard problems too, the popular subtext is not to try to solve them. So why try?
We could quote a well-known exchange between Alice and the Red Queen in Lewis Carroll’s Through the Looking-Glass, or quote “To Dream the Impossible Dream” from the musical Man of La Mancha, or otherwise appeal to indomitable human spirit. But we can finally talk about another answer to: why try to solve impossible problems?
Because—it’s good for business.
The Microsoft Terminator
Microsoft undertook to do exactly that. “Terminator” was a doubly-thematic name for their project, which is now called T2. Not only is it about program termination, but its purpose was to do the Arnold Schwarzenegger “Terminator” business on unruly device drivers. To quote the most recent press article linked from that page,
To reduce the number of buggy device drivers, Microsoft embarked on what it called “data-driven program verification.” This is a process whereby “you model a computer program as a mathematical system and the goal is to build tools that find proofs of correctness using mathematics and logic,” said [project manager Byron] Cook. …
Working out whether a device driver would get stuck in an infinite loop was a bit more tricky, as Microsoft was faced with the difficulty of addressing the halting problem. …[which Alan Turing showed undecidable]… But Cook said the nature of device drivers meant there were ways to analyze drivers to see if they would terminate.
A May 2011 ACM Communications article by Cook with Andreas Podelski and Andrey Rybalchenko describes some of the formal logic ingredients. Their examples are mostly numerical, and we wonder how well the formal methods can be applied to analyze the patterns thrown up like sawdust by the beavers. They conclude:
With fresh advances in methods for proving the termination of sequential programs that operate over mathematical numbers, we are now in the position to begin proving termination of more complex programs.
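To give the flavor of those methods, here is a toy ranking-function argument (our own illustration, not code from the Terminator project): a loop terminates if some measure of its state maps into the nonnegative integers and strictly decreases on every iteration, since no such sequence can decrease forever.

```python
# Toy version of a ranking-function termination proof.  The loop
#   while x > 0: x = x // 2 if x is even else x - 1
# terminates because rank(x) = x is nonnegative on reachable states
# and strictly decreases on every iteration.
def step(x):
    return x // 2 if x % 2 == 0 else x - 1

def rank(x):
    return x

def decreases(x0):
    """Spot-check the ranking argument along one concrete run."""
    x = x0
    while x > 0:
        nxt = step(x)
        if not (0 <= rank(nxt) < rank(x)):
            return False
        x = nxt
    return True

print(all(decreases(x) for x in range(1, 1000)))   # -> True
```

The real tools must synthesize such ranking functions automatically, and often need lexicographic combinations of them; whether similar measures can tame the beavers’ sawdust patterns is exactly the question above.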
We have tried to re-position the Busy Beaver game in terms of brief prose descriptions of algorithmic procedures rather than Turing machines. Can this shift be cross-cut with practical examples? The succinct routines need not be maximal like the “busy beaver” for any size n. This may help.
Which other types of “busy” machines can be described nicely in prose? We have not found anything as neat for Kropitz’s 6-state machine.