# Power Circuits

*Not anything to do with electrical engineering*

Alexei Miasnikov, Alexander Ushakov, and Dong Wook Won are the authors of a brilliant paper, “Power Circuits, Exponential Algebra, and Time Complexity.” It and several followup papers give polynomial-time algorithms for problems in group theory where only highly exponential algorithms were previously known. The algorithms use a new scheme for manipulating highly succinct representations of certain huge integers. Regarding this, the first followup paper says:

> In this sense our paper is more about compression and data structures than about algorithmic group theory.

Today Ken and I want to discuss the compression method called power circuits.

Their paper is on a topic near and dear to those of us working in computational complexity: the use of Boolean circuits to represent large objects in a succinct way. That is, to represent an object of $n$ bits in a manner that requires many fewer than $n$ bits. Their circuits are algebraic, but they do not use large constants, so bit-measures apply.

Of course in general, by a standard bit-counting argument, this is impossible. But it can be possible for objects with special properties. The shock—to me at least—is that this new type of representation is very reasonable and yet it is one that complexity theorists missed. The representation was, we understand, created by group theorists in order to solve a problem they had. I guess it just shows again that

> Necessity is the mother of invention.

See this for a discussion of the origin of this saying. William Horman included the Latin form ‘Mater artium necessitas’ in *Vulgaria*, his textbook of common phrases which he published in 1519.

## Power Circuits

The short explanation is that Miasnikov, Ushakov, and Won (MUW) needed a way to represent extremely large numbers. For example, they needed to handle a number like

$$N = 2^{2^{2^{2^{2}}}} - 1.$$

Note that $N$ is very large and that it is not sparse: its binary expansion is a solid block of $1$s. But it is easy to see that $N$ is not equal to

$$2^{2^{2^{2^{2}}}}.$$

Take the two numbers modulo $2$, for example: the first is $1$ and the second is $0$.
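As a sanity check on this kind of residue argument, Python's arbitrary-precision integers handle a scaled-down stand-in for such towers; the particular height used here ($2^{2^{16}}$) is my own choice for illustration:

```python
# Scaled-down stand-in for a tower: 2**(2**16) has 19,729 decimal
# digits, yet Python computes with it exactly.
big = 2 ** (2 ** 16)
dense = big - 1   # binary expansion is 65536 ones, so it is not sparse

# A residue check instantly tells the two numbers apart:
assert big % 2 == 0 and dense % 2 == 1

# And "dense" really is dense in binary: every one of its 65536 bits is set.
assert bin(dense).count("1") == 2 ** 16
```

Of course this only works because the stand-in is small enough to write down in binary; the numbers MUW handle are not.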

Complexity theorists love big numbers like the one above. The best way to describe such numbers is probably the arrow notation of Don Knuth. The group theorists MUW, however, needed the ability to manipulate very large numbers indirectly. They needed the following:

**Definition 1** A power circuit is a usual arithmetic circuit, but with the allowed operations $+$, $-$, and $x \cdot 2^{y}$, and the constants $0$ and $1$.

This means that a power circuit can compute the sum or difference of previous results. But it cannot do multiplication. It can, however, take a previous number $x$ and shift it by a previous value $y$. Thus it can compute in binary the number $z$ via

$$z = x \cdot 2^{y},$$

where $x$ and $y$ are previous results. In particular, a tower such as

$$2^{2^{2^{2^{2}}}}$$

is easy to define in this representation. They introduce their new type of circuits and then prove the following cool theorem:

**Theorem 2** There is a polynomial-time algorithm that determines whether one power circuit represents the same number as another power circuit.

The surprise, in my opinion, is that they can represent and manipulate the binary encodings of many truly immense numbers: think towers of $2$s. That one can determine in polynomial time whether two such expressions are equal is quite neat.
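To make the circuit model concrete, here is a toy evaluator, a sketch of my own and not the MUW data structure: it expands each value honestly, so it only works while the numbers stay small, whereas the whole point of MUW is to manipulate circuits without ever writing the values out. The gate encoding is invented for illustration.

```python
def evaluate(circuit):
    """Evaluate every gate of a power circuit.

    A gate is either the constant 0 or 1, or a triple (op, i, j)
    referring to earlier gates by index, with op in {"add", "sub",
    "shift"}; "shift" computes x * 2**y, the power operation.
    """
    vals = []
    for gate in circuit:
        if isinstance(gate, int):        # constant gate: 0 or 1
            vals.append(gate)
        else:
            op, i, j = gate
            if op == "add":
                vals.append(vals[i] + vals[j])
            elif op == "sub":
                vals.append(vals[i] - vals[j])
            elif op == "shift":          # x * 2**y: shift x left by y bits
                vals.append(vals[i] * 2 ** vals[j])
            else:
                raise ValueError(f"unknown gate {op!r}")
    return vals

# A tower of 2s: each "shift" gate shifts the constant 1 left by the
# previous gate's value, giving 1, 2, 4, 16, 65536, ...
tower = [1, ("shift", 0, 0), ("shift", 0, 1), ("shift", 0, 2), ("shift", 0, 3)]
print(evaluate(tower))  # [1, 2, 4, 16, 65536]
```

One more "shift" gate pushes the last value to $2^{65536}$, and a handful more defeat any honest evaluator, which is exactly why the MUW algorithms compare circuits symbolically instead.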

They then provide several interesting applications. They show that the quantifier-free theories of some exponential algebras are decidable in polynomial time, as well as the word problems in some “hard to crack” one-relator groups.

Let’s take a look at the one-relator problem, since my understanding is that this is the main motivation for the creation of this cool type of representation.

## One-Relator Groups

Emil Post and Andrey Markov Jr. independently proved the following theorem:

**Theorem 3** There exists a finitely presented monoid for which there is no algorithm to solve the word problem.

A trouble that plagued this area for many years was that going from monoids to groups was tricky. A group is a much more fragile object than a monoid, and this makes the encoding of computation into a group quite challenging. Finally Pyotr Novikov and William Boone showed that there is a finitely presented group with an undecidable word problem. An indicator that this was a major result is that Novikov's original proof is 143 pages long.

Of course the word problem is:

> Given a finite presentation of some fixed group $G$, decide whether an input word represents the identity element.
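For intuition, consider the easiest case: a free group, with no relators at all. There a word represents the identity exactly when it freely reduces to the empty word, which a stack computes in linear time. The lowercase/uppercase convention for inverses below is my own:

```python
def reduce_word(word):
    """Freely reduce a word over letters a-z, with A-Z as their inverses."""
    stack = []
    for letter in word:
        if stack and stack[-1] == letter.swapcase():
            stack.pop()          # an adjacent pair x x^-1 cancels
        else:
            stack.append(letter)
    return stack

def is_identity_in_free_group(word):
    """In a free group, a word is the identity iff it reduces to empty."""
    return not reduce_word(list(word))

assert is_identity_in_free_group("abBA")       # a b b^-1 a^-1 = 1
assert not is_identity_in_free_group("abab")   # nothing cancels
```

Adding even one relator destroys this simplicity; that is where Magnus's theorem, and eventually power circuits, come in.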

Since Novikov-Boone showed that the general word problem is undecidable, one quest is to find classes of groups that have decidable word problems. An important result concerns the case where the presentation has one and only one relation:

$$G = \langle x_1, \ldots, x_n \mid r = 1 \rangle.$$
These groups are, not surprisingly, called *one-relator groups*. See this post for a great discussion of a famous theorem by Wilhelm Magnus:

**Theorem 4** The word problem for a one-relator group is decidable.

Note that this was proved in the early 1930s, before the modern theory of unsolvability. It is possible to state positive computability theorems without any such theory, but to state uncomputability theorems requires the whole modern theory of what is computable.

## Special One-Relator Groups

The following one-relator group, the Baumslag-Gersten group, has a solvable word problem by Magnus's theorem, but until recently it was not known to have a polynomial-time word problem:

$$G = \langle a, b \mid a^{a^{b}} = a^{2} \rangle.$$

Actually it is

$$\langle a, b \mid b^{-1}a^{-1}b\,a\,b^{-1}ab = a^{2} \rangle$$

as a one-relator group presentation, writing out $a^{a^{b}}$ with $a^{b} = b^{-1}ab$. The time complexity of the algorithm implicit in Magnus's theorem is terrible, and takes more than a tower of exponentials of time. It was a breakthrough when this was reduced to exponential time. But MUW used the power circuit idea to reduce the cost of the word problem for this group to polynomial time. Wonderful.

The running times are also "improvable." Originally they had $O(n^{7})$ for this word problem, but a followup paper reduced the time to $O(n^{3})$. The same paper gives a polynomial-time algorithm for the word problem in a 4-relator group devised by Graham Higman:

$$H_{4} = \langle a_{1}, a_{2}, a_{3}, a_{4} \mid a_{i}^{a_{i+1}} = a_{i}^{2},\ i \in \mathbb{Z}/4\mathbb{Z} \rangle.$$
These results are not completely *ad hoc*: they come with general theorems about decidability of certain quantifier-free theories. But the theory needs tuning for each application. The authors of the paper "Taming the hydra: the word problem and extreme integer compression" needed a different kind of compressed representation to tackle group word problems involving Ackermannian growth rather than merely tower growth. They note, however, that their circuits have fewer closure properties than power circuits. The closure issue seems at first to be a paradox: why not just add multiplication to power circuits? The answer is that adding it is incompatible with the ability to tell feasibly whether two power circuits are equal. If we allow multiplication, then it is impossible to keep this critical property.

Power circuits face an even stranger situation. For any $k$ there are numbers that are multiples of $k$ and yet have no power circuit of size less than a tower of $2$s. By diagonalizing over $k$ one can get super-elementary growth. Thus it is not feasible to solve even simple linear equations using this model; see their paper for details.

The moral is that MUW tailored the definition of power circuits to include exactly what they needed to solve their group word problem. They left out multiplication both because they did not need it and because, with it, they could not have kept equality testing feasible. Brilliant.

## Open Problems

Why did we—computer theorists—completely miss this idea? Why indeed.

A mark of maturing literature for power circuits is their inclusion in a collection that serves as a textbook. To track their development, please use the search

power circuits and one relator group

to avoid hitting electrical engineering papers.

☞ https://oeis.org/wiki/Riffs_and_Rotes

Thanks, Jon—A heads-up for others that there are lots of interesting number charts there.

I’m a longtime fan of this blog. It’s mostly at a distance from my research, but suddenly I find myself doubly blessed here: a link to the blog of a seminar I led three years ago, and a discussion of a paper, “Taming the hydra: the word problem and extreme integer compression,” which I coauthored. Thank you for your interest in this direction of research and for drawing attention to it.

I think you have nailed a key point. For these compressed representations the “closure properties” that we know how to compute efficiently are highly limited. MUW can determine in polynomial time when two power circuits represent the same integer. With our “Ackermann compressed” representations, my coauthors and I can only establish (in polynomial time) whether a representation is well-formed, and, if so, whether the integer is zero, positive, or negative. But both for MUW and for us those are enough for the motivating group theoretic applications.

And, yes, what is left out from the form of the compressed representations (e.g. multiplication) is key. I believe for MUW and also for us, the representations naturally emerge from the group theory. When attempting to solve a Word Problem for a finitely presented group head-on by manipulating words according to the defining relations, one might seek to go through some scheme of rearranging the input word in the direction of some normal form. However the word may dramatically blow up in size in the process. More specifically, some letters may occur to very high powers. The ways in which those powers grow in the course of the manipulation dictate the operations that go into the definitions of the compression representations. One can then use the compression representations to shorten the words that occur in the computation, and, if one can manipulate (or recognize) the compression representations well enough, one can keep a lid on the complexity.

You are also absolutely right that the theory, as it stands, needs tuning for applications. For my coauthors and me, that “tuning” was hard going. While I think the ideas behind our paper are natural, the technical details are substantial. My coauthors and I are predominantly group theorists. I hope complexity experts will unite, streamline, and further develop the theories of power circuits and Ackermannian compression. It would be great if new applications could be identified and the scope of the compression methods better understood, e.g., what “closure properties” can be computed efficiently.

On a more vague note…. The group theoretic problem my coauthors and I solved was that of finding a polynomial-time algorithm for the Word Problem for a certain double of a “hydra group”. This group is “natural” — it is far from the contrived Novikov/Boone-type examples you mention. Even though it was hard, it always seemed inevitable that a polynomial time algorithm would eventually emerge. Why?

Thanks very much. Comments after the first one go through automatically. Dick and I both have related further interests; mine are represented by these recent papers. A particular problem: suppose we have a multiplication gate h = f*g where the ideal generated by f and g contains a monomial, i.e., there are polynomials a, b such that a*f + b*g cancels all but one term. Can we then write h as a function of f^2 and other terms that are simpler in some usable sense than f*g, or than computing (f+g)^2 - f^2 - g^2? It’s not so well-defined…

Thank you. I’m glad to learn more about this broader context in which some of the ideas live.

It is perhaps interesting that if one adds a bit of “CS” flavor, by including bitwise Boolean operations between numbers, the complexity changes from P to PSPACE-complete…

Even though the textbook mentioned last in the blog post really deserves to be mentioned (because it manages to treat advanced topics and still remain readable by a motivated student), at least the German version does not discuss power circuits. The Baumslag-Solitar group and the one-relator Baumslag group are discussed in the chapter on amalgamated products and HNN extensions, and the papers with the O(n^7) and O(n^3) algorithms for the word problem are mentioned. But the book itself just describes an algorithm with a terrible runtime, to show that the word problem is decidable.

Why is this paper of any use to complexity theory? Does it show anything close to P=NP?

Complexity Theory is not the same as the P vs NP question, any more than Number Theory is the Goldbach Conjecture.

Is this reply a joke? Really what use is this paper to complexity theory?

these results do have some indirect connection to complexity class separations because a major attack on those yielding some fruit is circuit theory. circuit theory seems to have a lot of deep secrets yet to be uncovered and it seems likely it will play a key role in many future theoretical crossroads.

I first encountered power circuits at the SUMS Puzzle Hunt in 2016, where we were required to “reverse engineer” them: we were given three circuits encoding 11, 5, 25, which spells KEY (K is the 11th letter of the alphabet, etc.), and were asked to decode a longer message:

http://researchers.ms.unimelb.edu.au/~mums/puzzlehunt/2016/answers.html?puzzle=5.1

It was a challenging puzzle to say the least…