# What Is A Complexity Class?

*The basic structure of complexity theory: strings, problems, classes*

Hartley Rogers is a mathematician at MIT who wrote arguably the definitive book on basic and advanced aspects of recursion theory. His book entitled *Theory of Recursive Functions and Effective Computability* is a classic. He also had many strong Ph.D. students over the years, including several who are well known computer scientists: Patrick Fischer, David Luckham, and Paul Young.

Today I plan on talking about the foundations of complexity theory. Several commentators have asked more than once, “what is a complexity class?” I will try to answer that and a bit more.

I learned recursion theory from Rogers’ book. I learned early on to beware of exercises marked with the scary symbol. These were very hard problems. There may be more recent treatments of recursion theory, but to me Rogers’ book will always be one of the great textbooks on any subject.

A recent New York Times front-page article on live vs. distance learning makes two main points: more and more schools are using on-line technology, driven by lack of funds; and less and less difference is being found between live and on-line learning. While I was never in a live class with Rogers, I wish I had been. There are many stories of how he taught, but my favorite is that he rewarded top students with Leibniz cookies.

I guess that will always be hard to do in an on-line environment—perhaps one day it will be possible to have a cookie roll out of your laptop. I do support experiments that try to make better use of technology to teach, but getting a chocolate cookie seems hard to beat.

Let’s move on to the foundations of complexity theory, and I will connect back to Rogers later.

**The Big Three Ideas**

Complexity theory is based on three ideas. They are so built into our “DNA” as theorists that I think we sometimes forget how important they are. There are other key ideas, but these three come first, in my opinion.

**Boolean strings:** I already got in trouble over this one here, but I will try to state it again. The starting insight is that we can use Boolean strings to represent all our basic objects. From integers to matrices, from permutations to groups, we can and do use strings of $0$'s and $1$'s to represent them. Strings are the basic building blocks that we use every day in complexity theory. We say all the time, “Let $n$ be a natural number.” It is implicitly assumed that everyone will convert that to “Let $x$ be a string of $0$'s and $1$'s that encodes the number $n$ in the usual way.” I believe the first to talk at length about binary numbers was the great Gottfried Leibniz. He was not the first to discover the binary system, but he did write about it in his Explication de l'Arithmétique Binaire in 1703. Also see the original here.
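As a concrete illustration of this convention, here is a minimal sketch in Python (the helper names `encode` and `decode` are mine, purely illustrative) of the usual binary encoding that takes a natural number to a Boolean string and back:

```python
def encode(n: int) -> str:
    """Encode a natural number as a string of 0's and 1's (usual binary)."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # peel off the low-order bit
        n //= 2
    return bits

def decode(s: str) -> int:
    """Decode a Boolean string back to the natural number it encodes."""
    n = 0
    for b in s:
        n = 2 * n + int(b)
    return n

assert encode(37) == "100101"
assert decode("100101") == 37
```

Any of our basic objects, matrices, permutations, or graphs alike, get represented the same way: fix some such pairing with Boolean strings once and for all, and then work only with the strings.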

**Problems:**

The definition of a **problem** is simple: a problem is nothing more than a set of strings. That is it. Problems often consist of Boolean strings, but we can allow arbitrary finite alphabets, since any alphabet can be encoded into a Boolean one.

This is one of the simplest definitions in mathematics, yet it is really quite important. The notion of a problem as a formal mathematical object allows us to study problems; without this definition a problem would be a vague notion. I think that one of the great achievements of theory is the formalization of the “problem” notion.

Before this definition, people might have studied a problem like “how do we assign tasks to machines?” or “how do we find the shortest path in a graph?” But these were not precise mathematical objects. So even though the definition is almost trivial, it is very important.
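To see how sharp the definition is, here is a toy example in Python (my own, not from the post): a problem is a set of strings, so in code it is nothing more than a membership predicate.

```python
def in_palindromes(x: str) -> bool:
    """The problem PALINDROMES: Boolean strings that read the same backwards."""
    return set(x) <= {"0", "1"} and x == x[::-1]

# "Solving" the problem just means answering membership queries.
assert in_palindromes("10101")
assert not in_palindromes("10")
```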

In the early days of theory, problems were usually called **languages**. A language was just a set of strings: $L \subseteq \{0,1\}^*$.

The reason they were called languages initially is based on the early motivations of the field. The initial questions were driven by two types of languages: natural languages and artificial ones. Certainly Noam Chomsky and many others were interested in trying to model real languages, such as English. Others were driven by their interest in computer programming languages. The questions raised in early theory reflected these interests, as did the results. For example, one beautiful idea is called BNF—it stands for Backus-Naur Form. John Backus was interested in ways to describe programming languages like Algol, which led him to create BNF. He also led the important work on FORTRAN; for this and other seminal work he was awarded the Turing Award in 1977.

Properly, a language denotes a *decision problem*. There are also function problems, such as: given a whole number $N$, find its prime factors. But these can be converted to decision problems over strings, such as the set of pairs $\langle N, k \rangle$ such that $k$ is less than some prime factor of $N$. A fast way to solve this decision problem yields a fast way to factor numbers.
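Here is a small Python sketch of that conversion (my own illustration; the decision set is the pairs $\langle N, k \rangle$ with $k$ less than some prime factor of $N$, and for runnability the yes/no oracle is implemented by brute force):

```python
def oracle(N: int, k: int) -> bool:
    """Decide membership of (N, k): is k less than some prime factor of N?
    (Brute-force stand-in for whatever fast decider we imagine having.)"""
    n, p, largest = N, 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    if n > 1:
        largest = n
    return k < largest

def largest_prime_factor(N: int) -> int:
    """Find the largest prime factor of N >= 2 using only yes/no oracle calls."""
    lo, hi = 1, N  # invariant: lo < largest prime factor <= hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if oracle(N, mid):
            lo = mid
        else:
            hi = mid
    return hi

def factor(N: int) -> list:
    """Full factorization by repeatedly extracting the largest prime factor."""
    factors = []
    while N > 1:
        p = largest_prime_factor(N)
        while N % p == 0:
            N //= p
            factors.append(p)
    return sorted(factors)
```

The binary search asks only about $\log_2 N$ yes/no questions per prime factor, so a fast decider for the set really does give a fast factoring algorithm.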

**Classes:**

A family of problems is called a **complexity class**. Thus, $\mathcal{C}$ is a complexity class provided it consists of problems. It is that general, that simple, and that elegant. Any such family is a complexity class. But as in the famous book *Animal Farm* by George Orwell:

> All complexity classes are equal, but some complexity classes are more equal than others.

Our friends at Wikipedia give a much more restrictive definition of a complexity class:

> In computational complexity theory, a complexity class is a set of problems of related resource-based complexity.

I think the above definition is fine for helping pin down the notion of *complexity*, but for *class* it is too restrictive in two ways. First, not all interesting classes are tied to a known computational resource. The notion of which resources are important itself keeps changing and growing. In the beginning there was just time and space; then we added nondeterminism, then probabilities of various sorts, then interaction, stream-ability, and so on. Perhaps energy is next.

Second, the notion of “related” better applies to the idea of a complexity *level*. The $\mathsf{NP}$-complete problems are all closely related and form a level as well as a complexity class, but $\mathsf{NP}$ is a complexity class with levels that are quite unrelated. Of course, relatedness depends on the level of fine detail—compared to the primitive-recursive sets, all of $\mathsf{NP}$ is relatively easy—but saying “related” misses the larger point, I think. The notion of class identifies problems that share important characteristics, which may not be resource-bound and may not be “equal” even in Orwell's sense.

I like having a general view of what a complexity class is, to focus on what makes problems *interesting*. In any event, not all possible complexity classes are interesting, and we will soon see some things that make a class, well, “classy.”

**How Many Classes Are There?**

The answer to this question depends on what you mean. The cardinality of the set of strings is countable, $\aleph_0$; the cardinality of the set of problems is $2^{\aleph_0}$; and the cardinality of the set of complexity classes is $2^{2^{\aleph_0}}$.
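Spelling out the counting behind these numbers (a standard cardinal-arithmetic computation):

```latex
% Strings: finitely many of each length, countably many in all.
|\{0,1\}^*| = \aleph_0
% Problems: arbitrary subsets of the set of strings.
|\mathcal{P}(\{0,1\}^*)| = 2^{\aleph_0}
% Classes: arbitrary families of problems.
|\mathcal{P}(\mathcal{P}(\{0,1\}^*))| = 2^{2^{\aleph_0}}
```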

Of course most of these classes are uninteresting. But I thought it would be fun to point out the sizes involved. If nothing else, it is clear there is a hierarchy here: there are many more complexity classes than problems, and many more problems than strings.

**Special Classes**

Since there are $2^{2^{\aleph_0}}$ complexity classes and only a finite number of people doing research in complexity theory, we must focus on some of the classes. There is no single rule that I know of that makes a complexity class “interesting.” But some classes are definitely more equal than others.

At the top level some classes seem to be very natural, others seem less natural but help explain the structure of other classes, and some classes exist for more practical reasons. I think some exist because we can prove something about them, while other more natural classes are too difficult to understand.

There is a great resource for those interested in complexity classes—especially the subset of the classes that people have considered. It is called the “Complexity Zoo” and is run by:

- Zookeeper: Scott Aaronson
- Veterinarian: Greg Kuperberg
- Tour Guide: Christopher Granade

They say that “there are now 493 classes and counting!” Of course we know that there are many more classes, but 493 is the count of the “interesting” ones.

One of the things that makes the Complexity Zoo different from a normal zoo is that sometimes classes disappear. Sometimes we discover that two complexity classes are really the same—I do not know if this happens in real zoos. Do zookeepers find that animals they thought were different are really exactly the same? This does happen in our world. There was, for example, a time when $\mathsf{NL}$ and $\mathsf{coNL}$ were thought to be separate, but we now know they are the same. That classes can disappear seems to be one of the cool properties of our world.

**Properties Of Complexity Classes**

One of the properties that makes a complexity class special is having a complete problem. Suppose that $\mathcal{C}$ is a class. If there is a single problem in the class that captures the entire class, this is very neat. That a complexity class can be represented by *one* problem is quite special. This happens, of course, for the class $\mathsf{NP}$: the famous SAT problem, among many others, is $\mathsf{NP}$-complete. All the intricate structure of $\mathsf{NP}$ is captured by SAT.

The precise definition of a complete problem $A$ for a class $\mathcal{C}$ depends on the important notion of “reduction.” For $A$ to be complete it must have two properties:

- The problem $A$ must be in the class $\mathcal{C}$.
- Given any problem $B$ in the class, there must be a reduction from $B$ to $A$.

Note, the type of reduction can and does vary: there are polynomial-time reductions, Turing reductions, reductions based on lower complexity classes, and so on. The intuitive notion is, however, simple: $B$ reduces to $A$ provided the ability to solve the problem $A$ allows one to solve $B$.
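As a toy illustration (my own example, not from the post), here is a classic many-one reduction in Python: deciding whether a graph has an independent set of size $k$ reduces to deciding whether a graph has a clique of size $k$, simply by complementing the edge set.

```python
from itertools import combinations

def complement(n, edges):
    """The reduction: map an INDEPENDENT-SET instance to a CLIQUE instance."""
    all_pairs = {frozenset(e) for e in combinations(range(n), 2)}
    return all_pairs - {frozenset(e) for e in edges}

def has_clique(n, edges, k):
    """Brute-force solver for the target problem CLIQUE."""
    es = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in es for p in combinations(c, 2))
               for c in combinations(range(n), k))

def has_independent_set(n, edges, k):
    """Solve the source problem using only the reduction and a CLIQUE solver."""
    return has_clique(n, complement(n, edges), k)

# A 4-cycle 0-1-2-3-0 has the independent set {0, 2} but none of size 3.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert has_independent_set(4, cycle, 2)
assert not has_independent_set(4, cycle, 3)
```

The point of the pattern is exactly the intuition above: the ability to solve the target problem (CLIQUE) gives the ability to solve the source problem (INDEPENDENT-SET), because the mapping preserves yes/no answers.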

I wanted a name for a complexity class that does have this special property of having a complete problem. I have tried to create a good name, but I do not know one that describes exactly this property. I thought it might make sense to call such a class a $K$-class, after the usual symbol used for one of the first complexity classes ever discovered to have a complete problem. In recursion theory—see Rogers' book—the letter $K$ is used to denote the set of Gödel numbers of those Turing machines that halt on their own input. If there is a known name, or a better one, let me know. For now a class is a $K$-class if it has a complete problem.

In Rogers' book various notions of completeness are defined. There are, by the way, important classes that are not known to be $K$-classes. Perhaps the most famous, notorious even, is $\mathsf{BPP}$. This is the class of problems accepted by polynomial-time probabilistic machines that have two-sided bounded error. One of the great open problems is: does this class have a complete problem? Another is the same question for $\mathsf{NP} \cap \mathsf{coNP}$.

All of this is laid out clearly in Rogers' book, although he does not state some of it as explicitly as I have. He assumes natural numbers and does not start with Boolean strings. But the notion of problems is clear, as is the notion of complexity classes.

**Open Problems**

There are countless open problems. The main questions fall into two types: what are the relationships between various classes, and what are the properties of a given class? The most famous example of the former is the $\mathsf{P} = \mathsf{NP}$ question, and the most famous of the latter is the question whether $\mathsf{BPP}$ has a complete problem, is it a $K$-problem?

Of course, since we have reasons to believe that BPP might “disappear” by being equal to P, we have reasons to believe that BPP is indeed a K-class. Of course, it is conceivable that we show that BPP is a K-class without showing it equal to P yet; that would be fun.

Work from Sipser and from Hartmanis and Hemachandra (later Hemaspaandra) gives us some evidence that BPP is a K-class. As for P being equal to BPP implying BPP-complete problems: I have not made up my mind which way the P versus BPP question goes—in fact every day I have a different intuition; and for this reason I find P versus BPP a much more interesting problem than P versus NP. Determining whether BPP has complete problems is one of my long-term research goals. I hope someone clever solves it before I pass away if I can not (of course time is on my side).

Dick, thank you for this blog post. I’m amazed you were able to cover so much material so compactly. And it was funny!

Whoops. Small typo in the first sentence – *isn’t a K-class

Ross,

Thanks. It is only possible because of great readers like you. Thanks to you.

The terminology of order theory might help with the problem of naming such classes.

With the set of all languages as our underlying set, and the relation “can be reduced to” as the ordering relation (where the reductions can be chosen to be whatever is appropriate), we get a poset. Natural complexity classes become ideals in this poset. An ideal, in this context, is a set of languages C that is closed downwards (i.e., if y is in C, and x reduces to y, then x is also in C) and for every x and y in C, there should exist a z in C such that x and y both reduce to z (this is true of natural complexity classes).

Complexity classes that have a complete problem are exactly principal ideals in this set, and those that do not are not principal ideals. (An ideal is principal if it is the smallest ideal that contains a given element p.)

Doesn’t BNF stand for Backus-Naur form, not Backus normal form?

Also, is there some requirement that a complexity class be associated with some kind of measure on the space of problems? I was under the impression that Manuel Blum had proposed such a definition at one point in time, although his phrasing was too much in terms of recursion theory for it to make sense to me.

Gilbert,

I believe both are correct. I learned one…

Gilbert,

Knuth wrote a letter to the Editor of CACM (Volume 7 Issue 12, Dec. 1964) in defence of the name Backus-Naur form.

In defining completeness, it is possible to start two steps earlier than is customary: (i) define completeness not for classes, but for class inclusions, and (ii) say nothing about reductions.

Definition. Problem $A$ is complete for $C_1 \supseteq C_2$ if and only if:

1. $A\in C_2$ and

2. if $A\in C_1$ then $C_1\supseteq C_2$.

(Thus, e.g., if a problem is “$NP$-complete” under any of the standard definitions, then it is complete for the inclusion $P \supseteq NP$ —note that I am not claiming the converse.)

This definition captures all the original motivation behind completeness: it identifies the problems for which the inclusion is equivalent to the concrete question whether $A\in C_1$. And that’s all we ask of a complete problem: to witness the separation iff the separation holds, so that we can safely invest all our energy in its study.

Moreover, the above definition assumes nothing more than the original motivation. In particular, it says nothing about reductions. Hence, it leaves room for the following question: Is there completeness (in the above sense) without reductions? The standard presentations skip this issue completely.

Can you talk a little about why we are so opposed to using promise problems as complete problems? For instance, the following problem is BPP-complete: given a probabilistic Turing machine M and an input x, let P be the probability that M will return “yes”. If P is greater than 2/3, the problem-solver must say “yes”; if it is less than 1/3, they must say “no”; and if P is between 1/3 and 2/3, they are allowed to say anything at all.

It’s true that the promise that P is either greater than 2/3 or less than 1/3 is semantic and not (a priori) syntactic, and thus is difficult to verify. But by making that promise, or equivalently letting the problem-solver off the hook if the promise is false, it seems we get a perfectly reasonable “problem” which is BPP-complete. Why is this so bad? Or rather, why is the distinction between promise and non-promise problems, and the complexity classes for which they are complete, viewed as so fundamental?

Collapse of species happens with dinosaurs: I still have trouble with the fact that the brontosaurus that I grew up with is an apatosaurus. Don’t know what this implies for complexity classes.

One more typo-fix (was missed in my edits): the very last line of the post should be “is [BPP] a K-class?” Ah—I see Ross has noticed above, except his “first” shall be “last”…

Cris Moore has also picked up on a point Subruk raised while we were editing but decided not to suggest adding: the distinction between “syntactic” and “semantic” classes. This seems to align with the existence of a complete problem, but formalizing this connection strikes me as elusive. If I recall (and events today kept me from more than recalling), some papers in the 1990s, including ones with Lane Hemaspaandra among the authors, tried to pin this down using oracle definability.

I have mixed feelings about using promise problems for completeness—they seem to mislead my structural intuition as much as help it. The class UAP, which is the analogue of UP when NTMs are replaced by alternating TMs, was introduced by Klaus-Jörn Lange and Peter Rossmanith in 1994. It has as a complete “promise set” the language of true quantified Boolean formulas, restricted to the promise that every accepting existential node in the corresponding alternating computation has a unique accepting choice. The issue, of course, is that the promise set itself seems more complex than the truth property for these formulas.

Syntactic classes are the same things as K-classes. This is because we can enumerate reductions, for the natural notions of reductions that are studied. I believe this was shown by Hartmanis and Hemachandra.

Not only classes, but also classes of classes can disappear…

caRh,

The way I see completeness (may I say: “feel about” completeness?) matches precisely the poset definition described by Robin Kothari in his comment, so I won't repeat it.

The point is that complete problems are a component in a nice order structure of classes, which is an elegant notion and a good tool for thinking, so I would not give that up.

I think that the property defined in your comment deserves a distinct name, say “a separation witness”. To compare the two notions, consider the standard definition of NP-completeness: Ladner proved that if $P \neq NP$, there are separation witnesses which are not complete problems. This is an important result and, to me, an illustration of the necessity to separate the terms.

Furthermore, since your definition only uses implication (in no. 2) but does not require that the implication be proved by reduction, it follows that for all $C_1, C_2$ a witness for $C_1 \supseteq C_2$ exists. Thus adopting this as a definition also loses the insight that we gain from learning that certain classes have complete problems and others do not.

(sorry, everybody, for getting my comment into the wrong branch! computers are so difficult to use)

As I was first taught complexity theory, I learned a quite restrictive notion of “time complexity class” and “space complexity class” (for both either deterministic or nondeterministic). Actually, the only existing (non)deterministic time or space complexity classes were DTIME(t(n)), NTIME(t(n)), DSPACE(s(n)) and NSPACE(s(n)) for some t and n.

With this restrictive notion of complexity classes, it is not that clear that the most famous P, NP, PSPACE and so on are true complexity classes. But the “Union Theorem” (I don't know if this theorem has another better name) comes to our rescue, saying that there exists a function f such that the union over i of DTIME(f_i(n)) equals DTIME(f(n)) as soon as for all i and n, f_i(n) < f_{i+1}(n) (and the same holds for NTIME, DSPACE, NSPACE).

I'm kind of aware that this result is quite old-school and not so useful but I definitely like the idea that there exist some functions t_P, t_NP or s_PSPACE such that P=DTIME(t_P), NP=NTIME(t_NP) and PSPACE=DSPACE(s_PSPACE) for instance.

I don't know when and by whom this result was proved.

Typo: “Actually, the only existing (non)deterministic time or space complexity classes were DTIME(t(n)), NTIME(t(n)), DSPACE(s(n)) and NSPACE(s(n)) for some t and s.”

I learned recursion theory from Hartley Rogers. I took three classes from him in my senior year at MIT. His lectures were much better than his book, which is saying a lot. He was perhaps the best teacher I ever had; certainly the clearest lecturer.

Claude Shannon deserves at least some of the credit for the big idea of using boolean strings to represent our basic objects. It was quite controversial when Shannon represented signals, which were thought of as analog at the time, by boolean strings. I’ve heard it said that this was one of Shannon’s most important contributions.

A few days back I had to explain complexity to students with no computation background. Although I got them to understand the basics, I wish I had read this blog post before the lecture and not after.

As for classes with complete problems, the name should suggest that the class's complexity is captured in a single problem.

As for classes with complete problems, the first name that comes to mind is complexity-preserving “simulable”, which is not that catchy.

Names for K-classes: “idempotent”, or possibly “reflective”. Attributes for classes: “subprobative, superprobative, idemprobative”, according to whether the computational complexity of witnessing an algorithm's membership in the class is strictly easier than the class, within the class, or harder than the class. E.g., “P is believed to be superprobative, while circuit-classes are subprobative.” From the Latin probare: approve (of), esteem/commend/recommend/certify; give assent/approval/sanction.

I disagree that a “complexity class” is an arbitrary set of problems. Rather they should be defined using some notion of computation. In particular all complexity classes are countable, so there are only $2^{\aleph_0}$ complexity classes.

Lance,

I understand. But if we add random oracles to the mix, does that not change things…

Of course there are few interesting ones.

Not really. For any oracle A, random or otherwise, NP^A is countable.

There might be an uncountable number of complexity classes but each one is countable.

i guess we know who he supported in the priority dispute….

When we are at the topic of Professor Hartley Rogers, I feel compelled to share this story that once happened in his class at MIT. (I found the link to this interesting anecdote one day on wikipedia)

The URL is The Hartley Rogers Fencing Story

If you do not feel like going to the URL, here is what appears on that page [copied and pasted]

“Every student at the Massachusets Institute of Technology is required to take certain core classes. One of these notorious freshman classes is 18.03, or Differential Equations, taught by one Hartley Rogers. According to rumor, Hartley is a direct decendent of William Barton Rogers, the founder of MIT, and there are students at MIT now whose parents took 18.03 from him when they were freshmen.

Hartley is one of those love-him-or-hate-him professors, and no freshman gets through the class without having at least one long debate about whether or not Hartley can teach. And that’s just how it was when a pair of freshman hackers were sitting around in the Coffeehouse one evening tooling on some problem sets. One student would say “He’s the worst professor I’ve ever had! He’s boring, has terrible handwriting, and he can’t explain anything!” And another would reply “What are you talking about? Hartley’s great! He’s the best teacher I’ve ever seen!” And the argument would continue. This gave them an idea.

A few weeks later, there comes a morning with a perfectly ordinary 18.03 lecture. Hartley is standing next to the board, writing with his oversized lecturer’s chalk and muttering into the microphone. The students in the first rows are diligently taking notes, the students in the middle rows just as diligently passing notes, and the students in the back of the lecture hall are catching up on the previous night’s sleep, when suddenly, the door in the back of the lecture hall bursts open and a figure comes running in. He’s dressed head-to-toe in black, with a black fencing mask and a saber, and he races down the stairs on the side of the lecture hall, calling out: “Hartley Rogers! For too many years you have bored the freshmen, put them to sleep with your matrices and your differential equations, and I’ve come to put a stop to it! Defend yourself!”

Now Hartley Rogers is standing there looking astonished, and the students have all realized by now that there’s something more interesting than mathematics going on. Before Hartley can reply, however, the other door to the lecture hall burts open and another figure comes running in. This one is dressed from head to toe in white, with a white mask and carrying a saber, and as he runs down the stairs into the hall he cries out “Hartley Rogers! For years, you have taught the freshmen well, and given them the tools they need to learn. I will defend your honor!”

And as the students and Hartley look on, the two fencers have a furious swordfight back and forth across the classroom. Finally the swordsman in white disarms the swordsman in black, and the two mysterious figures disappear again, and Hartley goes back to his lecture.”

I am truly a novice in this field, and know very little about it, so I apologize in advance if this question is misplaced:

If a problem is merely a set of strings, what constitutes a solution to a problem?

A solution is generally a “computational device” of some kind that can tell whether a given string x is in the set or not.

I took the recursion theory course from Hartley Rogers, and I enjoyed it so much that I did so *again*. (I think I also took a third, follow-up course in “higher recursion theory” from him.) Both iterations were after the book was already out; but all three were quite different, and we hardly touched the book. Notes copied straight from the board each time, however, could easily pass for a textbook as good as the one we ignored. No one in a *recursion theory* course taught by Professor Rogers could possibly think he wasn’t a supremely clear and inspiring teacher.

I’m not a theorist, or even a mathematician, but i loved this. Very clear. Classy. I follow this blog regularly. Thank you.