
What Is A Complexity Class?

November 7, 2010


The basic structure of complexity theory: strings, problems, classes

Hartley Rogers is a mathematician at MIT who wrote arguably the definitive book on basic and advanced aspects of recursion theory. His book entitled Theory of Recursive Functions and Effective Computability is a classic. He also had many strong Ph.D. students over the years, including several who are well known computer scientists: Patrick Fischer, David Luckham, and Paul Young.

Today I plan on talking about the foundations of complexity theory. Several commentators have asked more than once, “what is a complexity class?” I will try to answer that and a bit more.

I learned recursion theory from Rogers’ book. I learned early on to beware of exercises marked with the scary {\blacktriangle} symbol. These were very hard problems. There may be more recent treatments of recursion theory, but to me Rogers’ book will always be one of the great textbooks on any subject.

A recent New York Times front-page article on live vs. distance learning makes two main points: more and more schools are using on-line technology, driven largely by lack of funds, and there is less and less difference being found between live and on-line learning. While I was never in a live class with Rogers, I wish I had been. There are many stories of how he taught, but my favorite is that he rewarded top students with Leibniz cookies.

I guess that will always be hard to do in an on-line environment—perhaps one day it will be possible to have a cookie roll out of your laptop. I do support experiments that are trying to make better use of technology to teach, but getting a chocolate cookie seems hard to beat.

Let’s move on to the foundations of complexity theory, and I will connect back to Rogers later.

The Big Three Ideas

Complexity theory is based on three ideas. They are so built into our “DNA” as theorists that I think we sometimes forget how important they are. There are other key ideas, but these three come first, in my opinion.

{\bullet } Boolean strings: I already got in trouble over this one here, but I will try to state it again. The starting insight is that we can use Boolean strings to represent all our basic objects. From integers to matrices, from permutations to groups, we can and do use strings of {0}‘s and {1}‘s to represent them. Strings are the basic building blocks that we use every day in complexity theory. We say all the time, Let {x} be a natural number {\dots} It is implicitly assumed that everyone will convert that to Let {x} be a string {x_1 x_2 \dots x_m } of {0}‘s and {1}‘s that encodes the number in the usual way. I believe the first to talk at length about binary numbers was the great Gottfried Leibniz. He was not the first to discover the binary system, but he did write about it in his Explication de l’Arithmétique Binaire in 1703. Also see the original here.
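To make the usual encoding concrete, here is a minimal sketch in Python; the function names are mine, purely for illustration:

# A sketch of the standard encoding: natural numbers as strings of 0's and 1's.
def nat_to_string(n: int) -> str:
    # Binary representation without Python's "0b" prefix, e.g., 13 -> "1101".
    return bin(n)[2:]

def string_to_nat(x: str) -> int:
    # Decode a Boolean string back to the natural number it represents.
    return int(x, 2)

assert nat_to_string(13) == "1101" and string_to_nat("1101") == 13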

{\bullet } Problems:

The definition of a problem is simple: a problem is nothing more than a set of strings. That is it. Problems often consist of Boolean strings, but we can allow arbitrary finite alphabets, since any alphabet can be encoded into a Boolean one.

This is one of the simplest definitions in mathematics, yet it is really quite important. The notion of a problem as a formal mathematical object allows us to study problems; without this definition a problem would be a vague notion. I think that one of the great achievements of theory is the formalization of the notion of a “problem.”

Before this definition people might have studied a problem like: “how do we assign tasks to machines?” or “how do we find the shortest path in a graph?” But these were not precise mathematical objects. So even though the definition is almost trivial, it is very important.

In the early days of theory problems were usually called languages. A language {L} was just a set of strings:

\displaystyle  L \subseteq \Sigma^* .
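As a toy illustration (my example, not part of the definition), a language can be identified with its membership predicate. Here is the language of Boolean strings that encode even numbers:

# L = { x in {0,1}* : x encodes an even number }: nonempty strings ending in 0.
def in_L(x: str) -> bool:
    return len(x) > 0 and all(c in "01" for c in x) and x.endswith("0")

assert in_L("110")        # 6 is even
assert not in_L("1101")   # 13 is odd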

The reason they were called languages initially is based on the early motivations of the field. The initial questions were driven by two types of languages: natural languages and artificial ones. Certainly Noam Chomsky and many others were interested in trying to model real languages, such as English. Others were driven by their interest in computer programming languages. The questions raised in early theory reflected these interests, as did the results. For example, one beautiful idea is called BNF, which stands for Backus-Naur Form. John Backus was interested in ways to describe programming languages like Algol, which led him to create BNF. He also did important work on FORTRAN; for this and other seminal work he was awarded the Turing Award in 1977.

Properly speaking, a language denotes a decision problem. There are also function problems, such as: given a whole number {N}, find its prime factors. But these can be converted to decision problems over strings, such as the set {\mathrm{F} = \{N\#w: w} is less than some prime factor of {N\}}. A fast way to solve {\mathrm{F}} yields a fast way to factor numbers.
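To spell out that last claim, here is a hedged sketch of how a decision procedure for {\mathrm{F}} yields the largest prime factor of {N} by binary search. The oracle below is simulated by trial division purely so the code runs; in the intended application it would be whatever fast solver for {\mathrm{F}} one is given:

# Sketch: recover a prime factor of N from a decision oracle for
# F = { N#w : w is less than some prime factor of N }.
def in_F(N: int, w: int) -> bool:
    # Stand-in oracle (trial division, for illustration only): returns True
    # iff w is less than the largest prime factor of N.
    largest, m, d = None, N, 2
    while d * d <= m:
        while m % d == 0:
            largest, m = d, m // d
        d += 1
    if m > 1:
        largest = m
    return largest is not None and w < largest

def largest_prime_factor(N: int) -> int:
    # Binary search; invariant: the oracle accepts lo and rejects hi (N >= 2).
    lo, hi = 0, N
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if in_F(N, mid):
            lo = mid
        else:
            hi = mid
    return lo + 1  # lo is the largest w below the factor, so the factor is lo + 1

assert largest_prime_factor(91) == 13  # 91 = 7 * 13

Each call halves the search interval, so about {\log_2 N} oracle calls suffice, which is polynomial in the length of {N}; dividing out the factor found and repeating recovers the whole factorization.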

{\bullet } Classes:

A family of problems is called a complexity class. Thus, {\mathsf{C}} is a complexity class provided it consists of problems. It is that general, that simple, and that elegant. Any such family is a complexity class. But as in the famous book Animal Farm by George Orwell:

All complexity classes are equal, but some complexity classes are more equal than others.

Our friends at Wikipedia give a much more restrictive definition of a complexity class:

In computational complexity theory, a complexity class is a set of problems of related resource-based complexity.

I think the above definition is fine for helping pin down the notion of complexity, but for class it is too restrictive, in two ways. First, not all interesting classes are related by a known computational resource. The notion of which resources are important itself keeps changing and growing. In the beginning there was just time and space; then we added nondeterminism, then probabilities of various sorts, then interaction, stream-ability, and so on. Perhaps energy is next.

Second, the notion of “related” better applies to the idea of a complexity level. The {\mathsf{NP}}-complete problems are all closely related and form a level as well as a complexity class, but {\mathsf{EXPTIME}} is a complexity class with levels that are quite unrelated. Of course, relatedness depends on the level of fine detail—compared to the primitive-recursive sets, all of {\mathsf{EXPTIME}} is relatively easy—but saying “related” misses the larger point, I think. The notion of class identifies problems that share important characteristics, which may not be resource-bounded and may not be “equal” even in Orwell’s sense.

I like having a general view of what a complexity class is, to focus on what makes problems interesting. In any event, not all possible complexity classes are interesting, and we will soon see some things that make a class, well, “classy.”

How Many Classes Are There?

The answer to this question depends on what you mean. The cardinality of the set of strings is {\aleph_0}, the cardinality of the set of problems is {2^{\aleph_0}}, and the cardinality of the set of complexity classes is {2^{2^{\aleph_0}}}.
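In symbols, since problems are sets of strings and classes are sets of problems, the count is just two iterated power sets:

\displaystyle  |\Sigma^*| = \aleph_0, \qquad |\mathcal{P}(\Sigma^*)| = 2^{\aleph_0}, \qquad |\mathcal{P}(\mathcal{P}(\Sigma^*))| = 2^{2^{\aleph_0}}.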

Of course most of these classes are uninteresting. But I thought it would be fun to point out the sizes involved. If nothing else, it is clear there is a hierarchy here: there are many more complexity classes than problems, and many more problems than strings.

\displaystyle  \text{ strings } \rightarrow \large\text{ problems } \rightarrow \huge\text{ complexity classes}.

Special Classes

Since there are {2^{2^{\aleph_0}}} complexity classes and there are only a finite number of people doing research in complexity theory, we must focus on some of the classes. There is no single rule that I know that makes a complexity class “interesting.” But some classes are definitely more equal than others.

At the top level some classes seem to be very natural, others seem less natural but help explain the structure of other classes, and some classes exist for more practical reasons. I think some exist because we can prove something about them, while other more natural classes are too difficult to understand.

There is a great resource for those interested in complexity classes—especially the subset of the classes that people have considered. It is called the “Complexity Zoo” and is run by:

  • Zookeeper: Scott Aaronson
  • Veterinarian: Greg Kuperberg
  • Tour Guide: Christopher Granade

They say that “there are now 493 classes and counting!” Of course we know that there are many more classes, but 493 is the count of the “interesting” ones.

One of the things that makes the Complexity Zoo different from a normal zoo is that sometimes classes disappear. Sometimes we discover that two complexity classes are really the same—I do not know if this happens in real zoos. Do zookeepers find that animals they thought were different are really exactly the same? This does happen in our world. There was, for example, a time when {\mathsf{IP}} and {\mathsf{PSPACE}} were thought to be separate, but we now know they are the same. That classes can disappear seems to be one of the cool properties of our world.

Properties Of Complexity Classes

One of the properties that makes a complexity class special is having a complete problem. Suppose that {\mathsf{C}} is a class; if there is a single problem {S} in the class that captures the entire class, this is very neat. That a complexity class can be represented by one problem is quite special. This happens, of course, for the class {\mathsf{NP}}: the famous SAT problem, among many others, is {\mathsf{NP}}-complete. All the intricate structure of {\mathsf{NP}} is captured by SAT.

The precise definition of being a complete problem {S} for a class {\mathsf{C}} depends on the important notion of “reduction.” For {S} to be complete it must have two properties:

  1. The problem {S} must be in the class {\mathsf{C}}.
  2. Given any problem {T} in the class there must be a reduction from {T} to {S}.

Note that the type of reduction can and does vary: there are polynomial-time reductions, Turing reductions, reductions based on lower complexity classes, and so on. The intuitive notion is, however, simple: {T} reduces to {S} provided the ability to solve {S} allows one to solve {T}.
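As a toy sketch of the simplest kind, a many-one reduction from {T} to {S} is an easily computable map {f} with {x \in T} if and only if {f(x) \in S}. The example below is mine and purely illustrative: “multiples of four” reduces to “even numbers” in binary.

# Toy many-one reduction: T = multiples of 4, S = even numbers, in binary.
def in_S(x: str) -> bool:
    # S: binary strings encoding even numbers; they end in 0.
    return x.endswith("0")

def f(x: str) -> str:
    # The reduction: x is a multiple of 4 iff x with its last bit dropped is even.
    return x[:-1] if len(x) > 1 else x

def in_T(x: str) -> bool:
    # Solving T needs nothing beyond computing f and the ability to solve S.
    return in_S(f(x))

assert in_T("1000") and not in_T("110")  # 8 is a multiple of 4; 6 is not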

I wanted a name for a complexity class that does have this special property: having a complete problem. I have tried to create a good name, but I do not know one that describes exactly this property. I thought it might make sense to call such a class a {K}-class, after the usual symbol for the complete problem of one of the first classes ever discovered to have one. In recursion theory—see Rogers’ book—the letter {K} is used to denote the set of Gödel numbers of those Turing machines that halt on input {0}. If there is a known name, or a better one, let me know. For now a class is a {K}-class if it has a complete problem.
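Spelled out in recursion-theoretic notation, with {M_e} denoting the Turing machine with Gödel number {e}:

\displaystyle  K = \{ e : M_e \text{ halts on input } 0 \}.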

In Rogers’ book various notions of completeness are defined. There are, by the way, important classes that are not known to be {K}-classes. Perhaps the most famous, notorious even, is {\mathsf{BPP}}. This is the class of problems accepted by polynomial-time probabilistic machines that have two-sided bounded error. One of the great open problems is: does this class have a complete problem? Another is the same question for {\mathsf{NP} \cap \mathsf{co-NP}}.
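For reference, the standard two-sided bounded-error condition: a problem {L} is in {\mathsf{BPP}} if there is a polynomial-time probabilistic machine {M} with

\displaystyle  x \in L \Rightarrow \Pr[M(x) \text{ accepts}] \geq 2/3, \qquad x \notin L \Rightarrow \Pr[M(x) \text{ accepts}] \leq 1/3.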

All of this is laid out clearly in Rogers’ book, although he does not state some of what I have said as explicitly. He works with natural numbers rather than starting with Boolean strings. But the notion of a problem is clear, as is the notion of a complexity class.

Open Problems

There are countless open problems. The main questions fall into two types: what are the relationships between various classes, and what are the properties of a given class? The most famous example of the former is the {\mathsf{P} = \mathsf{NP}} question, and the most famous of the latter is whether {\mathsf{BPP}} has a complete problem, that is, whether it is a {K}-class.


36 Comments
  1. Albert Atserias permalink
    November 7, 2010 12:00 pm

Of course, since we have reasons to believe that BPP might “disappear” by being equal to P, we have reasons to believe that BPP is indeed a K-class. It is conceivable, of course, that we show that BPP is a K-class without yet showing it equal to P; that would be fun.

  2. November 7, 2010 12:36 pm

    Work from Sipser and from Hartmanis and Hemachandra(/spandra) gives us some evidence that BPP is a K-class. As for P being equal to BPP implying BPP-Complete problems: I have not made up my mind which way the P versus BPP question goes – in fact every day I have a different intuition; and for this reason I find the P versus BPP a much more interesting problem than the P versus NP problem. Determining whether BPP has complete problems is one of my long term research goals. I hope someone clever solves it before I pass away if I can not (of course time is on my side).

    Dick, thank you for this blog post. I’m amazed you were able to cover so much material so compactly. And it was funny!

    • November 7, 2010 12:37 pm

      Whoops. Small typo in the first sentence – *isn’t a K-class

    • rjlipton permalink*
      November 7, 2010 2:45 pm

      Ross,

      Thanks. It is only possible because of great readers like you. Thanks to you.

  3. Robin Kothari permalink
    November 7, 2010 1:24 pm

    The terminology of order theory might help with the problem of naming such classes.

With the set of all languages as our underlying set, and the relation “can be reduced to” as the ordering relation (where the reductions can be chosen to be whatever is appropriate), we get a poset. Natural complexity classes become ideals in this poset. An ideal, in this context, is a set of languages C that is closed downwards (i.e., if y is in C, and x reduces to y, then x is also in C) and such that for every x and y in C, there exists a z in C such that x and y both reduce to z (this is true of natural complexity classes).

    Complexity classes that have a complete problem are exactly principal ideals in this set, and those that do not are not principal ideals. (An ideal is principal if it is the smallest ideal that contains a given element p.)

  4. November 7, 2010 2:09 pm

    Doesn’t BNF stand for Backus-Naur form, not Backus normal form?

    Also, is there some requirement that a complexity class be associated with some kind of measure on the space of problems? I was under the impression that Manuel Blum had proposed such a definition at one point in time, although his phrasing was too much in terms of recursion theory for it to make sense to me.

    • rjlipton permalink*
      November 7, 2010 2:43 pm

      Gilbert,

      I believe both are correct. I learned one…


    • Alessandro permalink
      November 7, 2010 2:58 pm

      Gilbert,
      Knuth wrote a letter to the Editor of CACM (Volume 7 Issue 12, Dec. 1964) in defence of the name Backus-Naur form.

  5. caRh permalink
    November 7, 2010 3:00 pm

    In defining completeness, it is possible to start two steps earlier than is customary: (i) define completeness not for classes, but for class inclusions, and (ii) say nothing about reductions.

    Definition. Problem $A$ is complete for $C_1 \supseteq C_2$ if and only if:
    1. $A\in C_2$ and
    2. if $A\in C_1$ then $C_1\supseteq C_2$.
    (Thus, e.g., if a problem is “$NP$-complete” under any of the standard definitions, then it is complete for the inclusion $P \supseteq NP$ —note that I am not claiming the converse.)

    This definition captures all the original motivation behind completeness: it identifies the problems for which the inclusion is equivalent to the concrete question whether $A\in C_1$. And that’s all we ask of a complete problem: to witness the separation iff the separation holds, so that we can safely invest all our energy in its study.

    Moreover, the above definition assumes nothing more than the original motivation. In particular, it says nothing about reductions. Hence, it leaves room for the following question: Is there completeness (in the above sense) without reductions? The standard presentations skip this issue completely.

  6. Cristopher Moore permalink
    November 7, 2010 3:19 pm

    Can you talk a little about why we are so opposed to using promise problems as complete problems? For instance, the following problem is BPP-complete: given a probabilistic Turing machine M and an input x, let P be the probability that M will return “yes”. If P is greater than 2/3, the problem-solver must say “yes”; if it is less than 1/3, they must say “no”; and if P is between 1/3 and 2/3, they are allowed to say anything at all.

    It’s true that the promise that P is either greater than 2/3 or less than 1/3 is semantic and not (a priori) syntactic, and thus is difficult to verify. But by making that promise, or equivalently letting the problem-solver off the hook if the promise is false, it seems we get a perfectly reasonable “problem” which is BPP-complete. Why is this so bad? Or rather, why is the distinction between promise and non-promise problems, and the complexity classes for which they are complete, viewed as so fundamental?

  7. Jack permalink
    November 7, 2010 5:42 pm

    Collapse of species happens with dinosaurs: I still have trouble with the fact that the brontosaurus that I grew up with is an apatosaurus. Don’t know what this implies for complexity classes.

  8. November 8, 2010 12:32 am

    One more typo-fix (was missed in my edits): the very last line of the post should be “is [BPP] a K-class?” Ah—I see Ross has noticed above, except his “first” shall be “last”…

Cris Moore has also picked up on a point Subruk raised while we were editing but decided not to suggest adding: the distinction between “syntactic” and “semantic” classes. This seems to align with the existence of a complete problem, but formalizing this connection strikes me as elusive. If I recall (and events today kept me from more than recalling), some papers in the 1990s, including ones with Lane Hemaspaandra among the authors, tried to pin this down using oracle definability.

I have mixed feelings about using promise problems for completeness—they seem to mislead my structural intuition as much as help it. The class UAP, which is the analogue of UP when NTMs are replaced by alternating TMs, was introduced by Klaus-Jörn Lange and Peter Rossmanith in 1994. It has as a complete “promise set” the language of true quantified Boolean formulas, restricted to the promise that every accepting existential node in the corresponding alternating computation has a unique accepting choice. The issue, of course, is that the promise set itself seems more complex than the truth property for these formulas.

    • Rahul permalink
      November 8, 2010 5:18 am

      Syntactic classes are the same things as K-classes. This is because we can enumerate reductions, for the natural notions of reductions that are studied. I believe this was shown by Hartmanis and Hemachandra.

      Not only classes, but also classes of classes can disappear…

    • November 8, 2010 7:16 am

      caRh,
The way I see completeness (may I say: “feel about” completeness?) matches precisely the poset definition described by Robin Kothari in his comment, so I won’t repeat it. The point is that complete problems are a component in a nice order structure of classes, which is an elegant notion and a good tool for thinking, so I would not give that up. I think that the property defined in your comment deserves a distinct name, say a “separation witness”. To compare the two notions, consider the standard definition of NP-completeness: Ladner proved that if $P \ne NP$, there are separation witnesses which are not complete problems. This is an important result and, to me, an illustration of the necessity to separate the terms. Furthermore, since your definition only uses implication (in no. 2) but does not require that the implication be proved by a reduction, it follows that for all $C_1, C_2$ a witness for $C_1 \supseteq C_2$ exists. Thus adopting this as a definition also loses the insight that we gain from learning that certain classes have complete problems and others do not.

      • November 8, 2010 7:18 am

        (sorry, everybody, for getting my comment into the wrong branch! computers are so difficult to use)

  9. November 8, 2010 5:29 am

As I was first taught complexity theory, I learned a quite restrictive notion of “time complexity class” and “space complexity class” (for both either deterministic or nondeterministic). Actually, the only existing (non)deterministic time or space complexity classes were DTIME(t(n)), NTIME(t(n)), DSPACE(s(n)) and NSPACE(s(n)) for some t and s.

With this restrictive notion of complexity classes, it is not that clear that the most famous P, NP, PSPACE and so on are true complexity classes. But the “Union Theorem” (I don’t know if this theorem has another, better name) comes to our rescue, saying that there exists a function f such that the union over i of DTIME(f_i(n)) equals DTIME(f(n)) as soon as for all i and n, f_i(n)<f_{i+1}(n) (and the same holds for NTIME, DSPACE, NSPACE).

    I'm kind of aware that this result is quite old-school and not so useful but I definitely like the idea that there exist some functions t_P, t_NP or s_PSPACE such that P=DTIME(t_P), NP=NTIME(t_NP) and PSPACE=DSPACE(s_PSPACE) for instance.

    I don't know when and by whom this result was proved.


  10. November 8, 2010 7:41 am

    I learned recursion theory from Hartley Rogers. I took three classes from him in my senior year at MIT. His lectures were much better than his book, which is saying a lot. He was perhaps the best teacher I ever had; certainly the clearest lecturer.

  11. November 8, 2010 10:55 am

Claude Shannon deserves at least some of the credit for the big idea of using Boolean strings to represent our basic objects. It was quite controversial when Shannon represented signals, which were thought of as analog at the time, by Boolean strings. I’ve heard it said that this was one of Shannon’s most important contributions.

  12. Harry Zisopoulos permalink
    November 8, 2010 6:03 pm

A few days back I had to explain complexity to students with no computation background. Although I got them to understand the basics, I wish I had read this blog post before the lecture and not after.

As for classes with complete problems, the name should suggest that the complexity of the class is captured in a single problem. The first name that comes to mind is complexity-preserving “simulable”, which is not that catchy.

    • November 9, 2010 5:20 am

      Names for K-classes: “idempotent”, or possibly “reflective”.

Attributes for classes: “subprobative, superprobative, idemprobative”, according to whether the computational complexity of witnessing an algorithm’s membership in the class is strictly easier than the class, within the class, or harder than the class.

      E.g., “P is believed to be superprobative, while circuit-classes are subprobative.”

From the Latin probare: approve (of), esteem/commend/recommend/certify; give assent/approval/sanction.

  13. Lance permalink
    November 9, 2010 10:58 am

    I disagree that a “complexity class” is an arbitrary set of problems. Rather they should be defined using some notion of computation. In particular all complexity classes are countable so there are only 2^{aleph_0} complexity classes.

    • rjlipton permalink*
      November 9, 2010 2:26 pm

      Lance,

      I understand. But add random oracles to the mix, does that not change things…

      Of course there are few interesting ones.

      • Lance permalink
        November 9, 2010 3:24 pm

        Not really. For any oracle A, random or otherwise, NP^A is countable.

        There might be an uncountable number of complexity classes but each one is countable.

  14. Aaron Davies permalink
    November 10, 2010 4:47 pm

    he rewarded top students with Leibniz cookies

    i guess we know who he supported in the priority dispute….


  16. November 10, 2010 9:42 pm

When we are on the topic of Professor Hartley Rogers, I feel compelled to share this story that once happened in his class at MIT. (I found the link to this interesting anecdote one day on Wikipedia.)

The URL is The Hartley Rogers Fencing Story

    If you do not feel like going to the URL, here is what appears on that page [copied and pasted]

    “Every student at the Massachusets Institute of Technology is required to take certain core classes. One of these notorious freshman classes is 18.03, or Differential Equations, taught by one Hartley Rogers. According to rumor, Hartley is a direct decendent of William Barton Rogers, the founder of MIT, and there are students at MIT now whose parents took 18.03 from him when they were freshmen.
    Hartley is one of those love-him-or-hate-him professors, and no freshman gets through the class without having at least one long debate about whether or not Hartley can teach. And that’s just how it was when a pair of freshman hackers were sitting around in the Coffeehouse one evening tooling on some problem sets. One student would say “He’s the worst professor I’ve ever had! He’s boring, has terrible handwriting, and he can’t explain anything!” And another would reply “What are you talking about? Hartley’s great! He’s the best teacher I’ve ever seen!” And the argument would continue. This gave them an idea.

    A few weeks later, there comes a morning with a perfectly ordinary 18.03 lecture. Hartley is standing next to the board, writing with his oversized lecturer’s chalk and muttering into the microphone. The students in the first rows are diligently taking notes, the students in the middle rows just as diligently passing notes, and the students in the back of the lecture hall are catching up on the previous night’s sleep, when suddenly, the door in the back of the lecture hall bursts open and a figure comes running in. He’s dressed head-to-toe in black, with a black fencing mask and a saber, and he races down the stairs on the side of the lecture hall, calling out: “Hartley Rogers! For too many years you have bored the freshmen, put them to sleep with your matrices and your differential equations, and I’ve come to put a stop to it! Defend yourself!”

    Now Hartley Rogers is standing there looking astonished, and the students have all realized by now that there’s something more interesting than mathematics going on. Before Hartley can reply, however, the other door to the lecture hall burts open and another figure comes running in. This one is dressed from head to toe in white, with a white mask and carrying a saber, and as he runs down the stairs into the hall he cries out “Hartley Rogers! For years, you have taught the freshmen well, and given them the tools they need to learn. I will defend your honor!”

    And as the students and Hartley look on, the two fencers have a furious swordfight back and forth across the classroom. Finally the swordsman in white disarms the swordsman in black, and the two mysterious figures disappear again, and Hartley goes back to his lecture.”

  17. A.A. permalink
    November 18, 2010 4:41 am

    I am truly a novice in this field, and know very little about it, so I apologize in advance if this question is misplaced:

    If a problem is merely a set of strings, what constitutes a solution to a problem?

    • rjlipton permalink*
      November 19, 2010 8:22 am

      A solution is generally a “computational device” of some kind that can tell if an x is in the set or not.

  18. Joel Seiferas permalink
    November 21, 2010 11:22 pm

    I took the recursion theory course from Hartley Rogers, and I enjoyed it so much that I did so *again*. (I think I also took a third, follow-up course in “higher recursion theory” from him.) Both iterations were after the book was already out; but all three were quite different, and we hardly touched the book. Notes copied straight from the board each time, however, could easily pass for a textbook as good as the one we ignored. No one in a *recursion theory* course taught by Professor Rogers could possibly think he wasn’t a supremely clear and inspiring teacher.

  19. Rajeev permalink
    November 16, 2011 12:51 pm

    I’m not a theorist, or even a mathematician, but i loved this. Very clear. Classy. I follow this blog regularly. Thank you.

