# Whose Complexity Is It Anyway?

*The three meanings of complexity*

Steven Nixon is an expert on national security policy. He has been recognized for distinguished work in fields including engineering, intelligence, Congressional staff advising, and space policy. He has worked for several government agencies and has now founded his own firm, Steven Nixon Consulting.

Today I want to report on a discussion of complexity theory—but not in our sense.

I just heard Nixon speak at a conference run by a group called TTI/Vanguard. They are a for-profit organization that organizes conferences at which their subscribers can interact with leading lights. They run five conferences each year on various aspects of advanced technology. It costs quite a bit to be able to go to even one of their meetings—an order of magnitude more than our meetings and sold in blocks of five—but my “seat” is currently paid for by the College of Computing, so I attend a few of them each year.

Their meetings are fun, with interesting speakers who give strong talks, and structured for lots of audience interaction. We are encouraged to jump in and ask questions—attendees have their own microphones, so it is not too difficult to ask whatever is on your mind. They do not publish proceedings—mainly you have to be there.

This meeting’s focus was on “taming complexity.” As a theorist, that is something I would like to be able to do—is this why it’s called the Complexity Zoo? But, as you might have guessed, their notion of complexity is not our notion of complexity. This caused me to think about the various uses of the word “complexity,” and what they mean. I could think of at least three types of complexity.

## Three Roads to Complexity

Two major technical fields I know have areas that call themselves “complexity”—ours and dynamical systems. Yet the meaning used at the conference and discussed directly by Nixon was neither of these. Instead it reflects the way we use “complex” in ordinary speech: big, involved, hard-to-describe, requiring much effort even to comprehend. It included a cool notion that I think you may like called **wicked problems**, which I will define and discuss in a moment. This is different from our notion of a problem being wickedly hard—it has to do with framing the problem itself.

One of the best quotes of the meeting was from Nixon’s talk, where he quoted the late President Dwight Eisenhower as once saying:

> If a problem cannot be solved, enlarge it.

I wonder how we could use that to attack some of our problems. What does that mean for P=NP?

Note that already we use *problem* in two senses: P=NP is a research problem, and it is about particular computational problems such as SAT. The former easily enlarges itself—with a few special exceptions, we do not even know how to prove super-linear lower bounds on time or circuit size for problems within the possible range of feasibility. What I’m asking is whether we should enlarge our notion of computational problem, with due formalism including what it means to be complex. First let’s try to categorize the three meanings of complexity.

## Complexity: Resource Based

When *we* say “complexity” we almost always are referring to the resources needed to compute and solve some problem. The resources measured can be: time, space, randomness, nondeterminism, rounds, quantum operations, or other resources. Indeed sometimes we mix these together to get measures that limit two or more resources.

In our complexity theory we like two types of results: upper bounds and lower bounds. An upper bound, of course, shows that some problem X can be solved within a certain resource bound. A lower bound shows that some problem X cannot be. For example, one of the great upper bounds is that we can sort n numbers in at most order n log n comparisons, and the corresponding lower bound is that order n log n comparisons are needed.

It is a shame that we cannot prove more such results, especially more lower bounds. But as we have discussed many times before, showing a lower bound is very difficult. It amounts to showing that there is no way to be “clever” and avoid some “obviously” required computations. Even the sorting result must be examined carefully. The lower bound relies on the fact that *only comparisons* are allowed between numbers. It is possible to sort faster than this if one can violate the model and do more than just comparisons.
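For a concrete sense of these two bounds, here is a small Python sketch of my own (not from any source above) that counts the key comparisons merge sort actually makes and computes the decision-tree lower bound of ceil(log2 n!):

```python
import math
import random

def merge_sort(a, counter):
    """Sort a list while counting key comparisons in counter[0]."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison between two keys
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

n = 64
counter = [0]
data = random.sample(range(1000), n)
assert merge_sort(data, counter) == sorted(data)

# Decision-tree lower bound: any comparison sort needs, in the worst case,
# at least ceil(log2(n!)) comparisons to distinguish all n! orderings.
lower_bound = math.ceil(math.log2(math.factorial(n)))
print(f"merge sort used {counter[0]} comparisons; worst-case bound is {lower_bound}")
```

Note that the bound is a worst-case statement about all comparison sorts, so a lucky run on a particular input may use fewer comparisons than the bound.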

A good example of this is the **bead sort** created by Joshua Arulanandham, Cristian Calude and Michael Dinneen in 2002—see here. There are potential hardware implementations that allow this method to run in linear time, but they seem not to be practical.

Initially the integers to be sorted are represented as numbers of beads. The beads for each integer occupy a row and, importantly, are left-justified in that row (the diagrams on Wikipedia illustrate this).

After the “fall” the poles contain the sorted order. The surprising point is that the individual beads making up each integer have been scrambled, but owing to the left-justification, one can prove that each integer is preserved in the final output.
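Here is one way to simulate the bead fall in plain Python (a sketch of mine, not the authors’ hardware model): each pole’s final stack height is the number of inputs exceeding that pole’s index, and reading off the bead count in each row yields the sorted order.

```python
def bead_sort(nums):
    """Simulate bead sort on non-negative integers."""
    if not nums:
        return []
    # poles[j] = beads that land on pole j = how many inputs exceed j
    poles = [sum(1 for x in nums if x > j) for j in range(max(nums))]
    # Row i (from the bottom) holds one bead on each pole whose stack is
    # taller than i; counting those beads recovers an input value.
    rows = [sum(1 for p in poles if p > i) for i in range(len(nums))]
    # Bottom rows are the largest values, so reverse for ascending order.
    return list(reversed(rows))

print(bead_sort([3, 1, 4, 1, 5]))  # -> [1, 1, 3, 4, 5]
```

This software simulation is not linear time, of course; the linear-time claim is about the hypothetical hardware in which all beads fall simultaneously.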

I raise this example just to remind us how hard it is to envision all possible ways that can be used to tackle a problem—that is why lower bounds are so difficult.

## Complexity: Behavior Based

A Google search for the word complexity hits another notion of complexity first: the notion of “a complex system.” We are not far behind, but we are not first. When people who are not theorists think of complexity, they usually picture a large, intricately interconnected system.

When many people say complexity theory, they usually are referring to the type of behavior of a system. This type of complexity theory is all about behavior, about prediction, about chaos, about the tremendous forking of paths (bifurcations) that can arise from even simple systems that evolve over time.
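A one-line system already shows this kind of behavior. The following sketch (my own illustration, not from the conference) iterates the chaotic logistic map from two starting points that differ by only 10^-10 and tracks how far apart the trajectories get:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two nearly identical starting points
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The gap roughly doubles each step, so after a few dozen iterations the
# two trajectories are essentially uncorrelated despite the tiny offset.
print(max_gap)
```

This sensitivity to initial conditions is exactly why long-range prediction is hopeless for such systems, even though the rule itself is a one-line quadratic.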

I wish, sometimes, that they had called their area something other than complexity theory. But there is not much we can do about it now, since their use is well established and well funded. For example, the Santa Fe Institute has the motto:

Complexity theory research expanding the boundaries of science.

Great motto—wish we had a cool one like this. Oh well.

There are many potential connections between our notion of resource based complexity and their behavior based complexity theory. Some of them may come through the theory of complexity over the reals and other infinite fields, aspects of which we covered here. I will leave more on those for another time.

## Complexity: Real-World Based

One of the foundational assumptions that I believe is bedrock, solid, unchangeable, not to be questioned, is that theory has made progress by its way of formulating problems as objects of analysis in themselves. I will not budge on this issue. But Nixon’s talk recently made me question the related assumption that this manner of stating problems encompasses all the ones we can rigorously attack.

The key is the notion of a wicked problem (WP). It is not that easy to define what these are, but here is the standard list of properties they have:

- The problem is not understood until after the formulation of a solution.
- Solutions to wicked problems are not right or wrong.
- Every wicked problem is essentially novel and unique.
- Every solution to a wicked problem is a “one shot operation.”
- Wicked problems have no given alternative solutions.

The notion of a WP is not new. In 1967 West Churchman introduced the concept of wicked problems in a journal article, where he discussed the moral responsibility of Operations Research

> to inform the manager in what respect our ‘solutions’ have failed to tame his wicked problems.

Horst Rittel and Melvin Webber formally described the concept of wicked problems later, in 1973, where they contrasted WPs with “trivial” problems such as those from mathematics, chess, or puzzle solving. Ouch. I believe that these problems are pretty “wicked,” but I do agree that they are not wicked in the sense used by Churchman, Rittel, and Webber.

I like the idea that some problems are inherently wicked: they do not have precise statements of what the *problem* even is. Therefore they may not have *solutions*, at least not in the usual sense. I wonder if it is possible to help develop a theory of WPs, a theory that addresses problems that are not well stated. Does this even make sense, or is it an impossible task?

I think it is not. Perhaps the start of a model could be based on algorithmic game theory. Imagine that a group of players have conflicting notions of what they want as a *solution*. Could notions from game theory help make sense of this? Is the very idea of a Nash equilibrium a type of solution to a problem that at one time appeared to be wicked?
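As a toy version of that game-theoretic starting point, here is a sketch (with hypothetical payoffs of my choosing) that finds the pure Nash equilibria of a 2x2 coordination game by checking that neither player gains from a unilateral deviation:

```python
# Payoff matrices for a 2x2 coordination game.
A = [[2, 0], [0, 1]]  # row player's payoffs
B = [[2, 0], [0, 1]]  # column player's payoffs

def pure_nash(A, B):
    """Return all (row, col) cells where no player gains by deviating."""
    equilibria = []
    for i in range(2):
        for j in range(2):
            row_ok = all(A[i][j] >= A[k][j] for k in range(2))
            col_ok = all(B[i][j] >= B[i][k] for k in range(2))
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(A, B))  # -> [(0, 0), (1, 1)]
```

Note the flavor of the result: both equilibria are stable, yet neither is uniquely “right,” which is at least reminiscent of the claim that solutions to wicked problems are not right or wrong.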

## Open Problems

Can we help and formalize the notion of a wicked problem? What would such a theory look like?

An example of a “wicked problem” in the context of orthopaedic surgery is a small, complicated, frayed tendon lesion that is impracticable to repair. Yet the tourniquet clock is running: the tourniquet must be released within a set time, and so the repair must be completed within that time. In such “wicked” cases, the surgical strategy is summed up by three simple words: enlarge the problem. That is, convert the small complex lesion (e.g., a partial-thickness tendon tear) to a larger, simpler lesion (a full-thickness tendon tear) … the latter being far easier to repair (and often healing faster too). Seems obvious, but this option is not easy to recognize “over the board” (especially when tourniquet time is running short).

Similarly in mathematics and science, it is not so easy to recognize creative opportunities associated with the wicked challenge of “enlarging the problem”, in part because enlarging the scope of a problem requires the merging of disciplines previously considered separate. The Nobelist Charles Townes provides a readable and highly entertaining case history of such a problem-enlarging, discipline-merging strategy in his essay “Ideas and stumbling blocks in quantum electronics” (IEEE Journal of Quantum Electronics, 1984). Nowadays no one conceives of quantum-coherent light generation as a “wicked problem”, but once upon a time (in the 1950s) it most certainly was, as Townes’ essay emphasizes.

How might the domain of complexity-theoretic investigations be enlarged and refined, Townes-style? What “mistakes and illogicalities” in complexity theory are we collectively overlooking? What will complexity theory’s no-longer-wicked problems and their solutions look like in years to come? That is a terrific question!

“Can we help and formalize the notion of a wicked problem? What would such a theory look like?”

Easy. Social/political global problems + local solutions (i.e., solution mechanisms such as markets and democracies acting locally due to the existence of different languages) = wicked problems.

Wicked problems seem similar to the idea of an outside context problem, although that might be considered a wicked problem at a civilizational scale. An outside context problem is an idea proposed by Iain M. Banks, who suggested that civilizations can encounter problems so far outside their usual context for thinking about things that they need radically different thinking to deal with them. One classic example of this is the Native Americans’ encounter with European explorers.

Joshua, don’t wicked problems typically include an element of what Wittgenstein called “aspect-blindness?” Although Wittgenstein conceived of aspect blindness as an artificially constructed philosophical disability, nowadays aspect-blindness is recognized as being ubiquitous in human cognition. And yet, human cognition is remarkably aspect-blind to its own aspect-blindness (although we humans mostly don’t enjoy being reminded of that).

From a computational perspective, what’s surprising is not that we humans are slow to overcome our aspect-blindness — why weren’t Riemannian geometry, Darwinian evolution, Gödel undecidability, Einsteinian relativity, to say nothing of technological innovations like steam engines, telegraphy, radar, masers and lasers, MRI imaging, scanning probe microscopy, all discovered decades or even centuries earlier than they were? — but rather that human cognition can overcome aspect-blindness at all.

Because surely from a complexity-theoretic point of view, remediating our own aspect-blindness belongs to one of the very hardest complexity classes. It’s not surprising that we succeed slowly; it’s surprising (and gratifying, and fortunate for our species) that we succeed at all! 🙂

Two small remarks. One is that after reading the post I don’t really have enough of an informal idea of what a wicked problem is to be able to think about how one might formalize it. Do you have an example or two of problems that are considered definitely wicked?

I also have a minor quibble about your assertion that the fact that there are clever and unexpected algorithms around is the reason that proving lower bounds is so hard. I find that a much less compelling reason than the various barriers to complexity. For example, there may well be clever and unexpected proofs of Ramsey’s theorem. Is that the reason it’s so hard to improve the asymptotic lower bounds for R(k,k)? I don’t think so. (I could have gone the other way: there may be some very clever examples of graphs out there, but that isn’t the reason that improving the upper bound is hard.) I just don’t see why even if there are brilliantly clever algorithms, some not yet discovered, we might not be able to reason theoretically about the class of all algorithms, with the help of concepts such as circuits. Of course, what the unexpected algorithms do show us is that we shouldn’t try to reason by making precise some probably bogus intuition that “the only way of solving this problem is to do such and such.”

The list of conditions given in this article doesn’t seem to correspond particularly well with the list of problems on the linked Wikipedia page. Perhaps I would argue that any “as of now unknown possibility causing a nuclear power plant to undergo a major accident” has some of the features listed here. But on the other hand there is a much better name and description for this particular problem – the already mentioned “outside context problem” by Iain Banks.

I would also like to see some solid examples.

Examples of Wicked Problems:

* How do you reduce the amount of crime in Denver?

* The Manhattan Project.

* What narrative of human pre-history can we construct from available evidence?

* How should you treat ADHD?

* How should abnormal psychological conditions be defined and classified?

* What strategies work to prevent terrorist incidents? What is the best way to prevent aircraft hijackings?

* How do you reduce accidents on an airline?

* What is the “Theory of Everything”?

Examples of problems aligned with Santa Fe Institute notions of complexity (beyond more bare-bones notions of chaos theory), which are potentially cleanly defined problems characterized by emergence or by having many parts, rather than by the resources needed to solve them:

* Why do certain rules concerning how elections are conducted cause characteristic numbers of political parties to emerge?

* What properties of the human eye and the physics of light cause societies to make distinctions between colors in a particular order, referring to particular parts of the visual spectrum, as the societies grow more and more specific in identifying colors?

* What methods can be used to discern, from the population genetics of a current population, the times of multiple instances of admixture between ancestral populations, the nature of those ancestral populations, and their relative contributions? (The single-instance problem may be hard, but is not really complex. The multiple layers make it complex.)

I consider cluster analysis a wicked problem. There is clearly a need for algorithms that find clusters in data. However, the problem is woefully underspecified and can be approached using a huge variety of mathematical tools. For example, optimisation approaches constrain the problem too much and are not adequate to serve the full realm of applications. Every year, I estimate, there are hundreds but more likely thousands of papers published inventing new algorithms to solve this problem. Because of the underspecified nature, it is all too easy to find cases where algorithm A outperforms algorithms X, Y, and Z. The case for cluster analysis as a wicked problem is interesting if we check all the requirements from the post.

– The problem is not understood until after the formulation of a solution.

This one is tricky, due to the underspecified nature of the problem. I might advocate that underspecified (but nonetheless clear and present) problems are wickeder.

– Solutions to wicked problems are not right or wrong.

Different cluster algorithms have different properties. Some are clearly bad, but otherwise it is very much a partial ordering.

– Every wicked problem is essentially novel and unique.

– Every solution to a wicked problem is a “one shot operation.”

Exaggerating, every cluster analysis *application* is novel and unique. “There are more cluster algorithms than there are practitioners.”

– Wicked problems have no given alternative solutions.

I did not get this one.

– The planner has no right to be wrong (from the wikipedia page)

In many areas of research cluster analysis is a bootstrapping tool right at the start of the data mining stage. Criticising this step is like shooting fish in a barrel, but some reviewers do it nevertheless. It is justified in some but not all cases.
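To illustrate the underspecification concretely, here is a toy sketch (mine, with made-up one-dimensional data): the same gap-based single-linkage rule yields two defensible but different clusterings depending on an arbitrary threshold, and neither answer is “right” or “wrong”:

```python
def threshold_clusters(points, eps):
    """Greedy single-linkage on 1-D data: start a new cluster whenever
    the gap between consecutive sorted points exceeds eps."""
    pts = sorted(points)
    clusters = [[pts[0]]]
    for prev, cur in zip(pts, pts[1:]):
        if cur - prev > eps:
            clusters.append([])
        clusters[-1].append(cur)
    return clusters

data = [0.0, 0.4, 0.9, 2.0, 2.3, 4.0]
print(threshold_clusters(data, 0.6))  # three clusters
print(threshold_clusters(data, 1.2))  # two clusters
```

Both partitions are sensible summaries of the same data; which one is “better” depends entirely on the unstated purpose of the analysis, which is the wickedness in miniature.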

That certainly helps. I wonder whether pattern recognition would count. It seems to have many of those properties as well.

I can see the link with pattern recognition – I would argue that there are many cognitive aspects to cluster analysis (if we consider toy cases that can be visualised for example). Perceived cluster structure is informed by prior experiences and expectations, and local outcomes (“is this a cluster?”) are affected by global outcomes, i.e. global traits such as (perceived) regularity. Wickedness seems related to the inability to fit something in a known frame of reference, or derive a new frame to achieve the same. Cognition is, perhaps, the ability to just do that, or, equivalently, to automagically fit the right types and numbers and levels of and connections between parametric models. In this picture cluster analysis is stuck at bootstrap-level zero. It sounds like waffle, admittedly, but I blame the fuzzy concepts involved. For something more solid, I am reminded of this “out of the box” puzzle:

http://en.wikipedia.org/wiki/Thinking_outside_the_box. I was not aware it was that well known, actually.

“Wicked problems have no given alternative solutions”

I think the notion here is to distinguish wicked problems from decisions in which there are lots of pros and cons to each possible choice, but the ultimate answer is still a multiple-choice question. For example, a decision like “who should I vote for when I vote for President?” may have many of the other features of a wicked problem, but it has given alternative solutions.

It is interesting to note that the West Churchman article:

“…to inform the manager in what respect our `solutions’ have failed to tame his wicked problems.”

predates the Garey and Johnson book by twelve years:

“This book’s introduction features a humorous story of a man with a line of people behind him, who explains to his boss, ‘I can’t find an efficient algorithm, but neither can all these famous people.’”

It would be very nice to integrate the dynamical systems notion of complexity with the colloquial notion. Chaos is not that difficult to understand, and I believe that it is a profoundly important idea, socially speaking. As we build bigger (and more complex?) systems, being unaware of their inherent limitations is very dangerous. Everyone understands that the weatherman is most likely wrong, but they believe that law enforcement can predict the future and prevent terrorist attacks. It is common for people to believe that if we simply had enough laws, and they were applied strictly enough, nothing bad could ever happen. These systems lead to tremendous problems with unintended side-effects, and there seems to be a complete lack of public understanding that these problems are unavoidable and can never be fully solved. Dealing with complex issues such as economics, governance, etc., is approached in an entirely different way when you understand that creating a perfect system is not simply difficult, but provably impossible.

Otakucode,

A perfect system may be impossible because of the very nature of the perfection–i.e., it is a moving target.

But let’s look at any feedback control system: temperature control, cruise control, and many others. There is always an “error” between the “reference” and the “achieved” parameters. But they do work.

A law enforcement system, e.g., can also be viewed essentially as a feedback control system, however flawed it may be.

So a basic question would be: how much error is there between what can be practically achieved and where we are today?

Clearly, no one claims that we are close to what may be theoretically possible.

Answering this question would perhaps involve extending the concept of a “feedback control system”.

Javaid, the nature of the perfection isn’t too much of an issue. It can be proven that for ANY definition of ‘perfection’, it is impossible to approach. The systems that you mention, such as cruise control and temperature control, operate at a specific scale: human scale. It doesn’t matter to us if once every 650,000 years the furnace is going to explode or lead to an extinction-level event. I’m talking about the realm of mathematical proof and scientific understanding. Chaos shows that once you define an error threshold, beyond some horizon your predictions will carry error bars larger than the predictions themselves.

An example which is actually pretty complicated, but I will try to be very terse about it. The American Heart Association (AHA) sees a statistical relationship between high cholesterol and heart disease. They conclude that if they reduce the percentage of the average diet coming from saturated fats, it will lower cholesterol and reduce heart disease. They take various steps to do this. They succeeded completely, in every respect: they reduced the average American’s saturated fat intake by 15%, their goal. Did it reduce heart disease? No, it led to more heart disease than ever before, an obesity epidemic, and epidemic diabetes. Because when fat was taken out of food, it was replaced with sugar and salt to restore flavor. We are not so shortsighted, any longer, that we could claim that efforts like those of the AHA were more likely to be helpful than harmful. We knew, before anything was done, that the conclusions they were drawing were not just unfounded, but impossible to support. But that was ignored. And now, millions die for that.

We know things about the nature of centralized systems that lead directly to weakness and dangers. But we ignore them. We know how to make voting systems more effective, but we ignore that. We know how to test whether an election was fair, and we ignore it. These are all ignored because they are LIMITATIONS on the idea that science and math can address everything exactly. That it can’t is not an admission of defeat, it is an important enlightenment. And complexity is at the heart of it.

Actually, I’ve never read anything which claimed that every system which science and math cannot address with precision is complex. Is this true? Or is it only the converse that is true?

I always thought things like Logic (as in Philosophy, not circuits) have the effect of solving “wicked” problems. It is always interesting how it is a matter of distilling some complex experience into a formula with a definite answer.

Mine is a layman’s perspective though. My wife was the real expert!

To the contrary, one of the almost definitional aspects of a “wicked problem” is that it cannot be solved by logic. Problems that can be solved by logic alone are “trivial.”

I actually meant the process in logic where you take an argument and place labels on different aspects and construct relationships between the variables you’ve established. Anytime you are faced with more detail you can handle it becomes important to find a way to generalize in a way that is still accurate enough, yet efficient.

This is similar to benchmarking where you create a metric. The metric might be extremely helpful in one situation or completely inaccurate in another. This is because the generalization made, while critical to providing insight, sacrifices detail. Hopefully that clarifies my original comment 😉

I am hoping John Sidles might be able to provide further insight into otakucode’s comments/reply.

Javaid, for the past 2400 years, up to and including the present era, the practice of medicine has been governed by Hippocrates’ maxim to first do no harm.

As a concrete example, in the United States the prevalence of coronary heart disease has diminished steadily over the past 50 years, for reasons that are well-understood … well understood to be poorly understood, that is! 🙂

Certainly too, present epidemiological trends provide solid grounds for apprehension that heart disease prevalence may increase in coming decades.

So is disease prevention a “wicked problem,” both generically and with specific regard to otakucode’s example of heart disease? Everyone agrees it is.

And yet somehow, viewed generation-by-generation, the aggregate progress of medical capabilities is genuinely awe-inspiring. Which is good. And it is my personal, very hopeful, view that the pace of medical progress will not slow in coming decades, but rather will accelerate sharply (for informatic reasons that would be a separate essay).

As a follow-up, Javaid, my wife and I are reading the third volume in Jonathan Israel’s history of the Enlightenment, titled

Democratic Enlightenment: Philosophy, Revolution, and Human Rights, 1750-1790 (36-page Introduction here). Be aware that these volumes are not light reading … it takes my wife and me about as long to read them as it takes Prof. Israel to write them! 🙂 Israel’s in-depth account encourages us to view the Enlightenment as humanity’s ongoing struggle against “wicked problems.” For by studying the wicked problems of the past, we illuminate the wicked problems of the present, and attain a more reasonable hope of solving the wicked problems that are sure to press humanity hard in coming decades.

Another notion of “complexity” is what Scott Aaronson wrote about some time back: http://www.scottaaronson.com/blog/?p=762

It’s “complexity” in the sense of “interestingness” of a system. Vague, but a fairly intuitive notion.

Thank you John for your elaborate enlightenment!

Although my thinking is perhaps too simple-minded, I believe we humans have a long way to go in terms of applying the disciplines and techniques that we so readily apply to artificial domains and entities, but fail to apply to a human “organization” or a “body”.

Javaid, you have uttered a great truth. Which means its opposite also is a great truth. That is, ever since (roughly) the era in which the Egyptians constructed the Pyramids, humanity’s greatest enterprises, “great” in the sense that they successfully attacked “wicked” problems and challenges, generically have been based upon narratives that have solid foundations in mathematics and physical science.

Over the centuries these foundations have become progressively broader and deeper, enabling humanity to conceive and sustain enterprises that have attacked more-and-more wicked problems. Moreover, the conception of the narratives that launch and organize these great enterprises is steadily becoming less a matter of chance and circumstance, and more a matter of deliberate intent and design. Good! 🙂

It will be very interesting, therefore, to see what great enterprises are conceived and sustained during our 21st century. A hot resource-limited planet with 10^10 hominins on it — all of whom cherish expectations of finding dignified family-sustaining jobs in communities that are secure, healthy, and free — has no shortage of wicked challenges that (hopefully) will be met by great new enterprises.

Travelling and computing have something in common: in both cases there’s a speed limit.

The limit to travelling is the speed of light. The maximum speed is reached by particles with no rest mass like photons or neutrinos. Heavier bodies can’t reach this limit because it would make their mass tend to infinity.

Computational problems have their own speed limit too: polynomial time for NP-complete ones, n*log(n) for sorting, and so on. In this respect a problem such as SAT appears as heavy whereas sorting is a much lighter one.

So maybe travelling and computing are two special cases of a more general phenomenon that we must discover. This is my way of “enlarging the problem”… 🙂

… and the analogy doesn’t stop there. In order to solve a hard problem efficiently you have to write a longer program. The more efficient the algorithm, the longer its size… just like the mass of a moving body whose speed tends to that of light. Funny, isn’t it?

Serge, in regard to the general topic of linking information theory with physics, please let me commend to you the live-blogging coverage that Steve Flammia is providing of this week’s workshop at the University of Montreal, “Quantum Information in Quantum Many-Body Physics 2011”.

In particular, Tobias Osborne’s Tuesday talk, “The continuum limit of a quantum circuit: variational classes for quantum fields,” discusses the topic you are interested in, namely, computational descriptions of field dynamics; see also the abstract of Osborne’s recent preprint “Hamiltonian Complexity” (arXiv:1106.5875). Steve Flammia’s ongoing (and much-appreciated) live coverage of the Montreal workshop can be read on The Quantum Pontiff.

John,

Thank you very much for letting me know of Hamiltonian complexity. It indeed looks like a promising field of research. However, my initial insight wasn’t so much about the simulation of physical systems. My idea was rather to make use of some knowledge from physics – namely special relativity – to better understand the wicked problem of separating some complexity classes.

I wonder if any computer theorist has ever considered studying the size of the smallest program that can solve a problem in less than a certain number of steps. I believe this size is analogous to a mass, and this number to a speed. I’m aware that a great deal of undecidability must hide in there, but it would be interesting anyway to watch how the “mass” of a problem behaves as its “speed” varies.
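One crude way to watch that trade-off (my own toy illustration of the analogy, not an established result) is to exchange program text for steps: a lookup table is a longer program than a loop, but it answers in a single step.

```python
# Two programs for the same function: the n-th Fibonacci number, n < 8.

def fib_slow(n):
    """Short program, ~n steps."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Longer "program text" (a precomputed table) bought with fewer steps.
FIB_TABLE = [0, 1, 1, 2, 3, 5, 8, 13]

def fib_fast(n):
    """Longer program, one step."""
    return FIB_TABLE[n]

assert all(fib_slow(n) == fib_fast(n) for n in range(8))
```

In this caricature, demanding a smaller step count forces the program’s “mass” (its length) to grow, which is at least suggestive of the relativistic picture Serge describes.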