
Evolution: A Complexity View

September 22, 2014


The role of sex?

[Photo of Christos Papadimitriou, cropped from source]

Christos Papadimitriou has a joint paper with Adi Livnat, Aviad Rubinstein, Gregory Valiant, and Andrew Wan that will appear soon at FOCS 2014. The conference is October 19–21 in Philadelphia, with workshops and tutorials on the 18th. Here are the accepted papers, several others of which interest me a lot. The last parallel session on Monday afternoon before my own award lecture has three of them in one room, including a paper co-authored by my recent student Danupon Nanongkai, and three on quantum—it would be nice to be in a quantum superposition and attend both sessions.

Today Ken and I want to discuss their paper, which is on complexity-theoretic aspects of evolution.

Yes, evolution. Theory’s long reach has in the past touched diverse fields of science, engineering, and many other areas of human endeavor. But evolution is now getting more and more interest from theorists. Wikipedia begins its description with the sentence:

Evolution is the change in the inherited characteristics of biological populations over successive generations.

This doesn’t sound too different from the idea of a computation “evolving” over successive steps. What distinguishes the biological kind is Charles Darwin’s identification of natural selection as the main mechanism, specifically for adaptation after changes in the environment. The discovery that genes in DNA are describable by strings, likewise their mutations, powerfully suggests a mating of biology and the “it from bit” view from theory. Can we use theory to turn Darwin inward and explain how complexity and diversity can arise from a simple Boolean model?

Theory and Fact and Paradigm

For many reasons that we will not go into, evolution gets more push-back than any other theory in science. There are not many popular attacks on the inverse square law of gravity, even though string theory provides contexts in which it is false at short ranges. That matter is made of atoms used to be contentious, but no more, and there is no argument over how chemical bonds work. Evolution seems to get more than its share of attacks. Way more.

There is even a Wikipedia article on why this is so:

Many scientists and philosophers of science have described evolution as fact and theory, a phrase which was used as the title of an article by Stephen Jay Gould in 1981. He describes fact in science as meaning data, not absolute certainty but “confirmed to such a degree that it would be perverse to withhold provisional assent.” A scientific theory is a well-substantiated explanation of such facts. The facts of evolution come from observational evidence of current processes, from imperfections in organisms recording historical common descent, and from transitions in the fossil record. Theories of evolution provide a provisional explanation for these facts.


Ken thinks Gould’s point is better stated as evolution being not just a theory but also a paradigm. Within it, theories of particular mechanisms are evaluated; outside, the paradigm is applied via “evolutionary approaches to…” just about everything. The former give the impression that there is disagreement about the fact of evolution itself, while the latter include enough overreaches and oversimplifications to muddy the whole enterprise. The fact of evolution is also confused with questions about its rate and hill-climbing ability when “left to itself,” though experience in hospitals and our current lack of knowledge about complexity should forestall negative pronouncements on the latter.

What compounds the matter is that evolution can be enlarged to the study of how complex organized information systems arise. Indeed, several books by Richard Dawkins emphasize information content in biology and the quest for a “crane-lifting” mechanism for information in physics analogous to Darwin’s in biology. There the paradigm intersects directly with computer science:


The serious question arises of how fast evolutionary-information processes can work as measured by standard yardsticks of complexity, and how much large-scale change they can effect. This makes a context for the paper.

The Paper

Their paper is titled, “Satisfiability and Evolution.” Its abstract is:

We show that, if truth assignments on n variables reproduce through recombination so that satisfaction of a particular Boolean function confers a small evolutionary advantage, then a polynomially large population over polynomially many generations (polynomial in n and the inverse of the initial satisfaction probability) will end up almost surely consisting exclusively of satisfying truth assignments. We argue that this theorem sheds light on the problem of the evolution of complex adaptations.

Ken and I really like this work. Really. One reason is that it is a positive result. Often—not always, but often—when we apply theory to areas from other parts of science, we run into negative results. Telling people in another area of research that they cannot do X is not usually helpful. In many cases they need to do the work anyway, and so telling them a negative is really less than useful.

Another reason is that their work uses the evolution setup to show that a certain reasonable type of behavior happens. This is a quite pretty result about Boolean functions, independent of any connection to evolution. Further, it suggests that there are likely to be more such results in the near future. I think this paper is potentially quite important.

I am sorry that I missed the evolution workshop last winter at the Simons Institute for the Theory of Computing—I got a bad cold and could not travel. I imagine that some of the work that is in this paper came out of that event. However, Ken heard a related invited talk at the Algorithmic Aspects in Information and Management (AAIM 2014) conference in Vancouver in July, where he also had a paper. So we can turn the rest of this post over to Ken.

Benefits With Benefits

I, Ken, felt like I was following Christos around in summer 2012. He gave one version of this talk at the Turing Centennial in Princeton in May, which we covered. Then I heard much the same talk as an invited lecture at AAAI-2012 that July in Toronto, where I was battling a robot. Then I was honored to speak in the University of Bergen’s Turing Centennial series in September a couple weeks after he did. His talk there was related more to his graphic novel Logicomix, which later appeared under our Christmas tree.

So, this past July, I was curious to see how his talk (later video) had evolved. Well, that is a misuse of “evolved” unless he and his co-authors introduced processes to mutate slides and select—over successive generations—the ones with the highest research fitness. Perhaps they did. I was glad that a genetic example from the 2012 talks survived, since I had advertised—in encouraging my wife to join for the 8:20am start—that the talk would include sex.

We reproduce his first main example here. Consider two genes, {A} and {B}, with 4 and 5 alleles (mutable variants) respectively occurring in a population. Suppose the benefits of each of the 20 possible combinations are given by this {4 \times 5} matrix:

\displaystyle  \begin{array}{r|ccccc} & B_1 & B_2 & B_3 & B_4 & B_5\\ \hline A_1 & 3 & 2 & 4 & 5 & 4\\ A_2 & 1 & 0 & 0 & 7 & 2\\ A_3 & 2 & 1 & 0 & 4 & 3\\ A_4 & 1 & 8 & 1 & 3 & 2 \end{array}

The argument is that global optimization techniques, such as simulated annealing, will tend toward the combinations with highest fitness unto themselves, here {(A_4,B_2)} scoring {8} and {(A_2,B_4)} scoring {7}. These are likened to asexual propagation. However, row {A_1} has the best fitness expectation over a partner allele drawn randomly from {B_1,\dots,B_5}. Likewise columns {B_4} and {B_5} “mix best” when combined over the range of gene {A}. The sexual recombination algorithm gives {(A_i,B_j)} a random partner {(A_k,B_\ell)} according to the population’s distribution, and each offspring gets {A_i} or {A_k}, likewise {B_j} or {B_\ell}, with 50-50 probability.
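For a quick numerical look at the mixability comparison, here is a small sketch of our own (not from the paper or talk) that scores each allele against a uniformly random partner allele; the actual process weights partners by the evolving population distribution, but the uniform snapshot already shows the pattern.

```python
import numpy as np

# The fitness matrix from the example: rows are alleles A_1..A_4, columns B_1..B_5.
W = np.array([
    [3, 2, 4, 5, 4],
    [1, 0, 0, 7, 2],
    [2, 1, 0, 4, 3],
    [1, 8, 1, 3, 2],
])

# Score each allele against a uniformly random partner allele: average and worst case.
print("A alleles, mean over partners:", W.mean(axis=1))  # A_1 has the best average...
print("A alleles, min  over partners:", W.min(axis=1))   # ...and the best worst case
print("B alleles, mean over partners:", W.mean(axis=0))  # B_4 stands out on average
print("B alleles, min  over partners:", W.min(axis=0))   # B_4 and B_5 have the best worst cases

# The isolated "peaks" favored by global optimization, i.e., asexual propagation:
print("peaks:", W[3, 1], "at (A_4, B_2) and", W[1, 3], "at (A_2, B_4)")
```

Note how the peak value 8 inflates the average of column {B_2} while its worst case is 0; the mixable alleles are the ones that do decently against every partner.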

In a 2011 paper with Livnat and Marcus Feldman formalizing this process, he showed that isolated “peaks” like {(A_4,B_2)} and {(A_2,B_4)} tend to wear down in frequency, but “plateaus” like row {A_1} and some of the {B} columns survive. This showed why we need sex—or at least gave an algorithmic mechanism to answer the question:

Why is recombination (i.e., sexual reproduction) more successful than asexual reproduction?

The Missing Link to Complexity

Despite this answer, Christos admitted that “the role of sex is to this day not understood.” His talk went immediately on to a question that is more eye-opening than eyebrow-raising:

How can this crude mechanism and model account for the miracle [of complexity that] we see around us?

As stated in the new FOCS paper:

Given the reshuffling of genomes that occurs through recombination, how can complex traits that depend on many different genes arise and spread in a population?

The “complex traits” are approached and modeled by standard tools in complexity, most notably Boolean functions. Here is a mutated version of an example in the talk: Suppose we have monotone predicates {P,P'} of genetic factors {x = x_1,\dots,x_n}, where {P(x)} enables a trait to be possible and {P'(x)} guarantees it. Let {h} be Hamming weight, and let {g} be a numerical function of the environment {E}. Suppose a trait {\tau} follows a sigmoid probability function with high slope around a threshold {t} as follows:

\displaystyle  h(x : P(x)) + g(E) \geq t.

If the environment changes in a way that both increases {g(E)} and makes {\tau} advantageous, then not only will the incidence of {\tau} go up, but the above recombination algorithm will favor organisms that possess all or almost all of the factors. Then even if {g(E)} reverts to a lower level, the new population may have so many members with high {h(x)} that not only does {P(x)} remain satisfied but also {P'(x)}. Then the trait is locked in to much of the population. This yields an interpretation of a famous experiment by Conrad Waddington which is common to the talk and paper.
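To make the threshold picture concrete, here is a minimal sketch of such a sigmoid expression probability. The function name trait_probability, the slope, and all numeric choices are our own illustrative assumptions; this is not code from the talk or paper.

```python
import math

def trait_probability(x, relevant, g_E, t, slope=4.0):
    """Chance that trait tau is expressed by genotype x (a 0/1 list).

    h counts the genetic factors x_i = 1 among the indices in 'relevant',
    standing in for h(x : P(x)); g_E is the environmental contribution g(E),
    and t is the threshold.  The sigmoid is steep around h + g(E) = t.
    """
    h = sum(x[i] for i in relevant)
    return 1.0 / (1.0 + math.exp(-slope * (h + g_E - t)))

x = [1, 1, 1, 0, 0, 0, 0, 0]      # a genotype with only three of eight relevant factors
relevant = range(8)

print(trait_probability(x, relevant, g_E=0.0, t=5))  # about 0.0003: well below threshold
print(trait_probability(x, relevant, g_E=3.0, t=5))  # about 0.98: the environmental boost expresses the trait
```

Once the boosted trait is advantageous, recombination favors genotypes with more of the factors, and those genotypes keep expressing the trait even after {g(E)} falls back.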

The Results

The FOCS paper uses none of {E}, {g}, {h}, {P}, {P'}, or {t}—these characters (some by me, some by them) die out or maybe stay latent. What it preserves is an abstraction of the counting advantage that having more factors {x_i = 1} gives in the above analysis. Let {f} be a general Boolean function whose satisfaction confers a {(1 + \epsilon)} advantage in the following way: Define {f'(x) = 1 + \epsilon f(x)}; that is, {f'(x)} is {1 + \epsilon} on satisfying assignments, {1} on unsatisfying ones. Let the probability distribution {\mu_t} on {x \in \{0,1\}^n} at time {t} be defined by independent probabilities {\text{\bf Pr}_t[x_i = 1]} for each bit, and updated by the rule

\displaystyle  \text{\bf Pr}_{t+1}[x_i = 1] = \frac{\text{\bf E}_{x: \mu_t}[x_i\cdot f'(x)]}{\text{\bf E}_{x: \mu_t}[f'(x)]}. \ \ \ \ \ (1)

We can re-cast this using the original {f} and probabilities rather than expectations:

\displaystyle  \begin{array}{rcl}  \text{\bf Pr}_{t+1}[x_i = 1] &=& \frac{\text{\bf Pr}_t[x_i = 1] \;+\; \epsilon\text{\bf Pr}_{x: \mu_t}[x_i \wedge f(x)]}{1 \;+\; \epsilon\text{\bf Pr}_{x: \mu_t}[f(x)]}\\ \\ &=& \text{\bf Pr}_t[x_i = 1]\left(\frac{1 \;+\; \epsilon\text{\bf Pr}_{x: \mu_t}[f(x)~|~x_i = 1]}{1 \;+\; \epsilon\text{\bf Pr}_{x: \mu_t}[f(x)]}\right). \end{array}
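The recast can be sanity-checked in a few lines. The sketch below is our own, with an arbitrary small {n}, a random Boolean {f}, a random product distribution, and a made-up {\epsilon}; it just confirms that the two forms of the update agree.

```python
import itertools
import random

random.seed(0)
n, eps = 4, 0.3
p = [random.random() for _ in range(n)]       # an arbitrary product distribution mu_t
xs = list(itertools.product([0, 1], repeat=n))
f = {x: random.randint(0, 1) for x in xs}     # an arbitrary Boolean function f

def mu(x):
    """mu_t(x) under independent bit probabilities p."""
    out = 1.0
    for i in range(n):
        out *= p[i] if x[i] else 1.0 - p[i]
    return out

i = 2                                         # check the update for bit i

# Form (1): expectations of f'(x) = 1 + eps*f(x).
E_f  = sum(mu(x) * (1 + eps * f[x]) for x in xs)
E_xf = sum(mu(x) * x[i] * (1 + eps * f[x]) for x in xs)
form1 = E_xf / E_f

# Recast form: probabilities involving f itself.
pr_f          = sum(mu(x) * f[x] for x in xs)
pr_f_given_xi = sum(mu(x) * f[x] for x in xs if x[i]) / p[i]
form2 = p[i] * (1 + eps * pr_f_given_xi) / (1 + eps * pr_f)

print(form1, form2)                           # the two values coincide
```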

The question is, does the trait of satisfying {f} come to dominate the population? Let {p_0 = \text{\bf Pr}_{x: \mu_0}[f(x)]} be the initial satisfaction probability under whatever initial distribution {\mu_0} is given, say uniform. They prove:

Theorem 1 If {f} is monotone, then

\displaystyle  \text{\bf Pr}_{x: \mu_t}[f(x)] \;\;\geq\;\; 1 \;-\; \frac{n}{t}\cdot\frac{(1+\epsilon)}{\epsilon p_0}.
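To see the monotone case in action, here is a toy run of the update; the choice of a six-out-of-ten threshold function as the monotone {f}, and the values of {n} and {\epsilon}, are our own arbitrary picks. Expectations are computed exactly by enumerating all {2^n} assignments, which is only feasible for small {n}.

```python
import itertools

def evolve(f, n, eps, generations, checkpoints):
    """Iterate the product-distribution update (1) with fitness f'(x) = 1 + eps*f(x)."""
    p = [0.5] * n                                 # mu_0 uniform: Pr_0[x_i = 1] = 1/2
    xs = list(itertools.product([0, 1], repeat=n))
    p0 = None
    for t in range(generations + 1):
        E_f, prob_sat = 0.0, 0.0                  # E_{mu_t}[f'(x)], Pr_{mu_t}[f(x) = 1]
        E_xf = [0.0] * n                          # E_{mu_t}[x_i * f'(x)]
        for x in xs:
            px = 1.0
            for i in range(n):
                px *= p[i] if x[i] else 1.0 - p[i]
            fx = f(x)
            fp = 1.0 + eps * fx
            E_f += px * fp
            prob_sat += px * fx
            for i in range(n):
                if x[i]:
                    E_xf[i] += px * fp
        if p0 is None:
            p0 = prob_sat                         # initial satisfaction probability
        if t in checkpoints:
            if t > 0:
                bound = 1 - (n / t) * (1 + eps) / (eps * p0)
                print(f"t = {t:4d}   Pr[f] = {prob_sat:.4f}   Theorem 1 bound: {bound:.4f}")
            else:
                print(f"t = {t:4d}   Pr[f] = {prob_sat:.4f}   (this is p_0)")
        p = [E_xf[i] / E_f for i in range(n)]     # the update rule (1)

def at_least_six(x):
    """A monotone f: satisfied when at least six of the ten bits are 1."""
    return 1 if sum(x) >= 6 else 0

evolve(at_least_six, n=10, eps=0.25, generations=200, checkpoints={0, 25, 50, 100, 200})
```

The bound is positive only once {t} exceeds {n(1+\epsilon)/(\epsilon p_0)} generations; in small runs like this the simulated satisfaction probability tends to be high well before that point, so the theorem is about the guarantee rather than the typical speed.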

The bulk of the paper, however, is devoted to how far this extends when {f} is not monotone. The process has to be modified: the numerator in equation (1) is rewritten to confer a possible advantage when {x_i = 0} too (according to {\epsilon} and the variance of bit {i} in {\mu_t}); {f} is replaced by a multilinear extension {\tilde{f}} with values in the interval {[1,1+\epsilon]}; the size {N} of the population needs to be large enough; more-complicated polynomial functions of {n}, {1/p_0}, and {1/\epsilon} are involved; and there are interlocking bounds on {N}, {\epsilon}, {p_0}, and the success probability. As usual see the paper for details, but we can abbreviate their result drastically as follows:

Theorem 2 Taken over all {f},

\displaystyle  \text{sex} \implies \text{complex}.

We suspect that no human relationship counselor would disagree.

Open Problems

The paper includes a long discussion of possible variants and improvements to the model and results. Insofar as {p_0}, the initial probability that a random assignment satisfies {f}, appears in the denominator, the results lose force when {p_0} is tiny. Indeed the authors disclaim relevance to the task of finding satisfying assignments to a hard formula {f} with small {p_0}. The “weak selection” regime of their main results, owing to the smallness of {\epsilon}, raises the interesting problem of how far {\epsilon} can be increased; maybe this informs general issues about the complexity of Boolean functions and the applicability of Fourier techniques, not just evolution.

[changed order of names in first sentence from paper's alpha order]

Let’s Mention Foundations

September 18, 2014


Congratulating Dick on the 2014 Knuth Prize

[Photo of Dick Lipton, cropped from source]

Dick Lipton is of course the founder and driving writer of this weblog. He is also a computer scientist with a great record of pathbreaking research. The latter has just been recognized, I am delighted and proud to say, with the award of the 2014 Knuth Prize. The prize is awarded jointly by the ACM Special Interest Group on Algorithms and Computation Theory and the IEEE Technical Committee on the Mathematical Foundations of Computing, and was instituted in 1996, shortly after the formal retirement of the great—and very much active—Donald Knuth.

Today I am glad to give my congratulations in public, and also my thanks for a wonderful and long association.

Some Strange Math Facts

September 14, 2014


Things we did not know

[Photo: Ulam holding a strange device, the FERMIAC]

Stanislaw Ulam was a Polish-American mathematician whose work spanned many areas of both continuous and discrete mathematics. He did pioneering research in chaos theory and Monte Carlo algorithms, and also invented the concept of a measurable cardinal in set theory. His essential modification of Edward Teller’s original H-bomb design is used by nearly all the world’s thermonuclear weapons. He also co-originated the Graph Reconstruction conjecture, and his name is associated with the equally notorious 3n+1 conjecture. Thus he was involved in some strange corners of math.

Today Ken and I want to talk about some strange facts observed by Ulam and others that we did not know or fully appreciate.

A Challenge From Dyson

September 9, 2014


A reversal question


Freeman Dyson celebrated his {\text{90}^{th}} birthday last December. He is world famous for his work in both physics and mathematics. Dyson has proved, in work that was joint with Andrew Lenard and independent of two others, that the main reason a stack of children’s blocks doesn’t coalesce into pulp is the exclusion principle of quantum mechanics opposing gravity. He shaved a factor of {\sqrt{2}} off the exponent for bounds on rational approximation of algebraic irrationals, before the result was given its best-known form by Klaus Roth. He has received many honors—recently, in 2012, he was awarded the Henri Poincaré Prize at the International Congress on Mathematical Physics.

Today Ken and I want to talk about one of his relatively recent ideas, which is more mathematics than physics. Perhaps even more theory than mathematics.

Enriching the Frankl Conjecture

September 2, 2014


See a number, make a set


Henning Bruhn and Oliver Schaudt are mathematicians or computer scientists, or both. They are currently working in Germany, but wrote their survey on the Frankl Conjecture (FC) while working together in Paris. Schaudt is also known as an inventor of successful mathematical games.

Today Ken and I wish to talk about the Frankl conjecture and a principle of mathematics.

Who Invented Pointers, Amortized Complexity, And More?

August 26, 2014

Some algorithmic tricks were first invented in complexity theory


Andrey Kolmogorov, Fred Hennie, Richard Stearns, and Walter Savitch are all famous separately; but they have something in common. Read on, and see.


Why Is 290 Special?

August 23, 2014


How exceptions in theorems may affect their complexity

[Photo of Manjul Bhargava, cropped from India Today source]

Manjul Bhargava is a mathematician who just won one of the 2014 Fields Medals. We offer our congratulations on this achievement. He is an expert in number theory, which makes him special to us among Fields medalists. His Fields citation includes his doctoral work on a powerful reformulation and extension of Carl Gauss’s composition law for quadratic forms. He also proved a sense in which 290 is special to us among numbers, since we have been thinking recently about quadratic forms as tools in complexity theory.

Today we talk about his “290 Theorem” with Jonathan Hanke, which is quite accessible, and also raise complexity-related questions about this result.
