
Priming Random Restrictions

January 14, 2021


Can we expand the base of their use?

[Image: Technion commemoration (src)]

Bella Subbotovskaya was proving Boolean complexity lower bounds as early as 1961. She originated the method of restrictions of Boolean functions and proved that the formula size of the parity function of {n} variables is at least {n^{3/2}}. The parity bound was improved to quadratic in 1971 by Valeriy Khrapchenko, who passed away only last year and was the only child of the Soviet literature and arts leader Mikhail Khrapchenko. Then Johan Håstad strengthened the whole method, improving its shrinkage exponent from {3/2} to {2}.

Today we discuss not possible further sharpenings of the method but rather how to extend the ways of applying it.

Subbotovskaya was most unfortunately killed in a mysterious manner in 1982. George Szpiro in 2007 wrote an article reviewing the circumstances, including her questioning by the KGB for co-founding and hosting the “Jewish People’s University” amid rampant anti-Semitism in Soviet academies. She was also a talented violist and vocalist and investigated computer-assisted musical composition and analysis. See also her MacTutor biography.

As presented by Stasys Jukna in his 2012 book Boolean Function Complexity, Subbotovskaya gave an algorithm for optimizing the trimming of a formula over the {\wedge,\vee,\neg} basis at its leaves by substituting for {m} of the {n} variables, one at a time. She used an averaging argument that readily extends to give the same-order bounds on random restrictions. Random restrictions have other advantages that have led to wonderful work besides those above, including the recent oracle separation of {\mathsf{BQP}} from the polynomial hierarchy. We want to discuss targeting the restrictions toward possible particular sensitivities of the Boolean functions computed by the formulas.

The Method

Subbotovskaya’s argument considers every {\wedge} or {\vee} gate {g}, in a minimal formula {\phi} computing the function {f(x_1,\dots,x_n)}, that has {x_i} or {\bar{x}_i} as an input, for some {i = 1} to {n}. Note that because {\phi} is a formula, one of the two possible assignments {a_i} to {x_i} renders the other input {T_{g,i}} to {g} immaterial. If {\phi} has {s_n} leaves, then we can not only choose {i} so that {x_i} or {\bar{x}_i} occurs at least {\frac{s_n}{n}} times among the leaves, we can also set it so as to wipe out at least half of the subtrees {T_{g,i}}.

The final note is that doing so does not double-cancel any variable: if {T_{g,i}} contained {x_i} or {\bar{x}_i}, then assigning {1 - a_i} to that occurrence would leave the function computed by {g} unchanged, contradicting the minimality of {\phi}. Hence no subformula {T_{g,i}} contains {x_i} or {\bar{x}_i}. The upshot is that we can set {x_i} so that the resulting formula {\phi'} in {n-1} variables has {s_{n-1} \leq s_n - \frac{3s_n}{2n}} leaves. It is convenient to weaken this inequality to

\displaystyle  s_{n-1} \leq s_n\left(1 - \frac{3}{2}\frac{1}{n}\right) \leq s_n\left(1 - \frac{1}{n}\right)^{3/2},

so that by iterating we can exploit the identity that for any {k \leq n},

\displaystyle  \left(1 - \frac{1}{n}\right)\left(1 - \frac{1}{n-1}\right)\cdots\left(1 - \frac{1}{k+1}\right) = \frac{k}{n}.

The upshot is that for any {m}, we can set {n-m} variables so that the size {s_{m}} of the resulting formula in {m} variables obeys the bound

\displaystyle  s_{m} \leq s_n\left(\frac{m}{n}\right)^{3/2},\qquad\text{so}\qquad s_n \geq s_{m}\left(\frac{n}{m}\right)^{3/2}.

Because the parity function is such that any way of setting {n-1} variables leaves the formula needing (at least) one read of the remaining variable, we can carry this all the way down to {m=1} to deduce {s_n \geq n^{3/2}}. Note: this is concrete, not asymptotic.
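As a sanity check on the arithmetic, here is a minimal Python sketch (the function name `shrink_bound` is ours) that iterates the recurrence one variable at a time and confirms that it telescopes to the closed form:

```python
def shrink_bound(s_n, n, m):
    # Iterate s_{j-1} <= s_j * (1 - 1/j)^{3/2}, setting one variable
    # at a time, from n variables down to m.
    s = s_n
    for j in range(n, m, -1):
        s *= (1 - 1 / j) ** 1.5
    return s

n, m = 64, 1
s_n = n ** 1.5  # the parity lower bound for n = 64
# The product (1 - 1/n)(1 - 1/(n-1))...(1 - 1/(m+1)) telescopes to m/n,
# so the iterated bound equals s_n * (m/n)^{3/2}.
assert abs(shrink_bound(s_n, n, m) - s_n * (m / n) ** 1.5) < 1e-9
print(shrink_bound(s_n, n, m))  # -> 1.0: one leaf survives, so s_n >= n^{3/2}
```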

Via induction on gates {g} deeper than those at the leaves, and techniques that adapt to how closely the two subtrees of {g} are balanced, Håstad increased the exponent from {3/2} to {2} in all cases, on pain of carrying some lower-order terms that make the bounds asymptotic. His lemmas rely more on the restrictions being random, including one that estimates the probability that the restriction already leaves a formula dependent on only one variable.

An Intuitive Framing

For the sake of exposition we wave away the lower-order terms and write {\gtrdot} for the resulting numerical inequalities. We frame the restrictions with notation that can help us target subsets {I} of variables to restrict, outside of which {f} remains sensitive.

Definition 1 Let {f : \{0,1\}^n \rightarrow \{0,1\}} be a Boolean function. We say that

\displaystyle  \textsf{free}(f,I,\alpha)

holds provided {I \subset [n]} and {\alpha: I \rightarrow \{0,1\}} and for some {x \neq y} in {\{0,1\}^n} the following hold:

  1. The inequality {f(x) \neq f(y)} holds.

  2. For all {k \in I},

    \displaystyle  x_k = y_k = \alpha(k).

Thus {\textsf{free}(f,I,\alpha)} holds provided the value {f(x)} is not determined by just knowing that {x} agrees with {\alpha} on the set of positions in {I}. The remaining {m} positions are needed.
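For small {n} the predicate can be checked by brute force. Here is a minimal Python sketch (our own helper, with positions 0-indexed rather than 1-indexed):

```python
from itertools import product

def free(f, n, I, alpha):
    # free(f, I, alpha): is f still undetermined after fixing each
    # position k in I to alpha[k]?  We enumerate all completions x
    # agreeing with alpha on I and see whether f takes both values.
    rest = [k for k in range(n) if k not in I]
    seen = set()
    for bits in product((0, 1), repeat=len(rest)):
        x = [0] * n
        for k in I:
            x[k] = alpha[k]
        for k, b in zip(rest, bits):
            x[k] = b
        seen.add(f(tuple(x)))
    return len(seen) == 2
```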

Theorem 2 (Main Theorem of Random Restriction Theory) Let {f : \{0,1\}^n \rightarrow \{0,1\}} be a Boolean function. Suppose that for a constant positive fraction of {I,\alpha} with {I} of size {n-m},

\displaystyle  \textsf{free}(f,I,\alpha)

holds. Then

\displaystyle  L(f) \gtrdot \frac{n^2}{m^2}.

An Example and Nonexample

To see how this scheme applies in the case of the parity function, we can set {m=1}. No matter which {n-1} inputs are selected, and no matter how they are set, the value of the function stays undetermined. That is, the following is always true:

\displaystyle  \textsf{free}(\textit{parity},I,\alpha),

for all {I} of size at most {n-1} and all {\alpha}. Thus the main theorem says that the formula lower bound for the parity function is order {n^2}.
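Using the brute-force checker sketched above (exhaustive, so feasible only for tiny {n}), this claim is easy to confirm:

```python
from itertools import combinations, product

# Uses free() from the sketch above.
parity = lambda x: sum(x) % 2
n = 6
for I in combinations(range(n), n - 1):         # every set of n-1 positions
    for bits in product((0, 1), repeat=n - 1):  # every assignment alpha
        alpha = dict(zip(I, bits))
        assert free(parity, n, set(I), alpha)   # parity stays undetermined
print("free(parity, I, alpha) holds for all I of size n-1")
```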

Note that the only property of the parity function {f} being used is the global way it satisfies {\textsf{free}(f,I,\alpha)}. We want to look for other patterns that can yield good bounds. Of course, patterns of sensitivity of Boolean functions have been looked at in many ways. We want to try to build a bridge from them to number theory.

For a non-example, consider the set {E} of even numbers. Let us number the variables so that {x_1} stands for the ones place in binary notation, {x_2} for the twos place, and so on. Then {E} depends only on {x_1}. Whether {\textsf{free}(E,I,\alpha)} holds depends entirely on {1} not belonging to {I}. If {m = n/c} for some {c > 1} then this holds with constant probability, and the framework gives

\displaystyle  L(E) \gtrdot c^2.

Well, OK, we really know {L(E) = 1} so this is where the overhead hidden by {\gtrdot} can’t be ignored. But at least this illustrates how the framework doesn’t get too excited about a frequently alternating numerical function. Similar remarks apply when the function is multiples of {2^k} where {k} is small.
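Continuing the same sketch, the dependence on whether {1 \in I} is easy to verify; position 0 plays the role of {x_1} in 0-indexed code:

```python
from itertools import combinations

# Uses free() from the sketch above.
even = lambda x: 1 - x[0]   # E(x) = 1 iff the ones bit is 0
n = 6
for I in combinations(range(n), n - 1):
    alpha = {k: 0 for k in I}
    # free(E, I, alpha) holds exactly when the ones bit is left unset
    assert free(even, n, set(I), alpha) == (0 not in I)
```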

A Prime Objective

We would like to apply the random restrictions method to the Boolean function {\pi(x)} that identifies whether the {n}-bit binary string {x} denotes a prime number. It has been known for decades that the Boolean circuit complexity of {\pi} is polynomial. This follows from the existence of a polynomial-time randomized algorithm for {\pi}, due to Solovay-Strassen or the later Miller-Rabin algorithm, together with Adleman’s connection between such algorithms and Boolean circuit complexity.

There are strong lower bounds in special circuit models, but nothing so general as formulas, let alone unrestricted circuits. Certainly {\pi} is a harder function than parity, but we do not know a quadratic formula lower bound for it. Can we prove one in the framework?

Our intuition comes from considering various ways in which the primes are dense and uniformly—but not too regularly—distributed. Two simple ways are:

  1. The average gap between consecutive primes near {n} is {O(\log n)}.

  2. The maximum such gap is {O(\log^{2} n)}.

The first is of course a consequence of the prime number theorem. The second follows from the widely believed Cramér conjecture.
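Both statements are easy to eyeball numerically. A quick sketch using sympy (the cutoff {10^6} is arbitrary; here the number itself plays the role of {n}):

```python
from math import log
from sympy import primerange

N = 10**6
ps = list(primerange(2, N))
gaps = [q - p for p, q in zip(ps, ps[1:])]
print(sum(gaps) / len(gaps), log(N))  # average gap ~ 12.7 vs ln N ~ 13.8
print(max(gaps), log(N) ** 2)         # max gap 114, well below (ln N)^2 ~ 191
```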

We will not really need this conjecture to hold everywhere. Our basic framework will only need a positive-fraction version: the gap bound should hold for a positive fraction of the small intervals of numbers that we sample, and under a more liberal notion of “gap.”

Here is an outline to try to prove a formula size lower bound of {n^{2-\epsilon}} on {\pi(x)}, for some {\epsilon > 0}. Let

\displaystyle  t = n^{1-\epsilon}.

We will use the random restriction method where each variable is left unset independently with probability {p = 1/t}. Apply the method. The issue now is: if we leave only {m} input bits unset, what is the probability that

\displaystyle  \textsf{free}(\pi,I,\alpha)?

Certainly we cannot always leave just {1} bit unset, as we could with the XOR function. Look at this:

\displaystyle  11*111.

The input is either

\displaystyle  111111 = 63 \text{ or } 110111 = 55 .

Neither is prime: {63 = 7 \cdot 9} and {55 = 5 \cdot 11}. In general, our finger has come to rest on the question of what is known or can be shown about the behavior of the set of primes on “hypercubic” sets: sets obtained from a base number by adding or subtracting all subset sums of a selected set of powers of {2}.
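One can poke at such hypercubic sets directly. Here is a minimal exploratory sketch (using sympy's isprime; the patterns are illustrative only), where a pattern string fixes some bits and leaves the starred positions free:

```python
from itertools import product
from sympy import isprime

def undetermined(pattern):
    # pattern over {'0','1','*'}, most significant bit first; returns True
    # iff primality is not determined by the fixed bits alone.
    vals = set()
    for bits in product('01', repeat=pattern.count('*')):
        it = iter(bits)
        x = int(''.join(c if c != '*' else next(it) for c in pattern), 2)
        vals.add(isprime(x))
    return len(vals) == 2

print(undetermined('11*111'))  # False: 63 and 55 are both composite
print(undetermined('1*011'))   # True: 10011 = 19 is prime, 11011 = 27 is not
```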

Open Problems

There must be other lower bounds on the primes. We cannot seem to find them. Any help?

2 Comments
  1. Frank Vega permalink
    January 14, 2021 10:28 pm

    Hi Dick,

    This is an idea: maybe it could be helpful, maybe not.

    Can parity be calculated in n^(2-\epsilon) when you can calculate the exact number of primes between two powers of 2?

    It’s just an intuition…. Certainly, if we obtain that \pi(x) is in O(n^(2-\epsilon)), then the exact number of primes between two powers of 2 can be calculated in O(n^(2-\epsilon)) as well. However, that could be evidence for the existence of exactly two bits (two different powers of two) inside the binary string we are working on. I think this can be extended in a smarter way to several bits.

    This might not actually work, but certainly that idea (of using previous knowledge to show new things by reductio ad absurdum) is not new; it might be useful and an easier way of trying to solve the problem.

    I hope my comments could be useful in the same way your blog and comments have helped me.

    Best,
    Frank

  2. January 20, 2021 8:30 pm

    :'( remarkable story; died under mysterious circumstances? complexity theory meets conspiracy theory.
    Jukna’s book on Boolean circuit complexity is a world treasure; it seems nothing like it has been written before or since.
    I especially like some of the reformulations of P vs NP into graph theory; it’d be cool if you covered them someday, as they are somewhat obscure.
    Speaking of P vs NP, every year you say we’re no closer; it’d be neat if you covered approaches, directions, or areas of the field that “seemingly” or “conceivably” advance the area and could one day lead to resolution.
    OK, yes, of course you do this to some degree already, but maybe there’s something overlooked or missed? 😀 😎
    For example, there is a lot of work on VP ?= VNP; have you guys written on that at all?
    https://cstheory.stackexchange.com/questions/529/does-vp-neq-vnp-imply-p-neq-np
