
Progress On Modular Computation

May 28, 2016


New results on computing with modular gates


Shiteng Chen and Periklis Papakonstantinou have just written an interesting paper on modular computation. Its title, “Depth Reduction for Composites,” refers to converting a depth-{d}, size-{s} {\mathsf{ACC^0}} circuit into a depth-2 circuit that is not too much larger in terms of {d} as well as {s}.

Today Ken and I wish to talk about their paper on the power of modular computation.

One of the great mysteries in computation, among many others, is: what is the power of modular computation over composite numbers? Recall that a gate {\mathsf{MOD}_{r}(x)} outputs {1} if {(x_{1}+\cdots+x_{n}) \equiv 0 \bmod r} and {0} otherwise. It is a simple computation: add up the inputs modulo {r} and see if the sum is {0}. If so output {1}, else output {0}. This can be recognized by a finite-state automaton with {r} states. It is not a complex computation by any means.
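To make this concrete, here is a minimal sketch (ours, not code from the paper) of the gate on 0/1 inputs, together with the {r}-state automaton view:

```python
def mod_gate(r, bits):
    """MOD_r gate: outputs 1 iff the sum of the inputs is divisible by r."""
    return 1 if sum(bits) % r == 0 else 0

def mod_gate_automaton(r, bits):
    """The same gate as an r-state automaton that only remembers the running sum mod r."""
    state = 0
    for b in bits:
        state = (state + b) % r
    return 1 if state == 0 else 0

# six 1s among the inputs, and 6 is divisible by 6, so both versions output 1
assert mod_gate(6, [1, 1, 0, 1, 1, 1, 1]) == mod_gate_automaton(6, [1, 1, 0, 1, 1, 1, 1]) == 1
```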

But there lurk in this simple operation {\mathsf{MOD}_{r}} some dark secrets. When {r} is a prime the theory is fairly well understood. There remain some secrets, but by Fermat’s Little Theorem a {\mathsf{MOD}_r} gate has the same effect as a polynomial. In general, when {r} is composite, this is not true. This makes understanding {\mathsf{MOD}} gates over composites much harder, simply because polynomials are easy to handle compared to other functions. As I once heard someone say:

“Polynomials are our friends.”
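To make the prime case concrete (this is the standard fact the previous paragraph alludes to, nothing special to the paper): for a prime {p}, Fermat’s Little Theorem says {a^{p-1} \equiv 1 \bmod p} whenever {a \not\equiv 0 \bmod p}, so

\displaystyle  \mathsf{MOD}_{p}(x_{1},\dots,x_{n}) \equiv 1 - (x_{1}+\cdots+x_{n})^{p-1} \pmod{p},

a polynomial of degree {p-1} in the inputs. When {r} is composite this trick breaks down, which is exactly the difficulty discussed above.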

Chen and Papakonstantinou (CP) increase our understanding of modular gates by proving a general theorem about the power of low-depth circuits with modular gates. This theorem is an exponential improvement over previous results when the depth is regarded as a parameter rather than a constant. Their work also connects with the famous work of Ryan Williams on the relation between {\mathsf{NEXP}} and {\mathsf{ACC^0}}.

Main Theorem

We will just state their main result and then state one of their key lemmas. Call a circuit of {\mathsf{AND}}, {\mathsf{OR}}, {\mathsf{NOT}}, and {\mathsf{MOD}_{r}} gates (for some {r}) an {\mathsf{ACC}}-circuit.

Theorem 1 There is an efficient algorithm that, given an {\mathsf{ACC}} circuit of depth {d}, input length {n}, and size {s \ge n}, outputs a depth-2 circuit of the form {\mathsf{SYM}\circ\mathsf{AND}} of size {2^{(\log s)^{O(d)}}}, where {\mathsf{SYM}} denotes some gate whose output depends only on the number of {1}s in its input.
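To fix ideas, here is a minimal sketch (our illustration, not the paper’s construction) of what evaluating such a depth-2 {\mathsf{SYM}\circ\mathsf{AND}} circuit looks like: a single layer of {\mathsf{AND}} gates feeds one symmetric gate whose output depends only on how many of the {\mathsf{AND}}s evaluate to {1}.

```python
def eval_sym_and(and_gates, sym, x):
    """Evaluate a SYM-of-AND circuit.

    and_gates: list of AND gates, each a list of the input indices it reads
    sym:       symmetric top gate, a function of the count of ANDs that output 1
    x:         list of 0/1 inputs
    """
    count = sum(1 for gate in and_gates if all(x[i] == 1 for i in gate))
    return sym(count)

# hypothetical toy circuit: three AND gates over four inputs, topped by a parity-of-count gate
and_gates = [[0, 1], [1, 2], [2, 3]]
parity = lambda k: k % 2
assert eval_sym_and(and_gates, parity, [1, 1, 0, 1]) == 1  # only the gate [0, 1] fires
```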

This type of theorem is a kind of normal-form theorem. It says that any circuit of a certain type can be converted into a circuit of a simpler type, and this can be done without too much increase in size. In complexity theory we often find that it is very useful to replace a complicated type of computational circuit with a much cleaner type of circuit even if the new circuit is bigger. The import of such theorems is not that the conversion can happen, but that it can be done in a manner that does not blow up the size too much.

This happens all through mathematics: finding normal forms. What makes computational complexity so hard is that the conversion to a simpler type often can be done easily—but doing so without a huge increase in size is the rub. For example, every map

\displaystyle  f \colon A \rightarrow \mathbb{Z},

can be easily shown to be equal to an integer-valued polynomial with coefficients in {\mathbb{Q}} provided {A} is a finite subset of {\mathbb{Z}^{n}}. For every point {a = (a_1,\dots,a_n) \in A}, set

\displaystyle  g_a(x_1,\dots,x_n) = \prod_{i=1}^n \prod_{b_i \neq a_i} (x_i - b_i),

where the inner product is over the finitely many {b_i} that appear in the {i}-th place of some member of {A}. Then {v_a = g_a(a_1,\dots,a_n)} is an integer and is the only nonzero value of {g_a} on {A}. We get

\displaystyle  f'(x_1,\dots,x_n) = \sum_{a \in A}\frac{f(a)}{v_a}g_a(x_1,\dots,x_n),

which is a polynomial that agrees with {f} on {A}.
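For instance, a short script (a sketch under the notation above; the set {A} and map {f} below are made up for illustration) builds {f'} with rational coefficients and checks that it agrees with {f} on {A}:

```python
from fractions import Fraction

def interpolate(A, f):
    """Return the polynomial function f' built from the g_a and v_a above."""
    n = len(next(iter(A)))
    # the finitely many values b_i appearing in the i-th place of some member of A
    coords = [sorted({a[i] for a in A}) for i in range(n)]

    def g(a, x):
        prod = 1
        for i in range(n):
            for b in coords[i]:
                if b != a[i]:
                    prod *= (x[i] - b)
        return prod

    def f_prime(x):
        # v_a = g(a, a) is an integer and the only nonzero value of g_a on A
        return sum(Fraction(f(a), g(a, a)) * g(a, x) for a in A)

    return f_prime

# made-up finite A ⊂ Z^2 and integer-valued f
A = {(0, 1), (2, 3), (2, 1)}
f = lambda a: a[0] * a[0] + 7 * a[1]
f_prime = interpolate(A, f)
assert all(f_prime(a) == f(a) for a in A)
```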

Well, this is easy but brutish: the polynomial has a term for every point of {A}, so it is exponential-size if {A} is. The trick is to show that when {f} is special in some way then the size of the polynomial is not too large.

Lemma 5

One of the key insights of CP is a lemma, Lemma 5 in their paper, that allows us to replace a product of many {\mathsf{MOD}} gates by a summation. We have changed variables in the statement around a little; see the paper for the full statement and context.

Lemma 5 Let {x_{1},\dots,x_{n}} be variables over the integers and let {r,q} be relatively prime. Then there exist {m \leq r^{n+1}} integral linear combinations {y_{1},\dots,y_{m}} of the variables and integer coefficients {c_{j} \in [q]} so that

\displaystyle  \prod_{1 \le i \le n} \mathsf{MOD}_{r}(x_{i}) = \left(\sum_{1 \le j \le m} c_{j}\mathsf{MOD}_{r}(y_{j})\right) \bmod{q}.

The value of {r} can be composite. The final modulus can be {s = qr} in place of {q} and this helps in circuit constructions. Three points to highlight—besides products being replaced by sums—are:

  1. The bound {m \leq r^{n+1}} on the number of terms in the sum depends only on {r}, not {q} (or {s}); this may have other uses.

  2. The arithmetic of the sum is over an extension of {r} to {s}.

  3. The coefficients are integral. That is, the conclusion is doing more than just linearizing with complex coefficients, which as they remark in their proof would be easier. This also helps in building circuits where bits set to {1} are being counted.

Further, all of this can be done in a uniform way, so the lemma can be used in algorithms. This is important for their applications. Note that this is a type of normal-form theorem like the ones we discussed before: it allows us to replace a product by a summation. The idea is that going from products to sums is often a great savings. Think about polynomials: the degree of a multivariate polynomial is often a better indicator of its complexity than its number of terms. The lemma enables them to remove layers of large {\mathsf{AND}} gates that were implementing the products (Lemma 8 in the paper) and so avoids the greatest source of size blowup in earlier constructions.
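To get a feel for the shape of the identity, here is a toy instance we worked out ourselves for {r=2}, {q=3}, {n=2} (it is not taken from the paper, whose construction is general): the product of two {\mathsf{MOD}_2} gates becomes a sum of four {\mathsf{MOD}_2} gates on integral linear combinations, reduced modulo {3}.

```python
def mod_gate(r, x):
    """MOD_r gate on an integer x: 1 if x is divisible by r, else 0."""
    return 1 if x % r == 0 else 0

def product_side(x1, x2):
    return mod_gate(2, x1) * mod_gate(2, x2)

def sum_side(x1, x2):
    # m = 4 <= r^(n+1) = 8 terms; pairs are (coefficient c_j, linear combination y_j)
    terms = [
        (1, 2 * x1),      # MOD_2(2*x1) is identically 1, supplying a constant term
        (2, x1),
        (2, x2),
        (2, x1 + x2),
    ]
    return sum(c * mod_gate(2, y) for c, y in terms) % 3

# the two sides agree for all integers; a small range suffices since only residues mod 2 matter
assert all(product_side(a, b) == sum_side(a, b) for a in range(-4, 5) for b in range(-4, 5))
```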

A final point is that the paper makes a great foray into mixed-modulus arithmetic, coupled with the use of exponential sums. This kind of arithmetic is not so “natural” but is well suited to building circuits. Ken once avoided others’ use of mixed-modulus arithmetic by introducing new variables—see the “additive” section of this post which also involves exponential sums.

Open Problems

The result of CP seems quite strong. I am, however, very intrigued by their Lemma 5. It seems that there should be other applications of this lemma. Perhaps we can discover some soon.

One Comment
  1. May 29, 2016 1:05 am

    Is there a known limit on how strong circuit lower bounds can be proven within Williams’ program (as it’s called in [CP])? [I have not read Williams’ paper in detail so I am not sure if my question has a trivial answer.]
