
The Network Coding Conjecture Is Powerful

May 6, 2019

More hard Boolean functions

Peyman Afshani, Casper Freksen, Lior Kamma, and Kasper Larsen (AFKL) have a recent paper which we just discussed.

Today Ken and I will update our discussion.

Their paper assumes the network coding conjecture (NCC) and proves a lower bound on the Boolean complexity of integer multiplication. The main result of AFKL is:

Theorem 1 If the NCC is true, then every Boolean circuit that computes the {\mathsf{shift}} function has size {\Omega(n \log n)}.

The {\mathsf{shift}} function is: Given an {n}-bit number {x} and a number {k} so that {1 \le k \le n}, compute the {2n}-bit product of {x} by {2^{k}}:

\displaystyle  x \times 2^{k}.

This is a special case of the integer multiplication problem. In symbols it maps {x} and {k} to {0^k x 0^{n-k}}.
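As a concrete reference point, here is a minimal Python sketch of the {\mathsf{shift}} map as a bit-string function. The function name and the convention of taking {k} directly as an integer are ours, not AFKL's; the output string {0^k x 0^{n-k}} matches the post's notation.

```python
def shift(x: str, k: int) -> str:
    """The post's shift map: an n-bit string x and 1 <= k <= n
    go to the 2n-bit string 0^k x 0^{n-k}."""
    n = len(x)
    assert 1 <= k <= n
    return "0" * k + x + "0" * (n - k)
```

For example, `shift("1011", 2)` returns `"00101100"`, which is {2n = 8} bits long as required.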

Our point, however, is not about integer multiplication. Nor even about NCC—no knowledge of it will be needed today, so read on even if you are not aware of NCC. No. Our point is that a whole lot of other Boolean functions would inherit the same {\Omega(n \log n)} circuit lower bound as {\mathsf{shift}}. And several aspects of that seem troubling.

Some Worry

We are impressed by the AFKL paper but also worried. Proving a super-linear lower bound in the unrestricted Boolean circuit model has long been considered a difficult problem. Maybe a hopeless problem. Yes, they prove it not for a single-output function but for a multiple-output function. Still, to me it seems too good to be correct. Even worse, assuming NCC they also resolve other open problems in complexity theory. I am worried.

What we suggest is to catalog and study the consequences of their results. If we find that their results lead to a contradiction, then there was something to be worried about. Or perhaps it would mean that NCC is false. If we find no contradiction, then everything we discover is also a consequence of NCC. Either way we learn more.

AFKL Functions

Let’s call a Boolean function an AFKL function provided it has Boolean circuit complexity {\Omega(n \log n)} if the NCC is true. Thanks to AFKL, we now know that integer multiplication is an AFKL function. I started to wonder: what other functions are in this class? Here are some examples:

  • Integer multiplication

  • Integer multiplication by a power of {2}

  • {\mathsf{FLIP}}

  • Discrete convolution

  • Sparse convolution

We describe the last three next and show that each has a linear-size reduction from the {\mathsf{shift}} function.

The Flip Function

Define {\mathsf{FLIP}_{n}: \{0,1\}^{n} \rightarrow \{0,1\}^{n}} by

\displaystyle  \mathsf{FLIP}_{n}(0^{k}1\alpha)= 1\alpha 0^{k}

for {k \ge 1}. For any input not of this form, let {\mathsf{FLIP}_{n}} be {0^{n}}.
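A direct Python transcription of this definition may help fix the convention; this is a sketch, and the name `flip` is ours.

```python
def flip(s: str) -> str:
    """FLIP_n: inputs of the form 0^k 1 alpha with k >= 1 map to
    1 alpha 0^k; any other input maps to 0^n."""
    n = len(s)
    k = n - len(s.lstrip("0"))   # length of the leading block of 0s
    if k < 1 or k == n:          # no leading 0, or all 0s: not of the form
        return "0" * n
    return s[k:] + "0" * k       # move the leading 0-block to the end
```

For example, `flip("001101")` returns `"110100"`, while inputs starting with a `1`, such as `"1101"`, map to all zeros.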

Theorem 2 The Boolean function {\mathsf{FLIP}_{n}} is an AFKL function.

Proof: Let {x,k} be the input to {\mathsf{shift}}, where {|x| = n} and {k \le n} is given in binary. In linear size we can test whether {k = 0}, in which case there is nothing to do beyond padding, so presume {k \geq 1}. The first step is to create

\displaystyle  0^{n-k}10^{k-1}.

This is just binary-to-unary conversion and has linear-size circuits—as in multiplex decoding and as remarked by AFKL. This becomes the first {n} bits of an application of {\mathsf{FLIP}} to the {2n}-bit string

\displaystyle  0^{n-k}10^{k-1} x.

It yields

\displaystyle  10^{k-1} x 0^{n-k}.

Changing the first bit to {0} then leaves the desired output of {\mathsf{shift}}. \Box
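The steps of this proof can be checked on a small instance. The following Python sketch (names ours) composes them; note that it assumes {1 \le k < n}, since when {k = n} the concatenated string has no leading {0}, and that case needs separate, still linear-size, handling.

```python
def shift_via_flip(x: str, k: int) -> str:
    """Compute 0^k x 0^{n-k} with one application of FLIP,
    following the proof of Theorem 2."""
    n = len(x)
    assert 1 <= k < n            # k = n needs a separate special case

    def flip(s: str) -> str:     # FLIP: move the leading 0-block to the end
        j = len(s) - len(s.lstrip("0"))
        return s[j:] + "0" * j if 0 < j < len(s) else "0" * len(s)

    unary = "0" * (n - k) + "1" + "0" * (k - 1)  # binary-to-unary step
    out = flip(unary + x)        # gives 1 0^{k-1} x 0^{n-k}
    return "0" + out[1:]         # clear the leading 1
```

On `x = "1011"` and `k = 2` this returns `"00101100"`, agreeing with a direct computation of {\mathsf{shift}}.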

The point is that {\mathsf{FLIP}_{n}} is a super-simple function. It just moves the initial block of {0}’s in a string to the end. It is amazing that this function should require non-linear, indeed {\Omega(n\log n)}-size, circuits.

This also means that Ken’s function, which takes {x \in \{0,1,2\}^*} and moves all the {2}’s to the end of {x}, is hard even in the special case where all the {2}’s are at the front. What is strange is that Ken proves his function equivalent to another special case, in which {|x|} is even and exactly half the characters are {2}. This latter case is one where {\mathsf{FLIP}} is easy, but the two cases are separate. All this is touch-and-go enough to compound our “worry.”

The Sparse Convolution Function

The following is also an AFKL function.

\displaystyle  y_{l} = \bigvee_{i=1}^{n-l} w_{i} \wedge x_{i+l},

for {l=1,\dots,n}, where an empty OR is defined to be {0}. This can be restricted even further to the case where exactly one of the {w_{i}} is {1} and the rest are {0}. Call this the sparse convolution function.
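In Python the convolution just defined reads as follows; this is our own sketch, with 0-indexed lists standing in for the 1-indexed bits, and the empty OR at {l = n} handled by `max`'s `default`.

```python
def convolution(w, x):
    """y_l = OR_{i=1}^{n-l} (w_i AND x_{i+l}) for l = 1,...,n,
    with w and x given as equal-length lists of 0/1 bits."""
    n = len(x)
    return [max((w[i] & x[i + l] for i in range(n - l)), default=0)
            for l in range(1, n + 1)]
```

In the sparse case, where {w} has a single {1} at (1-indexed) position {j}, the output is simply {y_l = x_{j+l}} when {j + l \le n} and {0} otherwise.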

Theorem 3 The sparse convolution is a monotone AFKL function.

Proof: We will give a sketch of why this is true. Define

\displaystyle  y_{l} = \bigvee_{i = 1}^{n-l} \bar{x}_{1} \wedge \cdots \wedge \bar{x}_{i} \wedge x_{i+1} \wedge x_{i+l}.

It is not hard to show that this yields the {\mathsf{FLIP}} function. We can reduce computing it to a convolution of the {x_{i}}’s with {\Gamma(i)}, where

\displaystyle  \Gamma(i) = \bar{x}_{1} \wedge \cdots \wedge \bar{x}_{i} \wedge x_{i+1}.

The key is to note that exactly one {\Gamma(i)} will be non-zero, and so the convolution is sparse. \Box
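To see the sketch in action, the following Python snippet (our own illustration, not AFKL's code) builds {\Gamma} from {x} and checks that the resulting sparse convolution reproduces {\mathsf{FLIP}}: since {\Gamma(i) = 1} only at {i = k}, the length of the leading {0}-block, the convolution outputs {x_{k+1} \cdots x_{n} 0^{k}}.

```python
def flip_via_sparse_convolution(x):
    """Gamma(i) = (NOT x_1) AND ... AND (NOT x_i) AND x_{i+1} is 1 for
    exactly one i (if any); convolving x with Gamma yields FLIP(x).
    x is a list of 0/1 bits."""
    n = len(x)
    # w[i-1] = Gamma(i) for i = 1..n-1, padded with a trailing 0
    w = [int(all(b == 0 for b in x[:i]) and x[i] == 1) for i in range(1, n)]
    w.append(0)
    return [max((w[i] & x[i + l] for i in range(n - l)), default=0)
            for l in range(1, n + 1)]
```

On `x = [0, 0, 1, 1, 0, 1]` the single nonzero {\Gamma} is {\Gamma(2)}, and the output is `[1, 1, 0, 1, 0, 0]`, i.e. {1\alpha 0^{2}} as {\mathsf{FLIP}} requires.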

The sparse convolution function raises an interesting question: Are the methods for sparse FFT useful here? The lower bound for AFKL functions suggests that they are not applicable.

Is the NCC-Boolean Connection New?

The subtitle of our post marveled that a core-theory advance on circuits for multiplication had come via the practical side of throughput in computer networks. AFKL deserve plaudits for linking two communities. We should mention that one theoretician we both know well, Mark Braverman, with his students Sumegha Garg and Ariel Schvartzman at Princeton, proved a fact about NCC that is relevant to this discussion:

Theorem 4 Either NCC is false, or bit-operations save a whole {(\log n)^{\Omega(1)}} factor in the network size.

Even this paper, however, does not address lower bounds on Boolean circuits. The only prior link between NCC and Boolean complexity is a 2007 paper by Søren Riis, which is cited by AFKL and has a 2011 followup by Demetres Christofides and Klas Markström. The paper by Riis introduces a new “guessing game” on graphs and demonstrates that a lower-bound conjecture of Leslie Valiant needs to be rescued by dividing by a {\log\log n} factor. Theorem 4, however, seems to say that no such {\log\log n} shading can apply to NCC.

When we ask Google “network coding conjecture Boolean circuit lower bounds” (without quotes), the first page shows AFKL, our posts, and this 2014 survey by Ryan Williams—which mentions neural networks but not NCC. On the next page of hits we see Riis and the followup paper, but nothing else that seems directly relevant. Nor does appending “-multiplication” help screen out AFKL and our posts.

There is said to be empirical evidence for NCC. We wonder, however, whether that has reached the intensity of thought about circuit lower bounds. We say this because the implications from NCC make three giant steps:

  1. Not only does it assert a super-linear circuit lower bound (okay, for a function with {n} output bits), but…

  2. …it asserts {\Omega(n\log n)}

  3. …for functions that are easily in Turing machine linear time.

So one side of our worry is whether NCC can actually shed light on so many fundamental issues from complexity theory, more than absorbing light. At the very least, AFKL have re-stimulated interest in all of these issues.

Open Problems

Is {\mathsf{FLIP}} hard? Is NCC true? What other Boolean functions are AFKL functions? What about other consequences of the NCC to complexity theory?

Comments
  1. May 7, 2019 1:51 am

    I don’t understand why Lipton makes a big deal of this non-bidirectional result. It is clear that integer multiplication is a deeper problem and that NCC is false.

    • rjlipton permalink*
      May 7, 2019 7:06 am

      Dear L:

      We believe that NCC could be true. It has been around a while and no one has yet shown it to be false. We do feel that strong conjectures help advance any area of mathematics. We see that clearly in many parts of mathematics in general and complexity theory in particular. So that is why we started to study more consequences of NCC.

      I hope that helps.


  2. May 7, 2019 9:50 am

    There is no way Integer Multiplication is that easy. Also what do you mean by ‘…for functions that are easily in Turing machine linear time’? Integer multiplication is not in linear time. Can you make that clear?

  3. May 7, 2019 10:01 am

    There is no way Integer Multiplication is that easy.

    1. What do you mean by ‘…for functions that are easily in Turing machine linear time’? Integer multiplication is not in linear time. Can you make that clear? Are you referring to SHIFT?

    2. You say ‘We describe the last three next. We show they have linear size-preserving reductions from the {\mathsf{shift}} function’. However clearly you have avoided the harder DISCRETE CONVOLUTION. Does this truly linearly reduce to SHIFT?

  4. May 7, 2019 12:47 pm

    For your question 1, mainly we mean that shift and flip are simple linear-time computable functions. For your question 2, the point is the opposite direction: shift linearly reduces to discrete convolution.

  5. May 7, 2019 6:55 pm

    ‘shift linearly reduces to discrete convolution’. So INTEGER MULTIPLICATION is indeed the real deal and not SHIFT. Get INTEGER MULTIPLICATION in linear time and you have disproved NCC (in a harder way) and proved linearity of the others. OK, I am giving a relaxed bet: ‘There is a physically realizable model of computation in which INTEGER MULTIPLICATION is in linear time’.

  6. May 8, 2019 2:10 pm

    To be honest, I don’t see why you’re surprised that FLIP also needs an n log n size circuit, or at least why you find it any more surprising than that SHIFT needs n log n size. What are the barriers to proving superlinear size lower bounds for circuits?
