The Network Coding Conjecture Is Powerful
More hard Boolean functions
Peyman Afshani, Casper Freksen, Lior Kamma, and Kasper Larsen (AFKL) have a recent paper which we just discussed.
Today Ken and I will update our discussion.
Their paper assumes the network coding conjecture (NCC) and proves a lower bound on the Boolean complexity of integer multiplication. The main result of AFKL is:
Theorem 1 If the NCC is true, then every Boolean circuit that computes the $\mathsf{shift}$ function has size $\Omega(n \log n)$.
The $\mathsf{shift}$ function is: Given an $n$-bit number $x$ and a number $s$ so that $0 \leq s \leq n$, compute the $2n$-bit product of $x$ by $2^s$:
$$\mathsf{shift}(x, s) = x \cdot 2^s.$$
This is a special case of the integer multiplication problem. In symbols it maps $x$ and $s$ to $x \cdot 2^s$, as in our photo above.
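To make the input-output behavior concrete, here is a minimal Python sketch of the shift map as a function on bit strings. The function name and the most-significant-bit-first convention are ours for illustration only, not anything from the AFKL paper.

```python
def shift(x_bits, s):
    """Return the 2n-bit product x * 2**s as a bit string (MSB first).

    x_bits: a string of n characters '0'/'1' giving x, most significant bit first.
    s: an integer with 0 <= s <= n.
    """
    n = len(x_bits)
    assert 0 <= s <= n
    # x * 2**s written with 2n bits: (n - s) leading zeros, then x, then s zeros.
    return "0" * (n - s) + x_bits + "0" * s

# Example: x = 1011 (eleven) and s = 2 give 00101100, which is 44 = 11 * 4.
assert shift("1011", 2) == "00101100"
assert int(shift("1011", 2), 2) == 0b1011 * 2**2
```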
Our point, however, is not about integer multiplication. Nor even about NCC—no knowledge of it will be needed today, so read on even if you are not aware of NCC. No. Our point is that a whole lot of other Boolean functions would inherit the same circuit lower bound as $\mathsf{shift}$. And several aspects of that seem troubling.
Some Worry
We are impressed by the AFKL paper but also worried. Proving a super-linear lower bound in the unrestricted Boolean complexity model has long been considered a difficult problem. Maybe a hopeless problem. Yes, they are proving it not for a single-output function but for a multiple-output function. Still, it seems too good to be correct. Even worse, assuming NCC they also resolve other open problems in complexity theory. I am worried.
What we suggest is to catalog and study the consequences of their results. If we find that their results lead to a contradiction, then there was something to be worried about. Or perhaps it would mean that NCC is false. If we find no contradiction, then everything we discover is also a consequence of NCC. Either way we learn more.
AFKL Functions
Let’s call a Boolean function an AFKL function provided it has Boolean circuit complexity $\Omega(n \log n)$ if the NCC is true. Thanks to AFKL, we now know that integer multiplication is an AFKL function. I started to think about which functions are in this class. Here are some examples:
- Integer multiplication
- Integer multiplication by a power of $2$ (the $\mathsf{shift}$ function)
- The flip function
- Discrete convolution
- Sparse convolution
We describe the last three next. We show they have linear size-preserving reductions from the $\mathsf{shift}$ function.
The Flip Function
Define $\mathsf{flip}: \{0,1\}^m \rightarrow \{0,1\}^m$ by
$$\mathsf{flip}(0^k 1 w) = 1 w\, 0^k$$
for $0 \leq k < m$ and $w \in \{0,1\}^{m-k-1}$. For any input not of this form (that is, the all-zero string), let $\mathsf{flip}(x)$ be $x$.
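To pin the definition down, here is a tiny Python reference implementation of flip. It only documents the input-output behavior and says nothing about circuit size.

```python
def flip(y):
    """Move the maximal initial block of 0s in the bit string y to the end.

    flip('0' * k + '1' + w) == '1' + w + '0' * k; the all-zero string maps to itself.
    """
    k = 0
    while k < len(y) and y[k] == "0":
        k += 1
    if k == len(y):          # all-zero input: leave it unchanged
        return y
    return y[k:] + "0" * k   # the tail first, then the k leading zeros

assert flip("0001101") == "1101000"
assert flip("0000") == "0000"
assert flip("1010") == "1010"   # no leading zeros: nothing moves
```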
Theorem 2 The Boolean function $\mathsf{flip}$ is an AFKL function.
Proof: Let $(x, s)$ be the input to $\mathsf{shift}$, where $x$ has $n$ bits and $s \leq n$ is given in binary. (We write numbers with the most significant bit first.) In linear size we can test whether $s = n$, when there is nothing to do—the product is then just $x$ followed by $n$ zeros—so presume $s \leq n-1$. The first step is to create the $n$-bit string
$$u = 0^s 1 0^{n-1-s}.$$
This is just binary-to-unary conversion and has linear-size circuits—as in multiplex decoding and as remarked by AFKL. This becomes the first $n$ bits of an application of $\mathsf{flip}$ to the $2n$-bit string
$$u x = 0^s 1 0^{n-1-s} x.$$
It yields
$$\mathsf{flip}(u x) = 1\, 0^{n-1-s} x\, 0^s.$$
Changing the first bit to $0$ then leaves the desired output of $0^{n-s} x\, 0^s = x \cdot 2^s$. $\Box$
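Here is a quick Python check of this reduction, reusing the shift and flip sketches above. The helper name shift_via_flip is ours; the one-hot string u plays the role of the binary-to-unary conversion.

```python
def shift_via_flip(x_bits, s):
    """Compute shift(x, s) with one call to flip, following the proof of Theorem 2."""
    n = len(x_bits)
    if s == n:                       # special case: the answer is just x followed by n zeros
        return x_bits + "0" * n
    u = "0" * s + "1" + "0" * (n - 1 - s)   # one-hot (binary-to-unary) encoding of s
    out = flip(u + x_bits)                  # flip the 2n-bit string u x
    return "0" + out[1:]                    # change the first bit back to 0

# Sanity check against the direct definition, for all small x and s.
for n in range(1, 7):
    for val in range(2 ** n):
        x_bits = format(val, "0{}b".format(n))
        for s in range(n + 1):
            assert shift_via_flip(x_bits, s) == shift(x_bits, s)
```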
The point is that $\mathsf{flip}$ is a super-simple function. It just moves the initial block of $0$'s in a string to the end. It is amazing that this function should have only non-linear, indeed $\Omega(n \log n)$-sized, circuits.
This also means that Ken’s function, which takes a string $x$ and moves all the $0$s to the end of $x$, is hard even in the special cases where all the $0$'s are at the front. What’s strange is that Ken proves his function equivalent to another special case, where $n$ is even and exactly half the characters are $0$. This latter case is one in which counting is easy, but the two cases are separate. All this is touch-and-go enough to compound our “worry.”
The Sparse Convolution Function
The following is also an AFKL function: given $a = a_0 a_1 \cdots a_{n-1}$ and $b = b_0 b_1 \cdots b_{n-1}$ in $\{0,1\}^n$, compute
$$c_k = \bigvee_{i+j=k} (a_i \wedge b_j)$$
for $0 \leq k \leq 2n-1$, where an empty OR is defined to be $0$. This can even further be restricted to the case where exactly one of the $b_j$ is $1$ and the rest are $0$. Call this the sparse convolution function.
Theorem 3 The sparse convolution is a monotone AFKL function.
Proof: We will give a sketch of why this is true. Define
$$a_i = x_i \ \text{ for } 0 \leq i \leq n-1, \qquad b_j = 1 \text{ if } j = s \text{ and } b_j = 0 \text{ otherwise},$$
where $x_i$ denotes the coefficient of $2^i$ in $x$. The $b_j$ come from $s$ by the same linear-size binary-to-unary conversion that we used for FLIP in Theorem 2. It is not hard to show that this yields the $\mathsf{shift}$ function: the convolution of the $a$'s and $b$'s is
$$c_k = \bigvee_{i+j=k} (a_i \wedge b_j) = x_{k-s}$$
(taking $x_m = 0$ for $m$ outside $[0, n-1]$), which read as a $2n$-bit string is exactly $x \cdot 2^s$. The key is to note that exactly one $b_j$ will be non-zero, and so the convolution is sparse. $\Box$
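Again a small Python sketch, reusing the shift function from the first sketch, just to make the bookkeeping concrete. Here boolean_convolution is the $c_k$ formula from the definition, and the test checks that convolving $x$ with the one-hot encoding of $s$ reproduces $\mathsf{shift}(x,s)$; the helper names are ours.

```python
def boolean_convolution(a, b):
    """c_k = OR over i + j = k of (a_i AND b_j), for 0 <= k <= 2n - 1 (empty OR = 0)."""
    n = len(a)
    c = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            c[i + j] |= a[i] & b[j]
    return c

def shift_via_sparse_convolution(x_bits, s):
    """Compute shift(x, s) by convolving x with the one-hot (unary) encoding of s."""
    n = len(x_bits)
    a = [int(ch) for ch in reversed(x_bits)]         # a_i = coefficient of 2^i in x
    b = [1 if j == s else 0 for j in range(n)]       # exactly one b_j is non-zero
    c = boolean_convolution(a, b)                    # c_k = coefficient of 2^k in x * 2^s
    return "".join(str(bit) for bit in reversed(c))  # back to MSB-first, 2n bits

for n in range(1, 7):
    for val in range(2 ** n):
        x_bits = format(val, "0{}b".format(n))
        for s in range(n):                           # s <= n - 1, as in the proofs above
            assert shift_via_sparse_convolution(x_bits, s) == shift(x_bits, s)
```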
The sparse convolution function raises an interesting question: Are the methods for sparse FFT useful here? The lower bound for AFKL functions suggests that they are not applicable.
Is the NCC-Boolean Connection New?
The subtitle of our post marveled that a core-theory advance on circuits for multiplication had come via the practical side of throughput in computer networks. AFKL deserve plaudits for linking two communities. We should mention that one theoretician we both know well, Mark Braverman, with his students Sumegha Garg and Ariel Schvartzman at Princeton, proved a fact about NCC that is relevant to this discussion:
Theorem 4 Either NCC is true, or bit-operations save a whole super-constant factor in the network size.
Even this paper, however, does not address lower bounds on Boolean circuits. The only prior link between NCC and Boolean complexity is a 2007 paper by Søren Riis, which is cited by AFKL, and has a 2011 followup by Demetres Christofides and Klas Markström. The paper by Riis has a new “guessing game” on graphs and a demonstration that a lower-bound conjecture of Leslie Valiant needs to be rescued by dividing by a small factor. Theorem 4, however, seems to say that no such shading can apply to NCC.
When we ask Google “network coding conjecture Boolean circuit lower bounds” (without quotes), the first page shows AFKL, our posts, and this 2014 survey by Ryan Williams—which mentions neural networks but not NCC. On the next page of hits we see Riis and the followup paper but nothing else that seems directly relevant. Nor does appending ‘-multiplication’ help screen out AFKL and our posts.
There is said to be empirical evidence for NCC. We wonder, however, whether that has reached the intensity of thought about circuit lower bounds. We say this because the implications from NCC make three giant steps:
- Not only does it assert a super-linear circuit lower bound (okay, for a function with $2n$ output bits), but…
- …it asserts $\Omega(n \log n)$…
- …for functions that are easily in Turing machine linear time.
So one side of our worry is whether NCC can actually shed light on so many fundamental issues from complexity theory, more than absorbing light. At the very least, AFKL have re-stimulated interest in all of these issues.
Open Problems
Is $\mathsf{shift}$ hard? Is NCC true? What other Boolean functions are AFKL functions? What about other consequences of the NCC for complexity theory?
I don’t understand why Lipton makes a big deal of this non-bidirectional result. It is clear integer multiplication is a deeper problem and $NCC$ is false.
Dear L:
We believe that NCC could be true. It has been around a while and no one has yet shown it to be false. We do feel that strong conjectures help advance any mathematical area. We see that clearly in many parts of mathematics in general and complexity theory in particular. So that is why we started to study more consequences of NCC.
I hope that helps.
Best
There is no way Integer Multiplication is that easy. Also what do you mean by ‘…for functions that are easily in Turing machine linear time’? Integer multiplication is not in linear time. Can you make that clear?
There is no way Integer Multiplication is that easy.
1. What do you mean by ‘…for functions that are easily in Turing machine linear time’? Integer multiplication is not in linear time. Can you make that clear? Are you referring to SHIFT?
2. You say ‘We describe the last three next. We show they have linear size-preserving reductions from the {\mathsf{shift}} function’. However clearly you have avoided the harder DISCRETE CONVOLUTION. Does this truly linearly reduce to SHIFT?
For your question 1, mainly we mean that shift and flip are simple linear-time computable functions. For your question 2, the point is the opposite direction: shift linearly reduces to discrete convolution.
‘shift linearly reduces to discrete convolution’. So INTEGER MULTIPLICATION is indeed the real deal and not SHIFT. Get INTEGER MULTIPLICATION in linear time and you have disproved $NCC$ (in a harder way) and proved linearity of others. Ok I am giving a relaxed bet ‘There is a physically realizable model of computation in which INTEGER MULTIPLICATION is in linear time’.
To be honest, I don’t see why you’re surprised that FLIP also needs an $n \log n$ size circuit, or at least why you find it any more surprising than that SHIFT needs $n \log n$ size. What are the barriers to proving superlinear size lower bounds for circuits?
Ok I agree that the possibility exists than integer mult is in $\Omega(n\log n)$ and perhaps it is in $\Omega(n\log n)$ in all physically realizable models of computation.
‘..I agree that the possibility exists that* integer multiplication*..’
I am backtracking my backtrack. I feel it is still plausible that the bet ‘There is a physically realizable model of computation in which INTEGER MULTIPLICATION is in linear time’ is safe.
Would this disprove NCC https://arxiv.org/pdf/1911.07124.pdf?
Wrong?