# Network Coding Yields Lower Bounds

*Practice leads theory*

Peyman Afshani, Casper Freksen, Lior Kamma, and Kasper Larsen have a beautiful new paper titled “Lower Bounds for Multiplication via Network Coding”.

Today we will talk about how practical computing played a role in this theory research.

The authors (AFKL) state this:

In this work, we prove that if a central conjecture in the area of network coding is true, then any constant degree boolean circuit for multiplication must have size $\Omega(n \log n)$, thus ~~almost~~ completely settling the complexity of multiplication circuits.

We added the strikeout because of the upper bound that we discussed recently here.

AFKL have conditionally solved a long-standing open problem: “How hard is it to multiply two $n$-bit numbers?” Their proof shows that a conjecture from practice implies a circuit lower bound. This is rare: using a conjecture from practice to solve an open problem in complexity. We have used conjectures from many parts of mathematics, and from some parts of physics, to make progress, but drawing on experience with practical networking is strikingly fresh.

## Integer Multiplication

The authors AFKL explain the history of the multiplication problem. We knew some of the story, but not all the delicious details.

In 1960, Andrey Kolmogorov conjectured that the thousands-of-years-old $O(n^2)$-time algorithm is optimal, and he arranged a seminar at Moscow State University with the goal of proving this conjecture. However, only a week into the seminar, the student Anatoly Karatsuba came up with an $O(n^{\log_2 3})$-time algorithm. The algorithm was presented at the next seminar meeting and the seminar was terminated.
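Karatsuba's trick replaces the four half-size multiplications of the grade-school recursion with three. A minimal sketch in Python (our own illustration, not code from the paper):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three recursive half-size products."""
    if x < 16 or y < 16:              # base case: fall back to built-in multiply
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    hi = karatsuba(x_hi, y_hi)                            # high parts
    lo = karatsuba(x_lo, y_lo)                            # low parts
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo   # cross terms: one multiply
    return (hi << (2 * half)) + (mid << half) + lo
```

Three recursive calls on half-size inputs give the recurrence $T(n) = 3T(n/2) + O(n)$, hence the $O(n^{\log_2 3})$ bound.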

Ken and I wish we could have Kolmogorov’s luck, in one of our seminars. Partly because it would advance knowledge; partly because it would let us out of teaching. Sweet.

The main result of AFKL is:

Theorem 1 Assuming the Network Conjecture, every general boolean circuit that computes the product of two $n$-bit integers has size order at least $n \log n$.

This says that the boolean complexity of multiplication is super-linear. No restriction to bounded depth, no restriction on the operations allowed, no restrictions at all. Given our non-existent lower bounds, this is remarkable. If it were unconditional, it would be a terrific result. But it is still a strong one.

We will next explain what the Network Coding Conjecture (NCC) is.

## Network Coding

One of the foundational papers was authored by Rudolf Ahlswede, Ning Cai, Shuo-Yen Li, and Raymond Yeung. The paper has close to ten thousand citations, which is amazing for a theory paper.

In basic networks, each node can receive messages from and send messages to other nodes. Nodes can only move messages around; they are not allowed to peer into a message. The concept of *network coding* is to allow nodes also to decode and encode messages: nodes can peer into messages and create new ones. The goal, of course, is to decrease the time required to transmit information through the network.

The following example combines figures from a 2004 paper by Zongpeng Li and Baochun Li, which formulated the NCC. At left is a situation where two senders, S1 with an $n$-bit message $a$ and S2 with an $n$-bit message $b$, wish to transmit to respective receivers T1 and T2. These are unicast demands: $a$ goes only to T1 and $b$ only to T2. The network’s links are one-way as shown, with two intermediate relay nodes, and each link can carry $n$ bits at any one time.

If $a$ and $b$ are black boxes that must be kept entire, there is no way to solve this in three time steps. But if the nodes can read messages and do lightweight computations, then the middle diagram gives a viable solution. The first relay node reads $a$ and $b$ and on-the-fly transmits their bitwise exclusive-or to the second relay, which forwards it to each receiver. Each receiver has also received the other party’s message directly, so a final exclusive-or recovers the message intended for it.
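The exclusive-or relay is easy to check in a few lines of Python (a toy sketch of the figure, with our own variable names):

```python
def xor_bits(u, v):
    """Bitwise exclusive-or of two equal-length byte strings."""
    return bytes(p ^ q for p, q in zip(u, v))

a = bytes([0b10110010])   # S1's message, destined for T1
b = bytes([0b01101100])   # S2's message, destined for T2

coded = xor_bits(a, b)    # the first relay encodes; the second relays the
                          # single coded packet to both receivers
assert xor_bits(b, coded) == a   # T1: direct copy of b, plus the coded packet
assert xor_bits(a, coded) == b   # T2: direct copy of a, plus the coded packet
```

One coded packet serves both receivers, which is exactly what plain store-and-forward routing cannot do on the directed network.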

The ability to look inside messages seems powerful, and there are networks where it helps even more dramatically. Incidentally, as noted by Wikipedia, the exclusive-or trick was anticipated in a 1978 paper showing how the two senders can exchange their messages and by relaying them to a satellite which transmits back to each.

## The Conjecture

However, there is another solution if the links are bi-directional and messages can be broken in half. Sender S1 simply routes half of $a$ one way around the network and the other half the other way. Sender S2 does similarly. This is shown at far right. No link ever carries more than its capacity of $n$ bits of total load, and the three-step elapsed time is the same. Moreover, the link between the two relay nodes is not needed. This is just an undirected network commodity flow with fractional half-units.
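Fractional routing needs no arithmetic at the relays at all; each sender just splits its message. A minimal sketch, under the assumption that the two halves travel edge-disjoint routes around the network:

```python
def split_halves(msg):
    """Split a message into two halves to be routed along different paths."""
    h = len(msg) // 2
    return msg[:h], msg[h:]

a = b"10110010"                  # S1's n-bit message (as a byte string)
first, second = split_halves(a)  # each half loads any single link with n/2 bits
assert first + second == a       # T1 reassembles by simple concatenation
```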

In fact, *no* example is known in an undirected network where encoding beats fractional routing. That is, in every known case the network coding rate is just the flow rate of the network. The Network Coding Conjecture (NCC) is, informally:

The coding rate is never better than the flow rate in undirected graphs.

The paper by Li and Li gave formal details and several equivalent statements. Quoting them:

For undirected networks with integral routing, there still exist configurations that are feasible with network coding but infeasible with routing only. For undirected networks with fractional routing, we show that the potential of network coding to help increase throughput in a capacitated network is equivalent to the potential of network coding to increase bandwidth efficiency in an uncapacitated network. We conjecture that these benefits are non-existent.

## Good and Bad News

What has become of the NCC in the fifteen years since? Here’s how Ken and I see it:

*The Good News*: The NCC helps solve long-standing open problems. Since this conjecture is widely believed, this is impressive. Besides integer multiplication, the NCC has been used to prove other lower bounds. For example, Larsen, working with Alireza Farhadi, Mohammad Hajiaghayi, and Elaine Shi, used it to prove lower bounds on sorting with external memory.

*The Bad News*: The NCC helps solve long-standing open problems. This suggests that the conjecture could be deep and hard to resolve. The boolean complexity of integer multiplication is a long-standing open question. Since the NCC implies a super-linear lower bound, proving the conjecture could well be hopeless.

I have mixed feelings about these lower bound results. They are impressive and shed light on hard open problems. But I wonder if the NCC could be wrong. There is a long history in complexity theory of guesses of the form:

The obvious algorithm is optimal

have failed. The situation strikes us as resembling that of the (Strong) Exponential Time Hypothesis, in ways we discussed four years ago.

## How the New Paper Works

The authors AFKL did not know that an upper bound had been proved for integer multiplication when they posted their paper. They did, however, prove a stronger version of Theorem 1 for a problem with a known upper bound. This is to create circuits with $n + O(\log n)$ input gates and $2n$ output gates that, given $x$ and a binary number $k \le n$, output the $2n$-bit string whose bits $k+1$ through $k+n$ equal $x$, with all other bits $0$. A conditional lower bound on this *shift* task implies the same for multiplication, since the shift is the same as multiplication by $2^k$.
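The reduction is immediate to check: a left shift by $k$ is multiplication by $2^k$, so any multiplication circuit also solves the shift task. A quick Python sanity check (our own illustration):

```python
def shift_task(x, k, n):
    """Place x's n bits at positions k+1..k+n of a 2n-bit output, rest zero."""
    assert x < (1 << n) and k <= n
    return x << k

n, x, k = 8, 0b10110011, 5
assert shift_task(x, k, n) == x * (1 << k)   # shifting == multiplying by 2**k
assert shift_task(x, k, n) < (1 << (2 * n))  # the output fits in 2n bits
```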

Theorem 2 Assuming the NCC, circuits for the shift task need size order $n \log n$.

The proof is disarmingly elementary: the $n$ input bits give “senders” and each value of $k$ creates a different set of “receivers.” With a circuit $C$ fixed, they show one can fix a shift $k$ so that the average distance from sender to receiver in an undirected multi-commodity flow is $\Omega(\log n)$, giving $\Omega(n \log n)$ total flow. If $C$ achieves smaller size, then it represents a counterexample to the NCC.

Pretty neat—this is half a page in the paper. The paper proves more intricate results relating to conjectures by Les Valiant about Boolean circuits of bounded fan-in and depth that compute permutations and their reduction to depth-3 circuits of unbounded fan-in. This also may extend to the sorting/shifting problem Ken wrote about long ago in a guest post for Lance Fortnow and Bill Gasarch’s blog.

## Open Problems

Is the NCC true? Can it be proved for some interesting classes of graphs? I believe it is known for tiny graphs of at most six nodes. What about, for example, planar graphs?

[inserted “conditionally” before “solved” in intro]

[Fixed AFKL typos]


You might want to clarify that the NCC is about unicast messages (so S1 is only trying to deliver a to T1 not both T1 and T2). This tripped me up for a bit.

The wording you used was a bit unclear as to whether they wanted to transmit the message to both or just one, and I had to open up the paper to figure out what was going on. Otherwise great explanation!

Dear Peter:

Sorry we did not get the explanation as clear as we wished. But you are kind.

Best

How does this fit with multiplying in a different representation of the integers? For example, in the “Residue number system” representation (not sure if I can provide links in this forum, but it is on wikipedia) multiplication seems to be easier, and trivially parallelizable.

Maybe the storage size of numbers in that representation is n log n of the size n in normal binary format? Hmm… but then multiplying each “piece” of size m_i might take m_i log m_i (maybe less actually since it is multiplication modulo a number of size m_i), so circuit complexity is Sum m_i log m_i. And m << n. How does this all fit together with the n log n result? Or is there an implicit assumption in this result that the representation of the integers is the usual binary representation?

Maybe I'm missing something obvious. Any insights or clarification appreciated.

Dear Peng:

We have discussed this very issue before. In other representations it is possible to make addition and multiplication faster. But it seems that it is not possible to also include comparison: that is, deciding whether X < Y. If that were also easy, then that would be a great result.

Best

Perhaps $NCC$ is false. Integer multiplication is a far deeper problem and perhaps harder than $P$ vs $NP$.

Dear L:

I agree completely. I am preparing a follow up post.

Best

Hm, I am very confused after reading your explanation. The abstract of the original paper you cite talks about “CONSTANT DEGREE boolean circuit”, but in your formulation of their theorem “Theorem 1 Assuming the Network Conjecture, every general boolean circuit..” you talk about GENERAL circuits, and explicitly write below: “No restriction at all”.

Another big confusion is that their bound applies to the multiplication by $2^k$ (shift task). Unless I miss something, there is a (trivial) computer program which can do the shift task in linear time. And the bound in the paper is superlinear. So, a problem can have a superlinear CIRCUIT lower bound but still be linearly solvable on a regular computer?

Dear Bogdan:

They do allow general circuits. They allow any functions at a gate. They do not restrict the circuits to have, for example, only monotone gates. They allow any topology for the circuit provided it is acyclic. This means they allow arbitrary depth, for example. The only restriction they use is that the degrees of the circuits are bounded. For in-degree this is standard; for out-degree this can easily be seen to not be an issue.

For details see http://cs.brown.edu/people/jsavage/book/pdfs/ModelsOfComputation_Chapter9.pdf

at Theorem 9.2.1.

Your second point is correct. The result shows that there is a function that has a linear time algorithm but does not have a circuit of linear size. This is surprising, but not impossible.

Best

Just pointing out that you wrote AKKL twice, instead of AFKL.

Dear Cristóbal Camarero:

Oops. I will fix that. We try to get things correct, but we do make errors. Thanks for the comment.

Best