Can one route requests forever on an expander graph?

Noga Alon is one of the greatest combinatorialists in the world. He has won many awards and has made countless contributions to almost all aspects of combinatorics and theory.

Today I want to talk about a problem that occurred to me after listening to his talk at our recent theory day. It concerns a neat result that he has on routing on expander graphs.

On-line Routing on Expanders

Alon and Michael Capalbo have a pretty paper on routing on sufficiently strong expanders. Let’s call them AC—not to be confused with the complexity class. Suppose that ${G}$ is an ${n}$ vertex graph that is a strong expander—for now just think of it as an expander. Then, they consider the following natural routing game. There are two players, the requester and the router. The requester is allowed to make a request of the form ${(s,t)}$ where ${s}$ and ${t}$ are vertices of the graph ${G}$. The router must then find a path from ${s}$ to ${t}$ in the graph. We will say that all the edges on this path are in use. Think of edges as network links that can only carry one “conversation” at a time. The game allows the requester to continue to make one request after another. The router must satisfy each request as it is made. Of course the router can only use edges that are not in use: thus, once an edge is used for a request it cannot be part of another path in the future.
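To make the rules of the game concrete, here is a toy model in Python. The `NaiveRouter` class below is purely illustrative—it greedily routes each request by breadth-first search over the edges still free, which is emphatically not AC’s strategy, just a way to see how edges get consumed.

```python
from collections import deque

class NaiveRouter:
    """Toy model of the on-line routing game: each edge is single-use.

    This greedy BFS router only illustrates the rules of the game;
    it is not the Alon-Capalbo strategy.
    """
    def __init__(self, edges):
        # free holds the undirected edges not yet in use
        self.free = set()
        self.adj = {}
        for u, v in edges:
            self.free.add(frozenset((u, v)))
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)

    def request(self, s, t):
        """Find an s-t path over unused edges and mark it in use.

        Returns the path as a list of vertices, or None if the
        router fails (no path over free edges exists).
        """
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                break
            for v in self.adj.get(u, ()):
                if v not in parent and frozenset((u, v)) in self.free:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return None  # the router is stuck on this request
        # walk parent pointers back from t, then mark the edges used
        path = [t]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        path.reverse()
        for u, v in zip(path, path[1:]):
            self.free.discard(frozenset((u, v)))
        return path
```

On a 4-cycle, for instance, two requests for ${(0,2)}$ succeed (one per side of the cycle) and a third must fail, since all four edges are then in use.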

Further the router has no knowledge of the future requests that are going to be made—it is solving an on-line game. This is what makes the game both realistic and interesting. The goal of the router, of course, is to never fail—to always be able to handle the requests.

There are two simple restrictions on the requester. First, the requester is restricted to make requests such that each vertex appears as a source or sink of a request at most once.

Note that AC—remember, that’s Alon and Capalbo—are able to relax this quite a bit, but some restriction is needed. Otherwise, just make ${d+1}$ requests using the same vertex ${s}$ of degree ${d}$ and the router must fail. Second, the requester is limited to a total of some ${m}$ requests. This is also natural. Otherwise, since edges can only be used once, the router would fail for the simple reason that all the edges are gone. The question is how many requests can the router handle? Their beautiful result is:

Theorem: On any strong expander, there is a deterministic strategy that can successfully route any ${O(n/\log n)}$ number of requests.

This is essentially optimal, since on average the paths will be of length ${\Omega(\log n)}$, and so any more requests would potentially use up all the edges. See their paper for the exact definition of strong expander.

The proof of the result is quite neat and involves some clever maintenance of a core partial expander with a special structure. Note, prior to this result there were weaker bounds. For example, Alan Frieze has an off-line randomized algorithm with the same bound on the number of requests. AC get the optimal bound, their algorithm is deterministic, and it is on-line: the router makes its decisions as the requests come—a very realistic assumption.

Routing Forever

After theory day I soon thought about how to improve AC’s theorem. Why not see if there could be a router that could handle requests forever? Of course, this is impossible, since the router must run out of edges eventually.

But wait. Let’s change the game as follows. The requester cannot expect the router to handle more than order ${m = O(n/\log n)}$ requests. Thus, let’s give the requester a new ability: the requester is allowed to ask the router to release a previous request. This will free up the edges that the router used to fill that request.

The question now is simple: Suppose that at no time are there more than ${m}$ requests that have not been released. Can the router go on forever? Or must it eventually get stuck?

I have thought about this with my graduate students, especially Atish Das Sarma, Subrahmanyam Kalyanasundaram, and Danupon Nanongkai. We cannot prove that there is a forever router, nor can we prove that none exists. The difficulty is that even though there are no more than ${O(n/\log n)}$ active requests, the requests and releases can be asked for in an adversarial manner. The requester could perhaps force the router to make a “bad” choice, then release some requests, and by repeating this over and over eventually get the router into a bad state.

We were able to prove the following special case:

Theorem: On any sufficiently strong expander, there is a deterministic strategy that can successfully route forever, provided the requests are released in a FIFO order, where the FIFO has length of order ${m}$.

Thus, as the requests come in they are placed in a FIFO of length ${m = O(n/\log n)}$. Once the FIFO is full the next request forces the oldest request to be released. This is a pretty realistic model—it seems reasonable that old connections are eventually released.

The proof of this theorem uses AC’s result. We tried to redo their proof for this theorem, but finally realized that a very simple strategy would work, and we only needed to use their router’s strategy as a black-box.

We imagine that our expander is strong enough to contain two strong expanders on the same ${n}$ vertices. Each of the expanders is strong enough in the sense of AC’s theorem. Call the first expander the “top” and the other the “bottom” expander. Also, we assume that the router knows which edges are from each expander.

The routing strategy is simple: Initially both expanders have no paths. On the first ${m=O(n/\log n)}$ requests use the top expander to handle all the requests. Follow AC’s strategy here. Then switch to the bottom expander. Again use AC’s strategy here for the next ${m}$ requests. Then, switch to the top expander. The key point is that by now all the requests there have been released. Thus, the expander is “clean” and we can again use AC’s strategy there. This goes on forever switching between the two expanders.
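The switching logic above can be sketched in a few lines. In the sketch below, `ac_router_factory` is a hypothetical stand-in for AC’s black-box strategy: calling it returns a fresh router on the named (top or bottom) expander, assumed able to handle any ${m}$ requests. With FIFO releases of window ${m}$, by the time we switch back, all requests on the returning expander have been released, so starting a fresh black-box router on it is justified.

```python
def forever_router(requests, m, ac_router_factory):
    """Sketch of the two-expander alternating strategy.

    `ac_router_factory(which)` is a hypothetical black box that
    returns a fresh AC router on the "top" or "bottom" expander,
    each assumed able to serve any m requests. With FIFO releases
    of window m, the expander we switch back to is always clean.
    Yields which expander served each request.
    """
    active = "top"
    router = ac_router_factory(active)
    served = 0
    for s, t in requests:
        if served == m:
            # the other expander's m requests have all been
            # released by now, so it is clean: switch to it
            active = "bottom" if active == "top" else "top"
            router = ac_router_factory(active)
            served = 0
        router.route(s, t)
        served += 1
        yield active
```

For example, with ${m = 2}$ the expanders serve requests in the pattern top, top, bottom, bottom, top, and so on forever.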

It’s a pretty simple idea. But, it shows that there is some hope that routing can be done forever. Actually at first we thought the FIFO order might even be the worst case. Note, even if one request is allowed to stay for a very long time, then when we switch to the top expander it will not be “clean” and AC’s strategy does not apply.

A final comment. Instead of requiring the expander to have two copies of a strong expander, the same result would follow if we allowed each edge to be used at most twice. Thus, in network terms if each edge/link had the capacity to carry two conversations, then our router can route forever.

Finally, we have several ideas and other partial results, some of which we hope could help us generalize the above result substantially. More on this in the future.

Open Problems

Does there exist a forever router that can handle non-FIFO releases?

December 6, 2009 3:18 pm

I find this expander routing result and the related result of Frieze a bit over-hyped. Allowing some constant congestion allows much easier results even for moderate expanders. The stronger results require a “strong” expander, which boils down to being able to create a constant number of copies of expanders which can then be used to avoid congestion. No one can even easily state the definition of a strong expander. In fact it is defined so that one can get copies easily. What is the value of an “optimal” result after changing the setup? The results are not technically easy, but that doesn’t make them automatically interesting or well-motivated.

December 6, 2009 3:27 pm

I thought the point here is to allow the router to go on forever. That is the only issue that we addressed. But constant congestion is probably okay as a practical matter.

December 6, 2009 9:43 pm

Suppose that the router gets stuck for instance I at some point, that is, at time t-1 it was serving a set of requests R and at time t it gets a new request r which it cannot fulfill. Can’t we consider R + r as an instance that contradicts the AC procedure? Or, although it is deterministic, the state we have over instance I at time t is different than running it over R + r; or the constraint on the requests makes R + r a non-feasible instance…

December 7, 2009 8:04 am

No. The point is that requests and releases do not fit the AC inductive proof. So it would not be a problem for their proof.