Error correction for chatting
Bernhard Haeupler is an Assistant Professor in Computer Science at CMU. He previously spent a year as a postdoc at Microsoft Research, Silicon Valley, and a year visiting Bob Tarjan at Princeton. He is currently hard at work on the Program Committee of STOC 2015.
Today we wish to talk about the problem of error correction for chatting.
Often in communication between two or more parties errors can occur owing to the nature of the communication link, and these errors can corrupt the meaning of the messages. Error detection and correction is a well-studied, immensely important area, started formally by Claude Shannon in 1948 and still extremely active today.
At the recent FOCS 2014 meeting held in November, Haeupler presented work on the problem of adding error correction efficiently to an interactive channel. Thus there are still interesting basic questions in error correction.
Chats and Errors
The situation is simple: Imagine that Alice wants to chat with Bob, her constant companion. Alice sends a message to Bob; he thinks a while, then sends a message back to Alice. She thinks some and replies with her next message, and so on. This continues for some time, and of course they both want the chat to be accurate. Since the channel they are using may add errors, Alice and Bob want to be sure that no errors are propagated:
Alice: Hi Bob let’s go to lunch. Are u free?
Bob: Sure. Do u know Thai place on New St.?
Alice: I ate there once. Good.
Bob: Ok. How about 12:30?
Alice: See u then.
Of course if a small error occurred and the message “Ok. How about 12:30?” became “Ok. How about 12:00?” then Bob would be waiting 30 minutes for Alice. This requires only a two-bit error. A larger error could make the third message say, “I hate their food.”
The standard answer is that they should use some type of error correction to protect against that. But how to do that in the most efficient manner is the subject of Haeupler’s paper, “Interactive Channel Capacity Revisited.”
There seems to be a simple answer: just use error correction on each message and you are done. Suppose the message takes up $n$ bits in the absence of errors. If you need to transmit $N$ bits to cope with the errors, then your overhead factor is $N/n$ and your rate of meaningful bits is $n/N$. As a function of an error parameter $\epsilon$ on the channel, the best achievable rate is also called the capacity $C(\epsilon)$, but the bounds to come will be easier to understand if we think of $T = 1 - C$ as the "tax" imposed by the channel. Thus to get bounded overhead we need to bound the tax $T$ away from $1$.
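As a concrete instance of these definitions, here is a tiny calculation; the particular numbers are made up purely for illustration:

```python
n = 1000          # meaningful bits in the error-free conversation
N = 1250          # bits actually transmitted to survive channel errors

overhead = N / n  # overhead factor: transmitted bits per meaningful bit
rate = n / N      # rate of meaningful bits
tax = 1 - rate    # the channel's "tax"

print(overhead, rate, tax)
```

So a chat that balloons from 1000 to 1250 transmitted bits pays a tax of $0.2$; keeping the overhead bounded is the same as keeping the tax away from $1$.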
Shannon showed that for the binary symmetric channel, in which $\epsilon$ is the independent probability of flipping a transmitted 0 to 1 or 1 to 0, one can get the tax down to the binary entropy

$$H(\epsilon) = \epsilon\log_2\frac{1}{\epsilon} + (1-\epsilon)\log_2\frac{1}{1-\epsilon},$$

asymptotically as $N \to \infty$, with tiny chance of overall failure. Efficient coding schemes were found to approach this, and were later extended to most cases where up to $\epsilon N$ errors can be made in an adversarial manner, provided the adversary observes only what is transmitted on the channel.
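To get a feel for how small the Shannon tax is, one can tabulate the binary entropy numerically; this snippet is just an illustration, not anything from the paper:

```python
import math

def binary_entropy(eps: float) -> float:
    """H(eps) = eps*log2(1/eps) + (1 - eps)*log2(1/(1 - eps))."""
    if eps <= 0.0 or eps >= 1.0:
        return 0.0
    return eps * math.log2(1 / eps) + (1 - eps) * math.log2(1 / (1 - eps))

# The Shannon tax for the binary symmetric channel is H(eps),
# so the capacity is 1 - H(eps):
for eps in (0.1, 0.01, 0.001):
    h = binary_entropy(eps)
    print(f"eps={eps}: tax={h:.4f}, capacity={1 - h:.4f}")
```

Even a 10% flip probability costs only about 47% of the bandwidth, and the tax shrinks rapidly as $\epsilon \to 0$.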
The issue, however, is that when there are multiple messages, the allowed error fraction $\epsilon$ applies to the whole of them. This situation may seem artificial—hey it's theory—but is interesting. The considerations might not apply in force to human chats, but could become significant in chats between automated agents.
Let's say Alice and Bob have some conversation that can be done in the error-free case in $n$ symbols. Here this means $n$ is the classical communication complexity of their conversation, but let's suppose $n$ is just the total length. To simplify further, let's suppose that in the error-free case they would alternate in $n/b$ rounds, with each message being $b$ bits. How can they cope if $n$ and $b$ are known to the channel adversary in advance, and if the adversary can read their transmissions before deciding whether to strike? Quoting the paper:
They want a coding scheme which adds redundancy and transforms any such conversation into a slightly longer $N$-symbol conversation from which both parties can recover the original conversation outcome even when any $\epsilon$ fraction of the coded conversation is corrupted.
The reason this is harder than the usual error model is as follows. Suppose Alice and Bob use a standard error-correction method on each message separately. The adversary could destroy completely any one message and still not affect more than an $\epsilon$ fraction of the bits exchanged, so long as each message has at most $\epsilon N$ bits, which means using at least $1/\epsilon$ rounds. It is this type of error burst that makes the problem harder than the usual error model. There is also the question of whether Alice and Bob's revised chat of $N$ bits must stick to a similar fixed order with messages of some fixed length $b'$.
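The burst-error arithmetic is worth making explicit; this is my own toy calculation, not anything from the paper:

```python
import math

def min_rounds(eps: float) -> int:
    """If a chat of N bits is split into r equal messages of b = N/r bits,
    wiping out one whole message corrupts a b/N = 1/r fraction of the chat.
    Keeping that fraction at most eps forces r >= 1/eps rounds."""
    return math.ceil(1 / eps)

print(min_rounds(0.01))   # 100 rounds needed against a 1% adversary
print(min_rounds(0.001))  # 1000 rounds against a 0.1% adversary
```

So per-message coding alone is only safe when the conversation is chopped into very many short rounds, which is exactly the structure the adversary's burst attacks exploit.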
It is not at first clear that there is any solution. What if the first message is completely wiped out? Suppose the error wasn't Alice saying she hated the food, but rather starting with: "I hate u Bob"? The key idea is that after some subsequent incoherence they could take a deep breath, reset, figure the channel adversary had shot most of his bolt of error, and settle down assured of a relatively error-free retake provided they keep $\epsilon$ small. But how can they communicate so as to gauge how much error, without taking up much more bandwidth?
Leonard Schulman, in his 1993 paper, "Deterministic Coding for Interactive Communication," gave a clever solution that bounded the tax away from $1$, that is by $1 - \delta$ for some constant $\delta > 0$, provided either $\epsilon \le 1/240$ for adversarial errors, or $\epsilon < 1/2$ with independent random errors. But even with the tiny $\epsilon$, his code was proved to exist rather than computed. Although his rate was linear, it was impractical and nowhere close to the Shannon bound.
At FOCS 2012, Zvika Brakerski and Yael Kalai—not related to our friend Gil Kalai—made Schulman's bound constructive and computationally efficient for a constant adversarial error rate $\epsilon$. At FOCS 2013, Gillat Kol and Ran Raz upper-bounded the tax by

$$O\left(\sqrt{\epsilon\log(1/\epsilon)}\right)$$

as $\epsilon \to 0$, provided Alice and Bob's pre-error chat alternates with at most $b(\epsilon)$ bits per message, for a cutoff $b(\epsilon)$ that grows as $\epsilon$ shrinks. Most significantly, they gave an asymptotically matching lower bound on the tax of

$$\Omega\left(\sqrt{\epsilon\log(1/\epsilon)}\right)$$

as $\epsilon \to 0$, even if the errors are independent, provided the encoded chat similarly follows a fixed alternation with bounded message length. For small enough $\epsilon$, this showed that Shannon's tax $H(\epsilon) = \Theta(\epsilon\log(1/\epsilon))$ was concretely unachievable, thus separating the interactive and non-interactive cases.
Note, however, that the message length per round expands as $\epsilon$ gets smaller. Though this might seem an artificial tradeoff, it carries an important interpretation:
What empowers the adversary is not so much the ability to wipe out any one message, as the ability to strike when one of Alice or Bob is scheduled to speak for an extended time but the other has important timely knowledge.
The most flexible way for Alice and Bob to chat is to alternate bits, and just ignore one person's bits when the other holds the floor. This policy could immediately double the overhead factor $N/n$. However, we can use it to circumvent the above restriction on the message length per round. Can this make a difference?
The New Solution
Haeupler’s new theorem improves the upper bound on the tax in a way tabbed as “surprising” because it shatters the optical impression of Kol and Raz’s matching bounds and holds for adversarial error. His main result is:
Theorem 1 For small enough $\epsilon$, Alice and Bob can simulate any protocol of classical communication complexity $n$ by a protocol alternating bits and tolerating an $\epsilon$ fraction of adversarial errors with tax

$$O\left(\sqrt{\epsilon\log\log(1/\epsilon)}\right).$$

If the errors are random then the tax is $O(\sqrt{\epsilon})$.
Although square roots of numbers smaller than 1 can be tricky, to see that this is an improvement, note that Kol and Raz's bound is

$$\sqrt{\epsilon\log(1/\epsilon)},$$

whereas Haeupler's is

$$\sqrt{\epsilon\log\log(1/\epsilon)},$$

which is asymptotically smaller since $\log\log(1/\epsilon) = o(\log(1/\epsilon))$. And it is just $\sqrt{\epsilon}$ when each interaction is on the binary symmetric channel. Despite having just broken through someone else's matching bounds, Haeupler goes on to boldly conjecture that his bounds are optimal in their respective settings. This would imply that a higher tax for interaction than the Shannon bound cannot be avoided.
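To see the gaps numerically, here are the asymptotic shapes of the four taxes with all constants suppressed; this is a back-of-the-envelope comparison of my own, not figures from the papers:

```python
import math

def shannon_tax(eps):      # Theta(eps * log(1/eps)): non-interactive benchmark
    return eps * math.log(1 / eps)

def kol_raz_tax(eps):      # Theta(sqrt(eps * log(1/eps)))
    return math.sqrt(eps * math.log(1 / eps))

def adversarial_tax(eps):  # O(sqrt(eps * log log(1/eps))): Haeupler, adversarial
    return math.sqrt(eps * math.log(math.log(1 / eps)))

def random_tax(eps):       # O(sqrt(eps)): Haeupler, random errors
    return math.sqrt(eps)

for eps in (1e-2, 1e-4, 1e-6):
    print(f"{eps:g}: shannon={shannon_tax(eps):.5f} "
          f"random={random_tax(eps):.5f} "
          f"adversarial={adversarial_tax(eps):.5f} "
          f"kol_raz={kol_raz_tax(eps):.5f}")
```

For every $\epsilon$ shown the ordering is shannon < random < adversarial < kol_raz, illustrating both the interactive versus non-interactive gap and Haeupler's improvement over the earlier bound.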
Further reason to consult his paper for details is that his proof method is quite pretty. Both Alice and Bob chat as if there were no errors. However, they do the following:
- At sporadic intervals, each sends a concise summary of the conversation up to that point.
- The summary is basically a hash value of that person’s recovery of the intended original error-free transcript.
- If the summaries match then the conversation continues.
- If the summaries do not match, because the errors caused a misunderstanding, then the parties backtrack.
The key is that although an adversary can destroy completely some earlier messages, the periodic checks will reveal that and all will be saved. The shortness of the checks is crucial to the savings.
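The check-and-backtrack loop can be sketched in toy form. To be clear, this is a hypothetical illustration of my own, not Haeupler's actual scheme: his protocol uses carefully spaced, short randomized hash comparisons, whereas this sketch checks after every message and corrupts exactly one round by hand.

```python
import hashlib

def summary(transcript):
    """Concise hash of one party's current reconstruction of the chat."""
    return hashlib.sha256("\n".join(transcript).encode()).hexdigest()[:8]

def checked_chat(messages, corrupt_round=None):
    """Toy check-and-backtrack loop: after each message the parties compare
    short summaries; on a mismatch they discard the bad round and retake it."""
    alice, bob = [], []          # each party's view of the transcript
    i, retakes = 0, 0
    while i < len(messages):
        alice.append(messages[i])
        received = messages[i]
        if i == corrupt_round and retakes == 0:
            received = received.replace("12:30", "12:00")  # adversarial strike
        bob.append(received)
        if summary(alice) != summary(bob):   # concise check reveals trouble
            alice.pop(); bob.pop()           # backtrack the corrupted round
            retakes += 1
            continue                         # retake: the adversary's budget is spent
        i += 1
    return bob, retakes

chat = ["Hi Bob, lunch?", "Sure, Thai place?", "Ok. How about 12:30?"]
view, retakes = checked_chat(chat, corrupt_round=2)
print(view, retakes)
```

Bob recovers the intended transcript after one retake; the point of the real scheme is that the hash summaries are short and sporadic enough that all this checking costs only lower-order bandwidth.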
Can his bounds be improved, or are his optimality conjectures correct? Is this a useful model?