Breaking Provably Secure Systems
Provably secure crypto-systems can and have been broken
Neal Koblitz is a strong number theorist who is famous, among other things, for inventing elliptic curve cryptography. This was also independently invented by Victor Miller at the same time—1985. During his career, Neal has also written critical reviews of applications of mathematics to other areas. Mathematics is a perfect fit for the physical sciences, but it is less clear how it fits with the social sciences. Neal was, for instance, involved in examining a potential misuse of mathematical methods in social science—see my discussion earlier about Serge Lang and his book “The File.”
Today I plan on talking about a recent article he wrote with Alfred Menezes for the AMS Notices entitled:
The Brave New World of Bodacious Assumptions in Cryptography.
Not the usual math title—I have never seen a title like this in an AMS publication, or in any other publication. According to Wikipedia bodacious means:
- Remarkable, courageous, audacious, spirited;
- In CB radio jargon, a general-purpose word of praise;
- A variety of iris (plant);
- Voluptuous, attractive, “hot,” appealing to the eye;
- Bo-Day-Shus!!!, an album by Mojo Nixon and Skid Roper.
Neal has been critical of theory’s approach to cryptography, and has written some very critical pieces. I do not agree with him, but I do think he raises some important issues.
Early Days Of Cryptography
I have two background stories I wish to share—hopefully they will help explain my views on security, and why I think Neal has raised some interesting issues. It will also show how stupid I was, but I will get to this in a moment.
Cheating At Mental Poker: Adi Shamir, Ronald Rivest and Leonard Adleman (SRA) in 1979 wrote a paper on how to play “mental poker.” They described a protocol for playing two-person poker over a communication channel, without using any trusted party. One player acted as the dealer and handed out the cards, and both then played poker.
When I got the actual paper—this is way before the Internet and pdf files—I immediately realized there was a problem with their protocol. Their method used a family of maps

$$E_k(x) = x^k \bmod p,$$

where $p$ is a large prime and $k$ was odd. The key to their method was the simple observation that the maps commute:

$$E_k(E_l(x)) = E_l(E_k(x)).$$
I sent Rivest a letter outlining an attack based on getting at least one bit of information: I used the observation that their maps preserved whether a number is a quadratic residue or a non-residue. This allows the player who is “dealing” the cards to cheat and control who gets, for example, the aces.
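The leak can be sketched in a few lines. This is a minimal illustration, assuming the SRA-style map $E_k(x) = x^k \bmod p$ with $k$ odd (my reconstruction of the setup, with toy parameters, not the paper's actual values); quadratic-residue status is tested with Euler's criterion.

```python
# Sketch of the quadratic-residue leak in SRA-style mental poker.
# The "encryption" map is E_k(x) = x^k mod p with k odd; such a map
# never changes whether x is a quadratic residue mod p, so every
# "hidden" card leaks one bit to anyone who can compute a Legendre symbol.

def is_qr(x: int, p: int) -> bool:
    """Euler's criterion: x is a quadratic residue mod an odd prime p
    iff x^((p-1)/2) == 1 (mod p)."""
    return pow(x, (p - 1) // 2, p) == 1

p = 10007   # a small prime, purely for illustration
k = 5       # an odd "encryption" exponent

# Encrypting a card value never changes its QR status: since k is odd,
# x^k is a residue exactly when x is. This is the one-bit leak.
for card in range(2, 60):
    assert is_qr(card, p) == is_qr(pow(card, k, p), p)
```

The point is only that a protocol can be unbroken in its stated model and still leak: nothing here contradicts the commutativity the scheme relies on, yet the dealer can steer the aces.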
Ron replied quickly, by return mail, with a fix: this stopped my attack, and the dealer no longer could decide who got the aces. However, it still had a problem: the non-dealer could still see a few bits of information about the dealer's hand. This is plenty of information to win at poker—in the long run. I should have challenged Ron to actually play for money, but instead wrote a paper on how to cheat at mental poker. The paper was not widely read, even though it was published as part of an AMS workshop. At least one paper cites me as “Ron Lipton”—oh well.
Shortly after this Shafi Goldwasser and Silvio Micali wrote a beautiful paper on how to use probabilistic methods to construct a protocol that avoided the problems I had pointed out. I was stupid, since I did not try to find such a provable protocol. At the time, I was working on complexity theory, and did not follow up on my crypto ideas. Too bad.
A few years ago, at a very pleasant lunch in San Francisco with Dick Karp, Shafi and Silvio pointed out to Dick how much they were influenced by my cheating paper. Even though I usually get little—no credit?—for my work, they were quite adamant how important it was to their thinking about semantic security. I thanked them.
In the SRA paper, they did not realize their protocol leaked at least one bit—more than enough to win at poker. This simple example, in the hands of Shafi and Silvio, led eventually to important new ideas in cryptography. The second story is also about how hard it is to define what security is.
Proving An OS Kernel is Secure:
In the summer of 1976, Anita Jones and I were part of a RAND summer study group organized by Stockton Gaines on security. Anita and I eventually wrote a fun paper during the summer: a paper with a long, complex, and crazy story; I will share it with you another day.
During the workshop, Gerry Popek visited us one day, and gave a talk on his then current project to prove a certain OS kernel was secure. This was a large project, with lots of funding, lots of programmers, and he said they hoped in two years to have a proof of the OS’s correctness. What struck me during his talk was that he could write down on the board, a blackboard, a relatively simple formula from set theory. The formula, he stated, captured the notion of data security: if a certain function had this property, then he would be able to assert his OS could not leak any information. Very neat.
At the end of his talk I asked him if he wanted a proof now that his function satisfied the formula. He looked at me puzzled, as did everyone else. He pointed out that his function was defined by his OS, so how could I possibly prove it satisfied his formula—the function was thousands of lines of code. He added they were working hard on proving this formula, and hoped to have a full proof in the next 24 months.
I asked again: would he like a proof today? Eventually, I explained: the formula he claimed captured the notion of security was a theorem of set theory—any function had the property. Any. He said this was impossible, since his formula meant his system had no leaks. I walked to the board and wrote out a short set-theory proof to back up my claim that any function had his property. The proof was not trivial, but was not hard either.
The point of the story is: the formula captured nothing at all about his OS. It was a tautology. What surprised me was his response: I thought he would be shocked. I thought he might be upset, or even embarrassed his formula was meaningless. He was not at all. Gerry just said they would have to find another formula to prove. Oh well.
I hope I do not lose my theory membership card, but I think Neal has made some serious points in his recent paper. But first, let’s look at the foundations of modern cryptography. This is based on turning “lemons” into “lemonade”: turning the hardness of certain computations into the security of certain crypto-protocols. You probably all know the framework, but here is my view of it:
- Let $P$ be a protocol between Alice and Bob. This is nothing more than a fancy way of saying Alice and Bob have an algorithm—almost always randomized—that they plan to execute together. The goal of the algorithm is to send a message, to share a secret, to agree on a value, to do a computation together, and in general to do some interesting information processing. Of course, Alice and Bob are using the protocol to prevent others from learning all or even parts of their information.
- Let $Q$ be some computational problem we believe is “hard.” The assumption that $Q$ requires a great deal of computation can vary greatly: it can mean worst-case cost, average-case cost, and can, today, mean the cost on a quantum computer.
- Let $A$ be the class of “attacks” allowed against the protocol. Usually these have to be explicitly stated and carefully defined. Sometimes we allow simple attacks; sometimes we allow more powerful attacks. For example, allowing the attacker to influence Alice or Bob is a much more powerful attack than an attacker who passively watches their protocol.
- Finally, the proof of security is a mathematical theorem of the form:

Theorem: Any efficient attack of type $A$ against the protocol $P$ implies that the problem $Q$ is not hard.

Of course, such theorems would have more quantitative bounds, but this is their general form.
The framework is very pretty, but can fail in multiple ways—as Neal points out in his papers. Let’s take a look at what can go wrong. It is convenient to do this in reverse order of the parts of the framework: the theorem first, then the attacks, then the hardness assumption, and finally the protocol.
The Theorem: Any theorem, especially a complex theorem, can have an incorrect proof. I have talked about this in several previous discussions—see here. Even great mathematicians make mistakes, so it should come as no surprise if cryptographers make mistakes.
There are two reasons errors here are perhaps more likely than in some other areas of mathematics. First, there may be much less intuition about the behavior of a complex protocol. No matter how precise you are in your reasoning in any proof, intuition is a great guide. Sometimes, I have thought I had proved something, but it did not fit with my intuition. Usually, this led, after more careful thought, to the discovery of an error in the proof.
Second, as Rich DeMillo, Alan Perlis, and I have pointed out in our famous—infamous?—paper on social processes, the social network of mathematicians is a requirement for confidence in proofs. Without others checking your theorem, teaching it in their class, using it in their own work, finding new proofs of your theorem, we would be flooded with errors. The trouble with crypto theorems is they do not always have this social network to back them up. Many crypto papers prove theorems used by others, but many do not.
The Attacks: This is, in my opinion, the main issue: do the attacks include all the possible attacks? My examples on mental poker and OS kernels show two simple cases where the attacks were not properly defined. There are many attacks on crypto protocols that were not envisioned initially. These include: timing attacks, fault-based attacks, composition of protocols, and many others.
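Timing attacks are a good illustration of an attack that lives entirely outside most stated models: the proof says nothing about how long each step takes. Here is a minimal sketch (illustrative, not tied to any particular broken system) of the classic culprit: a comparison that exits at the first mismatching byte runs measurably longer on closer guesses, letting an attacker recover a secret tag one byte at a time. The standard fix is a constant-time comparison.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time grows with the length of the
    common prefix of a and b, which a remote attacker can measure."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # exits early -> timing side channel
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    """Constant-time comparison, as provided by the standard library."""
    return hmac.compare_digest(a, b)
```

Both functions compute the same Boolean, so no functional test distinguishes them; the difference is visible only through the clock, which is exactly why such attacks fall outside naive attack classes.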
The Hardness Assumption: There are hardness assumptions, and there are hardness assumptions. Clearly, if the problem is not as hard as you thought, then even if the theorem is correct and the attack class is complete, the security guarantee is meaningless.
The failure of a hardness assumption to really be hard has happened over and over in modern cryptography. Famous examples include:
- The assumption about hardness of knapsack problems;
- The assumption in the original RSA paper on the cost of factoring;
- The assumptions about solving certain Diophantine equations—especially the work of John Pollard and Claus Schnorr on solving binary quadratic forms, and Don Coppersmith on low exponent RSA.
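The low-exponent RSA case has an especially simple warm-up that shows how an assumption can quietly fail. With public exponent $e = 3$ and a short message $m$ satisfying $m^3 < n$, the modular reduction never happens, and "decryption" is just an integer cube root—no factoring required. (Coppersmith's attack extends far beyond this trivial case; the sketch below, with a toy modulus of my own choosing, shows only the warm-up.)

```python
# Textbook RSA with e = 3 falls to an integer cube root when m**3 < n.

def icbrt(x: int) -> int:
    """Integer cube root by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = (2**521 - 1) * (2**127 - 1)          # toy modulus (two Mersenne primes)
m = int.from_bytes(b"attack at dawn", "big")
c = pow(m, e, n)                          # m**3 < n, so c is exactly m**3

recovered = icbrt(c)                      # no factoring of n needed
assert recovered.to_bytes(14, "big") == b"attack at dawn"
```

The hardness assumption "breaking this needs factoring" was simply wrong for this corner of the parameter space, which is the pattern Neal points to.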
The Protocol: Finally, the protocol itself is a potential failure point. If the protocol is implemented incorrectly, then all the other parts of the framework are useless.
Neal’s Main Attack
Neal’s main attack is on a paper written by one of our own faculty at Georgia Tech: Sasha Boldyreva. She, with Craig Gentry, Adam O’Neill, and Dae Hyun Yum, wrote a paper on a certain crypto protocol in 2007. In order to prove their protocol was secure they needed to assume a certain technical problem was intractable. They argued their assumption was reasonable: they could even prove it was hard provided the groups involved were generic.
I will not explain exactly what this means, but in my view it is equivalent to severely restricting the attacks allowed. Roughly, if a group is generic, then there are very limited operations an attacker can perform on the group. Neal quotes them as saying: “This has become a standard way of building confidence in the hardness of computational problems in groups equipped with bilinear maps.”
The problem is the attacker can use “burning arrows”: the attacker can use the structure of the group in question, and is not limited to treating it as a generic group. The attacker can do whatever they want, as long as the computations they perform are efficient. This is exactly what happened two years later: Jung Hwang, Dong Lee, and Moti Yung showed Sasha’s protocol could be easily broken. Neal—to hammer home his point—gives the attack in full, since it is so simple. Not simple to find, but simple in that it involves only solving a few equations over the group.
The lesson here is a valid one I believe: we must be very careful in making assumptions about what an attacker can do. An equivalent way to look at this situation is: the hardness assumption was not true, if sufficiently general attacks against it are allowed. Neal then adds a comment:
The 4-page proof is presented in a style that is distressingly common in the provable security literature, with cumbersome notation and turgid formalism
This is unfair—I have always felt that whether a proof is clear is a subjective judgment. But there is a lesson here. Just as SRA did not allow for the simple attack that defeated their mental-poker protocol, Sasha and her colleagues did not allow non-generic attacks against their protocol.
Rich DeMillo, Alan Perlis, and I, in our social-processes paper, were most concerned with implementations of complex software. The verification community's idea, at the time, was to use mathematics to prove the correctness of such systems.
But perhaps one way to summarize is this: the whole framework of provably secure crypto-systems needs the same social processes we outlined in our old paper. The exact specification of attacks, and the proof of the security of a protocol, are just as vulnerable to problems as the implementation of any large software system.
An open problem is to see more proofs of the completeness of attack classes. I am unsure how we can do this, but I think it would be very confidence-building for the field.