Provably secure crypto-systems can and have been broken

Neal Koblitz is a strong number theorist who is famous, among other things, for inventing elliptic curve cryptography. This was also independently invented by Victor Miller at the same time—1985. During his career, Neal has also written critical reviews of applications of mathematics to other areas. Mathematics is a perfect fit for the physical sciences, but it is less clear how well it fits the social sciences. Neal was, for instance, involved in criticizing a potential misuse of mathematical methods in social science—see my discussion earlier about Serge Lang and his book “The File.”

Today I plan on talking about a recent article he wrote with Alfred Menezes for the AMS Notices entitled:

The Brave New World of Bodacious Assumptions in Cryptography.

Not the usual math title—I have never seen a title like this in an AMS publication, or in any other publication. According to Wikipedia bodacious means:

• Remarkable, courageous, audacious, spirited;
• In CB radio jargon, a general-purpose word of praise;
• A variety of iris (plant);
• Voluptuous, attractive, “hot,” appealing to the eye;
• Bo-Day-Shus!!!, an album by Mojo Nixon and Skid Roper.

Neal has been critical of theory’s approach to cryptography, and has written some very critical pieces. I do not agree with him, but I do think he raises some important issues.

Early Days Of Cryptography

I have two background stories I wish to share—hopefully they will help explain my views on security, and why I think Neal has raised some interesting issues. It will also show how stupid I was, but I will get to this in a moment.

${\bullet}$ Cheating At Mental Poker: Adi Shamir, Ronald Rivest and Leonard Adleman (SRA) in 1979 wrote a paper on how to play “mental poker.” They described a protocol for playing two-person poker over a communication channel, without using any trusted party. One player acted as the dealer and handed out the cards, and both then played poker.

When I got the actual paper—this was way before the Internet and PDF files—I immediately realized there was a problem with their protocol. Their method used a family of maps

$\displaystyle E_{a}: x \rightarrow x^{a} \bmod q$

where ${q}$ is a large prime and ${a}$ is odd. The key to their method was the simple observation:

$\displaystyle E_{a} ( E_{b} (x) ) \equiv E_{b} ( E_{a} (x) ) \bmod q.$
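This commutativity is easy to check numerically. Here is a quick sketch in Python, with toy parameters of my own choosing (they are not from the SRA paper):

```python
# The SRA maps E_a(x) = x^a mod q commute, since
# E_a(E_b(x)) = (x^b)^a = x^(a*b) = (x^a)^b = E_b(E_a(x)) mod q.

q = 2**89 - 1        # a Mersenne prime, standing in for a "large prime"
a, b = 65537, 101    # two odd exponents (illustrative values only)
x = 424242           # an encoded "card"

left = pow(pow(x, b, q), a, q)   # E_a(E_b(x))
right = pow(pow(x, a, q), b, q)  # E_b(E_a(x))
assert left == right
```

This is the property that lets one player encrypt the deck, lets the other re-encrypt and shuffle it, and still lets the first player remove their own encryption layer.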

I sent Rivest a letter outlining an attack based on getting at least one bit of information: I used the observation that their maps preserve whether a number is a quadratic residue or a non-residue. This allows the player who is “dealing” the cards to cheat and control who gets, for example, the aces.
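The leak itself is easy to demonstrate. A minimal sketch (my own illustration, not from my letter or paper), using Euler’s criterion to test quadratic residuosity:

```python
# Euler's criterion: x is a quadratic residue mod an odd prime q
# iff x^((q-1)/2) = 1 (mod q). Since a is odd, E_a(x) = x^a mod q
# has the same residuosity as x, so each encrypted card leaks one bit.

q = 2**89 - 1   # a Mersenne prime (toy stand-in for a large prime)
a = 101         # an odd exponent

def is_qr(x):
    return pow(x, (q - 1) // 2, q) == 1

for card in [2, 3, 5, 7, 11, 13]:   # toy card encodings
    assert is_qr(card) == is_qr(pow(card, a, q))
```

A cheating dealer can thus sort the face-down, encrypted cards into two residuosity classes before dealing them out.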

Ron replied quickly, by return mail, with a fix: this stopped my attack, since the dealer could no longer decide who got the aces. However, it still had a problem: the non-dealer could still see a few bits of information about the dealer’s hand. This is plenty of information to win at poker—in the long run. I should have challenged Ron to actually play for money, but instead wrote a paper on how to cheat at mental poker. The paper was not widely read, even though it was published as part of an AMS workshop. At least one paper cites me as “Ron Lipton”—oh well.

Shortly after this Shafi Goldwasser and Silvio Micali wrote a beautiful paper on how to use probabilistic methods to construct a protocol that avoided the problems I had pointed out. I was stupid, since I did not try to find such a provable protocol. At the time, I was working on complexity theory, and did not follow up on my crypto ideas. Too bad.

A few years ago, at a very pleasant lunch in San Francisco with Dick Karp, Shafi and Silvio pointed out to Dick how much they were influenced by my cheating paper. Even though I usually get little—no credit?—for my work, they were quite adamant how important it was to their thinking about semantic security. I thanked them.

In the SRA paper, they did not realize their protocol leaked at least one bit—more than enough to win at poker. This simple example, in the hands of Shafi and Silvio, led eventually to important new ideas in cryptography. The second story is also about how hard it is to define what security is.

${\bullet}$ Proving An OS Kernel is Secure:

In the summer of 1976, Anita Jones and I were part of a RAND summer study group organized by Stockton Gaines on security. Anita and I eventually wrote a fun paper during the summer: a paper with a long, complex, and crazy story; I will share it with you another day.

During the workshop, Gerry Popek visited us one day, and gave a talk on his then-current project to prove that a certain OS kernel was secure. This was a large project, with lots of funding and lots of programmers, and he said they hoped to have a proof of the OS’s correctness in two years. What struck me during his talk was that he could write down on the board, a blackboard, a relatively simple formula from set theory. The formula, he stated, captured the notion of data security: if a certain function ${f}$ had this property, then he would be able to assert that his OS could not leak any information. Very neat.

At the end of his talk I asked him if he wanted a proof now that his function ${f}$ satisfied the formula. He looked at me puzzled, as did everyone else. He pointed out his ${f}$ was defined by his OS, so how could I possibly prove it satisfied his formula—the ${f}$ was thousands of lines of code. He added they were working hard on proving this formula, and hoped to have a full proof in the next 24 months.

I asked again: would he like a proof today? Eventually, I explained: the formula he claimed captured the notion of security was a theorem of set theory—any function ${f}$ had the property. Any. He said this was impossible, since his formula meant his system had no leaks. I walked to the board and wrote out a short set-theory proof to back up my claim—any ${f}$ had his property. The proof was not trivial, but it was not hard either.

The point of the story is: the formula captured nothing at all about his OS. It was a tautology. What surprised me was his response: I thought he would be shocked. I thought he might be upset, or even embarrassed his formula was meaningless. He was not at all. Gerry just said they would have to find another formula to prove. Oh well.

Provable Security

I hope I do not lose my theory membership card, but I think Neal has made some serious points in his recent paper. But, first let’s look at the foundations of modern cryptography. This is based on turning “lemons” into “lemonade”: turning hardness of certain computations into security of certain crypto-protocols. You probably all know the framework, but here is my view of it:

• Let ${{\cal P}}$ be a protocol between Alice and Bob. This is nothing more than a fancy way of saying Alice and Bob have an algorithm—almost always random—they plan to execute together. The goal of the algorithm is to send a message, to share a secret, to agree on a value, to do a computation together, and in general to do some interesting information processing. Of course, Alice and Bob are using the protocol to prevent others from learning all or even parts of their information.
• Let ${{\cal H}}$ be some computational problem we believe is “hard.” The assumption that ${{\cal H}}$ requires a great deal of computation can vary greatly: it can mean worst case cost, average cost, and can, today, mean the cost on a quantum computer.
• Let ${{\cal A}}$ be the class of “attacks” allowed against the protocol. Usually these have to be explicitly stated and carefully defined. Sometimes we allow simple attacks, sometimes we allow more powerful attacks. For example, allowing the attacker to influence Alice or Bob is a much more powerful attack than one where the attacker passively watches their protocol.
• Finally, the proof of security is a mathematical theorem of the form:

Theorem: Any efficient attack of type ${{\cal A}}$ against the protocol ${{\cal P}}$ implies the problem ${{\cal H}}$ is not hard.

Of course, such theorems would have more quantitative bounds, but this is their general form.
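For concreteness, a quantitative version typically has this shape (a generic schematic of my own; the constant and the polynomial are placeholders, not from any particular paper):

Theorem (schematic): If there is an attack of type ${{\cal A}}$ that runs in time ${t}$ and breaks ${{\cal P}}$ with probability ${\epsilon}$, then ${{\cal H}}$ can be solved in time ${t' \leq c \cdot t}$ with probability ${\epsilon' \geq \epsilon / p(k)}$, where ${k}$ is the security parameter, ${c}$ is a constant, and ${p}$ is a polynomial.

Read in the contrapositive: if ${{\cal H}}$ truly cannot be solved in time ${t'}$, then no such attack on ${{\cal P}}$ exists.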

Provable Security?

The framework is very pretty, but can fail in multiple ways—as Neal points out in his papers. Let’s take a look at what can go wrong. It is convenient to do this in reverse order of the parts of the framework: the theorem first, then the attacks, then the hardness assumption, and finally the protocol.

${\bullet}$ The Theorem: Any theorem, especially a complex one, can have an incorrect proof. I have talked about this in several previous discussions—see here. Even great mathematicians make mistakes, so it should come as no surprise if cryptographers make mistakes.

There are two reasons errors here are perhaps more likely than in some other areas of mathematics. First, there may be much less intuition about the behavior of a complex protocol. No matter how precise you are in your reasoning in any proof, intuition is a great guide. Sometimes, I have thought I had proved something, but it did not fit with my intuition. Usually, this led, after more careful thought, to the discovery of an error in the proof.

Second, as Rich DeMillo, Alan Perlis, and I have pointed out in our famous—infamous?—paper on social processes, the social network of mathematicians is a requirement for confidence in proofs. Without others checking your theorem, teaching it in their class, using it in their own work, finding new proofs of your theorem, we would be flooded with errors. The trouble with crypto theorems is they do not always have this social network to back them up. Many crypto papers prove theorems used by others, but many do not.

${\bullet}$ The Attacks: This is, in my opinion, the main issue: do the attacks include all the possible attacks? My examples on mental poker and OS kernels show two simple cases where the attacks were not properly defined. There are many attacks on crypto protocols that were not envisioned initially. These include: timing attacks, fault-based attacks, composition of protocols, and many others.
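Timing attacks are a good illustration of an attack outside the original model. Here is a toy sketch (my own example, not from Neal’s paper) of the classic leak, an early-exit comparison:

```python
import hmac

# A byte-by-byte secret comparison that exits at the first mismatch:
# its running time reveals how many leading bytes of a guess are right,
# letting an attacker recover the secret one byte at a time.
def leaky_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False   # early exit = the timing side channel
    return True

# The standard fix: a constant-time comparison.
def safe_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)
```

Both functions return the same answers on every input; the difference the proof never modeled is how long they take to return them.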

${\bullet}$ The Hardness Assumption: There are hardness assumptions, and there are hardness assumptions. Clearly, if the ${{\cal H}}$ problem is not as hard as you thought, then even if the theorem and attacks are correct, the theorem is meaningless.

The failure of a hardness assumption ${{\cal H}}$ to really be hard has happened over and over in modern cryptography. Famous examples include:

• The assumption about hardness of knapsack problems;
• The assumption in the original RSA paper on the cost of factoring;
• The assumptions about solving certain Diophantine equations—especially the work of John Pollard and Claus Schnorr on solving binary quadratic forms, and Don Coppersmith on low exponent RSA.
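For the first of these, here is a toy sketch (parameters of my own choosing) of the Merkle–Hellman knapsack cryptosystem, whose hardness assumption fell to Shamir’s attack:

```python
# Private key: a superincreasing sequence w, a modulus q > sum(w),
# and a multiplier r with gcd(r, q) = 1. Public key: b_i = r*w_i mod q.
w = [2, 7, 11, 21, 42, 89, 180, 354]   # superincreasing
q = 881                                 # prime, q > sum(w) = 706
r = 588
b = [(r * wi) % q for wi in w]          # looks like a random knapsack

def encrypt(bits):
    return sum(bi for bit, bi in zip(bits, b) if bit)

def decrypt(c):
    s = (c * pow(r, -1, q)) % q   # strip the modular disguise
    bits = []
    for wi in reversed(w):        # greedy works on superincreasing w
        if s >= wi:
            bits.append(1)
            s -= wi
        else:
            bits.append(0)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```

The assumption was that recovering msg from encrypt(msg) and the public key b alone is as hard as a general knapsack problem; Shamir showed the hidden superincreasing structure can be recovered in polynomial time.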

${\bullet}$ The Protocol: Finally, the protocol itself is a potential failure point. If the protocol is implemented incorrectly, then all the other parts of the framework are useless.
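A classic implementation failure, sketched in Python (my own toy example): the one-time pad is provably secure, yet an implementation that reuses the pad leaks the XOR of the plaintexts.

```python
import secrets

def otp(key: bytes, msg: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte
    return bytes(k ^ m for k, m in zip(key, msg))

key = secrets.token_bytes(16)
c1 = otp(key, b"attack at dawn!!")
c2 = otp(key, b"retreat at once!")   # BUG: same key reused

# The key cancels: c1 XOR c2 equals the XOR of the two plaintexts,
# handing an eavesdropper plenty of structure to work with.
leaked = bytes(x ^ y for x, y in zip(c1, c2))
assert leaked == otp(b"attack at dawn!!", b"retreat at once!")
```

The proof of the scheme is fine; the protocol as implemented simply is not the scheme the proof was about.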

Neal’s Main Attack

Neal’s main attack is on a paper written by one of our own faculty at Georgia Tech: Sasha Boldyreva. She, with Craig Gentry, Adam O’Neill, and Dae Hyun Yum, wrote a paper on a certain crypto protocol in 2007. In order to prove their protocol secure they needed to assume that a certain technical problem ${{\cal H}}$ was intractable. They argued their assumption was reasonable: they could even prove it was hard provided the groups involved were generic.

I will not explain exactly what this means, but in my view it is equivalent to really restricting the attacks allowed. Roughly, if a group is generic, then there are very limited operations an attacker can perform on the group. Neal quotes them as saying: “This has become a standard way of building confidence in the hardness of computational problems in groups equipped with bilinear maps.”

The problem is the attacker can use “burning arrows”: the attacker can use the structure of the group in question, and is not limited to treating it as a generic group. The attacker can do whatever they want, as long as the computations they perform are efficient. This is exactly what happened two years later: Jung Hwang, Dong Lee, and Moti Yung showed Sasha’s protocol could be easily broken. Neal—to hammer home his point—gives the attack in full, since it is so simple. Not simple to find, but simple since it involves only solving a few equations over the group.

The lesson here is a valid one, I believe: we must be very careful in making assumptions about what an attacker can do. An equivalent way to look at this situation is: the hardness assumption was not true if sufficiently general attacks against it are allowed. Neal then adds a comment:

The 4-page proof ${\dots}$ is presented in a style that is distressingly common in the provable security literature, with cumbersome notation and turgid formalism ${\dots}$

This is unfair—I have always felt that whether a proof is clear is a subjective matter. But there is a lesson here. Just as SRA did not anticipate a simple attack that defeated their mental poker protocol, Sasha and her colleagues did not allow non-generic attacks against their protocol.

Open Problems

Rich DeMillo, Alan Perlis, and I, in our social process paper, were most concerned with implementations of complex software. The verification community’s idea, at the time, was to use mathematics to prove the correctness of such systems.

But perhaps one way to summarize is this: the whole framework of provably secure crypto systems needs the same social processes we outlined in our old paper. The exact specification of attacks, and the proof of the security of a protocol, are just as vulnerable to problems as the implementation of any large piece of software.

An open problem is to see more proofs of the completeness of attacks. I am unsure how we can do this, but I think this would be very confidence building for the field.

March 13, 2010 11:37 am

Hi Dick,

Your account of the problem with our paper is unfortunately not correct. (For that matter, though, neither is Koblitz and Menezes’s…) There was a bug in our proof that the assumption we made held for generic groups.

In other words, the problem was not “burning arrows” but a flawed proof. In my view generic groups remain a useful heuristic in validating protocol design. In fact I don’t know any example of a “natural” cryptographic problem that is hard in generic groups but becomes easy, say, in appropriate elliptic curve groups. In this sense the heuristic is much like random oracles.

I also point out that we’ve since corrected the assumption, protocol, and proof. (The link you provide goes to the corrected paper.)

March 13, 2010 11:42 am

Thanks for the correction. I believe the points I make here about the need for social processes are still valid.

March 13, 2010 1:07 pm

Yes, but I also think it’s important to get the story straight so that we can learn from this experience as best we can.

Just to be clear, the attack on our original scheme works in generic groups and was possible because our claimed proof was flawed. I should also clarify what I mean by “natural” in my previous comment: I don’t mean “non-contrived” in the sense that the assumption should be simple or a natural-looking computational problem outside the context of cryptography. (Indeed, our assumption in the paper is contrived in this sense, and if it was false in appropriate groups I would still consider it to be a counter-example.) Rather, I mean that its input and solution are a set of group elements satisfying some generic relation (a relation that can be represented by generic operations).

March 13, 2010 1:21 pm

This result from Asiacrypt’09 shows that computing the Jacobi symbol in generic rings is equivalent to factoring.

http://eprint.iacr.org/2009/621

March 13, 2010 9:49 pm

> In fact I don’t know any example of a “natural” cryptographic problem that is
> hard in generic groups but becomes easy, say, in appropriate elliptic curve groups.

By the powerful index-calculus method, the discrete log problem is only of subexponential complexity in F_q. This is because you can trivially lift the points to Z. A similar type of index-calculus attack works on the class group of curves of genus > 2. (See Gerhard Frey’s article in the same Notices issue.)

The lifting idea cannot work for a “generic” group. If I am not mistaken, there is an exponential lower-bound on the complexity of DLP in the “generic” model (due to Shoup, I believe.)

Something being hard in the “generic” model doesn’t say much (if anything?) in real life. It merely says things like “if you want to solve DLP more efficiently, you must use the structure of group too; don’t treat the group elements generically”. This is an interesting statement, but I don’t believe mathematicians would have forgotten about the “non-generic” structure of the group to begin with.

On the other hand, assuming that a hard problem is easy in the “generic” model is a very strong statement. So it is not surprising that if you assume the Jacobi symbol computation has a generic algorithm, then you get strong consequences, like factoring. (Factoring -> Jacobi was essentially known to Gauss.)

March 14, 2010 11:23 am

Yes, you are absolutely right. This makes the issue tricky and more attention must be paid to what groups we should use to “instantiate” a hard problem and with what parameters, since there could be known better-than-generic attacks (causing us to have to increase parameters to compensate). Koblitz and Menezes in their article “ANOTHER LOOK AT NON-STANDARD DISCRETE LOG AND DIFFIE-HELLMAN PROBLEMS” have some other interesting examples relating to this.

I’m certainly not arguing that a generic group proof says anything formal about the problem in computationally efficient groups used in practice; of course it doesn’t. What I’m arguing is that it seems to be a reasonable heuristic that if we show a (natural) problem is hard in generic groups, then we will be able to find an appropriate group to use in practice for which the problem is hard even against non-generic attacks.

March 13, 2010 11:57 am

By the way, I’ve often seen Goldreich credit you in his writings on foundations of crypto. So you do get some credit…

March 13, 2010 4:27 pm

The problems with Koblitz and Menezes’s arguments are not that crypto papers don’t occasionally have bugs, or that our assumptions aren’t sometimes a bit “bodacious,” or that our attack models don’t always capture the full capabilities of the adversary. Indeed, I agree that more emphasis ought to be placed on writing proofs rigorously and checking them closely, and that new assumptions should be viewed more skeptically. But with no offense intended to those whose works have been unfairly singled out, the central ‘canon’ of crypto has been thoroughly vetted, proved and re-proved with as much rigor as any other modern mathematical discipline. And the history of the field is in large part the story of security against increasingly (and often absurdly) strong forms of attack, from chosen-ciphertext to adaptive corruptions to side-channel leakage and so on.

No, the problems with K-M’s arguments are that they misrepresent the papers that they attack (and the broader claims of the “provable security” community), tar the entire field of crypto with the (usually minor and ancillary) mistakes and over-promises of the few, and take the patronizing and insulting view that cryptographers obviously don’t know what a “real” proof is. Which is all the more perplexing given their adamant defense of “intuitive,” non-rigorous protocols and heuristics like the random oracle model, and total silence on the matter of precise, conservative attack models and security definitions. I don’t think I’m alone in wondering how they would propose to evaluate a protocol for a task as complex and subtle as mental poker, let alone the security of an encryption scheme under a chosen-ciphertext attack.

March 13, 2010 4:35 pm

Is there an online version of your paper on cheating at mental poker? If yes, can you please link to it? If not, why not?

March 13, 2010 8:26 pm

I have lost track of the article. This was in the old typewriter days. I will try to get a link for you.

March 13, 2010 6:37 pm

I have made some complementary observations (also “higher up the stack”) here: http://blog.eqnets.com/2009/08/12/common-data-sets-and-the-illusion-of-scientific-security-testing/

March 13, 2010 7:38 pm

Someone should point out his controversial articles and the backlash letters to the editor of the AMS Notices.

March 13, 2010 8:13 pm

Koblitz does have some valid points, if only he would express them in a more constructive way. As I wrote some 2.5 years ago (see here; my letter to AMS Notices did end up getting published), Koblitz and Menezes seem to want to throw out the baby with the bathwater.

Any cryptographer who cares about practice understands the limitations of proofs as well as Koblitz does; the difference is that Koblitz seems to think these limitations make proofs irrelevant whereas cryptographers view proofs as an essential first step to assuring security of a protocol.

March 13, 2010 8:19 pm

One thing I wanted to add — it is really unclear to me why the AMS Notices continues publishing Koblitz’s articles. For starters, I have yet to see an article in the AMS Notices presenting a counterpoint. Second, do articles bashing other areas of mathematics regularly (ever?) appear there? Finally, how interested is a general mathematics audience in practical crypto?

March 13, 2010 8:25 pm

Jonathan, a great point. Should we put together a counterpoint article?

March 14, 2010 9:05 am

I’ve read the two linked articles and it seems to me that his points are:

– “security proof” is a marketing term and is not to be taken literally. I think everyone gets this. Why would we change the term?

– security proofs are hard to read and sometimes wrong because no one reads them other than the authors. Everyone agrees. How do you improve on this?

– computer science does conferences as opposed to mathematics which does journals. Journal papers are reviewed much more thoroughly and therefore less likely to be incorrect. He doesn’t seem to mind that publishing a journal paper takes as much time as a woman needs to have three babies (naturally and sequentially). How about simply having properly reviewed conference papers?

– people doing security proofs invent “intractable” problems which suit the protocol whose security is being proved rather than using standard intractable problems. Sometimes these newly-invented problems turn out not to be so intractable. So if you find a new cryptographic protocol but can’t reduce an old intractable problem to it, you don’t publish?

– asymptotic proofs are not practical proofs. Ok. Everyone gets this.

– an editor tried to block one of his flames from appearing in a journal using a religious analogy. Ok.

– the random oracle model is intuitive even if formally broken. Maybe this should be further discussed in the community.

So in the end I’m not sure there is any suggestion for improvement of the state of the art…

March 15, 2010 1:16 am

For some reason whenever people discuss K-M’s articles, they always feel compelled to say that “they bring up good points”. As the previous post describes, all the “good points” K-M bring up are well known by the crypto community and have been the source of active discussions both public (for example at conference panels and workshops) and private for a long time (way before K-M’s articles). Absolutely nothing they bring up is new or particularly enlightening.

March 15, 2010 7:53 pm

Yeah, but I guess their point is that these issues can make it hard for an “outsider” interested in practical crypto, who is not privy to these “discussions within the community”, to get a sense of the state of the art. Such people would certainly benefit by hearing a balanced and factually accurate account of them, of course…

March 15, 2010 3:30 am

Formalized proofs (using proof assistant software like Coq or Twelf) are all the rage in programming language theory. Using those methods in crypto seems to be rather hard. Any idea why?

March 15, 2010 4:33 am

Probably just the usual reason: you need to know programming language theory to effectively use Coq or Twelf, and so formalizing a cryptographic proof would require knowing both crypto and PL — that’s a small set of people. That said, people are doing this; Gilles Barthe, Benjamin Grégoire and Santiago Zanella Béguelin had a paper at POPL last year about their work in mechanizing correctness proofs of cryptographic protocols.

March 24, 2010 10:46 pm

I think that there are arguments in the [KM] work that are good for introspection in the field of cryptography. I totally disagree with many of the philosophical issues (and I suspect they may be tied to commercial interest in some protocols that are of commercial value but not “provably secure,” so all these papers, while presenting some good criticism, are also under the above suspicion, which may be the elephant in the room). In any case, philosophical arguments in math journals may always create confusion, and I avoid extreme positions on such easily manipulated philosophical issues. I believe that in the “break paper” cited we carefully showed the “bug in the proof,” which is subtle (it was found after the attack was first discovered, to see what went wrong, so we kept the historical line of discovery in the presentation). Overall, with provable security the notion of “cryptanalysis” is moved to “scrutiny of design and proof” (and then to interpretation for implementations). I do not view the break paper as anything more than “the usual development of cryptographic research.” This is clearly stated in the paper: there was an original paper, and then there is a break and an explanation (something in the proof went wrong). Any use of this “regular course of research in cryptography” against the field or, worse, against the authors of the original paper, is a misuse of our results (especially as the process of blowing this “cryptanalysis” up beyond proportion is not accurate: if one advocates accuracy, one has to be doubly careful in the accurate use of material).
Cryptography is not the only field where math is used and then interpreted (first in a pure abstract sense, and even more so when results are interpreted for use in real life). I think it is incumbent upon the AMS Notices, which keeps hammering criticism of “proofs in cryptography,” to give us an example of another mathematical, engineering, or other technical area where things are done right and where no mathematical proof was ever shown wrong or misinterpreted. If the issue is not unique to cryptography, the [KM] works are a nice series of crypto-gossip, and as such they are useful!
I usually keep opinions on these “philosophical issues” to myself, but this time I could not!




January 15, 2012 12:48 pm

I am interested in the problem of “provable security”… what background is needed for M.Sc. or Ph.D. students to find a research problem in this field?
Best Regards