Breaking All the Security Rules
A radical approach to computer security
Andrew Odlyzko is a number theorist, a complexity theorist, a cryptographer, and a deep thinker. He has proved some very beautiful theorems, written non-trivial software, and managed large projects. One of his coolest projects is the search for a “bad” zero of the Riemann Zeta function. For example, he points out:
Thus a particular zero, very far out along the sequence, can be computed to great accuracy, and, as expected, its real part is exactly 1/2.
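For context, here is the standard statement being checked (the framing is mine, not a quote from his slides): the nontrivial zeros of the zeta function are the complex numbers $\rho = \beta + i\gamma$ with $0 < \beta < 1$ and

$$\zeta(\rho) = 0 .$$

The Riemann Hypothesis asserts that every such zero satisfies $\beta = \tfrac{1}{2}$. A “bad” zero would be one that falls off this critical line; finding even one would disprove the hypothesis.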
Today I will talk about a presentation he gave the other day called: Providing security with an insecure cyber-infrastructure. This talk was presented at a general conference on CyberInSecurity—note the “In.”
Also at the conference was another neat talk by Craig Gentry on How the cloud can process data without seeing it. I will talk about his beautiful work in the near future.
Doug Solomon, the CTO of the company IDEO, gave a very cool talk on user interfaces. His work is not related to Andy’s, but I cannot resist giving a pointer to a video parody of the Facebook interface: look up “Facebook in Reality.” It is well worth two minutes if you need a chuckle, but it is a bit out-there humor—it is rated R, so I will leave a direct link out.
The Fundamental Tenets of Security
Modern computer security is based on several basic tenets:
- People want secure systems.
- People can implement systems correctly.
- People should assume the system is completely known to the adversary.
The first is clear: who would argue that we want insecure systems? Of course we want systems that are secure and protect our data. The second is also clear: we must be able to implement the security protocols correctly; if we cannot implement them properly, then what good are they? Finally, as Claude Shannon put it bluntly, “the enemy knows the system.” Modern security always assumes the system is known; only the “keys” are secret.
Breaking the Tenets
Andrew’s talk was surprising: it went against the basic tenets of security, and against ideas he said he had believed for decades. Basically, he argued against each of the tenets I just gave. Let’s look at his arguments for each, and then turn to his solution for security in the future.
People want secure systems. He pointed out that a really secure system would be useless. Even if we could build a secure system, it would be difficult, if not impossible, to use. For example, a system in which secretaries cannot commit “forgery” by signing for their bosses will not work in the real world.
Another way Andy attacked this point was by discussing the importance of digital signatures—one of the great results of modern cryptography. He worked on such systems in the 1980s and made important contributions; for example, Andy did seminal work on breaking a number of signature schemes.
But, he added, we have long used faxed signatures to buy and sell many things, including large purchases like houses. Faxes are not provably secure; they are not secure, period. Yet commerce relies on them. Today small businesses deposit checks by first scanning them and then sending the images to their bank via email. This too is completely insecure. So where are the digital signature applications?
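To make the contrast concrete, here is what a digital signature buys you, sketched with textbook RSA (a generic illustration, not one of the specific schemes Andy worked on or broke). The signer publishes $(N, e)$ and keeps $d$ secret, where $ed \equiv 1 \pmod{\varphi(N)}$. To sign a message $m$ she computes

$$\sigma = H(m)^{d} \bmod N,$$

and anyone holding the public key can check that

$$\sigma^{e} \equiv H(m) \pmod{N} .$$

A fax of a handwritten signature offers nothing like this mathematical check, yet it is the fax, not the verified signature, that commerce actually runs on.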
People can implement systems correctly. His point here is one many may disagree with, but Andy was very convincing. He claimed that, toy systems aside, a system of any complexity is not secure—people just cannot build complex systems that are correct. If they are not correct, they cannot be secure: there will be “holes” that allow cyber-attacks. I think he is right: whether you believe in formal methods or informal ones, there is no way to make really secure systems.
To amplify this point I would add: modern software systems are huge, often distributed, and consequently always will have bugs. Humans seem unable to avoid this—Andy’s point is valid, in my opinion. Correct real systems cannot be constructed.
People should assume the system is completely known to the adversary. Andy disagreed with this point too, and it leads to his “solution” for building secure systems. I will discuss it in the next section.
Then Andy made a blanket statement, and perhaps this is his most controversial point; he called it: The dog that did not bark. The phrase is, of course, a variant of the famous exchange from the Sherlock Holmes story Silver Blaze by Sir Arthur Conan Doyle. There Holmes answers
“the dog did nothing in the night-time;”
with the famous, “that was the curious incident.”
Andy argued cyberspace is horribly insecure, but where are the big disasters? Apparently the total cost of all cyber-crime to banks is just about equal to the losses from actual bank robberies. His point is even more interesting since just last Thursday (May 6, 2010) some error—human or computer—caused the US stock market to lose about one trillion dollars in value in minutes. Now I call this a real “barking dog.”
In summary his points are: cyber-space is insecure, it will always be insecure, and it does not matter. I am not sure I agree completely with his points, but they form a quite interesting position.
His Solution
Andy said there is no solution, no easy way to secure cyberspace, no way to be perfectly safe. But he had one idea that runs counter to the principles of modern cryptography—principles he helped create when he worked in the field. He directly attacked the tenet: People should assume the system is completely known to the adversary. This assumption plays the same role that the worst case model plays in complexity theory. If we make the worst case assumption that the adversary knows the system, and we can still prove the system is secure, that is wonderful. Unfortunately, Andy’s other points say that no such proof is coming, because no real system is secure in the first place. Reasoning from a false premise gets us nowhere: if “pigs can fly,” then anything is true. Since no system is secure, we must move away from the worst case model in security.
He does have an approach, and it is—security through obscurity. He said let’s build messy, not clean, systems. He really means this, and he had several concrete suggestions. Code obfuscation, or “spaghetti code,” would make the adversary’s job all the more difficult. Andy notes that the black hats use code obfuscation, so why should we not?
The yearly obfuscated-code contest gives a good sense of what deliberately unreadable code looks like.
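Here is a small illustrative example in that spirit (written for this post, not an actual contest entry): an obfuscated C program whose only job is to print a greeting.

```c
#include <stdio.h>

/* Deliberately obfuscated: the string is stored with every character
   shifted up by one, and the macro hides the output call. */
#define O(c) putchar((c) - 1)

int main(void) {
    const char *s = "ifmmp-!xpsme";   /* "hello, world" shifted by +1 */
    while (*s) O(*s++);
    putchar('\n');
    return 0;
}
```

Even at this toy scale the reader has to stop and decode what the program does; Andy’s suggestion is that the same friction, applied to a real system, raises the cost for an attacker who has the code.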
For further data, discussions, and speculations, see his papers and the slides of this presentation.
I liked his talk for raising some controversial points, and I think he raises several real open problems. I know there has been theoretical work on code obfuscation, but I think he is interested in more practical methods.
Another view of what he is saying is this: can we make systems so complex that they are very difficult to break, even when the adversary has the source code?