
Why Do We Need Cyber-Security?

January 11, 2010

Rich DeMillo’s answer to a recent editorial in CACM

Howard Schmidt is to become the US cybersecurity chief. I am puzzled why we need him, since according to Moshe Vardi, verification of computer systems works. Vardi is the editor-in-chief of the Communications of the ACM, who recently attacked Rich DeMillo and me for our 1976 paper on this very subject. I am including verbatim Rich’s response that is being sent to Vardi.

In his Editor’s Letter in the January 2010 issue of CACM entitled “More Debate Please”, Moshe Vardi makes a plea for controversial topics in these pages, citing a desire to “let truth emerge from vigorous debate.” It is a sentiment that we support as well. But we question Mr. Vardi’s judgment in using his editorial position to mount an attack on colleagues who were neither forewarned nor given an opportunity to respond. Mr. Vardi’s target was our 1979 critique of formal program verification entitled “Social Processes and Proofs of Theorems and Programs,” which was co-authored with the late Alan Perlis, one of the originators of the field and a lifelong advocate for the kind of open discussion that the Editor advocates. We can only hope that future contributors have higher standards for debate than does the current Editor, because his out-of-context references to the 1979 debate over the practical efficacy of formal verification, his ex-cathedra determination that the article was “misguided” and his ill-informed view of the decision to publish it have no power to illuminate a serious subject.

We do not care to respond to Mr. Vardi’s substantial mischaracterizations and misstatements at this time, but we do think it is fair to point out that the publication of “Social Processes and Proofs of Theorems and Programs” was not a singular event that might be classified as either misguided or not. “Social Processes” was a refereed article. A preliminary version was accepted by a highly selective conference program committee in 1976 and its presentation was attended by virtually every living contributor to the field. It was then submitted to this journal and reviewed by anonymous referees. Its publication was followed by many months of public presentations and workshops, letters to the Editor, written reinforcements and rebuttals, and — several years later — a special issue of this journal devoted to the topic. Mr. Vardi faults the editorial board for not publishing an opposing “counterpoint” article, a suggestion that — although it has all the “fair and balanced” trappings — would have been hard to reconcile with the confidentiality usually accorded to contributed articles that are sent to referees for review. The irony is not lost on us that we were offered no such opportunity prior to publication of his letter.

The article itself has been reprinted dozens of times and has appeared in several anthologies in the philosophy of mathematics. Its publication and the ensuing debate have been the subject of social science research (Donald MacKenzie’s 2001 Mechanizing Proof remains the definitive sociological and historical analysis of both the paper and its implications for the field). If our arguments seem off the mark to Mr. Vardi, then perhaps the right course of action is to resurrect the social process that led to the article’s publication in the first place and jump into the fray. Until that time, the correct editorial position for CACM and its Editor is to let both the paper and the written record that surrounds it speak for themselves. It strikes us as inappropriate, after thirty years of silence, to use the cover of an Editorship to attack unsuspecting passersby, especially while touting the moral virtues of free and vigorous debate.

Open Problems

Rich chose to raise the role of an editor—not the substance of our paper. That’s fine, but I believe that Rich and I will write a piece soon on the larger issue of what verification meant in 1976, and what it means today in 2010. I believe that we were right then, that we are right now, and that we will be right in another 30 years. Complex software-hardware systems were not proved correct then, they are not proved correct today, and they will not be proved correct in the future.

If I am wrong, why do we need cyber-security? Why do systems get attacked and broken into every day? Why is security a multi-billion dollar a year industry? I think the answer is obvious—Howard Schmidt is going to be very busy.

34 Comments
  1. January 11, 2010 11:55 am

    The problem with security isn’t verification, it’s specification. “Secure” isn’t really formally definable, at least not yet. You may want to argue that both problems need to be solved before claiming victory for program verification, but I think that needs justification.

    • January 11, 2010 12:01 pm

      And to be fair, many many systems are attacked using security holes that we CAN and regularly do verify against (automatically in most cases). The problem there is that getting industry to adopt tools is hard, and getting them to adopt and use tools not aimed directly at the bottom line is much harder. There are many popular and powerful tools that are in use, but if you walk into a random development house, odds are that they’re not using them or not using them regularly.

      There’s no excuse for any software shipping today to have buffer overflows or anything else that we’ve been able to automatically detect for a decade. But it still happens every day…

      • Jonathan Katz permalink
        January 11, 2010 8:15 pm

        I agree with Mark. While there is some connection between cybersecurity and verification, they are not equivalent. Even if the problem of verification could be solved, it would not solve the problem of cybersecurity. Conversely, one could imagine ways of solving cybersecurity that do not involve formal program verification.
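
The specification gap discussed in this thread can be made concrete with a toy, entirely hypothetical example: a password check can satisfy its functional specification perfectly and still be insecure, because “secure” involves properties — such as timing behavior — that the specification never mentions. (The names and the secret below are invented for illustration.)

```python
import hmac

SECRET = "hunter2"

def check_password_naive(guess: str) -> bool:
    """Satisfies the spec 'return True iff guess == SECRET' exactly,
    yet the character-by-character comparison with early exit leaks,
    via timing, how long a correct prefix is. The spec says nothing
    about time, so a verifier would happily bless this."""
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def check_password_safe(guess: str) -> bool:
    """Same functional spec, but a constant-time comparison."""
    return hmac.compare_digest(guess.encode(), SECRET.encode())

# Both are 'correct' with respect to the functional specification:
print(check_password_naive("hunter2"), check_password_safe("hunter2"))  # True True
print(check_password_naive("letmein"), check_password_safe("letmein"))  # False False
```

Both functions are provably correct against the stated spec; only one is secure, which is the point about specification being the hard part.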

  2. rjlipton permalink*
    January 11, 2010 11:59 am

    verification [ˌvɛrɪfɪˈkeɪʃən]
    n
    1. establishment of the correctness of a theory, fact, etc.
    2. evidence that provides proof of an assertion, theory, etc.

    • eqnets permalink
      January 12, 2010 11:22 am

      When I hear someone talking about verification or formal methods or that sort of thing in the context of cybersecurity my level of skepticism immediately rises. Now I’m not even a knowledgeable person on formal methods, and it’s not to say that proving the correctness of the 2-3KLOC in a Linux kernel isn’t worth doing, just that there often seems (to me) to be a wide gap between what can be done (and what is advertised) and what can be effective in practice.

      • Peter Lund permalink
        January 15, 2010 1:12 am

        2-3KLOC ?!

        The whole thing is more than 11 MLOC (including comments and empty lines). A lot of that is filesystems and drivers which are not in use on most machines, granted. That still leaves vastly more than just 2-3KLOC. Just the kernel/ directory of the source code (without the subdirectories having to do with profiling, suspend/resume, etc) is 100KLOC raw and 67KLOC as measured by sloccount.

        (The above numbers are somewhat out of date as they were measured on 2.6.32-rc5.)
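
The difference between the raw count and the sloccount figure quoted above comes from blank lines and comments. A minimal Python sketch of that distinction for C-style sources (a toy approximation of what sloccount measures, not the tool itself):

```python
def count_lines(source: str):
    """Return (raw, sloc): raw counts every line; sloc skips blank
    lines and lines that are purely // or /* ... */ comments.
    A rough approximation of sloccount's 'physical source lines'."""
    raw = sloc = 0
    in_block_comment = False
    for line in source.splitlines():
        raw += 1
        stripped = line.strip()
        if in_block_comment:
            if "*/" in stripped:
                in_block_comment = False
            continue
        if not stripped or stripped.startswith("//"):
            continue
        if stripped.startswith("/*"):
            if "*/" not in stripped:
                in_block_comment = True
            continue
        sloc += 1
    return raw, sloc

snippet = """\
/* scheduler stub */
int nr_running = 0;

// bump the run queue length
void enqueue(void) { nr_running++; }
"""
print(count_lines(snippet))  # (5, 2)
```

On real kernel sources the two numbers diverge exactly as Peter reports: 100K raw versus 67K physical source lines for the kernel/ directory.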

      • Scott Cotton permalink
        February 25, 2010 5:38 am

        “””
        just that there often seems (to me) to be a wide gap between what can be done (and what is advertised) and what can be effective in practice.
        “””

        I second that. The field needs to be more scientific — double blind reviews, a requirement of independently checkable results, and a much stronger recognition that negative results are valuable.

  3. January 11, 2010 1:01 pm

    Characterizing the editorial (http://cacm.acm.org/magazines/2010/1/55739-more-debate-please/fulltext) as an “attack on colleagues” is the sort of thing that prevents the kind of debate that everyone agrees is desirable. There is a difference between “attacking colleagues” and attacking an article. Vardi did the latter.

    • rjlipton permalink*
      January 11, 2010 1:03 pm

      I do not know. Perhaps. But calling our article misguided is pretty close. Who misguided it other than the authors?

      • January 11, 2010 1:17 pm

        Everyone makes mistakes. When Alice claims that Bob made a mistake, if Bob attacks her for being unprofessional, we can never have a debate.

        I disagree very strongly that attacking Bob and claiming Bob made a mistake are “pretty close”. I think they are only pretty close if you think that mistakes are only made by the careless.

        Yes, if Vardi is right, it is the authors who “misguided” the paper. Nonetheless, to err is human, and we simply must be able to recognize that Newton was partially “misguided” about the nature of light without forgetting that he was a genius.

  4. rjlipton permalink*
    January 11, 2010 1:22 pm

    Okay. I give. Good point.

  5. richde permalink
    January 11, 2010 2:33 pm

    On the other hand, singling out three authors out of the thousands who have published in CACM sure looks like an attack.

  6. Michael Mitzenmacher permalink
    January 11, 2010 3:22 pm

    Richard —

    I read the article in question just the other day, and while I don’t have it in front of me, I don’t remember thinking it was a personal attack. In fact, in some ways, I think it was quite complimentary. It singled out your paper as taking a firm stand on what was possible, and not, in what was then an emerging field. Vardi thinks that, in retrospect, more was possible than your paper suggested; but from his tone I felt he respected your paper for framing the questions and challenging what was then the current line of work.

    I think he didn’t give you and your co-authors a chance to respond because, in the large sense, his article wasn’t about whether you were right or wrong — it was about how to go about having these debates, which are important to computer science. In my eyes, my takeaway from this article — with respect to your paper — was that you had written an important work that helped push an emerging field into new directions. But of course, I’m biased to see the positive.

  7. richde permalink
    January 11, 2010 6:50 pm

    I don’t think anyone suggested a personal attack. The reply makes it clear that the target was the article not the people: “Mr. Vardi’s target was our 1979 critique of formal program verification…”

  8. Paul Beame permalink
    January 11, 2010 8:12 pm

    I think that Moshe viewed your original article as an attack on the value of automated “formal methods” (as the area that was circumscribed as formal verification is called today) as a whole, namely saying that such methods do not and cannot have value for non-trivial systems (in contrast to testing, say). Your article was certainly taken by many as such a broadside on the whole enterprise.

    As an assessment of the value of the area as a whole, the article clearly has not been predictive (though the course of the area was influenced by your article) and maybe that is why Moshe used the term “misguided”. The formal methods approach has certainly proved invaluable, though these days a lot of the benefit of formal methods might be thought of primarily as one of falsification rather than verification (maybe partial verification of abstracted systems if one prefers). The real win with these methods seems to have been in their production of counterexample traces as well as refinement hierarchies for analysis, both of which are artifacts about which there can be the kind of social discourse you found missing in the approaches of the day. However, you seemed skeptical that any such artifacts would emerge from the effort. Without the hubris of those who thought that they might fully “verify” systems, beginning with simple cases such as finite-state systems, I doubt that we would have the successes that we do today.

    • rjlipton permalink*
      January 11, 2010 10:02 pm

      Paul, I agree with your comments. But it is a revision of the past. The community at the time had as its stated goal the complete verification of real systems. Not finding errors, not proving special safety properties, and not doing what they do today. That is the target that we addressed.

      Perhaps I would feel better if Moshe had said that the verification community of the 1970’s was misguided. That their goals were wrong. That they perhaps could have attacked other problems. But they never considered, for example, that their goal was to find errors. They would say, with a smile, that we have “verified X,” so it is completely correct.

      Here is what we said back then:

      [image: excerpt from the 1979 article]

      I think they did re-orient their goals. I think that some of the tools they have are useful, and have had a positive impact. But it is not what was the goal when we wrote our article, and it is not “verification” in any reasonable sense.

      • DTML permalink
        January 12, 2010 3:15 am

        I’ve just reread your article and, as someone who used to work on formal methods, I think your arguments are really sound.

        What formal methods people do these days is achieve the so-called “partial correctness” of a software system. Furthermore, verification in formal methods means proving that a program satisfies a *formal specification* written in some formal logical language. This approach IMHO seems problematic for the following reason. Even if you have a perfect proof that a program satisfies a specification, how do we verify that the specification itself is correct? It’s hard to believe that a programmer who has trouble writing a correct program in the first place can magically write a perfectly correct specification. People with experience in mathematical logic know how hard and technical it is to encode precisely what we want to prove in a logical language, even for relatively simple combinatorial facts. I don’t say it’s impossible to write a correct specification, but it is much harder than writing a correct program in the first place.🙂

        The most successful approach in formal methods these days is model checking, which is simply a form of automated testing. Even Ed Clarke had to admit that it’s an open problem to identify when the specification we write is really enough to make sure the program is correct. He also said we currently don’t know any good way to gain *knowledge* about a software system from the counterexamples generated by the model checker. For the latter point, I think your article already gave a good reason why we will never achieve that goal.

        In short, formal methods are powerful as ways to automate the testing and debugging processes. But to call it “verification” is, I agree, a bit dishonest given the current state of the art.
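
The “model checking as automated testing” view can be sketched in a few lines: exhaustively enumerate the reachable states of a toy two-process protocol and check a mutual-exclusion property. This is a hypothetical miniature for illustration, not a real model checker; the protocol and its deliberately toggleable bug are invented here.

```python
def transitions(state, buggy=False):
    """Successor states of a toy protocol: each process cycles
    idle -> waiting -> critical -> idle, and may enter 'critical'
    only if the other process is not there (unless buggy=True)."""
    (p0, p1) = state
    succs = []
    for i, (me, other) in enumerate([(p0, p1), (p1, p0)]):
        if me == "idle":
            nxt = "waiting"
        elif me == "waiting" and (other != "critical" or buggy):
            nxt = "critical"
        elif me == "critical":
            nxt = "idle"
        else:
            continue
        succs.append((nxt, other) if i == 0 else (other, nxt))
    return succs

def check_mutex(buggy=False):
    """Worklist search over all reachable states; returns a violating
    state (both processes critical) -- the counterexample artifact a
    model checker produces -- or None if the property holds."""
    seen, frontier = set(), [("idle", "idle")]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if state == ("critical", "critical"):
            return state
        frontier.extend(transitions(state, buggy))
    return None

print(check_mutex(buggy=False))  # None: mutual exclusion holds on the model
print(check_mutex(buggy=True))   # ('critical', 'critical'): a counterexample
```

Note that the verdict is only about the model and the one stated property, which is exactly the limitation the comment above describes.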

      • rjlipton permalink*
        January 12, 2010 8:22 am

        I think this is a quite reasonable position. Thanks.

      • Paul Beame permalink
        January 12, 2010 7:04 pm

        I don’t quite agree with your comment that it is not “verification” in any reasonable sense. An essential feature of model checking is the verification with respect to specific properties rather than some monolithic notion. It doesn’t result in theorems of the form “This program is correct” but people do verify specific properties of systems successfully and the results are valuable. The correctness properties that can be verified in practice are necessarily limited but they do come from quite expressive classes of properties. This “local” verification does seem to me to be such a reasonable sense of the word.

      • Peter Lund permalink
        January 15, 2010 1:52 am

        Aren’t the Erlang guys at Ericsson doing quite well on the reliability front?

        Joe Armstrong wrote a PhD thesis in 2003 called “Making reliable distributed systems in the presence of software errors”.

        For some reason, my PDF viewer (evince in Ubuntu 9.10) displays the “software” in the title on the front page as “sodware” when selected (and when pasting)…

        To people who don’t know about Erlang:
        It is a dynamically typed functional language with pattern matching and very strong support for concurrency à la Hoare’s CSP, only asynchronous. There is very strong library support for protocol handling including parsing and sequentialization and for microrebootability and in-place upgrading. They have used it for years in their telephone exchange systems which run a huge part of the GSM/3G/etc systems in the world.

        To sum their approach up:
        manipulate less state, make concurrency easy, automate memory management, pattern match, make the development process faster and interactive, make the language and library map well to their problems so most of their code is straightforward, contain problems, use good logging, microreboot, allow partial updates of running systems.
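
The “contain problems, microreboot” idea above can be caricatured in a few lines of Python — a sketch of the supervision pattern, not Erlang’s actual OTP machinery (the function names and restart limit are invented for illustration):

```python
def supervisor(worker, max_restarts=3):
    """Run `worker` and restart it on failure, Erlang-style 'let it
    crash': contain the fault and reboot the component, not the system;
    if the restart budget runs out, escalate instead of limping on."""
    for attempt in range(1, max_restarts + 1):
        try:
            return worker()
        except Exception as exc:
            print(f"worker crashed ({exc!r}); restart {attempt}")
    raise RuntimeError("restart limit exceeded; escalate to parent supervisor")

# A worker that fails twice with a transient fault, then succeeds:
state = {"tries": 0}
def flaky():
    state["tries"] += 1
    if state["tries"] < 3:
        raise ValueError("transient fault")
    return "ok"

print(supervisor(flaky))  # two crash notices, then: ok
```

The design choice mirrors the comment: rather than trying to prove the worker correct, the system is built so that failures are expected, contained, and recovered from.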

  9. January 11, 2010 10:21 pm

    Whatever happened to a mid-1980s draft paper by Dana Angluin titled something like, “A Formal Theory of Computer Viruses”? No quick “BackRub” hits. The content I recall wasn’t much more than certain formalized virus-detection problems being undecidable. The paper seemed motivated more by what it was trying to say: that even if we could be up-to-snuff on verification, we still wouldn’t be able to guarantee security.

    And, can we formalize the proof methods Angluin gives here?

    • Joe Lieberman permalink
      January 11, 2010 11:22 pm

      I think you are referring to Fred Cohen’s “Computer Viruses: Theory and Experiments”, in which a simple diagonalization argument shows that no perfect virus detection algorithm exists. A simple workaround is to design algorithms that detect that a program may be a virus. In this case, the goal becomes that of reducing the error rate (i.e., false positive rate) over the set of benign and useful programs.
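
The diagonalization in Cohen’s argument can be illustrated with a toy (this is a sketch of the idea, not his formal model): assume some candidate perfect detector, then build a program that inverts the detector’s verdict on itself.

```python
def is_virus(program) -> bool:
    """Stand-in for a hypothetical 'perfect' detector. Any fixed, total
    decision procedure works for the argument; here, a toy heuristic
    that flags programs whose name mentions replication."""
    return "replicate" in program.__name__

def contrarian() -> bool:
    """Self-replicates (returns True) exactly when the detector
    declares it clean -- so the detector is wrong about it either way."""
    return not is_virus(contrarian)

print(is_virus(contrarian))  # False: the detector says "clean"...
print(contrarian())          # True: ...yet the program replicates.
```

Whatever `is_virus` answers about `contrarian`, the answer is wrong, which is why only approximate detection — with the false-positive trade-off the comment describes — is achievable.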

    • Rahul Santhanam permalink
      January 13, 2010 9:43 am

      Maybe Ken is referring to Adleman’s “An Abstract Theory of Computer Viruses”?

      • January 15, 2010 2:35 pm

        Could be…but besides recalling “Angluin” not “Adleman”, here’s the other concrete piece of my memory arguing “no”: The proof of one of the main theorems had the line “alter OS”—i.e. in case a given Turing machine halted or etc., the virus was allowed to alter the operating system. I don’t see those words in Adleman’s draft. Of course, Adleman may have had that wording in a prior version, and some of the proofs are similar in look to what I recall. My memory also tells me it was 1986-87 when I was at Cornell, but I was also there in 1988-89.

  10. January 12, 2010 7:58 am

    I found the paper and this discussion really interesting, so much so that I decided to do a post on it – would be very interested to hear people’s comments: http://allydonaldson.blogspot.com/2010/01/verification-controversy.html

  11. Moshe Vardi permalink
    January 12, 2010 9:08 am

    It seems that both Rich DeMillo and Richard Lipton feel slighted by my January 2010 Communications of ACM editorial. I had no intention of slighting them or the paper in question, and I apologize for unintentionally causing them to feel this way.

    Let me address the substantive points in their letter:

    1. I am accused of using my “editorial position to mount an attack on colleagues who were neither forewarned nor given an opportunity to respond.”

    The paper in question is over 30 years old. History, it is said, “judges and rejudges.” I hardly view my offering some comments on such a historically important paper as a “personal attack” on its authors. Personally, if someone saw the need to disagree with a paper of mine 30 years after its publication, I’d feel complimented! Most papers are long forgotten after 30 years.

    2. I am accused of not offering Messrs. DeMillo and Lipton an opportunity to respond prior to publication of my editorial letter.

    As Editor-in-Chief of Communications I write bimonthly editorial letters, in which I often express opinions on controversial matters. The proper way to disagree with my editorials, as many people often do, is to leave comments online or to submit a letter-to-the-editor. This is the standard operating procedure in all publications I am aware of.

    3. It seems that Messrs. DeMillo and Lipton were offended by my usage of the word “misguided”.

    One should read, however, the full context of that word: “With hindsight of 30 years, it seems that DeMillo, Lipton, and Perlis’ article has proven to be rather misguided. In fact, it is interesting to read it now and see how arguments that seemed so compelling in 1979 seem so off the mark today.”

    In the paragraph that preceded that comment I referred to two Turing Awards that were given for works in formal verification. For lack of space, I did not include references to two ACM Kanellakis Awards and two ACM Software System Awards for works in formal verification. It is in this context that I expressed an opinion that the 1979 article, which implied the futility of formal verification, was “misguided”, with “hindsight of 30 years”, in spite of “its compelling arguments”.

    4. Messrs. DeMillo and Lipton disagree with my opinion that “the editors of Communications in 1979 did err in publishing an article that can fairly be described as tendentious without publishing a counterpoint article in the same issue.”

    I am actually well aware of the process that led to the publication of the 1979 article. I stand behind my opinion about the lack of a counterpoint article. Messrs. DeMillo and Lipton are entitled to a different opinion. We may need to agree to disagree on this one. I do not see why this is an issue that deserves such a strongly worded response, when I expressed strong support for the editorial decision to publish the article!

    5. In a blog comment, Mr. DeMillo said: “On the other hand, singling out three authors out of the thousands who have published in CACM sure looks like an attack.”

    The subject of my editorial was “More Debate, Please!” The article in question is one of the most controversial and influential articles ever published in Communications. I read it as a graduate student and was deeply impacted by it. I singled this article out because it was the perfect example to make the point of my editorial.

    6. I’d welcome a new article from Mr. DeMillo and Mr. Lipton examining the issues covered in their 1979 article. Of course, I’d seek to publish a counterpoint article in the same issue!

    Moshe Vardi

    • rjlipton permalink*
      January 12, 2010 10:08 am

      Thanks for the comments.

      We should get together some time and chat anyway..dick

    • rjlipton permalink*
      January 12, 2010 10:09 am

      By the way, I had to do the next post since it is linked to my class.

      dick

  12. January 12, 2010 10:12 am

    Even if you have a perfect proof that a program satisfies a specification, how do we verify that a specification is correct?

    That says it all, but Platonists never learn (that their fancy “world of ideas” isn’t reality).
    (not putting a smiley because after more than 2000 years this isn’t fun anymore).
    Speaking of program correctness, a “perfect example” of a silly failure is right there in the comments CSS: after a few replies the bottom of the replies is hidden below the fold, including the REPLY link, so to reply to the proper comment (from DTML) I am hacking this box by pasting the (hopefully correct) comment number from the permalink into the (visible) URL of the reply box of another comment.

    Another fun case of correctness, the “bug free” Google Chrome browser terminally collapses (dies at launch) after just one day of use on my Linux OS (though it’s a bit of a bleeding edge sidux distro).

  13. January 12, 2010 11:56 pm

    great site. Great information. helped me alot thank you very much

  14. Moshe Vardi permalink
    January 13, 2010 4:20 am

    I just came across this relevant press release:
    http://www.japancorp.net/Article.Asp?Art_ID=22405

  15. January 5, 2013 6:56 am

    This is really the 2nd article, of your site I personally
    went through. However I like this particular one, “Why Do
    We Need Cyber-Security? | Gödels Lost Letter
    and P=NP” the very best. Thank you ,Maya

Trackbacks

  1. Random bits « Equilibrium Networks
  2. Thanks For Sharing « Gödel’s Lost Letter and P=NP
