
Edmund Clarke, 1945–2020

January 18, 2021


If only model checking could fight the virus

Edmund Clarke passed away due to complications from COVID-19. Ed will be missed. He was a winner of the 2007 Turing Award, jointly with Allen Emerson and Joseph Sifakis, for his seminal work on model checking. They were cited for “developing Model Checking into a highly effective verification technology that is widely adopted in the hardware and software industries.”

Today I thought we might highlight some of his contributions.

Ed was someone I had known since the late 1970s. We met in 1977 at POPL—a conference that I used to attend regularly—where Ed gave a talk. He was a student of Bob Constable; Ken knew Bob at Cornell the next decade but does not recall meeting Ed. As related in this short IEEE bio, Ed earned his bachelor’s and master’s degrees in mathematics and started as a mathematics PhD student at Cornell before switching to computer science. His POPL paper was on exactly what his Wikipedia bio notes about his thesis work under Constable: it found programming language constructs that were tough nuts for Tony Hoare’s program verification logics.

The IEEE bio continues:

After graduating in 1976, Clarke joined Duke as a computer science professor. In 1978 he began teaching computer science at Harvard. While there, Clarke and his doctoral student Allen Emerson conducted research on methods that could be used to effectively verify how a system performs without errors. In 1981 they published a paper on model checking, “Design and Synthesis of Synchronization Skeletons Using Branching Time Temporal Logic,” in Logics of Programs.

Model Checking

Model Checking (MC) is related to verification technology (VT), but it is different. MC was initially used mostly for hardware, although it is used for software too. MC also can be, and is, used for partial properties. This, to me, is the main and most interesting difference. See slides and the book by Ed, Orna Grumberg, Daniel Kroening, Doron Peled, and Helmut Veith.
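
To make “partial property” concrete, here is a standard textbook-style example of our own, not drawn from the post: a branching-time (CTL) formula that asserts only that every request is eventually granted, and says nothing else about what the system computes,

    \mathbf{AG}\, (\mathit{request} \rightarrow \mathbf{AF}\, \mathit{grant})

Read: in every reachable state (AG), if request holds, then along every path from that state grant eventually holds (AF). A system can satisfy this and still be wrong in many other ways; that is exactly what makes the property partial.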

I have long had doubts about the claims of VT. This goes back to my 1979 paper with Rich DeMillo and Alan Perlis. We raised several issues with those claims, issues that still hold today.

The brilliant insight of Ed and his colleagues is that these issues are avoided for the most part by MC. Here are the main ones:

  1. What is correct? A key issue with VT is: how do you capture the notion of correctness? We argued that even for almost trivial algorithms it is hard to get this right. For sorting-type algorithms we noted that even a published account got the correctness wrong: it left out that the output must not only be in sorted order but also be a permutation of the input. (See the small sketch after this list.)

  2. How do you prove them? Okay, suppose you believe that you have captured what correctness means. How do you know that it always holds? VT tries to prove the universal sentence: for all inputs, the algorithm satisfies the property. The difficulty is that the proofs that arise in VT are different from proofs in mathematics, and this difference reduces our ability to trust them. We trust math proofs for all sorts of reasons that do not apply to VT.
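
Here is a minimal sketch of the sorting example from point 1 (our own illustration, not code from the 1979 paper). A specification that only demands ordered output is satisfied by a “sort” that throws the data away; the full specification must also demand a permutation of the input:

    from collections import Counter

    def is_sorted(xs):
        # Weak spec: output is in nondecreasing order.
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def is_correct_sort(inp, out):
        # Full spec: ordered AND a permutation of the input.
        return is_sorted(out) and Counter(out) == Counter(inp)

    def bogus_sort(xs):
        # Meets the weak spec by discarding the input entirely.
        return []

    data = [3, 1, 2]
    assert is_sorted(bogus_sort(data))                  # weak spec: passes
    assert not is_correct_sort(data, bogus_sort(data))  # full spec: fails

Leaving out the permutation clause is exactly the kind of omission that makes a “verified” program meaningless.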

The point of MC is that there are answers to these:

  1. What is correct? MC uses a variety of tools that make correctness more likely to be captured. I especially like the ability to give partial notions of correctness. It may be hard to state exactly what you want hardware to do, but it may be easy to state something that you want it to satisfy. A server’s correctness may be difficult to capture, but requiring that the server never stops completely may still be quite useful.

  2. How do you prove them? MC uses automatic tools rather than proofs. This allows tremendous leverage that makes MC much more reliable than any proof-based system. This cannot be stressed enough. Checking via a general tool is immensely more believable than any proof. (A toy sketch of such a check follows this list.)
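
To ground both points, here is a minimal sketch, in Python, of the kind of automatic check MC performs: exhaustive exploration of a toy system’s reachable states, verifying only the partial property that the system never stops completely. The toy system and every name below are ours, purely for illustration; real model checkers are of course far more sophisticated.

    from collections import deque

    def check_never_stops(init, successors):
        # Exhaustive breadth-first exploration of the reachable states,
        # checking the partial property "the system never stops completely":
        # every reachable state must have at least one successor.
        seen, queue = {init}, deque([init])
        while queue:
            s = queue.popleft()
            nxt = successors(s)
            if not nxt:
                return False, s  # counterexample: a stuck reachable state
            for t in nxt:
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return True, None

    # Toy mutual-exclusion system: two processes cycle through
    # think -> wait -> crit -> think, and only one may be in crit at a time.
    def succ(state):
        moves = []
        for i, phase in enumerate(state):
            other = state[1 - i]
            if phase == "think":
                moves.append(tuple("wait" if j == i else state[j] for j in range(2)))
            elif phase == "wait" and other != "crit":
                moves.append(tuple("crit" if j == i else state[j] for j in range(2)))
            elif phase == "crit":
                moves.append(tuple("think" if j == i else state[j] for j in range(2)))
        return moves

    ok, bad = check_never_stops(("think", "think"), succ)
    print("never stops" if ok else f"stuck state: {bad}")

The check is fully automatic, and when the property fails it returns a concrete stuck state as a counterexample; that kind of push-button leverage is what Ed and his colleagues exploited.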

Open Problems

Our thoughts and condolences go out to Ed’s family and friends.

One Comment
  1. Josh Burdick
    January 22, 2021 6:32 pm

    I think people may end up looking at “proofs about programs” more as unit tests: they check that one particular aspect of a program is working, but don’t guarantee perfection.

    Searching for proofs has long been known to have an AND-OR tree structure, somewhat like game trees. (E.g. if you’re trying to prove something, and applying Lemma 1 breaks the problem into two quadratic equations, but applying Lemma 2 breaks the problem into a linear equation and a nasty integral… you probably want to use Lemma 1.)

    My impression is that systems like Isabelle/HOL or Coq have _some_ completely-automatic methods, but those are searching hugely-branching trees. So I’m curious if things along the lines of AlphaGo will be able to learn to prove things (for instance, by learning to steer away from integrals towards easier problems.)
