An excursion in formal verification with Coq

"I think it wise, and only honest, to warn you that my goal is immodest. It is not my purpose to 'transfer knowledge' to you that, subsequently, you can forget again. My purpose is no less than to effectuate in each of you a noticeable, irreversable change."

Edsger W. Dijkstra, EWD 1213 (1995).

Correctness

G.H. Hardy, who coordinated the foundation of the Journal of the London Mathematical Society in 1926, encouraged referees to ask three questions of material under review:
  1. Is it new?
  2. Is it interesting?
  3. Is it correct?
Vladimir Voevodsky famously highlighted that despite best efforts, question 3 can be hard to answer. He argued that certain frontiers in mathematics had reached such a level of complexity that traditional proof checking by humans was no longer sufficient. Voevodsky believed it was time for the use of proof assistants to become mainstream in mathematics, and he imagined a future in which mathematicians might typically submit research articles in a formal language suitable for computer checking, so that question 3 could be delegated to a computer.

Voevodsky also practiced what he preached: he started the Coq-based UniMath project and began building an impressive edifice of mathematics upon it.

The extent to which we should follow Voevodsky's example is not yet clear. What is clear is that proof assistants are helping us find real mistakes in the published mathematical literature, and the mathematics community is taking note.

Foundational issues

The so-called foundational crisis that plagued mathematics in the late 19th and early 20th centuries stemmed from the inconsistency of naïve set theory (within classical logic). Two families of solutions that proved useful were those of:
  1. Zermelo: be explicit and judicious about the choice of axioms, e.g., ZFC, NBG, ...
  2. Russell: introduce a type system, e.g., Simple Type Theory, CoC, HoTT, ...
Both options shut the door on the construction of paradoxical objects like the one underlying Russell's paradox and its even uglier cousins.

Satisfactory axioms for option 1 were essentially worked out by the early 1920s whereas option 2, type theory, is still settling down. (In fact type theory is rather an exciting field at the moment.)

Very roughly speaking, most mathematicians followed Zermelo and most computer scientists followed Russell. Partly because proof assistants tend to be written by computer scientists, and partly because type theory really does seem to be better suited for the task, most proof assistants use a type theory as their foundation.

Type theory vs set theory

At first glance, one might think type theory is quite like set theory. Instead of sets containing elements, one has types inhabited by terms. Given two types we can (usually) consider the type of functions between them, their disjoint union, their cartesian product etc. However the similarity is really quite superficial and is even misleading.

Whereas in set theory if we have two sets $A, B$ we may always ask $A \in B$, in type theory there is nothing analogous. We may speak of a term inhabiting a type, e.g., $0 : \mathbb{N}$, but this is not a logical proposition in the way that $A \in B$ is in set theory. Indeed there is no such thing as a logical proposition; unlike set theory, type theory is not founded inside logic.

All we have in type theory are judgements, and a list of rules for forming new judgements from old. A judgement like $0 : \mathbb{N}$ is really just a certificate that $0$ may be used in any typing rule that demands an inhabitant of $\mathbb{N}$.

For example, assuming our type theory includes the natural numbers, there is a typing rule that says if we have a judgement $n : \mathbb{N}$ then we have a new judgement $succ(n) : \mathbb{N}$, and thus our judgement $0 : \mathbb{N}$ means that we are allowed to form the new judgement $succ(0) : \mathbb{N}$.
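To make this concrete, here is a minimal sketch in Coq (using a hypothetical type mynat, to avoid clashing with the standard library's nat): the inductive definition introduces exactly the two typing rules just described, and Check asks Coq to confirm a judgement.

Inductive mynat : Type :=
  | zero : mynat                (* rule: zero : mynat *)
  | succ : mynat -> mynat.      (* rule: from n : mynat, form succ n : mynat *)

Check succ zero.  (* succ zero : mynat *)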

Proof assistants

By now we have numerous proof assistants, including: Coq, Isabelle/HOL, Lean, Agda, Idris, NuPRL and many more. Coq, whose underlying type theory is the Calculus of Inductive Constructions (which grew out of Thierry Coquand's Calculus of Constructions), is probably the most popular. Isabelle is noteworthy because of the Archive of Formal Proofs built upon it. Lean, backed by Microsoft Research, is the new kid on the block and seems to be getting a lot of interest.

I want to try Lean next but I decided to begin my education with Coq.

Clipping in Coq

In the land of Coq, the Curry-Howard correspondence is essentially a tautology and so, though my ultimate motivation was to formalise mathematical proofs, I included some practice implementing simple algorithms as part of my Coq training. Besides, I like coding.

One task I set myself was to implement in Coq a two-dimensional clipping algorithm that I had written years before in Haskell. This repo on Microsoft's code-hosting hub contains the results. The problem statement, which I am self-plagiarising from here, is as follows:

Input: A point in the plane, known as the eye, together with a list of points in the plane which form the vertices of a polygon. All coordinates are integers.
Output: The set of sides of the polygon which are at least partially visible to the eye (looking in any direction).
For example, in the polygon below, the edges coloured red are those visible to the black dot representing the eye:

Example edge visibility problem

I still like this problem despite once giving an awful talk about it at a Haskell meetup.
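To make the problem statement concrete in Coq, the input types might look something like the following sketch (the names here are mine, not necessarily those used in the repo); note the integer coordinates, as the statement requires.

Require Import ZArith.

Record Point := mkPoint { px : Z ; py : Z }.   (* integer coordinates *)
Definition Polygon := list Point.              (* the vertices, in order *)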

A direct transliteration from Haskell would actually be quite easy, and not a great learning experience, so to spice things up I decided to start afresh and to use dependent types fairly heavily.

For example, I wanted to have a ViewPort type representing the current estimate of the triangle-shaped area that the eye can see. Simplifying slightly, I defined this as:

Inductive ViewPort :=
  | viewPort : forall (e : Point) (ls : LineSeg),
               ~isPointOnLine e (lineFromLineSeg ls) -> ViewPort.
This says that to construct a ViewPort instance, we must provide three pieces of data:
  1. a point,
  2. a line segment (essentially a pair of distinct points),
  3. a proof that the point does not lie on the line segment's line.
Item 3 above is where the dependent typing occurs: the type of item 3 depends on the values of items 1 and 2.

Using dependent types in this way makes it much easier to reason about the code, as well as to prove correctness properties. For example, unlike the Haskell implementation, I can tell at a glance that any logic handling a term of type ViewPort does not need to cater for its vertices being collinear.
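For instance, any accessor we write gets the proof handed to it for free when it destructs the term; a minimal sketch, assuming the definitions above (eyeOf is a name of my own invention):

Definition eyeOf (v : ViewPort) : Point :=
  match v with
  | viewPort e _ _ => e   (* the match also binds the non-collinearity proof *)
  end.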

Expressing the algorithm in terms of types whose constructors demand proofs, like the non-collinearity data above, had quite a few ripple effects. In order to provide the proofs demanded by these constructors at various steps of the clipping algorithm, I was forced to prove all sorts of basic facts from plane geometry. Here is an example:

Lemma two_points_characterise_line' :
  forall (l : Line) (p q : Point) (evDistinct : ~eqp p q),
  isPointOnLine p l -> isPointOnLine q l ->
  eql l (pointPairLine (pointPair p q evDistinct)).
This lemma says that if we have a line and two distinct points on that line, then the line defined by those two points coincides with the line we were given! See here for the lemma in context, together with its proof.

At a high level the clipping algorithm is synthetic but I did not really try to keep the algebra and geometry apart. I think it might be instructive to implement a purely synthetic solution to this problem, perhaps on top of GeoCoq.

No Law of Excluded Middle

As noted above, set theories are defined inside a flavour of logic (e.g., first order logic) whereas type theories are not. Logic itself must thus be built from the type theory at hand.

The logic obtained from the Calculus of Inductive Constructions is constructive, in the sense that "double negatives do not cancel", i.e., there exists a model in which there is no term of the following type: $$ \forall P : Prop, \neg \neg P \to P. $$ It is consistent to add an inhabitant of this type to the theory, but not necessary.

There are many equivalent formulations of this. For example, there is equivalently no term of type: $$ \forall P : Prop, P \lor \neg P, $$ the law of excluded middle, $LEM$.

More interesting (I think) is the equivalence with Peirce's Law. That is, either (and thus both) of the above two types are inhabited iff Peirce's type is inhabited: $$ \forall P~Q : Prop, ((P \to Q) \to P) \to P. $$ Under the Curry-Howard correspondence, this is precisely the type of a continuation. I first learned about continuations years ago, from Ben North in the context of Scheme's call/cc, and thought they were rather an exotic idea.
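As a taste of how these equivalences go, here is the easy direction relating the first two formulations, provable in Coq without any axioms:

Theorem lem_implies_dne :
  (forall P : Prop, P \/ ~P) -> (forall P : Prop, ~~P -> P).
Proof.
  intros lem P nnp.
  destruct (lem P) as [p | np].
  - exact p.              (* P holds outright *)
  - destruct (nnp np).    (* np : ~P contradicts nnp : ~~P *)
Qed.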

For a long time I dismissed intuitionism, but through my experience with Coq I have come to find it an attractive standpoint. For one thing, you really only gain by trying to be constructive. If, working in a logical system without $LEM$, you cannot find a proof of $P$, you still have a result: you can prove $LEM \to P$. You haven't really lost anything! Furthermore, I have been surprised just how far you can go without $LEM$.

Finding constructive proofs can feel very much akin to removing an unnecessary dependency from a codebase. Also, even if you ultimately do need $LEM$ somewhere, it is often possible to factor it into the "right" places in the proof rather than just sticking $LEM \to$ globally at the front of the whole proof.

Lastly, if you care about first-order logic then in fact you should love intuitionistic logic: it is a finer logic that actually contains classical first-order logic, via the Gödel–Gentzen double-negation translation.

Not Turing complete

Not only does Coq's logic lack the law of excluded middle, but as a programming language, it is not Turing complete!

This manifests in the form of a restriction Coq places upon any recursively defined function: a proof that it terminates on all inputs must also be provided. (Actually Coq usually builds this proof itself automatically.) That such a restriction is necessary is obvious, for otherwise the following would be valid:

Definition nonsense (P : Prop) : P := nonsense P.

or equivalently:

Theorem everything_is_true : forall P : Prop, P.
Proof. exact everything_is_true. Qed.

In practice the above restriction does not really limit the sorts of programs you can write. Sometimes Coq cannot spot that a recursive function terminates on all inputs, and so the function will fail to compile, i.e., to type check. Furthermore, since type checking is computation, the halting problem guarantees that this is not some avoidable deficiency in Coq. However, in my experience so far, bumping into this restriction is exceedingly rare, and besides, there is always a practical solution: explicitly cap the depth of recursion (manually counting recursive depth via an extra argument if necessary).
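Here is a minimal sketch of that workaround (the names are mine): an unbounded search capped by an explicit "fuel" argument. The recursion is structural in the fuel, so Coq accepts it, and we return None if the fuel runs out.

Fixpoint find_first (p : nat -> bool) (fuel n : nat) : option nat :=
  match fuel with
  | O => None                                  (* out of fuel: give up *)
  | S fuel' => if p n then Some n
               else find_first p fuel' (S n)   (* fuel strictly decreases *)
  end.

Compute find_first Nat.even 10 3.  (* = Some 4, the first even number >= 3 *)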

After all, our physical computers only have finite memory and we only have a finite amount of time to wait for them to finish.

Extraction

A supremely useful feature of Coq is its support for extraction. This is the process by which you may get Coq to emit code in another language, better suited for computation on modern machines.

Of the languages to which Coq will extract, OCaml is the best-supported, but my experience with extraction to Haskell was near-flawless. (Perhaps if I worked at Jane Street rather than SIG I might have opted for OCaml.)

All I had to do to obtain a working Haskell implementation of my clipping algorithm from the Coq implementation was create this file.

In fact, I could drop most of the content of that file. Much of it just ensures that the extracted Haskell code uses some Haskell-native types to represent corresponding Coq types, for performance reasons. The following would be sufficient to extract:

Require Coq.extraction.Extraction.
Extraction Language Haskell.
Extraction "./clipPolygon.hs" clipPolygon.
I really was very impressed by how easy extraction was.
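Much of what I dropped amounts to type mappings like the following sketch; Coq ships ready-made versions of these (module names as in recent Coq versions), so the extracted code uses native Haskell types:

Require Import Coq.extraction.ExtrHaskellBasic.       (* bool, option, list, ... *)
Require Import Coq.extraction.ExtrHaskellNatInteger.  (* nat => Prelude.Integer *)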

Some rough edges

For all its strengths, there are some rough edges in Coq. For example, dependent pattern matching does not automatically specialise the types of free variables to the extent that it might, and does not automatically populate the context with everything that it could.

For a concrete example of the latter, suppose that we wanted to implement the Collatz function in Coq: $$ f : \mathbb{N} \to \mathbb{N}\\ f(n) = \begin{cases} n/2 & \mbox{if $n$ is even},\\ 3n+1 & \mbox{otherwise}. \end{cases} $$ We might begin as follows:

Fixpoint is_even (n : nat) : bool :=
  match n with
  | O => true
  | S n' => negb (is_even n')
  end.
Suppose further that we were very conservative and so we insisted on using a division-by-two function that dependently demanded a proof that $n$ was even, like this:

Definition div_2 (n : nat) (evidenceEven : is_even n = true) : nat := Nat.div n 2.
We might then attempt to continue like this:

Definition collatz_broken (n : nat) : nat :=
  match (is_even n) with
  | true => div_2 n missingEvidence (* PROBLEM *)
  | false => 3*n + 1
  end.
    
The problem is that in the first branch of the pattern match, Coq does not provide us with the evidence that $n$ is even so we are stuck!

The solution is to pattern match in a way that forces Coq to include the missing evidence. I learned the approach below from Adam Chlipala's book "Certified Programming with Dependent Types" (section 8.4) so I'll follow him in calling it the "convoy pattern". Here's how it goes:

Definition collatz_convoy (n : nat) : nat :=
  let evEven := is_even n in
  match evEven as evEven' return (evEven = evEven' -> nat) with
  | true => fun evEven => div_2 n evEven
  | false => fun _ => 3*n + 1
  end (eq_refl evEven).
It seems to me that Coq could do more to help here, perhaps just by providing syntactic sugar for the convoy pattern.
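In the meantime, tactic mode offers a workaround of sorts: destruct ... eqn: performs the same dependent match behind the scenes and names the evidence for us. A minimal sketch, reusing is_even and div_2 from above:

Definition collatz_tactic (n : nat) : nat.
Proof.
  destruct (is_even n) eqn:evEven.
  - exact (div_2 n evEven).   (* here evEven : is_even n = true *)
  - exact (3*n + 1).
Defined.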

The HoTT topic

I mentioned above that unlike set theory, type theory seems to be settling down only about now. Without a doubt, the most interesting recent development is Voevodsky's Univalence Axiom.

Equality in type theory is rather a subtle concept, especially in the so-called intensional type theories of Martin-Löf, which seem best suited to our current needs. In intensional type theories, an equality like $a=b$ is itself a type. This is very useful but raises new issues, for example the question of function extensionality.

Suppose we have two types $A, B$ as well as two inhabitants of the type of functions between them, $f, g : A \to B$. Also suppose that the functions agree on all arguments, i.e., we have an inhabitant: $$ eq\_pointwise : (\forall a : A, f(a) = g(a)). $$ The question we ask of our type theory's equality is: do we thus have $f = g$? More precisely, does our theory include judgement rules that allow us to obtain an inhabitant $eq : f = g$ from the above judgement for $eq\_pointwise$?
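For reference, the statement at issue is easy to write down as a Coq type; indeed, Coq's standard library packages essentially this statement as an axiom in Coq.Logic.FunctionalExtensionality:

Definition funext_statement : Prop :=
  forall (A B : Type) (f g : A -> B),
    (forall a : A, f a = g a) -> f = g.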

Function extensionality does not hold in Coq's type system, the Calculus of Inductive Constructions, though it may be consistently added (there are good reasons not to include it by default). However adding function extensionality really just solves one instance of a problem that pervades intensional type theories without univalence: things that should be equal (like functions that agree everywhere) are not. Voevodsky recognised that the univalence axiom, which can be regarded as a vast generalisation of function extensionality, solved all such problems in one fell swoop.
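Schematically, univalence says that for types $A, B$ in a univalent universe $\mathcal{U}$, the canonical map sending an identification to an equivalence is itself an equivalence: $$ (A =_{\mathcal{U}} B) \simeq (A \simeq B). $$ Function extensionality then follows as a consequence rather than being an extra axiom.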

Unlike LEM or function extensionality, univalence is inconsistent with principles that Coq users routinely assume on top of the Calculus of Inductive Constructions, such as uniqueness of identity proofs and proof irrelevance. However there do exist models of intensional type theories with univalence, i.e., homotopy type theories, so we know that HoTT is consistent relative to ZFC (plus a few inaccessible cardinals). In fact, it is possible to do homotopy type theory within Coq, but it requires some care to avoid these inconsistencies. I hope to see this situation improve; currently there is no proof assistant that is perfectly tailored to HoTT.

Now for the really amazing thing about univalence: the types of a universe for which univalence holds behave like old-fashioned topological spaces up to homotopy! Familiar arguments from 20th-century homotopy theory can be ported over to yield results in type theory, and once this point of view is appreciated, at-first-confusing properties of such homotopy type theories make sense in a beautiful way. Adding in higher inductive types enriches the theory even further, and deep topological questions like the stable homotopy groups of spheres become relevant.

We essentially obtain a synthetic approach to homotopy theory.
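For example, the circle is the higher inductive type freely generated by a point and a loop, $$ base : S^1, \qquad loop : base =_{S^1} base, $$ and one of the first synthetic results is that its fundamental group is $\mathbb{Z}$, just as classical topology predicts.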

I spent many hours poring over the superbly written HoTT book and I cannot do better than recommend the interested reader to direct their attention there.

Aside from Lurie's Higher Topos Theory (which I regret I have not read at all), the next most interesting developments seem to be:
  1. cubical type theory, in which univalence is computable (Cohen, Coquand, Huber and Mörtberg);
  2. Cartesian cubical computational type theory (Angiuli, Harper and Hou).

If I had more time, I could happily spend many hours trying to learn more of this field.

References

For a first introduction to Coq, I highly recommend Pierce's Software Foundations, and for a first introduction to Homotopy Type Theory, I highly recommend the Univalent Foundations Program book. Both are available for free online.
  1. [link] Angiuli, C., Harper, R., and Hou, K.-B., "Cartesian Cubical Computational Type Theory: Constructive Reasoning with Paths and Equalities", CMU preprint (2018).
  2. [link] Bertot, Y. and Castéran, P., "Coq'Art: The Calculus of Inductive Constructions", Texts in Theoretical Computer Science (2004).
  3. [link] Chlipala, A., "Certified Programming with Dependent Types", MIT Press, (2013).
  4. [link] Cohen, C., Coquand, T., Huber, S., and Mörtberg, A., "Cubical Type Theory: A Constructive Interpretation of the Univalence Axiom", TYPES (2015).
  5. [link] Dijkstra, E.W., "Introducing a course on calculi", EWD 1213 (1995).
  6. [link] Gouëzel, S. and Schur, V., "A quantitative version of the Morse lemma and quasi-isometries fixing the ideal boundary" (2018).
  7. [link] Lurie, J., "Higher topos theory", Annals of Mathematics Studies, no. 170 (2009).
  8. [link] Pierce, B. et al., "Software Foundations".
  9. [link] Pierce, B., "Types and programming languages", MIT Press (2002).
  10. [link] The Univalent Foundations Program, "Homotopy Type Theory: Univalent Foundations of Mathematics", IAS Princeton (2013).
  11. [link] Voevodsky, V., "The Origins and Motivations of Univalent Foundations", IAS Princeton, Summer letter (2014).

Repositories

  1. Archive of Formal Proofs
  2. Closed Fences
  3. Cubical Type Theory
  4. Univalent Mathematics