Tuesday 9 April 2019

Andromeda and vegetarian mathematics

This post was inspired by a Twitter dialogue between two mathematicians whom I admire.

Andrej Bauer tweeted to John Baez:
John, you should try programming in a language that guarantees your program is correct. It will feel like proving theorems all the way through. (Agda is a good candidate.) 
To which John replied:
Nothing will ever guarantee my programs are correct, because I don't have the patience to work through lots of details of syntax.  It's just not interesting enough to be worthwhile to me.   
I prefer to just think about stuff until I understand it clearly, then explain it.
This exchange motivated me to write down some thoughts on proof assistants, particularly since a wonderful graduate school dedicated to "univalent mathematics" has just finished at our university and the topic is fresh in my mind.


Computer proof assistants have been around since the 1970s, and yet the term "assistant" is still almost a misnomer. You don't really feel much "assisted" when using them. I say this even though I like them enough to have introduced them into my university's undergraduate teaching curriculum. Or maybe I don't like them; I like the idea of them.

This is because the main purpose of a proof assistant is not to help you prove a statement, but to prevent you from making a mistake when proving a statement. They are more airbag than sat-nav. More seatbelt than gas pedal. This is why most people feel about proof assistants as they feel about vegetarianism. They respect it. They know the world would be a better place if everyone did it. But they like hamburgers too much.
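To make the airbag metaphor concrete, here is what the error prevention feels like in practice. This is a minimal sketch in Lean 4, my own illustration rather than anything from the exchange above (Agda, mentioned in the tweet, would serve equally well): the checker accepts an inductive proof only because every step is mechanically valid, and it flatly rejects a false claim no matter how it is argued.

```lean
-- A correct proof: the checker validates every rewriting step mechanically.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, Nat.zero_add]          -- m + 0 = 0 + m
  | succ k ih => rw [Nat.add_succ, Nat.succ_add, ih] -- push succ out, use IH

-- A wrong claim is simply refused; uncommenting this does not typecheck:
-- theorem bogus (m n : Nat) : m + n = m * n := by rw [Nat.add_zero]
```

Notice that nothing here helps you find the proof; the tool only certifies, step by step, that what you typed cannot be a mistake.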

I think that some in the proof assistant community are beginning to understand that the obsession with error avoidance could be a bit misguided. Andrej Bauer's talk on his new proof assistant Andromeda was truly illuminating, and the system seems very appealing. The idea can be summarised like this: errors in proofs can happen either because mathematicians make silly mistakes or because the underlying theory is unsound. The former is a practical concern, the latter a foundational concern. Let's focus mainly on the former.

The foundational concern has obsessed logicians and philosophers like Russell and Frege, and we have learned a great deal about how to avoid the most obvious pitfalls. But avoiding them comes at a cost. We need to restrict ourselves to a small set of syntactic primitives. In particular, we need to have as few axioms as possible, since too many axioms may lead to inconsistency. If axioms are the meat of mathematics, then type theory is the ultimate vegetarian cookbook.

The problem, and perhaps this is what John is alluding to, is that in this setting a proof about X may not feel like a proof about X but like a proof about some Y, the encoding of X into the heavily regulated formal system. There can be a cold remoteness to such an encoding. Compare the concrete beauty of Euclidean geometry to its analytic version -- all symbols, no figures. Some people may like that -- there is no accounting for taste -- but others would rather just do geometry when they do geometry, not algebra. Lakatos, in his wonderful Proofs and Refutations, works through just such an example in detail, and in the process transforms a flawed but intuitively appealing geometrical proof into a flawless but impenetrable algebraic one. He calls this premium on correctness at the expense of everything else deductivism. He advocates instead heuristic proofs, which have intrinsic value as a mathematical thought process. The most extreme instance of a deductivist proof is the purely computer-generated proof, which confirms that a sentence is true without any need for human insight.

An aside: the foundationalist preoccupation with encoding concepts into other concepts is quite pervasive in computer science. The Church-Turing thesis, the birth certificate of our discipline, is all about encodings. Its extreme reductionism is sensible in context, but the idea of encoding concepts into other concepts may not always be sensible. It is true that all x86 instructions can be reduced to MOV, and it is true that many computational effects can be encoded into the lambda calculus via monads. But this is only interesting if you ignore the cost. In the case of programming the cost is space and time. In the case of mathematics the cost is understanding and insight.
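To see what such an encoding looks like -- and what it costs -- here is a minimal Haskell sketch of the textbook state-monad construction (my own illustration, not something from the post): mutable state reduced to pure functions, with the state threaded explicitly through every step.

```haskell
-- Mutable state encoded as pure functions: a computation of type
-- 'State s a' is just a function from an input state to a result
-- paired with an output state. No mutation anywhere.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2)

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

get :: State s s           -- read the current state
get = State $ \s -> (s, s)

put :: s -> State s ()     -- replace the current state
put s = State $ \_ -> ((), s)

-- A "mutable" counter: returns the old value and increments the state.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3)
```

The encoding is faithful, but every use of state has to be routed through this plumbing -- exactly the kind of cost the reductionist view tends to ignore.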

The practical concern -- mathematicians making mistakes -- is however a very reasonable one. Andrej's Andromeda prover discounts foundational worries and focusses on avoiding mistakes. The mathematician is not prevented from formulating possibly inconsistent axiom systems. This is a tool that lets you get things done with a minimum of fuss: mathematical fast food, sinful and delicious. Curiously, in some sense it is a throwback to old-fashioned systems such as LCF or ACL2, and a departure from more modern systems like Agda or Coq. But it smells like hamburger.

I had my own personal encounter with such foundational issues when working on categorical semantics for digital circuits (paper in CSL, invited talk at Applied Category Theory 2018). A categorical model is ultimately a bunch of axioms. There is a danger of specifying a degenerate theory whose only models are trivial (e.g. everything is equal). To show that this is not the case it is usual to provide one or more interesting models -- in the case of digital circuits, a category of streams and functions on streams. Our categorical model had a lot of meat (I mean axioms), so it was a reasonable worry that, put together, they would amount to a degenerate category. Or was it?
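For a flavour of what the concrete model looks like, here is a small Haskell sketch (my own simplified illustration, not the construction from the paper): a circuit denotes a function on infinite streams of values, one value per clock tick, with feedback handled by lazy recursion.

```haskell
-- Infinite streams of values, one per clock tick.
data Stream a = Cons a (Stream a)

constant :: a -> Stream a
constant x = Cons x (constant x)

-- Lift a combinational gate pointwise to streams.
lift2 :: (a -> b -> c) -> Stream a -> Stream b -> Stream c
lift2 f (Cons x xs) (Cons y ys) = Cons (f x y) (lift2 f xs ys)

-- A unit delay (a register): emit an initial value, then the input one tick late.
delay :: a -> Stream a -> Stream a
delay v0 xs = Cons v0 xs

-- Example: a toggling flip-flop, XOR with constant True fed back through a delay.
-- Laziness makes the feedback loop productive.
toggle :: Stream Bool
toggle = out
  where out = delay False (lift2 (/=) out (constant True))

-- Observe the first n ticks.
takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

main :: IO ()
main = print (takeS 6 toggle)  -- [False,True,False,True,False,True]
```

The foundationalist point of such a model is that it is manifestly non-degenerate: toggle and constant False, say, denote distinct streams, so the axioms cannot have collapsed everything into triviality.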

I always felt slightly annoyed about having to provide such a concrete model. It is technically difficult and I felt it was unnecessary. Luckily my colleague Achim Jung worked it out.

When modelling an aspect of physical reality (e.g. digital circuits) there is another, non-mathematical, way of validating the axioms: by experiment. Each axiom can be seen as a (natural) law, and in this case the 'model' of our theory is reality itself. The concrete mathematical model, on the other hand, is not something we can validate experimentally, only mathematically. Moreover, the concrete model has more properties than the categorical model, some of them impossible to validate experimentally. So do we really need such a model? If the basis of truth is the reality of digital circuits, could a theory of them be experimentally validated yet trivial or inconsistent? Does it even matter?

The categorical model is perhaps not exactly "hamburger", but it does have a lot of meaty axioms. And we can actually compute the properties of some tricky circuits with it! The concrete streams-and-functions model is a requirement only from a foundationalist perspective.

Of course, satisfying both the practical and the foundational requirements is the best-case scenario. Having a concrete non-degenerate model or a consistent mathematical theory cannot hurt. On the contrary. Maybe a kind of healthy, environmentally friendly, lab-synthesised meat will be possible in the future. Until then, there are difficult choices to be made.

Understanding the issue of equality in Homotopy Type Theory (HoTT) is easier if you are a programmer

We programmers know something that mathematicians don't really appreciate: equality is a tricky concept. Let's illustrate this with a str...