Logical Inference in Computer Science: deduction, induction, abduction

Logic acknowledges three kinds of inference: deductive, inductive, and abductive. What kind of impact did they have on computer science and programming languages in particular?

Deduction is in some sense the gold standard of logical inference. It is a fixed set of rules through which general statements are made more particular in a way that is guaranteed to be sound, that is, to preserve truth. Aristotle's syllogisms are an example of deductive inference:

All men are mortal.
Socrates is a man.
Therefore Socrates is mortal.

In programming language theory the impact of deductive thinking is best captured by the celebrated Curry-Howard correspondence between propositions (in a logic) and types (in a language). This correspondence is more than a technical result, although it can be precisely formulated. It is a guiding methodological principle which drives the design of functional programming languages. 
Besides type systems, programming logics are usually given a deductive formulation (axioms and rules of inference), and they can be used to provide extremely strong guarantees of correctness for software, at a pretty high cost (e.g. many PhD students working for many years on a medium-sized piece of code). Types and logics can come together, as in the extremely elegant form of dependent types, which lie at the foundation of powerful proof assistants such as Coq or Agda. Coq in particular has made some headway in proving the correctness of significant software projects such as an optimising C compiler (the CompCert project).
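The Curry-Howard correspondence can be glimpsed even in a mainstream language with parametric polymorphism. Here is a minimal sketch in Rust (not one of the dependently-typed languages mentioned above): a total function of a given type is a proof of the corresponding proposition, with pairs playing the role of conjunction and a sum type (the hypothetical `Or` below) playing the role of disjunction.

```rust
// Propositions-as-types: writing a total function of a type amounts to
// proving the corresponding proposition.

// Conjunction A ∧ B is the pair type (A, B); this function is a proof
// of the proposition A ∧ B → B ∧ A.
fn and_comm<A, B>(p: (A, B)) -> (B, A) {
    (p.1, p.0)
}

// Disjunction A ∨ B is a sum type; this function proves A ∨ B → B ∨ A.
enum Or<A, B> {
    Left(A),
    Right(B),
}

fn or_comm<A, B>(p: Or<A, B>) -> Or<B, A> {
    match p {
        Or::Left(a) => Or::Right(a),
        Or::Right(b) => Or::Left(b),
    }
}

fn main() {
    let (x, y) = and_comm((1, "two"));
    println!("{} {}", x, y); // prints "two 1"

    match or_comm::<i32, &str>(Or::Left(7)) {
        Or::Right(n) => println!("got {}", n), // prints "got 7"
        Or::Left(_) => unreachable!(),
    }
}
```

The point is that both functions are forced by their types: any total, side-effect-free implementation of `and_comm` must swap the components, just as any proof of the proposition must use both hypotheses.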
Induction is a form of inference in which the premises of an argument are generalised into universal statements. Induction in general must not be confused with mathematical induction, which is a deductive argument for introducing universal quantifiers. Whereas mathematical induction provides absolute certainty as to the result, induction in general does not: its conclusions are, informally speaking, merely probable. Induction was long considered the root of scientific discovery: you observe a collection of facts, then you generalise your observations into a theory. This is what the pioneers of the scientific method (Galileo, Bacon) proposed. Empiricist philosophers such as Hume and J. S. Mill examined this way of doing science more critically, but it is perhaps fair to say that the prevailing view today is Popper's, who, somewhat dogmatically, holds that induction has no place in science: no amount of data can confirm a theory, but experimental data can contradict a theory, insofar as that theory is formulated in a logically reasonable way ("falsifiable").
The most common manifestation of inductivism in Computer Science is software testing, or system testing more generally. It is usually impractical to validate the correctness of a system deductively because of the sheer amount of work involved. Instead, the developer runs tests: a collection of instances in which, if the tests succeed, the system behaves as it is supposed to. The tests can be extensive and systematic, but even so they do not provide evidence on a par with that of deductive reasoning. This is typical inductive reasoning. It is perhaps fair to say that in the practice of verification the Popperian view that inductive arguments play no role is strongly rejected -- rightly or wrongly, I don't know. But it seems to be generally accepted that a thoroughly tested piece of software is more likely to be correct than an untested one.
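The inductive character of testing is easy to make concrete. Below is a sketch in Rust with a made-up function `abs_diff`: each passing assertion is one observed instance, and the suite as a whole is inductive evidence, not a proof, that the function is correct on all inputs.

```rust
// A toy function and a finite test suite for it.
fn abs_diff(a: i64, b: i64) -> i64 {
    if a >= b { a - b } else { b - a }
}

fn main() {
    // Finitely many observations, including some edge cases. Passing
    // them makes correctness more plausible, but does not prove the
    // universal statement "for all a, b: abs_diff(a, b) = |a - b|" --
    // that would require a deductive argument.
    assert_eq!(abs_diff(5, 3), 2);
    assert_eq!(abs_diff(3, 5), 2);
    assert_eq!(abs_diff(-4, 4), 8);
    assert_eq!(abs_diff(0, 0), 0);
    println!("all tests passed");
}
```

(Note that even this tiny function has inputs no test suite will reach, e.g. extreme values where the subtraction overflows; the inductive gap is real.)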
A more spectacular instance of inductivism in Computer Science is machine learning. ML is nothing but inductivism at its most unrestrained: the theory is created directly from the data, bypassing the creation of a deductive framework or any requirement of explanatory power. This is incredibly fashionable now, of course, and it solves many problems in a fully automated way. The problem with it is, well, that it's inductive. Which means that it works until it doesn't. Just as with testing, the predictive power is limited to instances similar to those that have already been tried out.
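Learning-from-data in its very simplest form already shows the shape of the inductive step. The sketch below (in Rust, with a hypothetical `fit_line` helper) fits a line to observations by least squares and then extrapolates; nothing in the procedure guarantees the prediction outside the region that was sampled.

```rust
// Induction from data: fit y = m*x + c to observations by least
// squares, then "predict" unseen points.
fn fit_line(points: &[(f64, f64)]) -> (f64, f64) {
    let n = points.len() as f64;
    let sx: f64 = points.iter().map(|p| p.0).sum();
    let sy: f64 = points.iter().map(|p| p.1).sum();
    let sxx: f64 = points.iter().map(|p| p.0 * p.0).sum();
    let sxy: f64 = points.iter().map(|p| p.0 * p.1).sum();
    let m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let c = (sy - m * sx) / n;
    (m, c)
}

fn main() {
    // Observations secretly generated by y = 2x + 1; the "theory" is
    // induced from the data alone.
    let data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)];
    let (m, c) = fit_line(&data);
    // Extrapolation: happens to be right here because the underlying
    // process really is linear; with non-linear data the same code
    // would fail silently.
    println!("predicted y(10) = {}", m * 10.0 + c);
}
```

The "works until it doesn't" point is exactly the extrapolation step: the induced model is only as trustworthy as the assumption that unseen instances resemble the observed ones.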
Abduction seems to be more mysterious, even dangerous. However, it merely describes the logical process of analysing a situation and extracting (abducting) the best possible explanation. Looking at science from a logical perspective, the inductivists (Galileo) thought that universal statements arise naturally from observation, whereas the deductivists (Popper) thought that they are mere guesses. The abductivists think, on the contrary, that the process of formulating "best explanations" can be made systematic, and that it is at the root of how scientific discovery progresses. A persuasive proponent of this view is Lakatos who, despite not using the term "abduction" explicitly, gives a wonderful argument for its use in mathematical discovery in his book Proofs and Refutations.
In Computer Science I would say that the most interesting manifestation of abductive reasoning is program analysis: starting with a piece of code, extract logical conditions under which the code does not crash, and logical descriptions of what the code achieves. The term "abduction" is, as far as I know, only used in a paper describing the logic behind Facebook's verification tool Infer. But the methodology applies equally to other, more common, analyses such as type inference. For what kind of "inference" are we talking about here? Clearly not deductive, since the type has not been established yet, and obviously not inductive.
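A toy version of this kind of analysis can be sketched in a few lines. The Rust program below is a made-up illustration, not Infer's actual algorithm: given an arithmetic expression over one variable `x`, it walks the syntax tree and abduces the conditions on `x` under which evaluation cannot crash (the only crash here being division by zero). The collected conditions are the "best explanation" of the code's safety.

```rust
// A toy abductive analysis over a tiny expression language.
enum Expr {
    Var,                          // the variable x
    Const(i64),
    Add(Box<Expr>, Box<Expr>),
    Div(Box<Expr>, Box<Expr>),
}

// Pretty-print an expression, for reporting the abduced conditions.
fn show(e: &Expr) -> String {
    match e {
        Expr::Var => "x".to_string(),
        Expr::Const(k) => k.to_string(),
        Expr::Add(l, r) => format!("({} + {})", show(l), show(r)),
        Expr::Div(l, r) => format!("({} / {})", show(l), show(r)),
    }
}

// Abduce preconditions: for every division in the expression, the
// symbolic divisor must be non-zero.
fn abduce(e: &Expr, out: &mut Vec<String>) {
    match e {
        Expr::Var | Expr::Const(_) => {}
        Expr::Add(l, r) => { abduce(l, out); abduce(r, out); }
        Expr::Div(l, r) => {
            abduce(l, out);
            abduce(r, out);
            out.push(format!("{} != 0", show(r)));
        }
    }
}

fn main() {
    // The expression 1 / (x + 3): it does not crash exactly when the
    // abduced condition (x + 3) != 0 holds.
    let e = Expr::Div(
        Box::new(Expr::Const(1)),
        Box::new(Expr::Add(Box::new(Expr::Var), Box::new(Expr::Const(3)))),
    );
    let mut conds = Vec::new();
    abduce(&e, &mut conds);
    println!("{:?}", conds); // prints ["(x + 3) != 0"]
}
```

Note the direction of the inference: the analysis does not deduce a consequence of known premises, nor generalise from observed runs; it works backwards from the desired conclusion ("no crash") to the hypothesis that would best explain it.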
To conclude this brief tutorial, all three forms of logical inference are richly reflected in computer science and programming languages. The one area in which they seem to come together is that of program verification. And this is not accidental. Programming languages are strongly related to logics, and their verification is as hard as it is necessary. We need to use every trick in the book.

Comments

  1. I think the term abduction in CS comes from the book https://www.cambridge.org/core/books/abductive-inference/4866FA1D38820D414FEB7883E0FC5F23.
    More recently, other tools, such as Caper also use the term abduction: https://www.doc.ic.ac.uk/~pmd09/research/publications/2017/esop/caper-automatic-verification-for-fine-grained-concurrency.pdf

    1. I was aware of the book and perhaps I should have mentioned it. There is indeed another use of abduction, other than in programming languages and verification, in AI and logic programming. Because the semantics of logic programming is based on proof search, the role of abduction in that context seems to be very important. The only reason I haven't mentioned it is that I don't understand it very well.

  2. I have been looking at Robert Kowalski's work on logic production systems. In particular, I am interested in his thoughts on forward and backward reasoning. Abduction plays a key role although I struggle to explain it to others. Your thoughts here help.

  3. You may find this work relevant:
    https://lirias.kuleuven.be/handle/123456789/161887
    (been a while since I last read this paper though)

