Monday 22 May 2017

Reasoning about digital circuits using string diagrams

Hardware and software, two worlds apart

As I drifted between the worlds of programming languages and hardware design over the last ten years or so, I was quite struck by one significant, and maybe difficult to explain, difference when it comes to modelling and semantics.

In the world of software the workhorse of formal verification and validation is operational semantics: programming languages come with a set of syntactic rules showing how the program and the other components of its configuration (e.g. the state) evolve during execution. Let's take, for example, a simple language of commands. We may want to model a program (c) by showing how it changes the state (s), which is a function mapping memory locations (addresses) to values. A configuration is a program together with its state, (c, s). We show the execution of a program in a state to a new state as run (c, s) = s' or, using fancier mathy notation, (c, s) ⇓ s'. Here are some sample rules (this list is quite boring, you can skip it if you don't care much about details):
  • skip, the trivial program, does nothing: if we run skip in some state it produces the same state. We write this using a formal notation as run (skip, s) = s or (skip, s) ⇓ s.
  • sequencing is described by a composite rule: if the first command (c) runs in the initial state (s) and produces some new state (s'), and if the second command (c') runs in that state and produces another updated state (s"), it is as if we had run the composite command (c;c') in the initial state (s), as the same final state results (s"). More formally, if run (c, s) = s' and run (c', s') = s" then run (c;c', s) = s". This can be written more compactly as run (c;c', s) = run (c', run (c, s)). Or, using a more formal logical deductive notation:
      (c, s) ⇓ s'      (c', s') ⇓ s"
      -------------------------------
              (c;c', s) ⇓ s"
  • if statements: if in some state (s) the expression b is true, and if running the then-branch (c) in that state produces a new state (s'), the same result is achieved by running the whole if-then-else statement in the initial state:
      (b, s) ⇓ true      (c, s) ⇓ s'
      -------------------------------
       (if b then c else c', s) ⇓ s'
    Note that the behaviour of the else-branch is irrelevant here since it never gets executed. Conversely, if the guard is false, the behaviour is:
      (b, s) ⇓ false      (c', s) ⇓ s'
      --------------------------------
       (if b then c else c', s) ⇓ s'
  • loops, which describe recursive behaviour, are modelled by recursive rules. Evaluating a while-loop in some state is the same thing as evaluating its one-step unfolding:
      (if b then {c; while b do c} else skip, s) ⇓ s'
      ------------------------------------------------
                   (while b do c, s) ⇓ s'
These rules are concrete enough to allow the definition of an (inefficient but effective) interpreter for the language. Virtually all work done on formal certification of programs uses this kind of operational-style semantics.
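Here is what such an interpreter could look like, as a minimal sketch in Haskell; the particular command language (Cmd), expression language (BExp) and map-based state are illustrative choices of mine, not taken from any specific formalisation:

    -- A direct transcription of the big-step rules above into Haskell.
    import qualified Data.Map as M

    type Addr  = String
    type State = M.Map Addr Int            -- the state s maps addresses to values

    data BExp = BTrue | BFalse | Eq Addr Int
    data Cmd  = Skip
              | Assign Addr Int
              | Seq Cmd Cmd                -- c ; c'
              | If BExp Cmd Cmd
              | While BExp Cmd

    evalB :: BExp -> State -> Bool
    evalB BTrue    _ = True
    evalB BFalse   _ = False
    evalB (Eq a n) s = M.lookup a s == Just n

    run :: Cmd -> State -> State                     -- run (c, s) = s'
    run Skip         s = s                           -- (skip, s) ⇓ s
    run (Assign a n) s = M.insert a n s
    run (Seq c c')   s = run c' (run c s)            -- sequencing rule
    run (If b c c')  s = if evalB b s then run c s else run c' s
    run (While b c)  s =                             -- one-step unfolding
      run (If b (Seq c (While b c)) Skip) s

For example, run (While (Eq "x" 0) (Assign "x" 1)) (M.fromList [("x", 0)]) terminates with x set to 1.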

Operational semantics is not the only game in town when it comes to modelling programming languages, it is just the most successful game in town. A much-researched alternative is denotational semantics. A denotational semantics works by translating code -- a bit like a compiler -- into a mathematical semantic domain. So you can elevate yourself above the messy irregular world of code, into the pure and abstract world of Platonic mathematical objects. Denotational semantics sheds some much-needed light on some foundational aspects of programming languages, in particular recursion via domain theory, but for most practical ends it just doesn't help that much. This is not to say that denotational semantics is a bad idea, on the contrary, it is a brilliant idea. It's just that making it work for realistic languages is quite hard. The world of ideals is not easily accessible.
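To make the contrast concrete, here is an equally minimal sketch of a denotational semantics for the same toy language, reusing the Cmd, State and evalB definitions from the interpreter above; each command now denotes a state transformer, and the while-loop denotes a fixed point (approximated here, somewhat cheekily, by Haskell's own recursion via Data.Function.fix):

    import Data.Function (fix)

    denote :: Cmd -> (State -> State)            -- every command denotes a function
    denote Skip         = id
    denote (Assign a n) = M.insert a n
    denote (Seq c c')   = denote c' . denote c   -- composition of denotations
    denote (If b c c')  = \s -> if evalB b s then denote c s else denote c' s
    denote (While b c)  = fix (\w s -> if evalB b s then w (denote c s) else s)

In a proper denotational treatment the fixed point lives in a domain of partial functions; that extra machinery (domain theory) is exactly where the difficulty mentioned above comes in.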

When you cross over to the world of hardware something weird happens: you discover that digital circuits have no syntactic-style operational semantics, but the workhorse of validation and verification is denotational semantics. Circuits are "compiled" into a semantic domain of functions on streams of data (waveforms). So what doesn't work for programming languages seems to work fine for circuits, and vice versa.
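As a minimal illustration of that semantic domain (an assumed toy model in Haskell, not the internals of any particular tool), a waveform can be represented as an infinite list of values, one per clock cycle; gates lift Boolean operations pointwise, and a clocked register delays its input by one cycle:

    type Waveform a = [a]                  -- one value per clock cycle

    andGate :: Waveform Bool -> Waveform Bool -> Waveform Bool
    andGate = zipWith (&&)

    notGate :: Waveform Bool -> Waveform Bool
    notGate = map not

    register :: Waveform Bool -> Waveform Bool
    register xs = False : xs               -- delay by one cycle, starting low

A circuit then denotes a function from input waveforms to output waveforms, and "compiling" a circuit means composing such functions.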

The morass of hardware semantics

When you start teaching yourself the basics of circuit design one of the first things that you learn is that there are two kinds of circuits: synchronous and asynchronous. In the synchronous world circuits come with clocks, and time is sliced by the clock into discrete helpings. If two signals occur within a time slice they can be assumed to be simultaneous, or synchronous -- hence the name. Most electronics such as computers and mobile phones work like this. Asynchronous circuits don't make this assumption and, as a result, are much harder to use to build useful and exciting stuff. This delights academics, who find asynchronous circuits an interesting and fruitful area of research, but industry not so much. Of the various battery-powered devices strewn about your house, the remote controls are likely to contain asynchronous circuitry, but not much else. So let's focus on the more useful synchronous circuits. 

When you delve a bit deeper into the world of synchronous circuit design, and are not content merely to build useful and fascinating stuff, but try to understand what is going on mathematically, you quickly realise that in the synchronous world the greatest challenge is presented by asynchronous effects: the order in which signals happen within a clock slice. These effects are abstracted away by the semantic model, but this abstraction is not compositionally sound: when you plug together digital circuits these asynchronous effects must, disappointingly, be taken into account. And there is no easy way to do that, because they break the abstraction.

One common way in which the abstraction is broken is via combinational feedback loops, i.e. feedback which doesn't go through clocked elements. If you are a seasoned digital designer you may instinctively think of circuits with combinational feedback as broken, when in fact it is the semantic model which is broken. This circuit, for example, has a perfectly well defined behaviour, z = x:

[Figure: a circuit with a combinational feedback loop whose overall behaviour is z = x]
However, most digital design tools for simulation and verification will reject it because of the feedback loop. This is sad, because combinational loops are what make memory elements possible:

[Figure: a memory element (a latch) built out of a combinational feedback loop]
In the practice of digital design such circuits must be dealt with as "black boxes" because they break the abstraction. If you talk to digital design engineers from the trenches they will not acknowledge this as a problem, and if you insist they will point at your mobile phone and explain how this is what they build, successfully, so if it is a problem it is not a serious one. And I agree. But theoretically it's not entirely satisfactory.  

Going back to compositionality, what is the connection to the problem of combinational feedback? Compositionality means that the behaviour of a composite circuit can be derived from the behaviour of its components. When we feed back over a circuit, taken as a black box,

[Figure: feedback applied over a circuit taken as a black box]
there are several situations, illustrated by the following internal structures of the box:

[Figure: three possible internal structures of the box, labelled (A), (B) and (C)]
The first one (A) represents a "trivial" feedback in which the result of the feedback is actually no feedback at all: the wire can be yanked straight. The second one (B) is a legal feedback where there is actually a memory element on the fed-back wire. Finally, the third one (C) represents the kind of circuit we genuinely want to avoid because it would be unstable. But just from looking at the input-output behaviour of a box we cannot tell in which of these situations we will end up. This is not good!
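To make the distinction concrete in the toy stream model sketched earlier (a sketch under the same assumptions, using the Waveform, notGate and register definitions above), case (B) corresponds to a lazy "knot-tying" definition that is productive because the register breaks the cycle, whereas case (C) corresponds to a definition that never produces a value:

    -- (B) legal feedback: a register on the loop; the circuit toggles each cycle
    toggle :: Waveform Bool
    toggle = out where out = register (notGate out)

    -- (C) unstable feedback: no clocked element on the loop; this diverges
    -- bad = out where out = notGate out

    -- (A) trivial feedback: the fed-back wire is not actually used, so the
    -- loop can be yanked straight and the circuit behaves as the identity

Here take 4 toggle evaluates to [False, True, False, True], but bad never produces a single value. Crucially, nothing distinguishes (A), (B) and (C) from the outside: all three are just boxes with wires.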

So what is the way out of the morass? Following the lead of programming language semantics we can try a syntactic, operational approach. This is the topic of a recent paper I wrote with my colleague Achim Jung, and with Aliaume Lopez from École normale supérieure Cachan: Diagrammatic Semantics for Digital Circuits.

Hardware semantics from string diagrams

To reason syntactically we first need a syntax. What is the right syntax for circuits? Conventional hardware languages such as VHDL or Verilog have a "structural" fragment which can be used to describe circuits as "netlists". Ironically, netlists are devoid of any apparent structure. They are simply a dump of the incidence relation of the circuit seen as a (hyper)graph. All components and their ports are labeled, and are connected by wires. Syntactic manipulation at this level is clearly unappealing. 

A more natural starting point is the graph of the circuit itself. If programming languages can be defined by operational semantics acting on terms, it makes sense to model circuits by acting on graphs instead. Graph rewriting is a well-studied area of research with many powerful conceptual tools, so let's go there. The first observation is that rewriting a circuit-as-graph by propagating values through gates is fun and easy. We can easily define a system which works something like this:

[Figure: rewrite steps propagating logical values (H/L) through and and or gates]
You can see some and and or gates and some logical values (H/L), and how they can propagate through the circuit via rewriting. You can easily prove results such as confluence. However, the system wouldn't be very useful because it wouldn't tell you how to handle feedback: in the case of feedback the input comes from the output, which is not known. So what do you do?
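The feedback-free part of such a rewrite system is easy even at the level of terms. Here is a hedged toy version in Haskell (on terms rather than graphs, so it cannot even express the sharing and loops that make the real, diagrammatic system interesting):

    data Sig = H | L | In String           -- constants and named input wires
             | And Sig Sig | Or Sig Sig

    -- bottom-up constant propagation through the gates
    simplify :: Sig -> Sig
    simplify (And x y) = case (simplify x, simplify y) of
      (L, _ )  -> L
      (_ , L)  -> L
      (H, y')  -> y'
      (x', H)  -> x'
      (x', y') -> And x' y'
    simplify (Or x y) = case (simplify x, simplify y) of
      (H, _ )  -> H
      (_ , H)  -> H
      (L, y')  -> y'
      (x', L)  -> x'
      (x', y') -> Or x' y'
    simplify s = s

For example, simplify (And H (Or L (In "x"))) rewrites to In "x".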

A feedback loop in a circuit looks a bit like recursion in a programming language, and we have seen such a rule earlier, for while-loops. What is remarkable about that rule is that in order to model the while loop we need another copy of the while loop (the second occurrence of while b do c, in the premise of the rule)! The same thing happens in the case of the Y-combinator, which is how recursion was expressed in the original ur-programming language, the lambda calculus. If g is some function then:

Y g = g (Y g) = g (g (Y g)) = ...
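As an aside, in Haskell lazy evaluation lets us take this equation literally as a definition (the standard library calls this fix; the factorial below is just an illustrative client):

    yComb :: (a -> a) -> a
    yComb g = g (yComb g)                  -- literally Y g = g (Y g)

    factorial :: Integer -> Integer
    factorial = yComb (\f n -> if n == 0 then 1 else n * f (n - 1))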

Note again the need to copy the function g in order to evaluate it recursively. Can we do something like that for circuits, in the case of feedback? The answer is not obvious, but I will summarise it below. Consider this circuit, taken as a black box; it has m inputs and n outputs, and we are feeding back k of the outputs:

[Figure: a black-box circuit with m inputs and n outputs, of which k outputs are fed back]
What is the correct unfolding of such a loop? It turns out that under certain conditions (which we shall discuss below) the unfolding of this circuit diagram can be made to be this:

[Figure: the unfolded version of the feedback loop, containing a second copy of the circuit]
First of all we need to be able to define such a diagram, which requires some special componentry, highlighted below:

[Figure: the unfolded diagram with the fork and dangling-wire components highlighted]
We need to be able to fork wires and to leave wires dangling. The forking operation takes one input and produces two outputs, and the dangling operation takes one input and produces zero outputs. They are "co-operations", by contrast with operations, which take zero or more inputs and produce one single output; accordingly, the fork and the dangling wire have co-arities two and zero, respectively. The fork is a co-commutative co-monoid, with the dangling wire being the co-unit. Diagrammatically, the equations they must satisfy are these:

[Figure: the co-monoid equations for the fork and the dangling wire: co-associativity, co-commutativity and co-unit]
These are just the obvious properties that you would expect of any "wire splitter":

[Figure: a physical wire splitter]
Most importantly for us (a point which we shall revisit), you don't need to know the semantic model of the wire splitter in order to know that it will satisfy the equations above. What kind of current flows through it? In what direction? It doesn't matter (so long as it stays within some operational parameters, which we shall take for granted at this level)! In some sense you can think of these equations as the "natural laws" of the wire splitter!
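In the toy stream model from earlier these components are easily encoded (again an assumed encoding of mine, with pairs standing in for co-arity two and the unit type for co-arity zero), and the co-monoid equations hold simply by construction:

    fork :: Waveform a -> (Waveform a, Waveform a)
    fork w = (w, w)                        -- co-arity two

    dangle :: Waveform a -> ()
    dangle _ = ()                          -- co-arity zero: the co-unit

    -- e.g. co-commutativity, checked on a finite prefix of a waveform
    coComm :: Eq a => [a] -> Bool
    coComm w = (let (u, v) = fork w in (v, u)) == fork w

But the point made above is stronger: we do not need any such model to trust the equations; they can be checked in the lab on an actual wire splitter.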

Having the wire-splitter-cum-co-monoid structure we can construct the unfolded circuit, but is it OK to do that? Are the folded and unfolded circuits the same in terms of behaviour? In general, for a box with wires sticking in and out, this is not the case; but for many kinds of digital circuits it is! The reasons go back to work in pure algebra done by two Romanian mathematicians, Căzănescu and Ștefănescu, in the 1980s, which was independently rediscovered and reformulated in terms of traced monoidal categories by Hasegawa in the 1990s. The idea is that diagrams can be unfolded only if there is a notion of Cartesian product of such diagrams. The axioms of the Cartesian product can also be neatly formulated diagrammatically:

[Figure: the diagrammatic axioms of the Cartesian product: copying and discarding commute with any circuit f]
Informally, the first axiom means that sharing the outputs of some circuit f is the same as copying the circuit f and sharing its inputs. The other axiom says that ignoring all the outputs of some circuit f is the same as ignoring the whole circuit. That's it! In an earlier paper, Categorical semantics of digital circuits, co-authored with my colleague Achim Jung, we show that for a large class of digital circuits this is indeed the case, so digital circuits can be safely unfolded. The unfolding rule, together with the simple rewrites mentioned above, can be used to evaluate circuits syntactically, directly at the level of diagrams.
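For plain Haskell functions (and hence for the toy stream model without feedback hazards) the two axioms are immediate; here is a hedged sketch, using Control.Arrow's (***) to run two functions in parallel over a pair:

    import Control.Arrow ((***))

    -- sharing the output of f  =  copying f and sharing its input
    copyAxiom :: Eq b => (a -> b) -> a -> Bool
    copyAxiom f x = fork1 (f x) == (f *** f) (fork1 x)
      where fork1 y = (y, y)

    -- ignoring the output of f  =  ignoring f altogether
    discardAxiom :: (a -> b) -> a -> Bool
    discardAxiom f x = (const () . f) x == const () x

The substance of the earlier paper is showing that these axioms also hold for a large class of digital circuits, where they are not at all immediate.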

Still, what's the deal with string diagrams? The key difference between string diagrams and vanilla graphs is that string diagrams have a well-defined interface, so composition ("plugging together") is a well-defined operation. This little bit of extra machinery makes all the reasoning above possible -- and more.
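A minimal sketch of what "diagrams with interfaces" buys you (names and representation entirely illustrative): every diagram carries an arity and a co-arity, and plugging together is a partial operation defined exactly when the interfaces match.

    data Diagram = Gate String Int Int     -- a named component: m inputs, n outputs
                 | Beside Diagram Diagram  -- parallel (side by side) composition
                 | Then Diagram Diagram    -- sequential composition (plugging)

    arity, coarity :: Diagram -> Int
    arity   (Gate _ m _) = m
    arity   (Beside d e) = arity d + arity e
    arity   (Then d _)   = arity d
    coarity (Gate _ _ n) = n
    coarity (Beside d e) = coarity d + coarity e
    coarity (Then _ e)   = coarity e

    -- smart constructor: composition is defined only on matching interfaces
    plug :: Diagram -> Diagram -> Maybe Diagram
    plug d e | coarity d == arity e = Just (Then d e)
             | otherwise            = Nothing

A netlist, by contrast, is just a flat list of gates and wires, with no interface information to guide composition.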

There are some other interesting axioms in the categorical and diagrammatic models which are worth discussing, but for now I will stop here, because the key point has been made: all the results have been derived axiomatically, from basic properties of circuits and circuit components which are in some sense "obvious", i.e. they can be directly tested in the lab! They are model-agnostic and in fact hold both for circuits made of Boolean gates and for circuits made of saturation-mode transistors. These two have very different semantic models, yet they have the same diagrammatic semantics, because they satisfy the same key axioms.

Example

Let's look now at this circuit again:

[Figure: the example circuit with combinational feedback]

A string diagram representation of it (noting the input and output interfaces) is this:

[Figure: the circuit redrawn as a string diagram with explicit input and output interfaces]

Plugging in true, a sequence of obvious rewrites leads to:

[Figure: the diagram after value propagation: a trivial circuit with a spurious loop]

Note that the last (trivial) circuit has a spurious loop. The only way we can get rid of loops is by unfolding them, even if they are spurious. Of course, if we were obsessed with efficiency this would be done differently, but strictly following the diagrammatic cookbook we get:

[Figure: the unfolded version of the trivial circuit]

From the co-monoid rule and the second Cartesian equation we then obtain simply the constant true.

References

The content of this post relies on two papers I have co-authored:
  • Categorical semantics of digital circuits, with Achim Jung.
  • Diagrammatic semantics for digital circuits, with Achim Jung and Aliaume Lopez.
For lots of fascinating stuff on string diagrams Pawel Sobocinski's blog is a must-read. In particular, the application of string diagrams to quantum mechanics is worked out in exquisite detail in a book by Coecke and Kissinger, Picturing Quantum Processes.

To read more about the connections between category theory and string diagrams, Peter Selinger has written a superb survey of graphical languages for monoidal categories. Specifically on rules for diagrams with feedback, Masahito Hasegawa's doctoral dissertation is an excellent reference. The other relevant paper is Virgil Emil Cazanescu and Gheorghe Stefanescu, A note on axiomatizing flowchart schemes, Acta Cybernetica 9(4): 349-359, 1990.

Circuits with combinational loops have been discussed in a classic paper by S. Malik, Analysis of cyclic combinational circuits, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 13(7), July 1994. Semantic models for such circuits have been developed by Berry, Mendler and Shiple in Constructive Boolean circuits and the exactness of timed ternary simulation, Formal Methods in System Design 40(3), June 2012, pages 283-329.

Finally, I will talk about the material in this post as an invited tutorial speaker at the Workshop on String Diagrams in Computation, Logic, and Physics in Oxford on 8-9 September 2017 [link]. 

1 comment:

  1. Interesting stuff, Dan. I wonder: are there cases we could point to, or construct, where feedback applied over large combinational circuit structures allows us to do something more efficiently (in space or time) than isolating feedback into small sequential components. If so, this may be the killer app that allows your "digital design engineers from the trenches" to see this as of more than theoretical interest.

    George (constantinides.net)

