DOI 10.1007/s00165-012-0249-0
BCS © 2012
Formal Aspects of Computing (2012) 24: 423–431

Formal Aspects of Computing

In praise of algebra

Tony Hoare 1 and Stephan van Staden 2

1 Microsoft Research, Cambridge, UK
2 ETH Zurich, Zurich, Switzerland

Abstract. We survey the well-known algebraic laws of sequential programming, and extend them with some less familiar laws for concurrent programming. We give an algebraic definition of the Hoare triple, and algebraic proofs of all the relevant laws for concurrent separation logic. We give the provable concurrency laws for Milner transitions, for the Back/Morgan refinement calculus, and for Dijkstra’s weakest preconditions. We end with a section in praise of algebra, of which Carroll Morgan is such a master.

Keywords: Algebra, Refinement, Concurrency

1. Introduction

The basic ideas and content of an algebra of sequential programming are familiar [HHJ+87]; they are summarised in Table 1. The last column describes properties of the concurrency operator (‖).

As described in the reference, the free variables of the algebraic equations stand equally for both programs and specifications. Both of them are regarded as descriptions of events occurring in and around a computer when the program is executed. We can therefore apply programming operators to specifications and logical operators to programs. The algebra makes no distinctions. However, in an informal description of the meaning of the operators it helps to distinguish the cases.

The operator (;) stands for sequential composition of programs. The implementation of p ; q is expected to obey the constraint that no event in the behaviour of p can depend on any event in the behaviour of q. When p and q are specifications, p ; q describes the behaviour of any program formed as the sequential composition of two disjoint subprograms, the first of which satisfies p, and the second satisfies q. Disjunction (∨) stands for non-deterministic choice between alternative subprograms, and conjunction p ∧ q stands for a program that behaves in any way that both p and q can behave. Its execution can be highly inefficient; it is even impossible if p and q are disjoint, and have no behaviour in common (in this case p ∧ q = ⊥, and ⊥ has no executions). Conjunction is omitted from or highly restricted in most practical programming languages.

Correspondence and offprint requests to: S. van Staden, E-mail: [email protected]
Van Staden was supported by ETH Research Grant ETH-15 10-1.
Dedication: to Carroll Morgan, whose use of algebra has so often given delight to so many of his audience.


Table 1. Basic properties of the operators

              ∨     ∧     ;     ‖
Commutative   Yes   Yes   No    Yes
Associative   Yes   Yes   Yes   Yes
Idempotent    Yes   Yes   No    No
Unit          ⊥     ⊤     skip  skip
Zero          ⊤     ⊥     ⊥     ⊥

The primitive skip describes the program that does nothing; it always terminates successfully. The bottom symbol ⊥ (falsity) describes a program that is never executed. It signifies an error, like a syntactic or typing error, which the compiler is responsible for detecting; the compiler must then prevent subsequent execution of the program. The top symbol ⊤ (truth) describes a program that may do anything whatsoever. Think of it as a program that is susceptible to attack by a virus. There is no limit to the ways in which it may behave or misbehave. Avoidance of ⊤ is the responsibility of the programmer. The clear division of responsibility between the implementer of the programming language and its user is an important function of language design. It also plays a central role in the design of programming tools that help to discharge the user’s responsibility.

The laws of sequential programming share a surprising feature with many of the laws of physics: they are invariant to reversal of the direction of time. Every axiom, and every theorem derived from the axioms, remains an axiom or a theorem when the operands of all its sequential compositions are reversed. This is because the laws do not in any way describe the means by which an implementation discovers a valid execution of a program. All the axioms for semicolon are valid when (;) is interchanged with its time-reversed counterpart (;˘), which is defined by p ;˘ q = q ; p. Caution: this symmetry states that any equation is equally provable as the same equation after the interchange. It does not say that p ; q = p ;˘ q. Furthermore, when a new law is added to the algebra, it is necessary to check whether the new law is invariant to time reversal or not. If not, the interchange must not be applied to an equation whose proof relies on the new law.

The partial ordering of refinement is defined as follows (see e.g. [HHJ+87]):

p ⊆ q   def=   q = p ∨ q

The properties of disjunction ensure that this is a partial order.

The comparison p ⊆ q when applied to specifications means logical implication. So p is a valid design for a program that implements a specification q. When applied to programs, p ⊆ q means that p is the same as q, or more deterministic. So p is a valid optimisation of q. Reduction of non-determinism may happen even during execution, when some step in the execution of q requires resolution of some of the non-determinism of q; this happens, for example, when two guards in a guarded command of Dijkstra are both true. If p is a program and q is a specification, it means that p is correct: all behaviours of p when executed satisfy the specification q. Refinement therefore plays a central role in specification, design, development, optimisation and execution of programs. It has the same central role in programming as logical implication does in mathematical proof. It is wonderful to use the same algebra for all of them.
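These definitions are easy to exercise in a small concrete model. In the sketch below (our illustration, not part of the paper), a program or specification is a finite set of strings, each string recording one possible execution trace; disjunction is union, sequential composition is pairwise concatenation, and refinement turns out to coincide with set inclusion:

```python
# Toy trace model (our illustration): a program or specification is a
# finite set of strings, each string one possible execution trace.
skip = frozenset({""})           # the program that does nothing
bot = frozenset()                # bottom: no executions at all

def choice(p, q):                # non-deterministic choice (v)
    return p | q

def seq(p, q):                   # sequential composition (;)
    return frozenset(a + b for a in p for b in q)

def refines(p, q):               # p refines q, defined as  q = p v q
    return choice(p, q) == q

p = frozenset({"a"})
q = frozenset({"a", "b"})        # q is more non-deterministic than p

assert refines(p, q)             # p is a valid optimisation of q
assert not refines(q, p)
assert refines(p, p)             # the relation is reflexive...
assert refines(bot, p)           # ...with bottom below everything
```

Under this reading, ⊥ is the empty set and skip contains only the empty trace, matching the units and zeros of Table 1.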

1.1. Distribution

The laws of Table 1 deal individually with each operator of the algebra. There are more interesting laws of algebra that link two or more operators. Distribution laws are the best known examples.

1. Sequential composition (and concurrent composition) distributes through choice in both directions. Thus if any component of a sequential program is non-deterministic over a number of cases, it is sufficient (and necessary) to reason about the whole program by considering each case separately. Note that the pair of distribution laws together, one for each direction, respect the reversal of time. One of them would not be enough.

2. Any operator distributive through (∨) is monotonic. This permits a process of collaborative program development by divide-and-conquer (stepwise decomposition).

3. Conjunction is related to disjunction by the equivalence

q = p ∨ q  ⇔  p = p ∧ q


The most interesting law for concurrency is an exchange law, similar to the exchange law of Category Theory. It relates two operators, namely sequential and concurrent composition. When these operators appear as the major and minor connectives of an expression, their roles can be interchanged:

4. (p ‖ q) ; (p ′ ‖ q ′) ⊆ (p ; p ′) ‖ (q ; q ′)

The law expresses the fact that concurrency introduces non-determinism; in fact it introduces more non-determinism when it is the major operator, as on the right hand side of the law. This law respects time reversal.

The above laws are satisfied by an implementation of concurrency as interleaving of strings, described (say) by a regular language. Interleaving is obviously commutative and associative and has the empty string as its unit. The right hand side of the exchange law describes the interleaving of two operands, each of which is a sequential composition. The left hand side describes only those interleavings in which the two semicolons on the right hand side are synchronised.

The shape of the interchange law suggests a divide-and-conquer implementation of the interleaving of two strings: split each string into two non-empty substrings; by two (perhaps concurrent!) recursive calls, generate an interleaving of the two left substrings and an interleaving of the two right substrings; return the sequential composition of the two results.
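The divide-and-conquer reading can be sketched directly in the string model (our code, using the simplest split: one head symbol against the rest of the string). The final assertions check the exchange law in this model, including the fact that the right hand side is strictly more non-deterministic:

```python
# Interleavings of two strings (our sketch): every interleaving begins
# either with the head of x or with the head of y.
def interleave(x, y):
    if not x: return {y}
    if not y: return {x}
    return ({x[0] + s for s in interleave(x[1:], y)} |
            {y[0] + s for s in interleave(x, y[1:])})

def par(p, q):                   # concurrency (||) on trace sets
    return frozenset(s for a in p for b in q for s in interleave(a, b))

def seq(p, q):                   # sequential composition (;)
    return frozenset(a + b for a in p for b in q)

p, p2 = frozenset({"a"}), frozenset({"b"})
q, q2 = frozenset({"c"}), frozenset({"d"})

# Exchange law: (p || q) ; (p2 || q2) is refined by (p ; p2) || (q ; q2);
# refinement in this model is set inclusion.
lhs = seq(par(p, q), par(p2, q2))
rhs = par(seq(p, p2), seq(q, q2))
assert lhs <= rhs
assert lhs != rhs    # the major (||) on the right admits more interleavings
```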

The separating conjunction of separation logic [ORY01, O’H04] also satisfies all our laws for concurrency. The frame law of separation logic depends on the ‘small’ exchange law (p ‖ q) ; r ⊆ p ‖ (q ; r), which can be proved from the full exchange law by substituting skip for p′.

Conjunction is another operator that exchanges with sequential composition. The proof relies only on the property that (;) is monotonic and that the result of (∧) is stronger than both its operands. In fact, exchange laws abound, and each expresses an important insight.

Because of its incrementality, it is easy to extend the algebra with an iteration construct. This unary operator is typically written as a postfix Kleene star: p∗ describes the iteration where p is performed zero or more times in sequence. Iteration interacts with the other operators according to four laws from Kleene algebra [Koz94]:

5. skip ∨ (p ; p∗) ⊆ p∗

6. p ∨ (q ; r ) ⊆ r ⇒ q∗ ; p ⊆ r

7. skip ∨ (p∗ ; p) ⊆ p∗

8. p ∨ (r ; q) ⊆ r ⇒ p ; q∗ ⊆ r

The first law says that p∗ has more behaviours than skip, and more behaviours than p ; p∗. A valid implementation of an iteration can therefore start by unfolding it into two cases, one of which does no iterations, and the other of which does at least one iteration. The second law implies that iteration is the least solution of the first inequation. It permits inductive proofs of the properties of an iteration. The last two laws simply swap the arguments of (;) and thus preserve time reversal. In fact the third law need not be postulated; it can be derived as a theorem.
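In a trace-set model of strings (our illustration), iteration can only be approximated up to a bound, but the unfolding law 5 is then directly visible:

```python
# Bounded approximation of iteration p* in a trace-set model (our sketch):
# all traces made of at most n repetitions of p.
def seq(p, q):                   # sequential composition (;)
    return frozenset(a + b for a in p for b in q)

skip = frozenset({""})

def star(p, n=4):
    result, power = set(skip), skip
    for _ in range(n):           # accumulate p^1, p^2, ..., p^n
        power = seq(power, p)
        result |= power
    return frozenset(result)

p = frozenset({"a"})
ps = star(p)                     # {"", "a", "aa", "aaa", "aaaa"}

# Law 5 (unfolding), checked up to the bound: skip v (p ; p*) is
# refined by p*; we use one fewer unfolding inside to stay within it.
assert skip <= ps
assert seq(p, star(p, 3)) <= ps
```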

2. Hoare triples

We define the familiar Hoare triple in terms of sequential composition and refinement (as in [HWO09]):

p {q} r   def=   p ; q ⊆ r

We will refer to p as the pre of the triple, and r as the post, and q as the prog. The definition says that if pre describes what has happened before prog starts, then post describes what has happened when prog has finished. In conventional presentations of Hoare logic (e.g. [Hoa69]), the pre and post are required to be predicates describing a single state of the executing computer before and after the execution of prog respectively. This is just a special case of our more general definition, because a single-state predicate may be regarded as a description of all executions which leave the computer in a state that satisfies the predicate when they terminate. Such an interpretation preserves the intuitive meaning of the triple: in every state that satisfies pre, every behaviour of prog will establish post if and when it terminates.
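The definition can be exercised in a trace-set model (our illustration, with sets of strings as programs and set inclusion as refinement); the hypothetical traces below are chosen only for the example:

```python
# The algebraic Hoare triple in a trace-set model (our sketch):
# p {q} r  means  p ; q ⊆ r, with refinement as set inclusion.
def seq(p, q):                   # sequential composition (;)
    return frozenset(a + b for a in p for b in q)

def triple(pre, prog, post):
    return seq(pre, prog) <= post

pre  = frozenset({"init"})
prog = frozenset({".step", ".alt"})               # a non-deterministic prog
post = frozenset({"init.step", "init.alt", "other"})

assert triple(pre, prog, post)
# Rule of consequence: weakening the post preserves the triple.
assert triple(pre, prog, post | {"extra"})
```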


This definition allows Hoare’s familiar laws for the operators of structured programming to be derived from the algebra. To contribute to the insight, the laws below are quoted in the new algebraic notation. After all, it is no longer than the Hoare triple.

1. Consequence (post):

p ; q ⊆ r ′ & r ′ ⊆ r ⇒ p ; q ⊆ r

The post of a triple can be weakened. The law depends only on transitivity of ⊆.

2. Consequence (pre):

p ⊆ p ′ & p ′ ; q ⊆ r ⇒ p ; q ⊆ r

The other law of consequence permits the pre to be strengthened. It is nothing but a restatement of monotonicity of semicolon.

3. For skip:

p ; skip ⊆ p

It depends on the fact that skip is the right unit of (;).

4. Sequential composition:

p ; q ⊆ r ′ & r ′ ; q ′ ⊆ r ⇒ p ; (q ; q ′) ⊆ r

This rule depends on associativity and monotonicity of sequential composition.

5. Non-determinism:

p ; q ⊆ r & p ; q ′ ⊆ r ⇒ p ; (q ∨ q ′) ⊆ r

This rule states that if both q and q′ have the same pre and post, then so does their disjunction. It depends only on rightward distribution of (;) through (∨).

6. Iteration:

p ; q ⊆ p ⇒ p ; q∗ ⊆ p

Here p is the iteration invariant that q preserves. The rule is a consequence of law 8.

7. Concurrency:

p ; q ⊆ r & p ′ ; q ′ ⊆ r ′ ⇒ (p ‖ p ′) ; (q ‖ q ′) ⊆ r ‖ r ′

This is the law postulated for separating conjunction in separation logic. It states that the pre of q ‖ q′ is the concurrent composition of the separate pre’s of q and q′, and similarly the post. As a result, the proof of a concurrent composition is modular: it is sufficient to find a pre and post for its operands. The rule of concurrency can be derived directly from the algebraic exchange law, and vice versa.

8. Frame:

p ; q ⊆ r ⇒ (p ‖ f ) ; q ⊆ r ‖ f

The frame law of separation logic states that for any f, the pre and post of a triple remain valid when they are composed concurrently with f. This rule is derived by monotonicity and commutativity of (‖) from the small exchange law; and derivation of the law from the rule is trivial.

9. Disjunction:

p ; q ⊆ r & p ′ ; q ⊆ r ′ ⇒ (p ∨ p ′) ; q ⊆ r ∨ r ′

This rule depends on the distributivity of (;) through (∨).

10. Conjunction:

p ; q ⊆ r & p ′ ; q ′ ⊆ r ′ ⇒ (p ∧ p ′) ; (q ∧ q ′) ⊆ r ∧ r ′

The law for conjunction has the same form as the concurrency law, with (∧) replacing (‖). The conjunction law of Floyd is just a special case, which arises when q and q′ are the same variable.
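As a sanity check, the frame rule (8) holds in the string-interleaving model of concurrency (our illustration; the single-letter traces below are arbitrary):

```python
# Spot-checking the frame rule in a trace-set model (our sketch):
# p ; q ⊆ r  implies  (p || f) ; q ⊆ r || f.
def seq(p, q):
    return frozenset(a + b for a in p for b in q)

def interleave(x, y):
    if not x: return {y}
    if not y: return {x}
    return ({x[0] + s for s in interleave(x[1:], y)} |
            {y[0] + s for s in interleave(x, y[1:])})

def par(p, q):
    return frozenset(s for a in p for b in q for s in interleave(a, b))

p, q, f = frozenset({"a"}), frozenset({"b"}), frozenset({"f"})
r = seq(p, q)                      # choose r so the premise holds trivially

assert seq(p, q) <= r              # premise:    p ; q ⊆ r
assert seq(par(p, f), q) <= par(r, f)   # conclusion: (p || f) ; q ⊆ r || f
```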


By reversal of time, every triple provable from the algebra gives rise to another triple. For example, the frame rule translates to rule frame′:

p ; q ⊆ r ⇒ p ; (q ‖ f ) ⊆ r ‖ f

3. Concurrency in other calculi

3.1. An operational calculus

One way of reading the basic judgement p ; q ⊆ r is backwards. It says that one of the ways of executing the program r is to execute p first, leaving the execution of the remainder q to be executed after p. This is remarkably close to the meaning of the basic judgement of the Milner calculus of transitions [Mil80]. A transition can be defined in terms of the Hoare triple:

r −p→ q   def=   p ; q ⊆ r
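This definition can be spot-checked in a trace-set model (our illustration): the label p is a prefix of behaviour, and q is what remains of r after that prefix:

```python
# The Milner transition as a rotated Hoare triple (our sketch, in a
# trace-set model): r --p--> q  means  p ; q ⊆ r.
def seq(p, q):
    return frozenset(a + b for a in p for b in q)

def transition(r, p, q):
    return seq(p, q) <= r

r = frozenset({"ab", "ac"})
p = frozenset({"a"})
q = frozenset({"b", "c"})        # what remains of r after performing "a"

assert transition(r, p, q)       # one way to execute r: p first, then q
```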

According to the above interpretation of the Milner transition, it is the same as that of the Hoare triple, except for a rotation of its operands! A consequence of this discovery is that every algebraically valid rule of the Hoare calculus can be simply translated into an algebraically valid rule of the Milner calculus, and vice versa. For example, the rule frame′ translates to

r −p→ q   ⇒   r ‖ f −p→ q ‖ f

This is one of Milner’s concurrency rules, stating that an operand of (‖) may perform any of its possible first actions, independently of the other operand. Instantiating q with skip and simplifying, we get

r −p→ skip   ⇒   r ‖ f −p→ f

Another Milner concurrency rule is remarkably similar to the Hoare rule:

r −p→ q  &  r′ −p′→ q′   ⇒   r ‖ r′ −p‖p′→ q ‖ q′

It states that both operands can perform their first action concurrently, and the rest of the computation is the concurrent execution of the rest of each of the operands. This rule forms the basis of communication, which achieves a rendezvous among two concurrent processes and combines their compatible actions in a single internal action. For example, if p and p′ in the above rule are the CCS input and output actions a and ā respectively, then the internal action τ hides the successfully synchronised a ‖ ā in CCS’s familiar rule:

r −a→ q  &  r′ −ā→ q′   ⇒   r ‖ r′ −τ→ q ‖ q′

Other familiar operational rules that also hold as theorems include ones for prefixing, sequential composition, choice, and iteration:

r ; r′ −r→ r′

r −p→ q   ⇒   r ; r′ −p→ q ; r′

r −p→ q   ⇒   r ∨ r′ −p→ q

r∗ −skip→ skip

r∗ −skip→ r ; r∗

Applying time reversal to the Hoare rules and writing the results in Milner notation yields rules that are based on the top-level operator of the triple’s middle argument:

r −q′→ r′  &  r′ −q→ p   ⇒   r −q′ ; q→ p

r −q→ p  &  r −q′→ p   ⇒   r −q∨q′→ p

p −q→ p   ⇒   p −q∗→ p


Although they look the same as Hoare rules, these rules are dual (and not equivalent) to the counterparts in Hoare logic. In fact, the rules actually used by Milner in his definition of CCS are only a subset of those that are derivable from the algebra introduced here. Milner distinguishes atomic actions of a program (such as input, output and internal actions) from composite actions. He requires that the middle operand of his triple and the first operand of a sequential composition (its prefix) must be atomic. The last three rules are consequently rejected because they use composite actions in their conclusion judgements. Except for the third-last rule, all the other operational rules in this section still hold when the distributivity law p ; (q ∨ r) = (p ; q) ∨ (p ; r) is removed from the algebra. The third-last rule does not need full distribution, but only the weaker property that (;) is monotone in its second operand.

3.2. The adjoint calculi

A slight rephrasing of the basic judgement p ; q ⊆ r is that q is a program whose execution after p will lead to satisfaction of r. The weakest post-specification r/p is defined in [HHJ+87] as the weakest (specification of a) program q that has this property. Any other program that satisfies this property must be stronger. Let us introduce into our algebra a least upper bound operator for arbitrary sets of programs. We will strengthen the distribution law so that semicolon distributes through arbitrary least upper bounds. We can then define

r/p   def=   ∨ {q | p ; q ⊆ r}

The weakest post-specification can be identified with the specification statement of Back [Bac80, Bac81] and Morgan [Mor88, Mor94]. In Morgan’s work, the notation : [p, r] is used instead of r/p.

A direct consequence of the definition is the equivalence

q ⊆ r/p ⇔ p ; q ⊆ r
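On a finite universe of traces (our illustration; the universe below is arbitrary), r/p can be computed by brute force as the union of all q satisfying p ; q ⊆ r, and the Galois equivalence can then be verified exhaustively:

```python
# Weakest post-specification in a finite trace universe (our sketch).
from itertools import chain, combinations

def seq(p, q):                     # sequential composition (;)
    return frozenset(a + b for a in p for b in q)

universe = ["", "b", "c", "bb"]    # small, arbitrary universe of traces

def wps(r, p):                     # r/p, restricted to the universe
    return frozenset(t for t in universe
                     if seq(p, frozenset({t})) <= r)

p = frozenset({"a"})
r = frozenset({"ab", "ac"})
rp = wps(r, p)
assert rp == frozenset({"b", "c"})

def subsets(xs):
    return chain.from_iterable(combinations(xs, n)
                               for n in range(len(xs) + 1))

# The adjunction  q ⊆ r/p  <=>  p ; q ⊆ r,  checked for every q.
for qs in subsets(universe):
    q = frozenset(qs)
    assert (q <= rp) == (seq(p, q) <= r)
```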

Again, a fundamental judgement of the calculus is obtained by reordering the arguments of the same judgement of the previously treated calculi. For example, the concurrency rule for weakest post-specifications is translated from the rule of Hoare logic:

q ⊆ r/p & q ′ ⊆ r ′/p ′ ⇒ q ‖ q ′ ⊆ (r ‖ r ′)/(p ‖ p ′)

From this rule, we can immediately derive the algebraic law which relates (/) to (‖):

(r/p) ‖ (r ′/p ′) ⊆ (r ‖ r ′)/(p ‖ p ′)

It simply says that (/) and (‖) exchange.

Other laws which follow from the algebra are listed in [HHJ+87]. (/) distributes leftward through conjunctions, and has ⊤ as its left zero. It distributes rightward through disjunction, but (∨) changes into (∧), i.e. r/(p ∨ q) = (r/p) ∧ (r/q), and it has right unit skip. It is monotone in the first and anti-monotone in the second argument. All of these properties of Morgan’s post-specification can be translated by time reversal to the weakest pre-specification, defined in [HHJ+87] as a generalisation of Dijkstra’s weakest precondition [Dij76]. It is defined by

p\r   def=   ∨ {q | q ; p ⊆ r}

or more simply by

q ⊆ p\r ⇔ p ⊆ r/q

By time reversal, we know that (\) and (‖) also exchange.

All theorems of this paper have been formally checked with Isabelle/HOL [NWP02]. A proof script is available online [Pro11]. Most proofs were found automatically, without interaction, using the Sledgehammer tool [BN10]. For an introduction on algebraic reasoning with Isabelle and Sledgehammer, see [FSW11].


4. In praise of algebra

The merits of algebra as an aid to reasoning were recognised by Gottfried Leibniz in the eighteenth century. Inspired by his discovery of the infinitesimal calculus, he proposed that the validity of all mathematical proofs could be established by symbolic calculation based on algebraic foundations. In the nineteenth century, George Boole similarly formalised his Laws of Thought in terms of Boolean Algebra. Boolean algebra is now used as the basis of all reasoning, both human and mechanical, about the design, implementation, optimisation, verification and evolution of computer hardware circuits. Would it be too fanciful to suggest that an algebra of programming, and the tools which are based upon it, might be of similar benefit in software engineering?

In the early 20th century, the unifying power of algebra was critical in the exploration of the foundations of mathematics. For example, various branches of arithmetic have completely different foundations. Natural numbers are defined in terms of sets of sets, integers are defined as certain pairs of natural numbers, fractions as pairs of integers, real numbers as sets of fractions, etc. The arithmetic operations on each of these types of number inevitably have very different definitions. Fortunately, all the definitions have been shown to obey (nearly) the same algebraic laws. The differences between the various number systems are simply and clearly formalised by a choice between axioms that are satisfied by only one of them. So it is possible to reason and calculate with numbers, without necessarily specifying what type of number it is. This paper has used algebra to unify four independent (and even competing) calculi of programming: Hoare logic, operational semantics, the specification statement calculus, and weakest preconditions.

In the exploration of the theory of programming, the most useful property of algebra is its incrementality. Axioms for a new programming design pattern or a new programming language concept or structure can be easily introduced into an existing algebraically defined programming language. All the axioms of the existing language remain intact, and so do all the theorems already proved from them. Thus each innovation builds on previous work, and there is no need to start again from scratch.

Incrementality encourages an axiomatic style in the formalisation of programming calculi and programming languages. Each axiom may say very little about the properties of the operators that it contains. Further independent properties can always be added separately. Design decisions about the language can be made one property at a time. This contrasts with the more usual approach to formalisation, in which each operator is introduced by a formal definition or a rule, and only one such definition is allowed. This definition must therefore be sufficiently complex that all the desired properties of the operator can be deduced from it. Of course, when the complete algebra has been specified, there is no further need for incrementality. Then it is reasonable to give a definition of each operator of the algebra in terms of a model, and prove that the definitions satisfy the laws of the algebra.

Finally, when the time comes to put the algebra to practical use, we can enlist the assistance of proof tools, which are already extremely effective at algebraic reasoning. Even the educated general public understands algebra, and many scientists and engineers are good at algebraic calculation. Those with mathematical inclinations even appreciate the elegance of an algebraic proof.

The disadvantage of algebra is its high degree of abstraction. If you are interested in what you are talking about, the algebra will never tell you what it is! As a result, it is hard to tell when there are enough axioms, or whether there are maybe too many. There may even be so many axioms that every formula of the algebra can be proved equal to every other formula. That would not be a useful algebra, because there is only one value in the universe to which the algebra applies, and only one operator, namely the one which always delivers this only value as its result.

The final problem is that algebra will never tell you that a conjectured equation is false. For this reason, the algebraist will normally accompany the postulation of axioms by exploration of a wide range of disparate mathematical models which satisfy these axioms. The scientist or engineer can then choose a model, or construct a new one, that corresponds to some known reality in the real world. Again, the search for counterexamples to a conjecture can exploit the power of modern model checking tools.

An alternative, which is more congenial to the pure mathematician, is to prove a normal form theorem. A normal form is an expression which follows a particular pattern. A normal form theorem gives valid rules which enable every expression of the algebra to be transformed into just one of these normal forms. So if two expressions are reduced to different normal forms, there can be no proof that they are equal. The set of all normal forms is itself an ‘initial’ model of the algebra.

In summary, the study of algebra (together with its models) has made an immense contribution to the advancement of mathematics; and there is now a good prospect that it may assist in reasoning about computer programs, and in improving confidence in their correctness.


5. Conclusion

Our praise of algebra is supported by evidence drawn from many referenced resources, including doctoral theses [Bac78], technical reports [Bac80], proceedings of conferences [Bac79, HMSW09, HHM+11], scientific monographs [Dij76, Mil80], learned journals [HMSW11], letters [WHO09], and keynote addresses [O’H04]. Each of these publications was written (as this one is too) for a special occasion and for a special audience. The main contribution of this paper is to collect relevant material from all these sources, and to present it briefly, in a coherent, comprehensible and comprehensive manner, to broaden the insight and enlarge the understanding of the readership of a general scientific journal.

In comparison with its sources, the contributions of this paper are neither numerous nor deep. Indeed their triviality only strengthens the evidence in favour of algebra as an effective research tool. There are no original formulae. Again, the discovery that two well-known formulae are in fact the same is sometimes as valuable as the discovery of a new formula. It prevents duplication (and even potential errors) in the planning and implementation of a design automation suite of tools that use both the formulae.

The main original insight of the paper, which surprised and delighted its authors, was the discovery that all the laws satisfy the symmetry of time reversal. Since they also satisfy the order-reversal symmetry of lattices, symmetry provides a great many ‘theorems for free’. The authors were also pleased to find an algebraic proof that both the Dijkstra and the Back and Morgan calculi satisfy a version of the exchange law, and so does conjunction.

We conjecture that the main significance of the algebra reported in this paper is in the convergence of deductive and operational semantics of programming, and their extension to both sequential and concurrent programming. The two symmetries of the algebra show that each style of semantics is just a backwards and upside-down presentation of the other! Furthermore, the two kinds of semantics are consistent, in the sense that any model of the algebra is simultaneously a model of the deductive and of the operational rules for a programming language. This is the basis of a proof of correctness of an operational implementation of the language with respect to its deductive semantics (or vice versa). A traditional proof of the consistency of two complementary calculi can be more arduous. It often uses induction (on the structure of syntax) and co-induction (on the steps of the computation). Such proofs are notoriously fragile in face of extensions or variations in the features of the language. We hope that algebra will make it easier to build new languages with new features on the basis of the results already achieved by research on earlier languages.

Acknowledgments

We are grateful to the anonymous referees of this paper, who made many suggestions for its improvement.

References

[Bac78] Back R-J (1978) On the correctness of refinement steps in program development. PhD thesis, Åbo Akademi, Department of Computer Science, Helsinki, Finland. Report A-1978-4

[Bac79] Back R-J (1979) On the notion of correct refinement of programs. In: 5th Scandinavian logic symposium, Aalborg, Denmark. Aalborg University Press, Denmark

[Bac80] Back R-J (1980) Correctness preserving program refinements: proof theory and applications. Mathematical center tracts, vol 131. Mathematical Centre, Amsterdam

[Bac81] Back R-J (1981) Proving total correctness of nondeterministic programs in infinitary logic. Acta Informatica 15:233–249

[BN10] Böhme S, Nipkow T (2010) Sledgehammer: judgement day. In: Proceedings of the 5th international conference on automated reasoning, IJCAR’10. Springer, Berlin, pp 107–121

[Dij76] Dijkstra EW (1976) A discipline of programming. Prentice-Hall, Englewood Cliffs

[FSW11] Foster S, Struth G, Weber T (2011) Automated engineering of relational and algebraic methods in Isabelle/HOL. In: Proceedings of the 12th international conference on relational and algebraic methods in computer science, RAMICS’11. Springer, Berlin, pp 52–67

[HHJ+87] Hoare CAR, Hayes IJ, Jifeng He, Morgan CC, Roscoe AW, Sanders JW, Sorensen IH, Spivey JM, Sufrin BA (1987) Laws of programming. Commun ACM 30:672–686

[HHM+11] Hoare CAR, Hussain A, Möller B, O’Hearn P, Petersen R, Struth G (2011) On locality and the exchange law for concurrent processes. In: Joost-Pieter K, Barbara König (eds) CONCUR 2011 concurrency theory. Lecture Notes in Computer Science, vol 6901. Springer, Berlin, pp 250–264

[HMSW09] Hoare C, Möller B, Struth G, Wehrman I (2009) Concurrent Kleene algebra. In: Mario B, Gianluigi Z (eds) CONCUR 2009—concurrency theory. Lecture Notes in Computer Science, vol 5710. Springer, Berlin, pp 399–414


[HMSW11] Hoare T, Möller B, Struth G, Wehrman I (2011) Concurrent Kleene algebra and its foundations. J Logic Algebraic Program 80(6):266–296

[Hoa69] Hoare CAR (1969) An axiomatic basis for computer programming. Commun ACM 12:576–580

[HWO09] Hoare CAR, Wehrman I, O’Hearn PW (2009) Graphical models of separation logic. In: Manfred B, Wassiou S, Tony H (eds) Proceedings of the 2008 Marktoberdorf Summer School on engineering methods and tools for software safety and security. IOS Press, Amsterdam

[Koz94] Kozen D (1994) A completeness theorem for Kleene algebras and the algebra of regular events. Inf Comput 110:366–390

[Mil80] Milner R (1980) A calculus of communicating systems. Lecture Notes in Computer Science, vol 92. Springer, Berlin

[Mor88] Morgan C (1988) The specification statement. ACM Trans Program Lang Syst 10:403–419

[Mor94] Morgan C (1994) Programming from specifications, 2nd edn. Prentice Hall International (UK) Ltd., Hertfordshire

[NWP02] Nipkow T, Wenzel M, Paulson LC (2002) Isabelle/HOL: a proof assistant for higher-order logic. Springer, Berlin

[O’H04] O’Hearn P (2004) Resources, concurrency and local reasoning. In: Philippa G, Nobuko Y (eds) CONCUR 2004—concurrency theory. Lecture Notes in Computer Science, vol 3170. Springer, Berlin, pp 49–67

[ORY01] O’Hearn PW, Reynolds JC, Yang H (2001) Local reasoning about programs that alter data structures. In: CSL ’01, LNCS, vol 2142. Springer, Berlin, pp 1–19

[Pro11] Isabelle/HOL proofs (2011) http://se.inf.ethz.ch/people/vanstaden/InPraiseOfAlgebra.thy

[WHO09] Wehrman I, Hoare CAR, O’Hearn PW (2009) Graphical models of separation logic. Inf Process Lett 109(17):1001–1004

Received 20 December 2011
Accepted in revised form 25 May 2012 by Peter Höfner, Robert van Glabbeek and Ian Hayes
Published online 2 July 2012