# Interaction Systems

We very frequently have to deal with devices or people delivering a service according to a command-response protocol. The device sits there in some state, we give it a command and this makes it emit a response and move to a new state of some kind. We don't care how the device is built, we just want to use it, and so we only want to know what it can be used for. That is the service specification, and it is always the first and often the only thing we want to know. (There may be more than one way of providing the same service, or indeed of using one service to provide another.) How should we model such a service mathematically?

Examples are a simple web-accessible service, or `slave' computer devices like disks or paper-tape readers. You send the device commands and it gives you back results, according to a certain specification. (The specification needn't determine the response or the next state uniquely from the command.) Other examples are games.

My suggestion here is that a certain mathematical structure, defined below, is a useful model for such a service specification. The definition is as follows. An interaction system is an object of type:

```
(Sigma S : Set) S ->
(Sigma P : Set) P ->                 -- P for Positive
(Sigma N : Set) N ->                 -- N for Negative
S
```
[ I use the convention that the scope of a quantifier like `(Sigma x : T)...` extends as far to the right/down as possible. ] So an interaction system has the form:
```
(S, \s->
  (P(s), \p->
    (N(s,p), \n->
      r(s,p,n))))

where  S is a set                                      -- base set of states
       P(s) is a set, for s : S                        -- `positive' actions
       N(s,p) is a set, for s : S, p : P(s)            -- `negative' reactions
       r(s,p,n) : S, for s : S, p : P(s), n : N(s,p)   -- result of (p,n)
```
I call a pair `(p,n)` in a set
```
(Sigma p : P(s)) N(s,p)
```
an interaction. The concept of a `binary' interaction is more subtle than that of a `solitary' transition, as in interesting cases the values of the set-valued function `P`, or failing that of `N`, can be empty for certain arguments. (I am tempted to use the terms transaction and interaction synonymously.)
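As a concrete anchor, here is a minimal finite interaction system sketched in Python. The one-bit device, its state names, commands and responses are all invented for illustration; `interactions` enumerates the set `(Sigma p : P(s)) N(s,p)`:

```python
# A finite interaction system (S, P, N, r) as plain Python dicts.
# Hypothetical device: a one-bit store with states "off" and "on".
S = ["off", "on"]

# P(s): commands (`positive' actions) available in state s.
P = {"off": ["switch_on"], "on": ["switch_off", "query"]}

# N(s, p): possible responses (`negative' reactions) to command p in state s.
N = {("off", "switch_on"): ["ok"],
     ("on", "switch_off"): ["ok"],
     ("on", "query"): ["lit"]}

# r(s, p, n): the state resulting from the interaction (p, n).
r = {("off", "switch_on", "ok"): "on",
     ("on", "switch_off", "ok"): "off",
     ("on", "query", "lit"): "on"}

def interactions(s):
    """All interactions (p, n) available in state s: (Sigma p : P(s)) N(s, p)."""
    return [(p, n) for p in P[s] for n in N[(s, p)]]

print(interactions("on"))   # [('switch_off', 'ok'), ('query', 'lit')]
```

Note that `P["off"]` could just as well be empty: a state with no legal commands is exactly the degenerate case mentioned above.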

Now what has this mathematical structure got to do with services that can be obtained by use of a device or a server of some kind? Any such service determines a triple: first a state-space `S`, second a state predicate `P` that holds where there is evidence (that as it were could be brought into a court of law) that a request has been made, third a function that assigns to any state for which a request has been properly issued a family of states, namely those which might become the new state of the service when the request has been executed. One can think of the new states as given by a family

```
{ { r(s,p,n) | n : N s p } | p : P s }
```
indexed by:
• The prior state `s : S`
• Evidence `p : P s` that a request has been properly made
• Evidence `n : N s p` that a response has been properly returned.

A problem here is that this is not a very abstract notion of interaction, as it is completely explicit about the codes or tokens of proper issue and commit that are exchanged. It may be possible to describe the service offered by a device using many different state spaces. In general, the `real' state space (or the most abstract description) is some kind of quotient. The `best' (least cluttered) state spaces are those in which distinct states are always behaviourally distinguishable. It may be that one should call this notion a presentation or description of a service.

A notation that seems to aid legibility is to write the result state of an interaction using square brackets, so that `s[p/n]` means `r(s,p,n)`. Usually one can tell which result function is meant from the context. When necessary, one can use decorations such as `...[...]'` and `...[...]*`, to distinguish the result functions of different interaction systems.

In terms of the notion of family, one can define an interaction system to be a coalgebra for the endofunctor `Fam^2` on the superlarge category of types. (By `Fam^2` I mean the composition of `Fam` with itself.) Such a coalgebra is a pair

```
S : Type, f : S -> Fam^2 S
```
In contrast, a transition system (which can be thought of as a "degenerate" one-sided interaction system) is a coalgebra for `Fam`.

It is also interesting to consider structures of the following form

```
S : Type, f : S -> Fam (Pow S)
```
Here the function `f` can be decomposed into a "specification" structure:
```
pre  : Pow S
post : (s : S) -> pre s -> Pow S
```
Together, these determine a monotone predicate transformer
```
[pre,post] : Pow S -> Pow S
[pre,post] P = { s : S | (exists x : pre s) post s x \subseteq P }
```
This lets the user of a program know that to bring about a goal predicate `P`, it is enough to start running the program in a state in which ` [pre,post] P ` holds. Then we know that `pre` holds (the program terminates), and also that no matter what proof is used that `pre` holds, the predicate `P` holds in any terminal state. The predicate transformer is a specification (of a batch program, state transition, or action) expressed in weakest precondition form. A specification need not be feasible. An example is the "miraculous" specification `[True,False]`, which asks for a program that terminates in all circumstances, and brings about the impossible. It is useful when developing a program to say "magic happens here". An example at the other extreme (one which is entirely useless) is `[False,...]` (it does not matter what the postcondition is), which is never obliged to terminate. This is an abort. It is useful when developing a program to express "this can't happen".
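Over a finite state set the transformer `[pre,post]` can be computed directly. The states, proof tokens and postconditions below are hypothetical; predicates are Python sets of states:

```python
# The predicate transformer [pre,post] over a finite state set.
S = {0, 1, 2}

# pre maps a state to its set of "proofs"; state 2 has none (an abort there).
pre  = {0: {"go"}, 1: {"go"}, 2: set()}
post = {(0, "go"): {1}, (1, "go"): {1, 2}}

def wp(goal):
    """[pre,post] goal = { s | exists x in pre(s). post(s,x) subset of goal }."""
    return {s for s in S if any(post[(s, x)] <= goal for x in pre[s])}

print(wp({1, 2}))   # {0, 1}
print(wp(set()))    # set(): this (non-miraculous) spec never achieves the impossible
```

State 2 behaves like the abort `[False,...]`: with no proof of the precondition, it lies outside `wp(goal)` for every goal.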

One can define a `refinement' order between specification structures.

```
[pre,post] LEQ [pre',post']
  =  exists f : pre \subseteq pre' .
       all s : S, x : pre s .
         post' s (f x) \subseteq post s x
```
If two specification structures are related by `LEQ`, then the predicate transformer induced by the first is pointwise included in that induced by the second. (The converse does not seem to hold.)
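The inclusion can be checked exhaustively on a small example. The two specification structures below are invented, related by `LEQ` with the embedding `f` taken to be the identity:

```python
from itertools import combinations

S = {0, 1}

# Two hypothetical specification structures, with LEQ witnessed by the
# identity embedding f : pre1 s -> pre2 s.
pre1  = {0: {"a"}, 1: set()}
post1 = {(0, "a"): {0, 1}}

pre2  = {0: {"a"}, 1: {"a"}}
post2 = {(0, "a"): {1}, (1, "a"): {0}}   # post2 s (f x) subset of post1 s x

def wp(pre, post, goal):
    return {s for s in S if any(post[(s, x)] <= goal for x in pre[s])}

# LEQ holds, so the first transformer is pointwise included in the second.
goals = [set(c) for k in range(3) for c in combinations(sorted(S), k)]
assert all(wp(pre1, post1, g) <= wp(pre2, post2, g) for g in goals)
print("refinement implies pointwise inclusion on this example")
```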

This approach is related to some important topics:

• The refinement calculus. The specification statement comes from here. Back and von Wright. Morgan. Morris.
• Cover relations in formal topology. (One thinks of the states as neighbourhoods, or concrete states of knowledge achieved by direct observation.) The cover relation is augmented by a positivity or consistency predicate (of states).
• Normalisation models. Here there is a predicate of computable type expressions, and for each such a predicate of computable expressions with that type: this has the same form as a specification structure.

It seems a reasonable use of the words `system' and `structure' to refer to a function `f : S -> Fam^2 S` as an interaction structure on a set `S`. So an interaction system is a set together with an interaction structure on it. The other kind of system `f : S -> Fam (Pow S)` I shall call a predicate based interaction system (for want of a better term).

Interaction systems are a nice model for `system calls' across the interface between a program and the operating system it runs over (or perhaps under, or despite, ...). Think of `P(s)` as the set of calls legal in state `s`, of `N(s,p)` as the set of responses or values that might be returned, and of `s[p/n]` as the next state of the interface.

A predicate based interaction system `[pre,post]` is interpreted in the following way. The precondition can be thought of as giving the set of calls or entries that are legal in a given state, while, depending on the state `s : S` and call `x : pre s`, the postcondition `post s x : Pow S` assigns to each state `s'` the proposition that `s'` is a possible legal outcome of that call.

Kent Petersson and Dan Synek devised a certain kind of tree when thinking about how to model formal grammars in type theory. Given a quadruple `(A,B,C,d)`, where

```
A : Set
B : A -> Set
C : (Pi a : A) B a -> Set
d : (Pi a : A, b : B a)C a b -> A
```
the Petersson-Synek tree sets with respect to this quadruple are the least family `(A,T)`, where
```
T(a : A) = (Sigma b : B(a))(Pi c : C(a,b))T(d(a,b,c)) : Set
```
In terms of formal grammars, `A` is the set of non-terminals, `B(a)` for a non-terminal `a` is the set of different forms a string matching `a` may take, and for each such form `b`, `C(a,b)` is the set of components of such a form, with the non-terminal to be matched by sub-component `c` given as `d(a,b,c)`. There is no cardinality restriction on any of these sets. The set `T(a)` is the set of parse trees for strings matching `a`.
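A finite sketch in Python, for a hypothetical quadruple `(A,B,C,d)` describing a one-nonterminal expression grammar; `trees` enumerates the Petersson-Synek trees of bounded height:

```python
# A tiny quadruple (A, B, C, d): a nonterminal E matches either a
# literal (no components) or a sum of two Es.  All names are invented.
A = ["E"]
B = {"E": ["lit", "plus"]}                      # forms a string matching E may take
C = {("E", "lit"): [], ("E", "plus"): [0, 1]}   # components of each form
d = {("E", "plus", 0): "E", ("E", "plus", 1): "E"}

def trees(a, depth):
    """Parse trees T(a) of height < depth, as pairs (b, list of subtrees)."""
    if depth == 0:
        return []
    out = []
    for b in B[a]:
        subs = [trees(d[(a, b, c)], depth - 1) for c in C[(a, b)]]
        # all ways of choosing one subtree per component
        choices = [[]]
        for options in subs:
            choices = [ch + [t] for ch in choices for t in options]
        out += [(b, ch) for ch in choices]
    return out

print(len(trees("E", 1)))   # 1: just ("lit", [])
print(len(trees("E", 2)))   # 2: "lit", and "plus" of two "lit"s
```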

It surprises me that the quadruple structure is so closely connected with inductive definitions.

The structure is rich with applications, as indicated by the following table:

```
s : S         | p : P(s)    | n : N(s,p) | s[p/n] : S
--------------+-------------+------------+------------------
sort          | constructor | selector   | component sort
statement     | inference   | premise    | premise statement
neighbourhood | partition   | part       | new neighbourhood
game          | attack      | defence    | new state
interrogation | question    | answer     | new state
interface     | call        | return     | new state
universe      | observation | reading    | new state
knowledge     | experience  | result     | new state
dialogue      | thesis      | antithesis | synthesis
```
The second line of this table leads to a way of modelling Peter Aczel's rule sets: a rule set is a function which assigns to each `s` of a set `S` of statements a family of families of statements - those from which `s` may be immediately inferred, in exactly one inference step. (Aczel's own definition of a rule set (ref?) is as a subset of `S * Pow(S)`, or equivalently, a function `S -> Pow^2(S)`.)
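The rule-set reading can be run directly on a finite example. The statements and rules below are invented; the derivable statements are the least fixed point of one-step inference:

```python
# A hypothetical rule set presented as an interaction structure:
# P(s) indexes the rules concluding s, N(s, p) lists the premises of rule p.
S = ["A", "B", "C", "D"]

P = {"A": ["axiom"], "B": ["from_A"], "C": ["from_BD"], "D": []}
N = {("A", "axiom"): [], ("B", "from_A"): ["A"], ("C", "from_BD"): ["B", "D"]}

def step(X):
    """Statements inferable in one step from premises in X."""
    return {s for s in S
            if any(all(prem in X for prem in N[(s, p)]) for p in P[s])}

derivable = set()
while step(derivable) != derivable:     # iterate the monotone operator to its lfp
    derivable = step(derivable)

print(sorted(derivable))   # ['A', 'B']: C needs the underivable D
```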

# Predicate transformers associated with interaction systems

Given an interaction structure `Phi(s : S) = (P(s), \p -> (N(s,p), \n -> s[p/n])) : Fam^2 S` on a set `S`, there is a dual pair of predicate transformers that seem to be of great interest:
• `Phi^{a}(X,s) = (Sigma p : P(s))(Pi n : N(s,p))X(s[p/n])`
• `Phi^{d}(X,s) = (Pi p : P(s))(Sigma n : N(s,p))X(s[p/n])`
Note that both are monotone. The least fixed point of the first is the predicate which assigns to a state `s` the set of Petersson-Synek parse trees for that state (or in terms of rule sets, the closed derivation trees for that statement). The predicate transformers are mutually dual in the sense that
```
X = - Y  <=>  Phi^{a}(X) = - Phi^{d}(Y)
```
holds classically, where `- X` is the complement of predicate `X`, and `=` stands for extensional equality. (Just push minus through the quantifiers according to de Morgan's laws.)
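On a finite system both transformers, and the classical duality, can be checked by brute force. The two-state device below is hypothetical:

```python
from itertools import combinations

# Angelic and demonic transformers of a finite interaction structure.
S = {"off", "on"}
P = {"off": ["switch_on"], "on": ["switch_off", "query"]}
N = {("off", "switch_on"): ["ok"],
     ("on", "switch_off"): ["ok"],
     ("on", "query"): ["lit", "dim"]}
r = {("off", "switch_on", "ok"): "on",
     ("on", "switch_off", "ok"): "off",
     ("on", "query", "lit"): "on",
     ("on", "query", "dim"): "on"}

def angel(X):   # Phi^{a}(X,s) = (Sigma p)(Pi n) X(s[p/n])
    return {s for s in S
            if any(all(r[(s, p, n)] in X for n in N[(s, p)]) for p in P[s])}

def demon(X):   # Phi^{d}(X,s) = (Pi p)(Sigma n) X(s[p/n])
    return {s for s in S
            if all(any(r[(s, p, n)] in X for n in N[(s, p)]) for p in P[s])}

# Classical duality: angel(X) is the complement of demon(complement of X).
for k in range(3):
    for c in combinations(sorted(S), k):
        X = set(c)
        assert angel(X) == S - demon(S - X)
print("duality checked on all predicates")
```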

If `theta` is a monotone predicate transformer, let `theta*` be the predicate transformer (it exists, by Knaster-Tarski, or the grace of god) which assigns to a predicate `X` the least extension of `X` which is a fixed point of `theta`. A slightly curious fact is that if `Phi` is an interaction system, then `(Phi^{a})*(X)` can be expressed merely in terms of `(Phi^{a})*(True)`, where `True` is the state predicate that assigns to every state the standard singleton. Let `Phi` be given in the form `Phi(s : S) = (P(s), \p -> (N(s,p), \n -> s[p/n])) : Fam^2 S`; then define

```
P* : S -> Set
P*(s) = 1 + (Sigma p : P(s))(Pi n : N(s,p))P*(s[p/n])
[ = (Phi^{a})*(True) ]

N* : (Pi s : S)P*(s) -> Set
N*(s,inl 0) = 1
N*(s,inr (p,f)) = (Sigma n : N(s,p))N*(s[p/n],f(n))

\s,ps,ns->s[ps/ns]* : (Pi s : S, ps : P*(s))N*(s,ps) -> S
s[inl 0 / 0]* = s
s[inr (p,f) / (n,ns)]* = s[p/n][f(n)/ns]*
```

It can be checked that `(Phi^{a})* = Psi^{a}`, where `Psi(s : S) = (P*(s), \ps -> (N*(s,ps), \ns -> s[ps/ns]*))`. In terms of rule sets, what we have here is (`P*`) the inductive definition of the (possibly open) derivation trees for a statement, (`N*`) the inductive definition of the set of paths (or `escape routes') which lead (from the conclusion) to a hypothesis in such a tree, and for each tree `p` and path `n` to a hypothesis the statement `s[p/n]*` which is the content of that hypothesis. (I count the empty tree as a derivation.)
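On a finite system the closure `(Phi^{a})*` applied to `X` can be computed by iterating `Y |-> X + Phi^{a}(Y)` up to its least fixed point: the states from which the angel can drive the device into `X` in finitely many interactions. The chain of states below is invented:

```python
# Closure of the angelic transformer on a finite, hypothetical system:
# a chain 0 -> 1 -> 2 -> 2, with state 3 having no moves at all.
S = {0, 1, 2, 3}
P = {0: ["step"], 1: ["step"], 2: ["step"], 3: []}
N = {(0, "step"): ["n"], (1, "step"): ["n"], (2, "step"): ["n"]}
r = {(0, "step", "n"): 1, (1, "step", "n"): 2, (2, "step", "n"): 2}

def angel(X):
    return {s for s in S
            if any(all(r[(s, p, n)] in X for n in N[(s, p)]) for p in P[s])}

def star(X):
    """(Phi^a)*(X): iterate Y |-> X u angel(Y) to the least fixed point."""
    Y = set()
    while X | angel(Y) != Y:
        Y = X | angel(Y)
    return Y

print(star({2}))   # {0, 1, 2}: state 3 can never reach X
```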

# Non-deterministic commands and interaction structures

The closure construction above can be teased apart into simpler steps, which reveals something about the connection between interaction systems and non-deterministic commands. One can define the combinators `;`, `SKIP` and `[]` in such a way as to express `(Phi^{a})*` as the least fixed point of `X |-> (SKIP [] (Phi ; X))`, or if you like, as a loop (which is constrained in no other way than to be terminating).

The sequential composition `Phi = (Phi1 ; Phi2)` of two interaction structures is defined by:

```
P(s : S) = (Phi1^{a})(P2,s)
         -- = (Sigma p1 : P1(s))(Pi n1 : N1(s,p1))P2(s[p1/n1])
N(s : S, (p1,f) : P(s))
         = (Sigma n1 : N1(s,p1))N2(s[p1/n1],f(n1))
s[(p1,f)/(n1,n2)] = s[p1/n1][f(n1)/n2]
```
In software terms, this is known as `function shipping'. Instead of sending a command, and waiting for the response before sending another, we send not just the first command, but also a function to apply to the response, so that the server can figure out the next command for itself, saving a round-trip of messages.
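A finite sketch of function shipping: a composite command pairs a first command with a function (here a finite table) from first responses to second commands. The three-state counter is invented, and the assertion checks `(Phi ; Phi)^{a} X = Phi^{a}(Phi^{a} X)` on it:

```python
from itertools import product

# A hypothetical saturating counter, composed with itself.
S = {0, 1, 2}
P = {0: ["inc"], 1: ["inc"], 2: ["inc"]}
N = {(s, "inc"): ["ok"] for s in S}
r = {(s, "inc", "ok"): min(s + 1, 2) for s in S}

def angel(P, N, r, X):
    return {s for s in S
            if any(all(r[(s, p, n)] in X for n in N[(s, p)]) for p in P[s])}

# Composite commands: (p1, f) with f a table sending each n1 to a command p2.
P2 = {s: [(p1, tuple(zip(N[(s, p1)], ch)))
          for p1 in P[s]
          for ch in product(*[P[r[(s, p1, n1)]] for n1 in N[(s, p1)]])]
      for s in S}
N2 = {(s, pf): [(n1, n2) for n1 in N[(s, pf[0])]
                for n2 in N[(r[(s, pf[0], n1)], dict(pf[1])[n1])]]
      for s in S for pf in P2[s]}
r2 = {(s, pf, (n1, n2)): r[(r[(s, pf[0], n1)], dict(pf[1])[n1], n2)]
      for s in S for pf in P2[s] for (n1, n2) in N2[(s, pf)]}

# (Phi ; Phi)^a X  =  Phi^a (Phi^a X)
X = {2}
assert angel(P2, N2, r2, X) == angel(P, N, r, angel(P, N, r, X))
print("composition law checked")
```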

The `SKIP`, or "no op", instruction is given (for example) by `P(s) = N(s,p) = { 0 }, s[0/0] = s`. Interesting variants of `SKIP` are obtained if one is given a state-predicate `X`. Two are `ASSERT(X)` (for which `P(s) = X(s)`) and `ASSUME(X)` (for which `N(s,0) = X(s)`).

The alternative `Phi = Phi1 [] Phi2` is defined by:

```
P(s : S) = P1(s) + P2(s)
N(s : S, inl(p1)) = N1(s,p1)
N(s : S, inr(p2)) = N2(s,p2)
s[inl(p1)/n1] = s[p1/n1]
s[inr(p2)/n2] = s[p2/n2]
```
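On a finite example one can check that the angel of `Phi1 [] Phi2` is the pointwise union of the two angels. Both one-command systems below are invented:

```python
# Phi1 [] Phi2 on hypothetical systems: tag each command with its origin.
S = {0, 1}
P1, N1, r1 = {0: ["a"], 1: []}, {(0, "a"): ["n"]}, {(0, "a", "n"): 1}
P2, N2, r2 = {0: [], 1: ["b"]}, {(1, "b"): ["n"]}, {(1, "b", "n"): 0}

def angel(P, N, r, X):
    return {s for s in S
            if any(all(r[(s, p, n)] in X for n in N[(s, p)]) for p in P[s])}

P = {s: [("l", p) for p in P1[s]] + [("r", p) for p in P2[s]] for s in S}
N = {(s, ("l", p)): N1[(s, p)] for s in S for p in P1[s]}
N.update({(s, ("r", p)): N2[(s, p)] for s in S for p in P2[s]})
r = {(s, ("l", p), n): r1[(s, p, n)] for s in S for p in P1[s] for n in N1[(s, p)]}
r.update({(s, ("r", p), n): r2[(s, p, n)] for s in S for p in P2[s] for n in N2[(s, p)]})

# (Phi1 [] Phi2)^a is the pointwise union of the two angels.
for X in [set(), {0}, {1}, {0, 1}]:
    assert angel(P, N, r, X) == angel(P1, N1, r1, X) | angel(P2, N2, r2, X)
print("Phi1 [] Phi2 is the union of the angels")
```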
There is another (more demon-friendly?) alternative construction which seems also quite interesting:
```
P(s : S) = P1(s) * P2(s)
N(s : S, (p1,p2))  = N1(s,p1) + N2(s,p2)
s[(p1,p2)/inl(n1)] = s[p1/n1]
etc.
```
Here the angel offers two moves, and the demon picks the one it prefers.
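In this construction the angelic transformer of the composite is the pointwise intersection of the two component angels, since the angel's pair of commands must survive whichever side the demon picks. A finite check on invented systems:

```python
# The demon-friendly alternative on hypothetical one-command systems:
# a command is a pair, a response is tagged with the side the demon chose.
S = {0, 1}
P1 = {0: ["a"], 1: ["a"]}
N1 = {(s, "a"): ["n"] for s in (0, 1)}
r1 = {(0, "a", "n"): 1, (1, "a", "n"): 1}
P2 = {0: ["b"], 1: ["b"]}
N2 = {(s, "b"): ["n"] for s in (0, 1)}
r2 = {(0, "b", "n"): 0, (1, "b", "n"): 1}

def angel(P, N, r, X):
    return {s for s in S
            if any(all(r[(s, p, n)] in X for n in N[(s, p)]) for p in P[s])}

P = {s: [(p1, p2) for p1 in P1[s] for p2 in P2[s]] for s in S}
N = {(s, (p1, p2)): [("l", n) for n in N1[(s, p1)]] + [("r", n) for n in N2[(s, p2)]]
     for s in S for (p1, p2) in P[s]}
r = {(s, (p1, p2), (side, n)): (r1[(s, p1, n)] if side == "l" else r2[(s, p2, n)])
     for s in S for (p1, p2) in P[s] for (side, n) in N[(s, (p1, p2))]}

for X in [set(), {0}, {1}, {0, 1}]:
    assert angel(P, N, r, X) == angel(P1, N1, r1, X) & angel(P2, N2, r2, X)
print("the demon's choice intersects the angels")
```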

I think that every construct on predicate transformers in Back and von Wright's refinement calculus can be represented as a construction on interaction systems.

# Relation transformers associated with pairs of interaction structures

Interaction systems induce certain relation transformers. Given a relation `R : S1 -> Pow(S2)` between the states of two interaction structures, we consider the following transformation `R'` of `R`:

```
R'(s1,s2) = (Pi p1 : P1(s1))(Sigma p2 : P2*(s2))
            (Pi n2 : N2*(s2,p2))(Sigma n1 : N1(s1,p1))
              R(s1[p1/n1], s2[p2/n2]*)

i.e.  R'(s) = \Cap_{p : P1(s)} (Delta (\Cup_{n : N1(s,p)} R(s[p/n])))
      where Delta = (Phi2^{a})*
```
One can think here of `s1` as an `abstract' or `virtual' state, which is simulated by the `concrete' or `real' state `s2`, at least for one interaction. To simulate a single abstract interaction, we may require zero or more, but (in a particular run) only finitely many concrete interactions. (They must run atomically.) The question arises of whether one can define relations of similarity and bisimilarity between states along these lines, perhaps using greatest fixed points of operators such as `(')`. It seems to me that when this question can be answered, the prospects are good for a constructive development of predicate transformer semantics for non-deterministic programs. This means that everything in it will have a computational meaning.
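A drastically simplified, one-step version of this transformer (each abstract interaction matched by exactly one concrete interaction, i.e. `Phi2` in place of `Phi2*`) can be iterated downwards from the total relation to a greatest fixed point on finite systems, giving a similarity relation. Both hypothetical devices and all names below are invented:

```python
from itertools import product

# One-step simulation between two finite, hypothetical interaction systems.
S1, S2 = {0, 1}, {"a", "b", "c"}
P1 = {0: ["go"], 1: []}
N1 = {(0, "go"): ["n"]}
r1 = {(0, "go", "n"): 1}

P2 = {"a": ["go"], "b": [], "c": []}
N2 = {("a", "go"): ["n"]}
r2 = {("a", "go", "n"): "b"}

def refine(R):
    """(Pi p1)(Sigma p2)(Pi n2)(Sigma n1) R(s1[p1/n1], s2[p2/n2])."""
    return {(s1, s2) for (s1, s2) in R
            if all(any(all(any((r1[(s1, p1, n1)], r2[(s2, p2, n2)]) in R
                               for n1 in N1[(s1, p1)])
                           for n2 in N2[(s2, p2)])
                       for p2 in P2[s2])
                   for p1 in P1[s1])}

R = set(product(S1, S2))          # start from the total relation
while refine(R) != R:             # iterate down to the greatest fixed point
    R = refine(R)

print(sorted(R))   # [(0, 'a'), (1, 'a'), (1, 'b'), (1, 'c')]
```

Here the halted abstract state 1 is simulated by every concrete state, while 0 is simulated only by "a", the one concrete state that can match its move.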

So far I have said what an interaction system is, but not what a morphism between them is. I currently feel that the most fruitful notion of a morphism from one interaction system to another (i.e. from a pointed coalgebra for `Fam^2` to another) is a relation between the state-spaces that is a post-fixed point for the above relation transformer, and that holds between the initial states. Intuitively, a morphism implements/emulates the virtual object `on top of' the real object.

Peter Hancock