```
(Sigma S : Set) S -> (Sigma T : Set) T -> S
```

[I use the convention that the scope of a quantifier like `(Sigma x : T)...` extends as far to the right/down as possible.] So a transition system has the generic form

```
(S, \s -> (T(s), \t -> p(s,t)))
```

where

```
S      is a set                        -- base set of states
T(s)   is a set,   for s : S           -- transitions from s
p(s,t) : S,        for s : S, t : T(s) -- destination of t from s
```

For the sake of readability, I often write the destination of a transition using square brackets, writing `s[t]` instead of `p(s,t)`. Usually one can tell which destination function is meant from the context.
In terms of the notion of family, one can define a transition system to be a coalgebra for the endofunctor `Fam` (on a suitable category of types):

```
S : Set, Tp : S -> Fam S
```

Compare this with a binary relation:

```
S : Set, R : S -> Pow S
```

where `Pow` is the contravariant powerset endofunctor.
It is useful to refer to a function `f : S -> Fam S` as a transition *structure* for a set `S`. So a transition system is a set together with a transition structure on it.
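For concreteness, here is a minimal Python sketch (not from the original notes, and assuming the state set and each transition set are finite): a transition structure is modelled as a dict sending each state to the list of its destinations, with the list index playing the role of the transition set T(s).

```python
# A finite transition structure on a set S, given as a dict mapping
# each state s to the list of destinations [s[t] for t in T(s)].
# The index t into the list plays the role of T(s).
trans = {
    "a": ["b", "c"],   # T("a") has two transitions
    "b": ["c"],        # T("b") has one
    "c": [],           # "c" is a dead state: T("c") is empty
}

def transitions(s):
    """T(s): the index set of transitions from s."""
    return range(len(trans[s]))

def dest(s, t):
    """p(s, t), written s[t] in the text: destination of t from s."""
    return trans[s][t]

print(dest("a", 1))  # -> c
```

The pair `(trans, dest)` is then a transition system in the sense above, with `trans` fixing the state set and `dest` the destination function.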

So what are morphisms between transition systems? Personally, I am not sure, and do not mind not being sure, so long as it isn't a permanent condition. All the formulations that have occurred to me have involved a notion of equality between states. For example it would be quite natural to define a map between `(S1,T1,p1)` and `(S2,T2,p2)` to be

- a function `f : S1 -> S2` (translating states to states),
- a function `g : (Pi s1 : S1) T1(s1) -> T2(f(s1))` (translating transitions to transitions),
- a proof that `(Pi s1 : S1, t1 : T1(s1)) f(s1[t1]) = f(s1)[g(s1,t1)]`.
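On a finite example this definition can be checked mechanically. The following sketch (the systems `sys1`, `sys2` and the maps `f`, `g` are hypothetical examples of my own devising) verifies the coherence condition:

```python
# Two small transition systems as dicts state -> list of destinations.
sys1 = {"x": ["y"], "y": []}
sys2 = {"u": ["v", "u"], "v": []}

f = {"x": "u", "y": "v"}   # states to states
g = {("x", 0): 0}          # g(s1, t1): transitions to transitions

# The coherence condition: f(s1[t1]) = f(s1)[g(s1, t1)]
ok = all(
    f[sys1[s1][t1]] == sys2[f[s1]][g[(s1, t1)]]
    for s1 in sys1 for t1 in range(len(sys1[s1]))
)
print(ok)  # -> True
```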

But what would it mean to assert that `state1 = state2` or `state1 /= state2` if the variables `state1, state2` refer to states of the physical
universe? In principle that is something you could never detect,
within the universe. I prefer to think about "states" as a
construction from transitions, or rather the actions by which
transitions are brought about. If the notion of equality between
states is subtle, then the notion of equality between transitions
from "the same" state is obviously even more subtle.
Transition systems are related conceptually to what I call interactive systems, which
`split' a transition into a positive and negative part. I don't
fully understand the relationship between transition systems and
interactive systems. One can in some senses "capture" an
interactive system with a pair of transition systems. Interactive
systems are closer to the concurrency-theorists' labelled
transition systems; namely, (in their unrooted form,) triples `(S,L,R)` where `S, L : Set` and `R : Pow (S*L*S)`. The differences lie in replacing the set `L` with a family of sets indexed by `S` (`L(s)` is the set of labels emerging from a given state), and the ternary relation `R` by a binary function mapping a state and a label leading from it to the family of states to which it can lead.
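As a finite illustration of this replacement (the states and labels here are hypothetical, not from the original), one can pass from the ternary relation R to the state-indexed presentation:

```python
# An unrooted labelled transition system (S, L, R), R <= S x L x S.
S = {"p", "q"}
L = {"a", "b"}
R = {("p", "a", "q"), ("p", "a", "p"), ("q", "b", "p")}

def labels(s):
    """L(s): the labels emerging from state s."""
    return {l for (s1, l, s2) in R if s1 == s}

def leads_to(s, l):
    """The family of states to which label l from s can lead."""
    return {s2 for (s1, l1, s2) in R if s1 == s and l1 == l}

print(sorted(labels("p")))         # -> ['a']
print(sorted(leads_to("p", "a")))  # -> ['p', 'q']
```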

A family of sets is a pair `(A,B)` where `A : Set` and `B : A -> Set`. So `Fam Set` is a type, or large set. Let us however ignore this "size consideration" for the moment and pretend that `F = Fam Set` is a set, and so can be the state space of a transition system. We can put the following transition structure on `F`.

The transitions from `(A,B)` are defined to have type

```
Tfam (A,B) = (Sigma a : A) B a -> A : Set
```

The destination of such a transition is given by

```
(A,B)[a,b] = (B a, B . b) : F
```

where `_ . _` refers to composition. To see that this type checks, remember that `a : A`, `b : B a -> A` and `B : A -> Set`, so `B a : Set` and `B . b : B a -> Set`.
A chain starting from `(A,B) : Fam Set` has the form

```
(A,B)
  |
  | (a,b) : (Sigma a : A) B a -> A
  V
(A',B') = (B a, B . b)
  |
  | (a',b') : (Sigma a' : B a) (B . b) a' -> B a
  V
(A'',B'') = ((B . b) a', B . b . b')
  |
  | (a'',b'') : (Sigma a'' : (B . b) a') (B . b . b') a'' -> (B . b) a'
  V
(A''',B''') = ((B . b . b') a'', B . b . b' . b'')
```

and so on. Each successive family of sets is an "internal refinement" or restriction of its predecessor.
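Assuming all the sets involved are finite, this transition structure on families can be sketched in Python, with a family modelled as a pair of a list and a dict (`step`, and the example family, are my own; they are not in the original notes):

```python
# A family (A, B) is modelled as (list A, dict B mapping each a in A
# to the list B a).  A transition from (A, B) is a pair (a, b) with
# a in A and b a dict from B a into A; the destination is (B a, B . b).
def step(fam, a, b):
    A, B = fam
    assert a in A and all(b[x] in A for x in B[a])
    return (B[a], {x: B[b[x]] for x in B[a]})

A = [0, 1, 2]
B = {0: [], 1: ["*"], 2: ["*", "**"]}  # e.g. finite sets of sizes 0, 1, 2

# Choose a = 2 and b sending both elements of B 2 back into A:
fam2 = step((A, B), 2, {"*": 1, "**": 0})
print(fam2)  # -> (['*', '**'], {'*': ['*'], '**': []})
```

Each application of `step` restricts the family to the part reachable through `b`, matching the "internal refinement" reading above.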

Now, what about size questions? I have defined a transition system to have a *set* of states. The situation appears to be this. For each fixed family of sets `(A,B)` we can put a transition structure on the set of "internal families" of `(A,B)`, ie the set

```
sfam (A,B) = (Sigma a : A) B a -> A
```

Namely, we take as transition sets

```
T : sfam (A,B) -> Set
T(a,b) = (Sigma c : B a) B (b c) -> B a
       = sfam (B a, B . b)
```

and as destinations

```
_[_] : (Pi (a,b) : sfam (A,B)) T (a,b) -> sfam (A,B)
(a,b)[c,d] = (b c, b . d)
  where a : A
        b : B a -> A
        c : B a
        d : B (b c) -> B a
```

In other words, we can replace talk of "sets" by talk of "small sets".

At the moment I don't understand this transition structure. Is it well-founded? Transitive? I need some examples.

Suppose `(A,B)` is the family of finite sets. ...

If `Phi(s) = (T(s), \t -> s[t])` is a transition structure on a set `S`, then we can form its reflexive and transitive closure `Phi*(s) = (T^{t}(s), \t -> s[t])` as follows.

```
T^{t} (s : S) = 1 + (Sigma t : T(s)) T^{t} (s[t])
s[0]      = s
s[(t,ts)] = s[t][ts]
```

Note how the notation here is actually slightly abusive: `s[t]` is the `t`'th immediate predecessor of `s` with respect to the original transition structure, while `s[t][ts]` is the `ts`'th of that predecessor with respect to the transition structure `T^{t}`. To be pedantic, I should write something like `s[(t,ts)]^{t} = s[t][ts]^{t}`.
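On a finite, well-founded system the closure can be computed directly. This sketch (the `trans` dict is a hypothetical example of mine) enumerates T^{t}(s) cons-style, with `0` for the empty path and `(t, ts)` for a cons, and evaluates the closure's destination function:

```python
# A finite, acyclic transition structure, states -> list of destinations.
trans = {"a": ["b", "c"], "b": ["c"], "c": []}

def closure_paths(s):
    """Enumerate T^{t}(s) = 1 + (Sigma t : T(s)) T^{t}(s[t])."""
    yield 0                                   # the empty path
    for t, s2 in enumerate(trans[s]):
        for ts in closure_paths(s2):
            yield (t, ts)                     # cons: first t, then ts

def dest(s, path):
    """s[path] with respect to the closure structure T^{t}."""
    while path != 0:
        t, path = path
        s = trans[s][t]
    return s

print(len(list(closure_paths("a"))))  # -> 4
print(dest("a", (0, (0, 0))))         # a -> b -> c, so -> c
```

The enumeration only terminates because the example is well-founded; on a system with an infinite transition sequence the generator would not be exhaustible.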
One can present reflexive and transitive closure as the least fixed point of a certain monotone operation on transition structures, as follows. (The order, and hence monotonicity will not be apparent yet.)

- Given a pair of transition structures `f, g` on a set `S`, define their sequential composition `(f;g)` by the clauses

  ```
  (f;g) = \s -> (T(s), \t -> s[t])
    where T(s)     = Sigma t : T_f(s) . T_g(s[t]_f)
          s[tf,tg] = (s[tf]_f)[tg]_g
  ```

- Define the `skip' transition structure by

  ```
  SKIP = \s -> (T(s), \t -> s[t])
    where T(s) = { 0 }
          s[0] = s
  ```

- Given a pair of transition structures `f, g` on a set `S`, define the `union' `f |_| g` by

  ```
  f |_| g = \s -> (T(s), \t -> s[t])
    where T(s)     = T_f(s) + T_g(s)   -- disjoint union
          s[inl t] = s[t]_f
          s[inr t] = s[t]_g
  ```

The order in which this is a union is not yet apparent.
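Assuming finite transition sets modelled as lists of destinations, the three combinators can be sketched as follows (the example structure `phi` is hypothetical):

```python
# Transition structures as functions: state -> list of destinations.
def skip(s):
    return [s]                    # SKIP: T(s) = {0}, s[0] = s

def seq(f, g):
    """(f;g): a transition of f followed by a transition of g."""
    return lambda s: [s2 for s1 in f(s) for s2 in g(s1)]

def union(f, g):
    """f |_| g: disjoint union of the two transition sets."""
    return lambda s: f(s) + g(s)

phi = lambda s: {"a": ["b"], "b": ["c"], "c": []}[s]

step_or_stay = union(skip, phi)   # SKIP |_| phi
print(step_or_stay("a"))          # -> ['a', 'b']
print(seq(phi, phi)("a"))         # -> ['c']
```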

The reflexive and transitive closure of a transition structure `phi : S -> Fam S` can be written as a least fixed point

```
mu psi : S -> Fam S . SKIP |_| (phi ; psi)
```

of the monotone operator

```
X |-> SKIP |_| (phi ; X)
```

on the transition structures on a set. (Here we define `f .le. g` to mean that the induced predicate transformers are pointwise related by inclusion: `(f^{d}) \subseteq (g^{d})`. See below.)
One can compare the construction of the free category over a directed graph with the construction of the reflexive transitive closure of a transition structure. One can think of the set `S` of states of a transition system as the vertices of a directed graph, of `T(s)` as the set of arrows with source/target `s`, and of `s[t]` as the target/source of arrow `t`. To form the free category over such a graph, one takes the arrows in it to be paths (ie. finite sequences of zero or more consecutive edges) in the graph. I have chosen to build up paths `cons'-style, with the new edge going always `on the left'. There are other definitions. For example, `snoc'-style, with the new edge going always `on the right'. Also `cat' paths, built up from the empty path and the given edges by binary concatenation; in this case however one has to build into the definition of equality between such paths that they form a monoid.

Associated with a transition structure `Phi(s) = (T(s), \t -> s[t])` on a set `S` are two predicate transformers which I shall write `Phi^{a}` and `Phi^{d}` for `angelic' and `demonic'. (Just regard these as labels. I doubt the metaphor of angels and demons is really appropriate.)

```
Phi^{a}, Phi^{d} : Pow S -> Pow S
Phi^{a} (P : Pow S, s : S) = (Pi t : T s) P (s[t])
Phi^{d} (P : Pow S, s : S) = (Sigma t : T s) P (s[t])
```

I define `f .le. g` to mean `f^{d} .le. g^{d}` as predicate transformers.
- The angelic one, `Phi^{a}`, transforms a predicate `P` over `S` into the predicate of a state that no matter what transition is taken from that state, the next state will satisfy `P`. Of particular interest are those `dead' states from which there is *no* transition. These are described by the predicate `Phi^{a}(False)`, where `False` denotes the predicate that is everywhere the empty set.
- The demonic one, `Phi^{d}`, transforms a predicate `P` over `S` into the predicate of a state that there is some transition from it to a state which satisfies `P`. Of particular interest are those `live' states from which there is *some* transition. These are described by the predicate `Phi^{d}(True)`, where `True` denotes the trivial predicate whose value is everywhere the standard singleton.
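On a finite system the two transformers, and the dead/live predicates, can be computed directly. In this sketch (the example system is mine) a predicate over S is modelled as a set of states:

```python
# A finite transition structure, states -> list of destinations.
trans = {"a": ["b", "c"], "b": ["c"], "c": []}
S = set(trans)

def angelic(P):
    """Phi^{a}(P): every transition from s leads into P."""
    return {s for s in S if all(s2 in P for s2 in trans[s])}

def demonic(P):
    """Phi^{d}(P): some transition from s leads into P."""
    return {s for s in S if any(s2 in P for s2 in trans[s])}

print(sorted(angelic(set())))  # dead states, Phi^{a}(False): -> ['c']
print(sorted(demonic(S)))      # live states, Phi^{d}(True): -> ['a', 'b']
```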

Note that (where `x . y` means composition)

```
(f;g)^{a}  = (f^{a}) . (g^{a})
(f;g)^{d}  = (f^{d}) . (g^{d})
(SKIP)^{a} = (SKIP)^{d} = the identity predicate transformer -- ?
(f |_| g)^{a}(P) = (f^{a})(P) \cap (g^{a})(P)
(f |_| g)^{d}(P) = (f^{d})(P) \cup (g^{d})(P)
```

(Or should that be the other way round?) Clearly both these predicate transformers are monotone (with respect to the inclusion ordering on predicates). The least fixed point of

`(Phi^{a})` is the predicate which describes the well-founded (aka accessible) states in the transition system: any sequence of transitions from such a state terminates in a dead state. (Such a state might be called mortal in the sense that it will inevitably die.) On the other hand, the greatest fixed point of `(Phi^{d})` is the predicate which describes `potentially immortal' states from which there is an infinite sequence of transitions.

The other extreme fixed points of these predicate transformers are uninteresting. The greatest fixed point of `(Phi^{a})` is extensionally equal to the predicate which is everywhere `True`. The least fixed point of `(Phi^{d})` is extensionally equal to the predicate which is everywhere empty.
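Since the state set in a finite example is finite, both extreme fixed points can be reached by plain iteration. The `loop` state below is a hypothetical example of a potentially immortal state:

```python
# A finite system with one dead state "b" and a self-loop.
trans = {"a": ["b"], "b": [], "loop": ["loop", "b"]}
S = set(trans)

def angelic(P):
    return {s for s in S if all(s2 in P for s2 in trans[s])}

def demonic(P):
    return {s for s in S if any(s2 in P for s2 in trans[s])}

def lfp(F):
    """Least fixed point by iteration from the empty predicate."""
    P = set()
    while F(P) != P:
        P = F(P)
    return P

def gfp(F):
    """Greatest fixed point by iteration from the full predicate."""
    P = set(S)
    while F(P) != P:
        P = F(P)
    return P

print(sorted(lfp(angelic)))   # well-founded states: -> ['a', 'b']
print(sorted(gfp(demonic)))   # potentially immortal: -> ['loop']
```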

If `(S, \s -> (T(s), \t -> s[t]))` is a transition system, then a binary relation `R` between states is a *simulation* if `R` is included in `R'` where

```
R'(s1,s2) = (Pi t1 : T(s1)) (Sigma t2 : T(s2)) R(s1[t1], s2[t2])
```

It is a *bisimulation* if `R` is included in both `R'` and `(converse of R)'`. Note that being a simulation or a bisimulation is a second-order, impredicative property of binary relations. It requires an existential quantifier over all binary relations to form the union of all simulations or all bisimulations on a set. The relation

```
\ (s,s') -> (Sigma R : S -> S -> Set) R \subseteq R' /\ R(s,s')
```

is large.
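Although the union of all bisimulations is impredicative in general, on a finite system the largest bisimulation can be reached by iterated refinement from the full relation. A small sketch (the four-state system is hypothetical):

```python
# A finite system: "a" and "a2" should turn out bisimilar.
trans = {"a": ["b"], "a2": ["b2"], "b": [], "b2": []}
S = list(trans)

def refine(R):
    """One step of R -> R' /\\ (converse of R)'."""
    def sim(s1, s2):  # every t1 from s1 is matched by some t2 from s2
        return all(any((d1, d2) in R for d2 in trans[s2])
                   for d1 in trans[s1])
    return {(s1, s2) for (s1, s2) in R if sim(s1, s2) and sim(s2, s1)}

R = {(s1, s2) for s1 in S for s2 in S}   # start from the full relation
while refine(R) != R:
    R = refine(R)

print(("a", "a2") in R)  # -> True  (a and a2 are bisimilar)
print(("a", "b") in R)   # -> False (a can move, b is dead)
```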

As a kind of dual of the transformer `()'` above, we can consider

```
R`(s1,s2) = (Sigma t1 : T(s1)) (Pi t2 : T(s2)) R(s1[t1], s2[t2])
```

The property that `` R` `` is included in `R` is equivalent to

```
(Pi t1 : T(s1)) [(Pi t2 : T(s2)) R(s1[t1], s2[t2])] => R(s1,s2)
```

I have no idea what to call a relation with this property.

My best intuition about `S` is as follows. Imagine a 2-player game in which the protagonist first chooses a transition from `s1`, and the responder next chooses a transition from `s2`, and so on. The protagonist wins by making a move to which there is no response, i.e. when the responder is in a dead (minimal) state. In terms of this game, `S(s1,s2)` holds if the protagonist has a winning strategy when starting in state `s1` with the responder in state `s2`. The strategy can be represented by a well-founded tree, with the nodes represented by the protagonist's transitions, and the arrows by the responder's. One might say that `s1` *exceeds*, or *is beyond* `s2`, which has *shortcomings* with respect to `s1`. The intuition is that part of `s1` cannot be simulated by `s2`.
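For finite systems this winning-strategy relation can be computed as a least fixed point of the backquote transformer above, starting from the empty relation (the example system is hypothetical):

```python
# A finite system: from "a" one move is possible; "b" and "c" are dead.
trans = {"a": ["b"], "b": [], "c": []}
S = list(trans)

def step(R):
    """R`: some protagonist move t1 such that every response lands in R."""
    return {(s1, s2) for s1 in S for s2 in S
            if any(all((d1, d2) in R for d2 in trans[s2])
                   for d1 in trans[s1])}

R = set()                    # least fixed point by iteration from empty
while step(R) != R:
    R = step(R)

print(("a", "c") in R)  # -> True: move a->b, and c has no response
print(("a", "a") in R)  # -> False: the move a->b is answered by a->b
```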

We can also define the notion of a win by the responder, in which the protagonist reaches a state in which there are no further moves. (It is not meaningful to ask whether the responder might somehow have run out of moves at the same time.)

These are interesting notions, I think, because winning and losing correspond to "bugs", or contests in which one player vanquishes the other at some point in time. The complementary situation is one in which there is no winning and losing, but the game runs harmoniously forever. (This is something we desire in real-life "games", like sending requests to servers and getting responses, and perhaps too in marriage.)

Suppose we say that two states are *upper-equivalent*
when the same states are beyond them, and *lower-equivalent*
when they are beyond the same states. These are obviously
equivalence relations.

- The simplest example of a transition system has no states, and for each state ..., well, there aren't any, so we needn't say what the rest of the structure is. This transition system surely deserves to be called zero, or naught.
- Given a transition system `Phi = (S, \s -> (T(s), \t -> s[t]))` we can form its successor `Phi' = (S', \s -> (T'(s), \t -> s[t]'))` where

  ```
  S' = S + 1
  T'(inl s) = T(s)
  T'(inr 0) = S
  (inl s)[t]' = inl (s[t])
  (inr 0)[s]' = inl s
  ```

  This is in a sense the von Neumann successor of `Phi`, if we think of the family `{ s[t] | t : T(s) }` as incorporated into `Phi'` in the form of the (topmost) element `0 : S'`.

- As a generalisation of the last example, if we are given two transition systems `Phi_1 = (S_1, \s -> (T_1(s), \t -> s[t]_1))` and `Phi_2 = (S_2, \s -> (T_2(s), \t -> s[t]_2))` we can form the binary ordered sum `Phi = Phi_1 + Phi_2 = (S, \s -> (T(s), \t -> s[t]))` where

  ```
  S = S_1 + S_2
  T(inl s) = T_1(s)
  T(inr s) = S_1 + T_2(s)
  (inl s)[t]         = inl (s[t]_1)
  (inr s_2)[inl s_1] = inl s_1
  (inr s_2)[inr t_2] = inr (s_2[t_2])
  ```

- As a yet further generalisation of the two preceding examples, we can define the ordered sum of a family of transition systems indexed by the states of a transition system. Let `A = (A.S, \s -> (A.T(s), \t -> s[t]))` be a transition system, and `B(a) = (B(a).S, \s -> (B(a).T(s), \t -> s[t]))` a family of transition systems indexed by its states. The ordered sum of this family can be defined in the following way.

  ```
  S      = (Sigma a : A.S) B(a).S
  T(a,b) = ((Sigma t : A.T(a)) B(a[t]).S) + B(a).T(b)
  (a,b)[inl(t,b')] = (a[t], b')
  (a,b)[inr(t)]    = (a, b[t])
  ```

- Perhaps one can define a set of ordinals below the first inaccessible (or hyper-inaccessible, etc.) as follows. We inductively define the ordinals, and recursively define for each ordinal an index set for its predecessors, and the indexing function. That is, we define an object of type

  ```
  Sig I : Set . I -> Fam I
  ```

  which is to say, a transition system. This is not a true inductive-recursive definition (where we define an object of type `Fam D = Sig I : Set . I -> D` for some fixed `D`).

  ```
  {- [2000-09-05 12:10:28] -}
  --#include "amenities.agda"
  --#include "N0.agda"
  --#include "N1.agda"

  open N1_p use N1 = FORM, star = IN
  open N0_p use N0 = FORM

  Maybe (A :: Set) :: Set
    = data zero | succ (a :: A)

  {- We define (a type corresponding to) an inaccessible that is
     closed under the step to the next inaccessible (this may be
     called a hyper- or super-inaccessible?).  It is a bit
     problematic: it is not an IR definition.  We simultaneously
     define I, and a function which assigns to each a : I the indexed
     family of its predecessors in I.  Is it valid? -}
  mutual
    I :: Set
      = data zero
           | succ (a' :: I)
           | lim (a :: I) (f :: Seg a -> I)
           | reg (a :: I)    -- closure of a under 0, +1 and limits
        -- | inacc (a :: I)  -- closure of a under 0, +1, limits and reg

    {- An index set for the predecessors of a -}
    Seg (a :: I) :: Set
      = case a of
          (zero)     -> N0
          (succ a')  -> data zero | succ (x :: Seg a')
          (lim a' f) -> Sigma (Seg a') (\(b::Seg a') -> Seg (f b))
          (reg a')   -> data zero
                           | succ (x :: Seg a)
                           | lim (x :: Seg a) (f :: Seg (pd a x) -> Seg a)
                           | inc (x :: Seg a')
       -- (inacc a') -> data zero
       --                  | succ (x :: Seg a)
       --                  | lim (x :: Seg a) (f :: Seg (pd a x) -> Seg a)
       --                  | reg (x :: Seg a)
       --                  | inc (x :: Seg a')
       -- .. clearly we can extend this

    {- Indexing the predecessors of a -}
    pd (a :: I) :: Seg a -> I
      = case a of
          (zero)     -> \(h::N0) -> case h of { }
          (succ a')  -> \(h::Seg a) ->
                          case h of
                            (zero)   -> a'
                            (succ x) -> pd a' x
                          {- -- alternative?
                             case h of
                               (zero)   -> zero@_
                               (succ x) -> succ@_ (pd a' x) -}
          (lim a' f) -> \(h::Sigma (Seg a') (\(b::Seg a') -> Seg (f b))) ->
                          pd (f h.fst) h.snd
          (reg a')   -> \(h::Seg a) ->
                          case h of
                            (zero)    -> zero@_
                            (succ x)  -> succ@_ (pd a x)
                            (lim x f) -> lim@_ (pd a x)
                                              (\(h'::Seg (pd a x)) -> pd a (f h'))
                            (inc x)   -> pd a' x
       -- (inacc a') -> \(h::Seg a) ->
       --                 case h of
       --                   (zero)    -> zero@_
       --                   (succ x)  -> succ@_ (pd a x)
       --                   (lim x f) -> lim@_ (pd a x) (\(h'::Seg (pd a x)) -> pd a (f h'))
       --                   (reg x)   -> reg@_ (pd a x)  -- not sure.
       --                   (inc x)   -> pd a' x
  ```

- ...
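The first few constructions above can be tried out on finite systems. The following sketch (the function names `successor` and `ordered_sum` are mine, not from the notes) encodes states of the sum with `inl`/`inr` tags:

```python
# Finite transition systems as dicts: state -> list of destinations.
def successor(phi):
    """Phi': add a new topmost state whose transitions lead to every old state."""
    new = {("inl", s): [("inl", d) for d in phi[s]] for s in phi}
    new[("inr", 0)] = [("inl", s) for s in phi]
    return new

def ordered_sum(phi1, phi2):
    """Phi_1 + Phi_2: every state of Phi_2 also sees all of Phi_1 below it."""
    new = {("inl", s): [("inl", d) for d in phi1[s]] for s in phi1}
    for s in phi2:
        new[("inr", s)] = ([("inl", s1) for s1 in phi1]
                           + [("inr", d) for d in phi2[s]])
    return new

zero = {}               # the transition system with no states
one = successor(zero)   # one state, no transitions: the ordinal 1
print(one)              # -> {('inr', 0): []}
two = ordered_sum(one, one)
print(len(two))         # -> 2
```

Iterating `successor` from `zero` builds the finite von Neumann ordinals as transition systems.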

Peter Hancock Last modified: Fri Sep 15 17:43:33 BST 2000