One good reason (among others) to be interested in locating the
`ordinal' of a system is that countable ordinals are representative of
an important class of interactive programs.
Prog = data { $Stop                           -- zero
            , $Cycle (p : Prog)               -- successor
            , $Read (p : (n : Nat) -> Prog) } -- countable sup
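For concreteness, here is a self-contained Haskell sketch of the intended
reading (fromNat, omega and height are my illustrative names, not part of
the proposal): Stop is zero, Cycle is successor, and Read forms a countable
sup, so omega is Read of the family of finite ordinals.

```haskell
data Nat  = Z | S Nat
data Prog = Stop | Cycle Prog | Read (Nat -> Prog)

-- the finite ordinal n: n nested Cycles around Stop
fromNat :: Nat -> Prog
fromNat Z     = Stop
fromNat (S n) = Cycle (fromNat n)

-- omega: the countable sup of the finite ordinals
omega :: Prog
omega = Read fromNat

-- height of a wellfounded branch: count Cycles down to Stop
height :: Prog -> Int
height Stop      = 0
height (Cycle p) = 1 + height p
height (Read f)  = height (f Z)   -- probe the zero branch only

main :: IO ()
main = case omega of
  Read f -> print (height (f (S (S (S Z)))))  -- the branch at 3
  _      -> return ()
```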
On alternate days this makes some/no sense to me. I write to ask what
you think.
My argument consists of the following unordered sequence of steps.
* I distinguish between obtaining an IO command by computing the whnf
  of a productive program :: Main, and "running a program"
  = interacting with it, in successive computations.
* Wellfoundedness is interesting because liveness is. They are
different sides of the same coin. It is because we want programs
that live (react, are productive) forever that we want their
internal computations to terminate.
There seems to be more than one way to understand
infinite/cyclic/non-wellfounded objects. In all of them, the
well-founded objects are bound to play a central role.
* It captures a lot of important cases to model an interaction as an
exchange of messages.
* It is a reasonable idealisation to model the messages exchanged in a
real interaction by (fully evaluated) natural numbers. So countably
branching structures are especially interesting.
(One might also want to bound the size of messages. Of course, in
practice one wouldn't want to encode messages as natural numbers.)
* The countable ordinals represent programs that can in principle
be run in a ("batch") stream of transactions, each of which gives
rise to a terminating sequence of atomic steps, run in isolation.
* It is interesting to know the "asymptotic" limits necessarily
  imposed by a type system upon the terminating interactive programs
  it will allow. (And also the ones it makes feasible to write. We
  can in principle write a program with the same interactive
  complexity as the countable ordinal (w ^ .. ^ w) in Godel's T, where
  the tower of w's is 2^128 deep. We can do it in practice (on a
  sheet of A4) if we add a universe. Cf. Godel's remarks on shorter
  proofs in stronger systems.)
-----
"operational semantics of ordinals"
> data Prog = Stop | Cycle Prog | Read (Nat -> Prog)
> data Str a = Push a (Str a)
> data Nat = Z | S Nat
> data Trace = Step Nat          -- output
>                   (Maybe (Nat, -- input
>                           Trace))
> type Input = Str Nat
> type Behaviour a = Input -> (a,Input) -- a kind of parser.
> behave :: Prog -> Behaviour Trace
> behave p inp
>   = case p of Stop     -> (Step Z Nothing, inp)
>               Cycle p' -> let (Step n r, res') = behave p' inp
>                           in  (Step (S n) r, res')
>               Read f   -> let Push i inp' = inp
>                               (e, res')   = behave (f i) inp'
>                           in  (Step Z (Just (i, e)), res')
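A small usage sketch of behave, packaged standalone (demoProg and ones
are my invented names): the program reads one number i and then performs
i internal steps, so on the constant stream of ones its trace records the
input 1 followed by a single successor step.

```haskell
data Nat   = Z | S Nat                      deriving (Show, Eq)
data Prog  = Stop | Cycle Prog | Read (Nat -> Prog)
data Str a = Push a (Str a)
data Trace = Step Nat (Maybe (Nat, Trace))  deriving (Show, Eq)

type Input       = Str Nat
type Behaviour a = Input -> (a, Input)

behave :: Prog -> Behaviour Trace
behave p inp
  = case p of
      Stop     -> (Step Z Nothing, inp)
      Cycle p' -> let (Step n r, res') = behave p' inp
                  in  (Step (S n) r, res')
      Read f   -> let Push i inp' = inp
                      (e, res')   = behave (f i) inp'
                  in  (Step Z (Just (i, e)), res')

-- read one number i, then take i internal steps
demoProg :: Prog
demoProg = Read count
  where count Z     = Stop
        count (S n) = Cycle (count n)

ones :: Input              -- the constant input stream 1,1,1,...
ones = Push (S Z) ones

main :: IO ()
main = print (fst (behave demoProg ones))
```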
This is quite complex. Somehow the simplest interpretation is the
following "slow growing" interpretation. (Any continuous type 2
function is representable in this way. But there is something
bogus here about the interpretation of Cycle.)
> type2 :: Prog -> Behaviour Nat
> type2 p inp
>   = case p of Stop     -> (Z, inp)
>               Cycle p' -> let (n, inp') = type2 p' inp in (S n, inp')
>               Read f   -> let Push i inp' = inp in type2 (f i) inp'
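To make the slow-growing reading concrete (again with my illustrative
omega/fromNat encoding): type2 interprets omega by reading one input n
and then counting down the finite ordinal n, so it simply echoes its
first input.

```haskell
data Nat   = Z | S Nat deriving (Show, Eq)
data Prog  = Stop | Cycle Prog | Read (Nat -> Prog)
data Str a = Push a (Str a)

type Input       = Str Nat
type Behaviour a = Input -> (a, Input)

type2 :: Prog -> Behaviour Nat
type2 p inp
  = case p of Stop     -> (Z, inp)
              Cycle p' -> let (n, inp') = type2 p' inp in (S n, inp')
              Read f   -> let Push i inp' = inp in type2 (f i) inp'

fromNat :: Nat -> Prog
fromNat Z     = Stop
fromNat (S n) = Cycle (fromNat n)

omega :: Prog
omega = Read fromNat

twos :: Input              -- the constant input stream 2,2,2,...
twos = Push (S (S Z)) twos

main :: IO ()
main = print (fst (type2 omega twos))   -- omega echoes its first input
```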
How to run slow growing programs in a "batch stream".
> batch2 :: Str Prog -> Str Nat -> Str Nat
> batch2 (Push p ps) inp
>   = Push n ns where (n, inp') = type2 p inp
>                     ns        = batch2 ps inp'
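A sketch of batch2 in action, under the same illustrative encodings: a
stream of copies of omega, run one transaction at a time against the
input stream 0,1,2,0,1,2,..., echoes the inputs. takeStr is my helper
for viewing a finite prefix of a stream.

```haskell
data Nat   = Z | S Nat deriving Show
data Prog  = Stop | Cycle Prog | Read (Nat -> Prog)
data Str a = Push a (Str a)

type Input       = Str Nat
type Behaviour a = Input -> (a, Input)

type2 :: Prog -> Behaviour Nat
type2 p inp
  = case p of Stop     -> (Z, inp)
              Cycle p' -> let (n, inp') = type2 p' inp in (S n, inp')
              Read f   -> let Push i inp' = inp in type2 (f i) inp'

-- each transaction consumes what its program reads, then hands the
-- rest of the input to the next program
batch2 :: Str Prog -> Str Nat -> Str Nat
batch2 (Push p ps) inp = Push n ns
  where (n, inp') = type2 p inp
        ns        = batch2 ps inp'

fromNat :: Nat -> Prog
fromNat Z     = Stop
fromNat (S n) = Cycle (fromNat n)

omega :: Prog
omega = Read fromNat

takeStr :: Int -> Str a -> [a]   -- view a finite prefix
takeStr 0 _          = []
takeStr k (Push x s) = x : takeStr (k - 1) s

main :: IO ()
main = print (takeStr 3 (batch2 progs inputs))
  where progs  = Push omega progs
        inputs = Push Z (Push (S Z) (Push (S (S Z)) inputs))
```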
(Clearly we can run the programs in the other sense.)
> batch :: Str Prog -> Str Nat -> Str Trace
> batch (Push p ps) inp
>   = Push t ts where (t, inp') = behave p inp
>                     ts        = batch ps inp'