These are unstructured notes on the topic of input-output for functional languages with data-dependent types. There is nothing finished here; only some things that bother me, which I am trying to get clearer about.

My interest in this topic is that I want to understand the general form of a specification of a certain common kind of interactive program, and so of the development of such programs from their specifications. It seems to me likely that there will be practical benefits in getting clear about how type-checking can be applied in this area. It may be that we can use dependent types to specify interfaces between software components with full precision, by which I mean pinning down the semantics of interface actions, not merely their syntax.

My impression is that the notion of an executable program isn't widely understood in the community of people pursuing research in type theory. They think of a program as an expression denoting some value such as a function from inputs to outputs. They think of execution as supplying the function with a suitable argument expression, evaluating the applicative expression so formed, and printing the result. This isn't wrong, as far as it goes -- it is a useful model for batch programs. However (a) the parsing and printing are something outside type theory -- the boundary or frontier between static values and dynamically evolving states is not analysed; moreover (b) it is not clear how one parses arguments and prints results which are not finitistic values. For example, how does one print or parse a potentially infinite value such as a stream of natural numbers? Or a wellfounded tree with branching indexed by the natural numbers? Or a higher order value such as a function?

In some sense the main problem is not a mathematical problem, but a very basic conceptual problem. The difficulty is the very considerable one of seeing something that is in front of one's nose.

Roughly speaking, my question is this: how could I write a dependently-typed program to fly an aeroplane, sell things on the internet, or provide a programming environment under Windows?

What is it to print a value?

According to the first section of chapter 1 of Bird and Wadler's "Introduction to Functional Programming" (Prentice-Hall 1988), in functional programming "the primary role of the computer is to act as an evaluator or calculator: its job is to evaluate expressions and print the results". Although it is admirably short and snappy, this remark seems to me misleading. It's tempting to say it is plain wrong, but maybe one can interpret the words "evaluate" and "print" in some way so as to make sense of it -- if one wants to stick with the picture of the computer as a calculator.

evaluation

The term "evaluation" is broad. There are all sorts of strategies for reducing or simplifying expressions. There's really no single thing that in all contexts is meant by evaluation, or by any other word in the semantic vicinity: reduction, calculation or simplification. There is however one interpretation which stands out, an extreme case distinguished by minimality. The evaluation strategy is to do the least necessary to bring the expression to constructor (hnf) form (postponing the evaluation of its parts). (It is tempting to say "lazy", but that may have acquired a specific technical meaning.) With numerical expressions such as one types in on a pocket calculator, the point is obscured; it isn't even clear what the constructors are here. We are inclined to think that what is in the calculator's display is a number, whereas if we look at what is in front of our noses, it is a string of decimal digits (and other signs like the decimal point, and perhaps an `E' for "scientific notation", and what not). Maybe we think of the string as appearing all at once. It looks very like a kind of "complete" evaluation. The very familiarity of this case is a kind of obscurity.

Say something about the evaluation of functional expressions, and of neutral expressions which contain functional parts. As an example, one might take a datatype such as the following, where c and r are given types.

   data Prog x = Exit x
               | Call c (r -> Prog x)
Here the constructor Call forms values of the form Call cmd rsp, where cmd has type c, and rsp has type r -> Prog x. What might it mean to evaluate rsp? One answer could be that one simplifies (and eta-expands) the expression as far as possible, obtaining a lambda expression \ x -> ... .
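For instance, here is a sketch with both c and r instantiated to Char (the program p is invented purely for illustration):

        -- the schema above, with c and r both taken to be Char
        data Prog x = Exit x
                    | Call Char (Char -> Prog x)

        p :: Prog Char
        p = Call 'c' Exit
        -- evaluating the response function Exit as far as possible
        -- yields, after eta-expansion, the lambda \ r -> Exit r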

printing

Coercion to "Show". (gofer, hugs.) If we allow that "printing" can involve a kind of exploration of something infinite, then there is some kind of control (steering, navigation).

The actual technology of display. Paper tape punches. Card punches. Teletypes. Line printers. Character cell displays. Bit-mapped displays. LCD displays. Lights. Actuators.

That evaluation and printing are interleaved. (Or overlap.)

That interaction or exploration may be involved. Paper jams, refills. Zooming. Sensors.

Perhaps "printing" is a red herring. The important notion is really using the result of evaluation, in some form or other such as a line of ascii, or a roman numeral, or a bar-code, or a radio signal etc. To print something is to make it available for use. Publishing.

A central example is buying things in a shop. We take an imprint of the value of an expression, by handing over coins and notes; peeling off constructors, doing something appropriate to the constructor (handing over another coin) and proceeding in some way with the component or components. In the case of numerical values there is a single component, and nothing is needed from the environment. Or is there a "pure acknowledgement" of some kind?

Sometimes what we use is a number in decimal or scientific notation. Then we measure something off, or set a dial, or decide to go this way rather than that, etc.

Every case has inevitably something "special" about it, in much the same way as every triangle is either acute, right-angled, or obtuse. Some common cases are somehow worn smooth to the point of inscrutability.

The question of how in general one "uses" the result of a calculation is one of those "huge", fatally Quixotic questions, like: what is the nature of a mathematical object? It looms in the background; we need to pick apart the notion of taking a mathematical value as a guide for action. ("How is mathematics applied?")

How is it in functional languages?

One way to arrange for input-output in functional programming languages such as Haskell involves the type system. There is a special form of type IO (...) where the dots stand for one or more (type) arguments or operands. The values of a type of this form are (structures interpretable as) executable interactive programs, or scripts for performing a series of synchronising interactions with a passive environment (which only reacts to instructions -- has no initiative of its own). I shall call these structures IO programs. One of the essential things is that a program tells us whether or not its execution is complete, and when it is not complete tells us two things:
  1. a "command code" that can be used to initiate an interaction
  2. a function that for each possible "response code" tells us how to proceed in the post-interaction state - i.e. what program to run subsequently. I call this function the response function.
A program written in the functional language defines/denotes/determines such a (static) structure, and the run-time system (as it were, on behalf of the user who launched the program) interprets it.

In interpreting (or executing) an IO program, the run-time system cycles between two phases, which I call fetching and execution, in analogy with the basic execution model of conventional CPUs or instruction processors.

  1. First the run-time system (which in the analogy is the CPU) fetches the next command (code) for an interaction -- which it does by evaluating the current program to a value (i.e. calculating its weak head normal form) -- and initiates the interaction with this command code. Fetching and evaluation are the same thing. Initiating an interaction means "issuing" the command code -- the command is now pending.
  2. If and when execution of a command has completed, there is ipso facto a response (code) that can be fed into the program to determine what to do next. This response code becomes the argument of the response function, thus beginning a new phase of fetching or evaluating.
(Instead of fetch/execute I could say evaluate/perform.)
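A minimal sketch of this cycle in Haskell, reusing the Char-instantiated Prog from the sketch above; perform is a hypothetical primitive standing for whatever actually carries out one command (for concreteness I express the run-time system's side in Haskell's own IO):

        -- fetch-execute: pattern matching on the program is the fetch,
        -- since it forces evaluation to weak head normal form
        run :: (Char -> IO Char) -> Prog x -> IO x
        run perform (Exit x)   = return x             -- execution complete
        run perform (Call c f) = do r <- perform c    -- execute: issue c, await r
                                    run perform (f r) -- feed r to the response function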

There are at least two ways the fetch-execute cycle can "hang": evaluation of the program may fail to reach weak head normal form, or execution of a pending command may never complete.

In machine architecture, one wants (for performance) the fetching and execution to occur in parallel. If instructions are fetched out of order and in parallel with execution of the current instruction, it must still appear as if the two phases occur in strict alternation.

An atomic interaction is an exchange of codes or tokens or values between two parties or agents. One agent initiates the interaction by issuing a "command" code to the other, who returns a "response" code. The active one who initiates the interaction is the client. The reactive one who completes it is the server.

Just as with the enactment (performance?) of an atomic interaction, execution of an IO program composed from many such interactions may or may not terminate. (But now there is another form which non-termination may take, namely an endless stream of intermediate interactions.) When and if such an execution terminates, it returns to the client/initiator/actor a response (code) that can be passed on to a program to be executed subsequently. On the other hand, it need not terminate - it may for example hang fetching or executing a command, or (quite differently) it may carry on interacting forever without terminating. This is often desirable, something we require or expect or hope of a system.

The way this is arranged in Haskell, there is a unary operator IO taking types to types, such that IO a is the type of IO programs that return a response code of type a when execution has ceased. For example, getChar :: IO Char is the program that reads a character and returns it.

There is a lot of ambiguity in the word "program". (Try to sort this out: there is the text of the program. There is the abstract program (algorithm) which might be written in a number of different ways, expressed in a variety of notations.) There are IO programs - and for these too there is an ambiguity.

In practice we want some form of multithreading -- primitives which create new threads. One couldn't claim to have a rational foundation for designing interactive programs unless one has a way to write programs that can introduce new computations, interrupt unwanted computations, and so on.

We think of the run-time system as following/executing a number of threads, each according to some program.

An IO program is a program for a thread. The concept of a thread and the concept of sequentiality (doing one thing at a time, one thing after another) are somehow one and the same. Doing one thing at a time involves mutual exclusion; one thing after another may have a little more.

Should one distinguish exceptions from interrupts? An interrupt abandons or at least suspends execution of an instruction, in some way that may or may not be restartable, that may be more or less "precise". (Usually: kills/discards a thread.) This raises, among other questions, that of whether only IO instructions are interruptible - do interrupts occur while we are fetching?

If we can see that execution of an instruction will not terminate, then we have to say what it means that it is `executed' at all, even the tiniest bit.

Catching goes with exceptions. Catching is a way of defining a sequence of IO-programs.

Say something about whether we think of the program as initiating commands, or as transforming a stream of responses to a stream of commands. (Are there difficulties in giving the proper types to such stream transformers?)

Say something about uniqueness types.

Abstractness of IO

The type IO a should not be thought of as a type whose values can be inspected with a case construction.
     foo :: IO () -> ...
     foo p = case p of 
                  putChar c -> ...
                | ...
Instead, the type IO a is "abstract". This means (partly) that the formal representation or encoding of programs is a secret of the run-time system. We do not have an exhaustive enumeration of constructors - we aren't even told that anything is a constructor. We cannot even assume that there is any exhaustive enumeration; or maybe we can assume it if we like, or install it as a mental decoration, but cannot make use of such an assumption. Surely this means that a type of the form IO(...) is not a set, but is "large" (???).

This allows the input-output library facilities to be extended or reimplemented in a backwards-compatible fashion. You do not have access to a complete and exhaustive enumeration of those facilities, as you do with the two constructors for lists:

     data [a] = [] | a : [a]  deriving (Eq, Ord)
With lists, because these are all the list constructors -- an exhaustive enumeration -- functions on lists may be defined by cases:
     foo :: [a] -> ...
     foo as = case as of 
                   [] -> ...
                 | (a:as') -> ...
But with executable interactive programs, as opposed to lists, you cannot even write
     foo :: IO () -> ...
     foo p = case p of 
                  putChar c -> ...
                | _         -> ...
using a "mopping up" or default arm in the case construction. The reason is that it is not given that putChar is a constructor. It might be something defined in terms of some very low-level signal-wiggling primitives. Because this cannot be assumed in a program, this allows a particular implementation of Haskell's runtime environment to define putChar explicitly as a function, for example perhaps in terms of more primitive bit level operations.

IO is monadic

Now the type construction IO a is monadic. What this really means is that there is a notion of sequencing the execution of one interactive program before that of another, passing to the second a value returned when execution of the first program terminates. I write this

        e1 >>= \ x -> e2 x  :: IO b
where e1 :: IO a is the first program, and e2 :: a -> IO b is a function which maps a value returned by e1 to the next program to execute. The notation \ x -> e stands for the function whose value at x is e. There is a "do nothing" program construction that simply returns a given value. I write this
        return :: a -> IO a
Sequencing and returning obey the laws we would expect:
                   Associativity of sequencing
        e1 >>= \ x -> (e2 x >>= \ y -> e3 y)
        = (e1 >>= \ x -> e2 x) >>= \ y -> e3 y

                   Left unit
        return a >>= \ x -> e x 
        = e a

                   Right unit
        e >>= \ x -> return x
        = e
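For example, the familiar echo program, written with the standard Haskell primitives getChar :: IO Char and putChar :: Char -> IO ():

        -- read one character, then write it back
        echo :: IO ()
        echo = getChar >>= \ c -> putChar c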

Where monads come from: character example

Any inductive definition yields (can be seen as a particular case of) a monad. Let us take a data type of "Programs" for reading and writing characters. (These were getChar and putChar above; those names are too long. Maybe compare input/read and output/write.)

        data Prog  = Stop
                   | Write Char Prog 
                   | Read (Char -> Prog)
It is important to note that the constructors here are program constructors, not instructions.

We can easily generalise this into a monad:

        data ProgM a  = Leaf a
                      | Stop
                      | Write Char (ProgM a)
                      | Read (Char -> ProgM a)
The programs of type ProgM a can now "exit" through the leaf constructor, returning a value of type a. The unit of the monad is Leaf, and the sequencing operation is defined by:
        first >>= second 
         = case first of 
             Leaf a    -> second a
             Stop      -> Stop
             Write c p -> Write c (p >>= second)
             Read f    -> Read (\ c -> f c >>= second)
Our original type Prog is an instance of our new type function, namely the instance ProgM {} (where {} denotes the empty type). In other words, ProgM is a generalisation or extension of Prog. (A program of type ProgM {} cannot exit; it can only stop.)
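As a small illustration (a sketch; readC, writeC and copy are names chosen here), the constructors can already be packaged as one-instruction programs and chained with the sequencing just defined:

        readC :: ProgM Char
        readC = Read Leaf                 -- read a character and return it

        writeC :: Char -> ProgM ()
        writeC c = Write c (Leaf ())      -- write a character, return trivially

        copy :: ProgM ()
        copy = readC >>= \ c -> writeC c  -- read one character, then write it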

What we have accomplished by generalising the datatype of programs to a monad is that we can now express the program constructors as instructions. To see this, notice that our constructors have types:

       Stop  :: ProgM a
       Write :: Char -> ProgM a -> ProgM a
       Read  :: (Char -> ProgM a) -> ProgM a
Let us fix a, and then define:
       IO r = (r -> ProgM a) -> ProgM a
This is a kind of "double negation" operator (whose shape should remind us of the continuation transform). We can massage the type declarations above (via some isomorphisms) into the following form. Let {} stand for an empty type, and {*} for a singleton type.
       Stop  :: IO {}
                --    IO {}             
                --  = ({} -> ProgM a) -> ProgM a 
                --  = 1 -> ProgM a 
                --  = ProgM a
       Write :: Char -> IO {*}
                --    IO {*}             
                --  = ({*} -> ProgM a) -> ProgM a 
                --  = ProgM a -> ProgM a 
       Read  :: IO Char 
The general form of a constructor's type is then
       Inst :: Inputs -> IO Outputs
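Concretely, one might render this in Haskell as follows (a sketch: a is fixed to (), the type is called IO' to avoid clashing with the built-in IO, and Empty plays the role of {}):

        newtype IO' r = IO' ((r -> ProgM ()) -> ProgM ())

        data Empty                           -- the empty type {}

        stopI :: IO' Empty
        stopI = IO' (\ _ -> Stop)            -- discards its continuation

        writeI :: Char -> IO' ()
        writeI c = IO' (\ k -> Write c (k ()))

        readI :: IO' Char
        readI = IO' (\ k -> Read k)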

Where monads come from: general situation

An instruction set is given by a type functor, which is a sum of products. The functor gives the arity of some program constructors. There is a summand per constructor. That summand has a factor for each argument of the constructor. For character IO as above, the functor is

       IS x = 1 + Char * x + (Char -> x)
In general it is something like:
       IS x = ...  + Output * (Input -> x) + ...
which allows an output/input interaction. (Read with prompt. Command/response.) The type of programs which do not return (but only halt) is the least fixed point of IS.
       Prog = IS Prog
The type of programs returning data is the least fixed point with the return data type "embedded".
       ProgM a = a + IS (ProgM a)
This is a monad.
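In Haskell the general construction can be sketched as follows (nowadays it is known as the free monad on the functor; the names Free and bind are chosen here):

        data Free f a = Ret a                -- the embedded return data
                      | Op (f (Free f a))    -- one layer of instructions

        -- sequencing, defined once and for all for any functor f
        bind :: Functor f => Free f a -> (a -> Free f b) -> Free f b
        bind (Ret a) q = q a
        bind (Op k)  q = Op (fmap (\ p -> bind p q) k)
ProgM a is then Free IS a, once IS is made a Haskell functor.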

Now we define another monad, as the ("type" part of the) continuation transform with "answer" type ProgM a. Think of a as fixed.

       IO r = (r -> ProgM a) -> ProgM a
i.e.
       IO r = ct (ProgM a) r   
              where   ct Ans x = (x -> Ans) -> Ans
We get back the (general) instructions as morphisms in the Kleisli category:
       Inst :: Inputs -> IO Outputs
(This may be related to abstractness.)

The viewpoint of data-dependency

Let us look at this from the viewpoint of dependency. A more general form for the declaration of an instruction's type is as follows:

       inst :: ( c : C )-> IO (R c)
C is the type of commands (inputs). Possibly depending on the actual input c:C, we have the type R c of responses (outputs).
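Haskell has no type dependency, but a GADT gives an approximation: each command constructor fixes its own response type (ReadC and WriteC are names invented for this sketch):

        {-# LANGUAGE GADTs #-}

        data Cmd r where
          ReadC  :: Cmd Char          -- response: the character read
          WriteC :: Char -> Cmd ()    -- response: a bare acknowledgement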

So the general form (not the most general) is something like the following. I take the opportunity to distinguish programs that have at least one real instruction. To do this I introduce an auxiliary set former Prog+.

     
        Prog+ (a : Set)     -- type (code) of answers
        = data  Inst ( c : C,
                       f : ( r : R c )-> Prog a )
             |  Stop        -- just an example.

        Prog (a : Set)
        = data  Ret ( ans : a )  -- just return 
             |  Minus ( p : Prog+ a )

We can then define sequential composition
        p >>= q : Prog b                p >>=+ q : Prog+ b
        where p : Prog a                 where p : Prog+ a
              q : a -> Prog b                  q : a -> Prog b

        (Ret ans) >>= q        = q ans
        (Minus p) >>= q        = Minus (p >>=+ q)

        Stop >>=+ q            = Stop
        Inst c f >>=+ q        = Inst c (\ r -> f r >>= q)
        ...
So we have a monad constructed from the usual kind of IO type specification. The monad describes the type-correct programs.

An interesting feature of type dependency is exploited here. Perhaps there should be some trumpet-blowing at this point.

List the advantages of data dependency in the representation of programs

We get to represent nodes with arity nought (which terminate a session) in a natural way.

Giving a pair (C,R) with C : Set and R : C -> Set is a natural way of expressing the `syntax' of an interaction. The syntactically legal interactions are then elements of the sigma type (Sigma c : C) R c.

So the type system is being used to enforce something worthwhile.
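In the GADT sketch of (C,R) given earlier, a syntactically legal interaction is a command paired with a response of exactly the type that command demands -- an existential rendering of the sigma type:

        -- reusing Cmd from the GADT sketch above
        data Interaction where
          Did :: Cmd r -> r -> Interaction

        ok :: Interaction
        ok = Did ReadC 'x'            -- legal: ReadC demands a Char
        -- Did ReadC () would be rejected by the type checker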

Should return types depend on state?

A yet more general form for the declaration of a program's type is something like this:
     
        Prog+( Ans : S -> Set, s : S ) : Set 
        = data  Inst ( c : C s,
                       f : ( r : R s c )-> Prog Ans (s[c/r]) )
             |  Stop                       -- just an example

        Prog( Ans : S -> Set, s : S ) : Set
        = data  Ret ( x : Ans s )  -- just return 
             |  Minus ( p : Prog+ Ans s )

Here Ans is a goal predicate.

It seems that without any loss in generality we may take the goal predicate to be the trivially true predicate True : S -> Set. For then we may recover the general predicate transformers with the aid of the following definitions.

        Path+( s : S, t : Prog+ True s ) : Set
        Path+ s t = case t of 
                      Inst c f  -> (Sigma r : R s c) Path (s[c/r]) (f r)
                      Stop      -> {}

        Path( s : S, t : Prog True s ) : Set
        Path s t = case t of 
                     Ret *     -> {*}
                     Minus p   -> Path+ s p

        dest+ ( s : S, t : Prog+ True s, p : Path+ s t ) : S
        dest+ s (Inst c f) (p0,p') = dest+ (s[c/p0]) (f p0) p'
        dest+ s Stop       p       = case p of {}

        dest ( s : S, t : Prog True s, p : Path s t ) : S
        dest s (Ret x)    *    = s
        dest s (Minus p)  l    = dest+ s p l 

We can then recover the predicate transformers from their value at True
        Prog, Prog+ ( Ans : Pow S ) : Pow S
        Prog+( Ans, s ) = ( Sigma t : Prog+ (True, s) )
                          ( Pi p : Path+ s t ) 
                             Ans (dest+ s t p)
        Prog( Ans, s )  = ( Sigma t : Prog (True, s) )
                          ( Pi p : Path s t ) 
                             Ans (dest s t p)
One can prove this by induction on proofs of Prog+(...), etc.

Dependency

The question here is how best to generalise the monadic approach to input-output when we consider functional programs in which the type system allows types to depend on data. A type which depends on data of type A is not strictly speaking a type, but rather a family of types { B(a) | a : A } parametrised by data a : A. (One may prefer to say that it is a variable type, or a generic type, or something like that.) Such a family of types can be considered to be a property of values of type A. A value of type B(a) is a proof that B holds of a. Thus the type system allows properties of things of some type D to be expressed as families of types indexed by D.

Given a sufficiently rich repertoire of type operators, it becomes possible to express an arbitrary mathematical statement, built up with the operators of first order logic, or some fragment of second order logic, as a type. The type system possesses some interesting properties.

If types can depend on data, and are closed under the usual logical and mathematical operators (such as quantifiers, and fixed point operators of various kinds) then we can use the type system to help us write a specification (i.e. a property to be satisfied by something meeting the specification) with complete precision, at least in the realm of static mathematical values. Might it be that data dependency can also be exploited to express fully and precisely the specification of input-output primitives, and help to pin down and debug the sometimes complex reasoning that goes on when writing interactive programs?

To answer such questions requires thinking about what one means by the specification of an interactive system or an interaction, and so about the concept of an interactive system and interaction itself, from the perspective of data dependency. Although these are not uncharted waters, they do not seem to have been explored very much from the perspective of dependent types. We want to present some suggestions.

I am inclined to think that the specification of an atomic interaction is given by two predicate transformers - one for each of the parties. They are "precondition" operators. The one for the agent to go first is

       White(X,s) = (Sigma c : C s)(Pi r : R s c)X(s[c/r])
This transforms a goal predicate X (aimed at in the next state) to a precondition predicate White(X) which if it holds in the current state gives an input which ensures that the goal holds when/if/however the agent to go second responds. (It ensures the goal on the assumption that the agent to go second responds.) If White False holds then there is an input with no corresponding output.

The one for the agent to go second (I am not sure if this is necessary: it is the classical dual of the other one) is

       Black(X,s) = (Pi c : C s)(Sigma r : R s c)X(s[c/r])
This transforms a goal predicate to a predicate which if it holds in the current state gives a strategy for responding to any command so as to bring about X in the next state. If Black False holds then there is no possible input. If Black True holds, then for every input there is an output.

The question of empty types

Most functional programming languages, even ones with a type system of some kind, permit the construction of expressions having a "Void" type - i.e. one having no values. Roughly speaking, the empty type is a data type with no constructors - there is no normal (fully evaluated) form which a value of that type can take, so there is no value of that type. An expression having the empty type is an expression which can be evaluated (without anything "going wrong"), but for which the evaluation fails to terminate. (Fetching hangs.) In computer science terms, the program denotes the bottom ("no information") of an ordered domain of approximations or neighbourhoods of some kind.
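In Haskell, for instance, one can declare such a type, and inhabit it only by a non-terminating expression:

        data Void                 -- no constructors, hence no values

        loop :: Void
        loop = loop               -- evaluation never reaches a constructor:
                                  -- fetching hangs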

It seems to me that such an expression is useful only to "switch the machine off", as an idle task, or to soak up available resources. Nothing "goes wrong", but also "nothing goes right" -- where going right means that evaluation terminates with a value of some kind, or at least, more precisely with the outermost form of such a value.

So it is interesting to consider type systems with the greater ambition of ruling out expressions for which evaluation hangs. Type systems are not there just to prevent the nasty things in life, like misinterpreting a character as an integer, or one structure as another. Besides this, they can also make sure of the good things in life, like getting answers to one's questions, or responses to one's instructions.

The price that we pay if we adopt such a type system, and we want type checking (which may include inference) to be decidable, is that we must be reconciled to the fact that our programming language is not Turing complete. If we insist that our programming language is Turing complete, we must be reconciled to the fact that the type system cannot be decidable. The type checker itself may sometimes hang.

Is it possible to explain simply how if we have a decidable type system, we can diagonalise up a new numerical function which can't be expressed by a program? (The terms of type N -> N would be recursively enumerable. So we can diagonalise one up.)
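A sketch of the argument, assuming a hypothetical enum that enumerates the well-typed (hence total) programs of type N -> N:

        -- 'enum i' is the i-th well-typed program of type N -> N; such an
        -- enumeration is computable because the well-typed terms are
        -- recursively enumerable, and by assumption every one is total
        diag :: (Integer -> (Integer -> Integer)) -> Integer -> Integer
        diag enum n = enum n n + 1    -- differs from 'enum n' at argument n
The function diag enum is total and computable, yet differs from every enum i at the argument i, so it cannot itself be expressed by a well-typed program.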

This should maybe go under "what is a type system". One suggestion is that we can profitably ask ourselves what we think a type system is, or more precisely, what we think a type system is for.

As part of what a type system is for: something which knows the type of every meaningful fragment of your program can help you construct it by refinement, and even to physically edit it. It can point out errors. It can expand constructors, selectors and so on. It can infer things from the context. The type system is your friend, not some tyrannical presbyterian school-mistress.

(A stage in) the development of a program; an incomplete program; a "partial" program; a program with holes in it.

Atomic Interactions

Our basic concept is that of an atomic interaction. We concentrate on interactions that involve two agents (or parties to an interaction) one of whom passes a value in and thereby initiates an instance (or occurrence) of the interaction, while the other passes a value out and thereby completes that (instance of the) interaction. The interaction takes place sometime between the input and the output, consuming the former and producing the latter, and is considered to be a pure point-event with no relevant substructure. It is essentially an atomic exchange of values. That is how we want it to appear.

There are many phenomena that arise in connection with computer systems that can be thought of as atomic interactions -- at some level. For example there is calling a procedure and returning from it, issuing an instruction and carrying it out, stimulus and response, request and reply, question and answer, and so on. Of course, these are "in reality" very far from being atomic, but are rather composed in an often subtle and tricky way from simpler interactions. For some purposes, for example implementation, this composite structure may become relevant. (We have atomic interactions in the sense that in the predicate calculus we have atomic formulae. "In reality" there are no atomic formulae. We are not saying that there "really" are "ultimately" atomic interactions of some kind -- they are there because we want to understand how to make complex programs out of simpler interactions without relevant structure.)

We think of the interaction as bringing about a change of state. This state is determined by the history or trace of interactions that have ever occurred, in the form of the sequence of pairs of values that have been input and output. It may be determined by something considerably simpler (a quotient), but is at least determined by the complete history. We shall be interested in interactions in which the type of the values that can be input and output at a particular instant may depend on the history of inputs and outputs preceding that input, thought of as data. In this way the legality of a trace can be reduced to a type-checking problem.

An interaction is an exchange of values "synchronised" with a state change. We can think of the state change as consisting in "writing a transaction record" (ie. appending an input-output pair) to "the log" (of transactions committed). The relevant state is a function of this log. There are some intermediate "semi-states", in which a command has been issued, but no response has yet been given.

An interaction is an instance of an instruction. Other terms for instruction are

   command
   atomic (primitive, elementary) interaction.

Semantics of instructions

The same instruction can be issued in many different states. One should distinguish instructions (which are like rules) from their instances (which are like individual inferences).

The semantics of an instruction is (for abstractness) given as a transformer for a predicate on states. Or is it two -- one for the angel and one for the demon?

Practical reasoning (Aristotle) is reasoning which concerns ends and means. It is reasoning from the future back towards the present. "Hodology".

It is difficult to be sensitive to the tiny nuances of meaning there are in words like "end", "goal", "purpose", "aim", "target". Certainly sometimes a goal is expressed by a predicate, and we calculate preconditions for achieving that goal with a particular command or program. (We might say that a predicate is just a "variable" type.)

Looping behaviour

Corecursively defined programs. The data type of programs is really not a least fixed point. We want it to contain programs which execute forever. These aren't "standard" programs which eventually terminate, or are well-founded.

We model looping programs with (potentially cyclic) graphs. We have to fuss to ensure that programs make progress, by excluding trivial cycles. There is a certain kind of knot-tying here. We can nevertheless deal with graphs using only well-founded notions.

Connection with non-standard type theory, which is type theory extended with an infinite object that is gradually (through time) unfolded, introducing new constants for successively deeper parts.

A "non-stop" program is something like this. In any "particular case", one can use a particular "standard" finite (or wellfounded?) object.

Metaphor of a "fractal" onion which one carries on unpeeling forever. (Zooming into a fractal picture.)

Terminology

I want here to begin a survey of the terminology one seems to need in the discussion of input/output.

Agencies

The first has spontaneity; solicits the interaction; has the initiative; has responsibility for initiating the interaction; is pro-active; ... .

The second has responsibility for completing the interaction; the "finish-iative".

Because there are so many words, one can maybe use some jokey labels: angel and demon. The problem here is that if you change your point of view, the angel is now the demon.

What they do

The first agent: issue, potentiate, enable, initiate, start, begin, commence, command, request ...

The second (reactive) agent: carry out, complete, accomplish, commit, execute, perform, obey, fulfill, enact, implement, do, effectuate, run, achieve, respond, ...

Instruction sets

It is interesting how many of these words can be both verbs and nouns.

Other words

Teleological: intention, purpose, goal, aim, ...

Composites (non-atomic) of simple, atomic, basic, simple interactions: program, script, strategy, transaction, ...


Peter Hancock
Last modified: Wed Nov 3 22:48:16 GMT 1999