Applied mathematics and models

In elementary (school) mechanics, I learnt that if you throw a stone into the air with velocity v and at angle theta to the ground, its flight will be part of such-and-such a parabola -- at least, ignoring air friction, passing flocks of birds, and so on. I was introduced to some stylised mathematical models that might help predict and reason about practical issues connected with aiming cannons, safe angles for ladders, or designing loudspeakers and hydrogen bombs. It seems to me that applying mathematics is about setting up models that are mathematically tractable, yet capture adequately (for practical purposes) important features of a real system, artificial or natural.
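To fix ideas, the stylised model gives the trajectory explicitly (the standard textbook formula, ignoring air resistance, with g the gravitational acceleration):

\[ y \;=\; x \tan\theta \;-\; \frac{g\,x^{2}}{2\,v^{2}\cos^{2}\theta} \]

which is indeed such-and-such a parabola in x and y.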

From my experience, I think that a key skill in computer engineering is a kind of agility or nimbleness in working with models at several levels of abstraction. [ Besides this learnable skill, there are many, many other important virtues, such as sorcery, guile, and a deep understanding of The Absurd. ]

I find the notion of a model in applied mathematics very mysterious. Philosophers have had little of any interest to say about the application of mathematics, or the notion of a model. (See Ladkin: abstraction and modelling for some interesting discussions.)

We take results proved about a mathematical structure of some kind as a guide to action. What does it mean to take a mathematical result as a guide to action? (To `apply' a theorem in graph theory to a pub-crawl in Königsberg.)

Models for discrete behaviour

A modelling framework I've actually tried using is Leslie Lamport's system of temporal logic, known as TLA. Here is its web page: TLA stuff. To me, it makes good sense on the whole. The key idea is that specifications, which are formulae of temporal logic denoting sets of infinite state sequences, should not be sensitive to finite repetition of interface states, or stuttering, `skip' steps. This allows you to talk sensibly about what you can rely on happening, as well as what you can rely on not happening. It prevents you making senseless `psychologistic' distinctions between systems which don't differ in any way an implementer or user could care about.
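Concretely, a TLA specification in what Lamport calls canonical form looks like this (sketched from memory):

\[ \mathit{Spec} \;\triangleq\; \mathit{Init} \;\wedge\; \Box[\mathit{Next}]_{v} \;\wedge\; L, \qquad\text{where}\quad [\mathit{Next}]_{v} \;\triangleq\; \mathit{Next} \vee (v' = v). \]

Here Init constrains the initial state; the boxed conjunct says every step is either a Next step or a stutter leaving the variables v unchanged, which is what makes specifications insensitive to stuttering; and L is a liveness condition, saying what you can rely on happening.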

The distinction between what you can rely on happening and what you can rely on not happening is comparable to the distinction between sins of omission and sins of commission. Thinking of the 10 commandments, there is "Remember the Sabbath", to sin against which is a sin of omission; and there are the various "Thou shalt nots", to sin against which would be a sin of commission. The 10 commandments are of course a very famous behavioural specification.

In Lamport's framework, a design step (from a high-level abstract statement to a more concrete, implementation-oriented statement) is verified by proving an implication between temporal formulas of a particular circumscribed form. The proof of that implication is the solid content of the design step.
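Schematically, with both specifications in the canonical form above, the design step amounts to proving the validity of

\[ \mathit{Spec}_{\mathrm{concrete}} \;\Rightarrow\; \mathit{Spec}_{\mathrm{abstract}} \]

so that every behaviour allowed by the implementation-oriented statement is also allowed by the abstract one.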

To prove such an implication, you can often just define a function (called a refinement mapping) from states of the low-level system to states of the high-level system. This function is, so to speak, `static'. However, you may need to add special auxiliary variables to the concrete state, to account for the `dynamics' of the abstract system. Abadi and Lamport in SRC report 29, The Existence of Refinement Mappings (from 1988), divide these auxiliaries into history variables (that record the history of the concrete system), stuttering variables (that control its present internal activity), and prophecy variables (that oracularly predict its future).
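As a finite toy version of what a refinement mapping buys you (my own encoding, in Haskell; liveness conditions and auxiliary variables are ignored): the mapping f reduces the temporal implication to checks on individual states and steps, with no temporal reasoning left over.

    -- A toy, finite rendering of the refinement-mapping proof obligations
    -- (names are mine; liveness and auxiliary variables are ignored).
    type Rel s = [(s, s)]

    refinesBy :: (Eq c, Eq a)
              => (c -> a)      -- the refinement mapping
              -> [c] -> Rel c  -- concrete initial states and steps
              -> [a] -> Rel a  -- abstract initial states and steps
              -> Bool
    refinesBy f cInit cSteps aInit aSteps =
         all (\s -> f s `elem` aInit) cInit   -- initial states map to initial states
      && all ok cSteps                        -- every concrete step simulates
      where
        ok (s, s') = f s == f s'                  -- a stuttering step, or
                  || (f s, f s') `elem` aSteps    -- a genuine abstract step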

State machines

In connection with rule systems for grammars, Petersson and Synek introduced the following class of structures (A,B,C,d), which admit a large number of applications, including one to rule systems, and one to the theory of state machines: A is a set (think of states), B a is a set for each a : A (think of the commands available in state a), C a b is a set for each b : B a (think of the possible responses to command b), and d a b c : A is the next state. Essentially the same structure can be expressed as a function from A to Fam (Fam A), where Fam X is the type SIGMA I : Set . I -> X. This defines two monotone functions, User and Machine, from predicates over A to predicates over A:

    (User P) a    = SIGMA b : B a . PI c : C a b . P (d a b c)
    (Machine P) a = PI b : B a . SIGMA c : C a b . P (d a b c)
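Here is a sketch of these structures in Haskell (my own encoding: finite lists stand in for sets, so that the fixed points below can actually be computed):

    -- An interaction structure (A,B,C,d), with finite lists standing in for sets.
    data IStruct s c r = IStruct
      { cmds  :: s -> [c]          -- B : the commands available in a state
      , resps :: s -> c -> [r]     -- C : the possible responses to a command
      , next  :: s -> c -> r -> s  -- d : the state reached after a response
      }

    -- The operator User: some command drives every response into p.
    -- A command with no responses wins outright: the machine is stuck.
    userOp :: IStruct s c r -> (s -> Bool) -> (s -> Bool)
    userOp m p a = any (\b -> all (\c -> p (next m a b c)) (resps m a b)) (cmds m a)

    -- The operator Machine: every command can be answered so as to land in p.
    -- A state with no available commands wins outright: the user is stuck.
    machineOp :: IStruct s c r -> (s -> Bool) -> (s -> Bool)
    machineOp m p a = all (\b -> any (\c -> p (next m a b c)) (resps m a b)) (cmds m a)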

The least fixed point of the function User is the set of states in which the user has a strategy, or interactive program, which forces the machine to stop. A constructive proof that the machine can be forced to stop is essentially a well-founded structure saying what command to issue, and what to do with each possible response. It is something which can be used as a program. Note that the state of the user program is something quite different from the state of the user/machine protocol.
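Continuing the Haskell sketch, the least fixed point can be computed by iteration when the state space is finite, and the well-founded proof object has an obvious datatype:

    import Data.List (nub)  -- continuing the sketch above

    -- Least fixed point of a monotone operator over a finite state space,
    -- computed by iterating upwards from the empty predicate.
    lfp :: Eq s => [s] -> ((s -> Bool) -> (s -> Bool)) -> [s]
    lfp states f = go []
      where
        go p = let p' = nub (p ++ filter (f (`elem` p)) states)
               in if length p' == length p then p else go p'

    -- The states from which the user can force the machine to stop.
    userWins :: Eq s => IStruct s c r -> [s] -> [s]
    userWins m states = lfp states (userOp m)

    -- The shape of the constructive proof: issue a command, and continue
    -- after any response. (Haskell cannot enforce the well-foundedness
    -- that membership in the least fixed point guarantees in type theory.)
    data UserStrat c r = Issue c (r -> UserStrat c r)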

The least fixed point of the function Machine is the set of states in which the machine has a strategy, or interactive program, which forces the user to stop. A constructive proof that the user can be forced to stop is essentially a well-founded structure saying, for any command, what response to give, and what to do next. It is something which can be used as a program for the state machine. Notice that the state of the machine is something quite different from the state of the user/machine protocol.
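Dually, in the same sketch:

    -- The states from which the machine can force the user to stop
    -- (again over a finite state space).
    machineWins :: Eq s => IStruct s c r -> [s] -> [s]
    machineWins m states = lfp states (machineOp m)

    -- The shape of the machine's proof: for any command, a response and
    -- a continuation (well-foundedness again not enforced by Haskell).
    data MachineStrat c r = Respond (c -> (r, MachineStrat c r))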

There are a number of other inductive constructions which arise in connection with interactive programs and state machines. It is also interesting to consider notions of simulation, `virtual' machines, and refinement.