# The discreteness of time

This note concerns the following axiom (scheme) of temporal logic:
```
<>[]p /\ Precessive(p) -> p
where Precessive(p) = []([](p->[]p)->[]p)
```
In words, if eventually p holds forever, and p is precessive in a certain sense, then p must hold already. If one makes the abbreviation `p ~> q =def [](p -> q)`, one can also write it
```
<>[]p -> []([](p ~> []p) ~> []p) ~> []p
```
On the web, one finds numerous slight variants. These are all(??) equivalent over KT4. (Rajeev Goré??)
```
<>[]p -> []([](p -> []p) -> []p) -> []p
<>[]p -> []([](p -> []p) ->   p) -> []p
<>[]p -> []([](p -> []p) -> []p) ->   p
<>[]p -> []([](p -> []p) ->   p) ->   p
```
A classical equivalent that is quite enlightening is:
```
[](p -> <>(~p /\ <>p)) -> p -> []<>p
```
In essence, Dummett's scheme is a weakening of Grzegorczyk's by the extra antecedent `<>[]p ->`. It is the only scheme of linear-time discrete temporal logic in which modalities occur nested three deep.

The axiom comes from some subset of Grzegorczyk, Prior, Geach, Dummett, Bull, Kripke and Lemmon. (See Hughes and Cresswell, An Introduction to Modal Logic, around pages 261-262, for some references. Also, p. lvii of Truth and Other Enigmas, by Dummett.) It is sometimes called axiom `D`, perhaps because it says that time is discrete, in the sense that a certain principle of `reverse' temporal induction holds, which argues from something's being inevitable in the future back towards its holding in the present (provided that something is precessive). (Or perhaps `D` is for Dummett? Or Diodorean?) There are some historical remarks about this axiom at the end of the Preface of Dummett's "Truth and Other Enigmas", 2nd edition, published by Duckworth in 1992.

(I was fortunate to attend lectures by Michael Dummett in the 1970s, on a wide range of subjects -- topological models of intuitionistic logic, Brouwer's proof of his Bar theorem, cut-elimination, and the constructive notion of proposition -- but not much on modal logic. Michael is a very inspiring man.)

As already mentioned, there are numerous slight variants of the axiom. I choose the one above because I like the pattern:

```
F(F(p,p),p), where F(p,q) = [](p->[]q)
```

Let's see what it means that ` p ` is "precessive". First, `[](p->[]p)` says that as soon as ` p ` becomes true, it remains true forever; it denies that ` p ` is ever true and subsequently false. (I think Chandy and Misra, in their book Parallel Program Design, call this stability.) Therefore, `Precessive(p)` says that as soon as stability holds, ` p ` in fact holds already (and hence forever). Contrapositively, whenever `NOT p` holds, it must be that sometime later ` p ` holds and subsequently `NOT p` - so ` p ` `wiggles about'.
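To make stability concrete, here is a minimal Python sketch (my own; the function name is not from any of the cited sources) that checks `[](p->[]p)` over a finite trace of truth values. A finite trace only approximates the intended infinite-time semantics, but it captures the "once true, always true" reading.

```python
def is_stable(trace):
    """Check [](p -> []p) over a finite trace: once p holds, it holds ever after.

    'trace' is a list of booleans, the truth value of p at successive instants.
    Returns False exactly when p is true at some instant and false at a later one.
    """
    seen = False
    for p in trace:
        if seen and not p:
            return False          # p was true earlier but is false now: a "wiggle"
        seen = seen or p
    return True

# Death is stable: once True, it stays True.
assert is_stable([False, False, True, True, True])
# A proposition that `wiggles about' is not stable.
assert not is_stable([False, True, False, True, True])
```

The two assertions mirror the death/mortality example below: a stable trace switches on at most once.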

If time instants were like the rational numbers (or, for that matter, the ordinal omega + 1), then we might have ` <>[]p ` and ` [](NOT(p) -> <>(p /\ <>NOT(p))) ` (the contrapositive of precessivity) but also ` NOT(p) `, as there'd be infinitely many instants of time in between for the wiggling about to happen, before eventually ` p ` holds forever.

For example, someday you will die. Once you are dead, you will stay dead. (Death is stable.) However, you are not dead yet. Therefore, death is not precessive. On the other hand, mortality is.

The notion of a state from which deadlock is inevitable (cf. mortality, a concavity in the state-space) plays a key role in program correctness, hence the interest of the Dum and Grz axioms for computer science.

# Grzegorczyk

(The following is taken from p 161, Ch 12 of The Logic of Provability by Boolos.) In his 1933 paper "An interpretation of the intuitionistic propositional calculus", Gödel asserted that if A is provable in the intuitionistic propositional calculus, then A' is provable in S4, where A' is a modal translation of A given by the table:
```
(~A)'     |->  ~[]A'
(A -> B)' |->  []A' -> []B'
(A \/ B)' |->  []A' \/ []B'
(A /\ B)' |->  A' /\ B'
```
He conjectured the converse; this was proved by McKinsey and Tarski. Grzegorczyk showed that the Gödel-McKinsey-Tarski result holds also if S4 is replaced by a modal system deductively equivalent to S4Grz, which is S4 with the axiom scheme:
```
[]([](p -> []p) -> p) -> p
```
This scheme (more or less) says that the accessibility relation in a Kripke frame has to be transitive, reflexive and converse weakly well-founded (i.e., every non-empty subset has a maximal element).
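The frame condition can be probed experimentally with a tiny Kripke evaluator (a sketch of my own, not taken from Boolos). On a finite reflexive, transitive frame, a proper cluster (two worlds each seeing the other, so converse well-foundedness fails) refutes Grz under a suitable valuation, while an antisymmetric chain does not. The check below is per-valuation; a single counter-valuation is enough to refute frame validity.

```python
def box(R, worlds, sat):
    """[]A holds at w iff A holds at every R-successor of w."""
    return {w for w in worlds if all(v in sat for v in R[w])}

def grz_holds(worlds, R, p):
    """Evaluate Grz, []([](p -> []p) -> p) -> p, at every world of a finite frame.

    'worlds' is a set, R maps each world to its set of successors,
    'p' is the set of worlds where the atom p is true.
    """
    imp = lambda a, b: (worlds - a) | b          # classical implication on sets
    box_p = box(R, worlds, p)
    inner = box(R, worlds, imp(p, box_p))        # [](p -> []p)
    ante = box(R, worlds, imp(inner, p))         # []([](p -> []p) -> p)
    return imp(ante, p) == worlds

W = {0, 1}
chain = {0: {0, 1}, 1: {1}}                      # reflexive, transitive, antisymmetric
cluster = {0: {0, 1}, 1: {0, 1}}                 # a proper cluster: 0 and 1 see each other
p = {0}                                          # p true only at world 0
assert grz_holds(W, chain, p)
assert not grz_holds(W, cluster, p)              # Grz fails at world 1 of the cluster
```

In the cluster, `[](p -> []p)` is false everywhere (world 0 satisfies `p` but not `[]p`), so the antecedent of Grz holds vacuously at world 1, where `p` fails.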

Grzegorczyk's axiom is interesting because it shares the same antecedent (of precessivity) as Dummett's; the conclusion is `A` instead of `<>[]A -> A`. Furthermore, it subsumes the axioms `[]A->A` (reflexivity) and `[]A -> [][]A` (transitivity) of S4. (There are short proofs of reflexivity and transitivity from Grz in Boolos' book.)

So-called Gödel-Löb (GL, or "provability") logic has the axiom (related to Löb's theorem about provability predicates in first-order arithmetic):

```
[]([]p -> p) -> []p
```
that is not a million miles from Grzegorczyk's axiom. A formula is provable from S4 with Grzegorczyk's axiom if and only if a certain "transform" of it (in which a box formula `[]A` is replaced by `A /\ []A`) is provable in GL logic. The details are in Boolos.
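The transform can be sketched as a small recursive function on formula trees. The tuple encoding below is my own; Boolos states the result for the translation that replaces each boxed subformula `[]A` by `A /\ []A`, applied recursively.

```python
# Formulas as nested tuples: ('var', 'p'), ('box', A), ('imp', A, B),
# ('and', A, B), ('or', A, B), ('not', A).
def split(f):
    """The 'splitting' transform: replace []A by A /\ []A, recursively."""
    op = f[0]
    if op == 'var':
        return f
    if op == 'box':
        a = split(f[1])
        return ('and', a, ('box', a))
    return (op,) + tuple(split(g) for g in f[1:])

p = ('var', 'p')
# [](p -> []p)  becomes  (p -> (p /\ []p)) /\ [](p -> (p /\ []p))
src = ('box', ('imp', p, ('box', p)))
inner = ('imp', p, ('and', p, ('box', p)))
assert split(src) == ('and', inner, ('box', inner))
```

The point of the transform is that `A` is provable in S4Grz iff `split(A)` is provable in GL; the code only computes the translation, it does not decide provability.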

Interestingly, GL logic is sound and complete for the class of converse-wellfounded Kripke frames. The subject of provability logic has been thoroughly researched by Boolos, Sambin, de Jongh, Solovay and others. (There's an account in a paper by Boolos and Sambin in Studia Logica.) More recently, Beklemishev has made use of a provability logic in which the modalities are "graded", to obtain an ordinal analysis of Peano arithmetic.

# Relation to bar theorem(?)

Brouwer's so-called bar theorem is the following. Let ` P ` be a property of finite sequences of elements of ` T `.
```
IF  1. ALL t : (Nat -> T) . SOME n . P(t[0],...,t[n-1])            -- part of <>[]p
    2. ALL (t[0],...,t[n-1]) .
       2.1. P(t[0],...,t[n-1]) -> ALL x . P(t[0],...,t[n-1],x)     -- another part
       2.2. P(t[0],...,t[n-1]) -> Q(t[0],...,t[n-1])
       2.3. (ALL x . Q(t[0],...,t[n-1],x)) -> Q(t[0],...,t[n-1])
THEN Q()
```
(Actually this is "monotone" bar induction.) If we delete the first hypothesis, the principle asserts the validity of a certain kind of inductive argument.
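For a finite ` T ` and a decidable ` P `, the inductive content of the scheme can be sketched as a recursion over finite sequences (the names `base` and `step`, for clauses 2.2 and 2.3, are mine; termination of the recursion is exactly the assumption that ` P ` is a bar, i.e. clause 1).

```python
def bar_induct(T, P, base, step):
    """A sketch of (monotone) bar induction for a finite alphabet T.

    P    : predicate on tuples, assumed to be a bar (every infinite
           sequence over T has a finite prefix satisfying P).
    base : Q at barred sequences (clause 2.2).
    step : clause 2.3, combining the values Q(s + (x,)) for all x in T
           into Q(s).
    Returns Q(()), the conclusion of the scheme.
    """
    def q(s):
        if P(s):
            return base(s)
        return step(s, [q(s + (x,)) for x in T])
    return q(())

# Toy example: P bars every sequence of length 2 over {0, 1};
# Q counts the barred sequences above a node.
count = bar_induct(
    T=[0, 1],
    P=lambda s: len(s) >= 2,
    base=lambda s: 1,
    step=lambda s, subs: sum(subs),
)
assert count == 4
```

If ` P ` were not a bar (say, never true along some branch), the recursion would fail to terminate, which is the computational shadow of clause 1.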

Brouwer's justification for his bar theorem is thoroughly analysed by Michael Dummett, in his book "Elements of Intuitionism". To me, the `theorem' is a mathematical analysis of the idea of inevitability.

• The first ` (Pi-1-1) ` clause says that no matter how diabolically you choose successive elements of an infinitely proceeding sequence, eventually you accumulate an initial segment satisfying ` P `. You can use free choice, Geiger counters, cunning, cussedness, coin-tossing, opinion polls, ... in any combination: there is (to put it rather non-constructively) no escape. Somehow, this is not a mathematical notion - but none the worse for that! (This is obviously close to the idea of having a strategy for winning a game, as it were, against the devil.)
• The rest of the theorem asserts the validity of wellfounded induction up to the empty sequence ` () ` in a certain ordering ` > ` of finite sequences. The (partial) order ` > ` is defined by the inductive clauses:
```
() > (t,ts)                    -- A sequence is preceded by its extensions
ts > ts' -> (t,ts) > (t,ts')   -- nesting
```
(The Kleene-Brouwer ordering -- see p. 396 of Theory of Recursive Functions and Effective Computability by Rogers, McGraw-Hill, 1967 -- of finite sequences is a linear extension of this ordering.) This relation is of course very far from being well-founded (when ` T ` is non-empty). However, its restriction to `unsecured' sequences, which do not (yet) satisfy ` P `, is well-founded. The mathematical analysis of the `inevitability' of ` P ` which I think Brouwer proposes is that the empty sequence (which is the supremum of the order) is accessible in the inductive sense from sequences which satisfy ` P `.

I cannot resist thinking that the same notion is behind Brouwer's bar theorem, and axiom D. The bar-theorem seems to be something like

```
[]<>p ->
[](p -> q) ->
[]([](q -> []q) -> q) ->
q
```
If that isn't wholly misleading, then `[](q -> []q)` ought to be something like a functor of `q`.

# Wellfoundedness and temporal logic

There's something else which puzzles me, namely that you can express the wellfoundedness of a relation ` < ` using temporal operators.

Introduce the following abbreviations (which come from Lamport's TLA):

```
[ A ]_x = x' # x => A
< A >_x = x' # x /\ A
```
(where `A` is a formula, `x` is a variable, and `#` means "differs from"). The "box" operator says that any state-change detectable in the variable ` x ` is as described by the formula ` A `; the diamond operator says that a state-change occurs which is both detectable in ` x ` and as described by ` A `. Then the following assertions are both (classically) equivalent to the wellfoundedness of ` < `.
```
[][ x' < x ]_x  => <>[][ False ]_x
[]<>< True >_x  => []<>< NOT (x' < x) >_x
```
The first says that if all changes to ` x ` are reductions in the order ` < `, then eventually `x` stops changing. The second says that if `x` changes infinitely often, then infinitely many of those changes are not reductions.
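The first assertion can be illustrated on finite runs: if every non-stuttering step strictly decreases ` x ` in a well-founded order (the naturals, below), the run must stabilize. Here is a sketch of my own; the `fuel` bound is merely a simulation artifact, not part of the logic.

```python
def run(x, step, fuel=10_000):
    """Iterate a step function whose every change strictly decreases x.

    The assertion inside the loop is [][x' < x]_x: every non-stuttering
    step is a decrease. Over the naturals (a well-founded order) the
    trace must therefore stabilize -- <>[][False]_x in the TLA reading.
    """
    trace = [x]
    for _ in range(fuel):
        x2 = step(x)
        if x2 == x:                 # stuttering step: x is unchanged, we're done
            break
        assert x2 < x               # [][x' < x]_x: every change is a decrease
        x = x2
        trace.append(x)
    return trace

# Counting down by 3 from 10 stabilizes at 0.
trace = run(10, lambda x: max(0, x - 3))
assert trace == [10, 7, 4, 1, 0]
```

Were the order not well-founded (the rationals under `<`, say), the loop could keep decreasing forever, which is exactly the failure of the first equivalence.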

I think that Stephan Merz and Martin Abadi have noted the possibility of expressing the well-foundedness of a relation (which is conventionally defined using quantifiers over power sets) using linear-time, discrete temporal logic. (See the paper on the logic of TLA, p. 21.) The remark is almost a platitude (like much else truly worth knowing), but it's still rather curious.

The cleanest description I have seen of TLA as a logic is by Merz: on the completeness of propositional raw tla.

There seems to be a direct connection between temporal logic and well-foundedness (or at least with inductive arguments of a certain kind). This connection is manifested in the use of well-founded induction in "counting down" arguments to show that an iterative process terminates. B. Alpern and F. Schneider (Recognizing safety and liveness. Distributed Computing 2, 3 (1987), 117-126) appear to suggest that proofs of liveness properties are in essence proofs by well-founded induction (and perhaps even vice versa).
