# Ordinal notation systems

Rough notes.

An ordinal notation system is, unsurprisingly enough, a way of writing down an initial segment of the ordinals. But what is an ordinal, and what does it mean to "write them down"?

A quick definition. An ordinal notation system (in the weakest, least controversial sense) is a coalgebra g : S -> B(S) for the functor B(X) = 1 + X + X^w, in which all elements are accessible, i.e. satisfy every progressive predicate over S. So each element of S belongs to one of three categories, and has a sequence of predecessors which is either empty, a singleton or countably infinite, depending on its category. I use the notations 0, a+1 and as to indicate the three forms of expression. A predicate P : S -> Set is progressive if it satisfies P' subset P, where


    P'(s) = CASE g(s) OF
              0   |-> {}
              a+1 |-> P(a)
              as  |-> all i . P(as[i])

In other words, a progressive predicate is one which holds of a notation provided that it holds for all predecessors of that notation; the accessible notations are those built up 'from below', in which all chains of successive predecessors eventually stop. Each element in an ordinal notation system (of course) denotes a countable ordinal, i.e. an element of the initial algebra for B, in the obvious way.
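This shape is easy to sketch in code. Below is a minimal Python rendering (the names Zero, Succ, Lim, finite and chain_length are mine, not standard): a notation is a tagged value whose predecessors are given by the coalgebra structure itself, and accessibility says that every chain of predecessor steps, however the branches at limits are chosen, terminates.

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass(frozen=True)
class Zero:                      # no predecessors
    pass

@dataclass(frozen=True)
class Succ:                      # exactly one predecessor
    pred: "Ord"

@dataclass(frozen=True)
class Lim:                       # an omega-sequence of predecessors
    seq: Callable[[int], "Ord"]

Ord = Union[Zero, Succ, Lim]

def finite(n: int) -> "Ord":
    """The notation for the finite ordinal n."""
    o: "Ord" = Zero()
    for _ in range(n):
        o = Succ(o)
    return o

# omega, as the limit of the sequence 0, 1, 2, ...
omega = Lim(finite)

def chain_length(o: "Ord", n: int) -> int:
    """Follow predecessors, taking the n-th branch at every limit.
    Accessibility says every such chain is finite."""
    steps = 0
    while not isinstance(o, Zero):
        o = o.pred if isinstance(o, Succ) else o.seq(n)
        steps += 1
    return steps
```

Of course a Python function in Lim only represents the countable branching; nothing here certifies accessibility, which is exactly the point of the progressive-predicate condition.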

According to this analysis, a notation is necessarily for a countable ordinal. Only for these can we hope to write down all their predecessors; part of the notion of a notation system, and of writing down an ordinal, is that if we can write something down, then we should be able to write down all its predecessors.

A different approach is taken by Anton Setzer, who has developed a highly interesting notion of 'ordinal system' to analyse various systems of ordinal notations.

As originally conceived (by Cantor, in the 1880s, set out first in a work whose title begins "Grundlagen .." -- I have not checked the history) an ordinal is an order type, i.e. an isomorphism class of well-founded linear orders. Such an order is a pair (S,<) where S is a set and < is a binary relation on S which is transitive ( a < b /\ b < c => a < c ), linear (aka. total, connected, trichotomous, ...) ( a < b \/ a = b \/ b < a ), and such that any nonempty subset of S contains a least element. A morphism between two such orders is the obvious thing: a function between the underlying sets which preserves the relation < . An isomorphism class is an equivalence class of orders with respect to the equivalence relation defined by the existence of a pair of inverse morphisms. When set theory had developed sufficiently (about half a century later), von Neumann was able to propose the identification of these equivalence classes with certain 'canonical' representative orders (in which the underlying set is the set of proper predecessors, and the order relation is set membership).

More than a century later, the concept of well-foundedness identified by Cantor has turned out to be among the most subtle of mathematical concepts. One subtlety relates to the fundamental incompleteness of formal systems. Not only is first order arithmetic incomplete, but we can in a meaningful way put a number on how incomplete it is - the least proper bound on the ordinals which can be recognised as well-founded, with proofs of accessibility within the system. In the case of first order arithmetic, this degree of incompleteness can be identified with a certain transfinite number, with the unexciting name $\epsilon_0$. This number was first identified by Cantor, on arithmetical grounds, about half a century before the relevance of transfinite arithmetic to the incompleteness of formal systems was contemplated. It is as if the "limits of language", or at any rate of a particular circumscribed formal system, are in some sense susceptible of quantitative analysis. What is astounding isn't that there is such a countable ordinal (which is obvious, given that there can be at most countably many proofs of accessibility) but that the ordinal should have a natural arithmetical description.

Ordinal analysis is the program of analysing the strength of formal fragments of mathematics, where strength can be thought of as capacity to recognise or capture well-foundedness. A number of different connections between ordinals and systems have been studied. It is infamous that ordinal analysis stands in need of a clear notion of 'natural well-ordering'; that this need has been recognised for about half a century; but that no attempt to capture this notion mathematically has succeeded. Some explanation and discussion of this situation can be found in survey papers by the proof theorist Michael Rathjen.

## What is an ordinal?

Many questions present themselves first in the heroic (ontological) form "What is .. ?", and seem to demand a heroic answer. We can ask instead more practically oriented questions, whose content comes to the same thing. What are ordinals for? Why do we need them? It is one thing to characterise them by their structure, but what is so important about that structure? How do we know that a formal definition has hit the nail on the head?

The central thing here is well-foundedness. (However, something else is connected with the ordinals being canonical - they should be 'perfect' sets, where perfection includes transitivity, and perhaps other things.)

There are two "sides" to the concept of well-foundedness. I call these synthetic, and analytic. On the synthetic side, we think of building something "up", or "forwards in time"; on the analytic side, we think of digging "down" into its structure, going "backwards in time".

• That something is well-founded in the synthetic sense means intuitively that it can be completely produced/exhausted or constructed without any kind of cheating, or smuggling in anything which is not plainly declared up-front to be an ingredient. Eventually, you are done producing the thing.
• That something is well-founded in the analytic sense means that if you probe backwards more and more deeply into its structure, your investigations will eventually come to an end, no matter how they turn out.
What we have here is the same thing looked at from two different sides, or pictures which are complementary rather than competing. It is impossible to slide a cigarette-paper between the idea of producing something "completely", and the idea of the satisfactory conclusion of any investigation into its structure. They are different sides of the same coin.

[ Or rather, one corresponds to the physics of the coin, and the other to its monetary value. It is the physical shape and appearance, and metallic construction of a coin that makes it work in a coin-operated ticket-machine -- but however sophisticated the technology with which you study the metallurgy of a coin, or the physical makeup of a bank note, you will not find its monetary value. ]

The following are some ways in which one can formulate the notion of well foundedness.

• Consider a set S and a binary relation  <  on that set. (The relation need not be transitive, or linear.) We can say that S is well-founded with respect to < if every non-empty subset P of S contains an element  s such that the set
                  { y : S | P(y) /\ y < s }

is empty. More formally:
               for all P : Pow(S) .
                   P =/= {}  =>  for some s : S . Phi(P,s)
where
                   Phi(P,s) = P(s) /\ { y : S | P(y) /\ y < s } = {}

Note that if one writes Q for the complement of P , then the body of this quantification over the powerset of S is classically equivalent to
            (for all s. (for all y . y < s => Q(y)) => Q(s)) => Q = S

Here's how:
            P =/= {} => for some s. P(s) /\ { y : S | P(y) /\ y < s } = {}
          = { re-express }
            P =/= {} => for some s. (for all y . y < s => Q(y)) /\ P(s)
          = { def Q }
            Q =/= S => for some s. (for all y . y < s => Q(y)) /\ -Q(s)
          = { -(A=>B) = A /\ -B }
            Q =/= S => for some s. - [(for all y . y < s => Q(y)) => Q(s)]
          = { de Morgan }
            Q =/= S => - (for all s. (for all y . y < s => Q(y)) => Q(s))
          = { contraposition }
            (for all s. (for all y . y < s => Q(y)) => Q(s)) => Q = S
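For a finite carrier both formulations - minimal elements in every non-empty subset, and every progressive Q being all of S - can be checked by brute force over the powerset. A toy sanity check (all names mine), not a proof of the equivalence:

```python
from itertools import chain, combinations

def subsets(S):
    xs = list(S)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def wf_minimal(S, lt):
    """Every non-empty P contains an s with no <-predecessor inside P."""
    return all(any(all(not lt(y, s) for y in P) for s in P)
               for P in subsets(S) if P)

def wf_inductive(S, lt):
    """Every progressive Q is all of S."""
    S = set(S)
    for Q in map(set, subsets(S)):
        progressive = all(s in Q for s in S
                          if all(y in Q for y in S if lt(y, s)))
        if progressive and Q != S:
            return False
    return True

S = {0, 1, 2, 3}
strict = lambda a, b: a < b              # well-founded on a finite set
loop = lambda a, b: (a, b) == (0, 0)     # 0 < 0: a one-step descending cycle
```

On finite examples the two tests always agree; the constructive daylight between them only opens up on infinite carriers.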


From a constructive point of view, the latter formulation is the "real" one. It expresses the validity of a certain kind of inductive argument, with induction hypothesis Q . If Q is progressive, in the sense that the set


{ s : S | for all y : S . y < s => Q(y) }

(the set of s's all of whose immediate predecessors satisfy Q ) is actually included in  Q (this amounts to a generalised induction step), then Q holds throughout S .

Note that the operation Q |-> { s : S | for all y : S . y < s => Q(y) } on the powerset of S is monotone (with respect to the inclusion order). Elements of S in the (impredicative) intersection of all progressive sets are sometimes called accessible, or (confusingly) well-founded. (Perhaps one should reserve that term for relations ... in one form or another.)
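On a finite carrier the accessible elements can actually be computed, by iterating this monotone operator from the empty set until it closes up - a least-fixed-point sketch (the example relation and the name accessible are mine):

```python
def accessible(S, lt):
    """Least fixed point of Q |-> { s | every <-predecessor of s is in Q }."""
    Q = set()
    while True:
        Q2 = {s for s in S if all(y in Q for y in S if lt(y, s))}
        if Q2 == Q:
            return Q
        Q = Q2

S = {0, 1, 2, 3}
# 0 < 1 < 2, but 2 and 3 form a cycle sitting above them
lt = lambda a, b: (a, b) in {(0, 1), (1, 2), (2, 3), (3, 2)}
```

Here 0 and 1 are accessible, while the cycle between 2 and 3 keeps both of them out of every iterate.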

• What does it mean that S is not well-founded with respect to a relation < ? The first thing we can say is that S is non-empty, as all relations on the empty set are well-founded. But moreover, there must be an infinitely proceeding sequence s, s', s'', .. of (not necessarily distinct) values, in which each successive value <-precedes the one before it:
                        ... < s'' < s' < s

(Note that a set which isn't well-founded with respect to a transitive irreflexive relation must be infinite.) So we can also formulate the notion of well-foundedness of a set by using the idea of an infinitely descending sequence, or rather of the absence of one - the set is well-founded with respect to the relation if there is no such infinitely descending sequence.

We can express this using the set Nat -> S of functions with the set Nat of natural numbers as domain, with values in S :


for all f : Nat -> S. exists n : Nat . not (f(n+1) < f(n))
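On a finite set this Pi-1-1 condition collapses to something decidable: an infinitely descending sequence of (not necessarily distinct) values exists exactly when the graph of < contains a cycle. A small sketch (all names mine):

```python
def has_descending_sequence(S, lt):
    """Finite case: an infinite descending sequence exists iff some chain of
    predecessor steps revisits an element, i.e. the graph of < has a cycle."""
    S = list(S)
    def dfs(s, path):
        if s in path:
            return True
        return any(dfs(y, path | {s}) for y in S if lt(y, s))
    return any(dfs(s, frozenset()) for s in S)

S = {0, 1, 2}
strict = lambda a, b: a < b                      # no descending sequence
cyc = lambda a, b: (a, b) in {(1, 2), (2, 1)}    # ... < 1 < 2 < 1 < 2
```

The interesting cases in the text are precisely the infinite ones, where no such decision procedure exists and everything turns on what counts as a sequence.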

However, let us try to take seriously the informal concept we are trying to capture, and ask whether an infinitely proceeding sequence is really the same thing as a function in Nat -> S . What about an 'empirical' sequence produced by consulting a Geiger counter, or the audience for MTV, or the entrails of chickens? What if the devil gets to choose the successive terms?

On the more technical side there are many clever examples which show how sensitive the interpretation of this Pi-1-1 formula is to the domain of the "for all" quantifier. So for example we can have relations that are not well founded, yet contain no primitive recursive descending sequences. We can have orders which contain no recursive descending sequence, yet the lexicographic order of their descending sequences has itself a primitive recursive descending sequence.

Do you feel you have understood the concept of well foundedness any better when it is explained with a quantification over function spaces instead of a quantification over powersets? These quantifications are "incommensurable".

Clearly the inductive formulation is as strong as the Pi-1-1 form, as the property of starting no infinitely descending chains is progressive. The question of the reverse implication is very close to the issues that underlie Brouwer's justification of his bar theorem, mentioned at the end of this note. Essentially, Brouwer wanted to license constructively the inference from the Pi-1-1 statement to the inductive formulation -- and justified this by meditation on the "possible form of a fully analysed canonical proof" of the Pi-1-1 statement. This is an over-simplification; a complete account is in pages 446-463 of van Heijenoort's book From Frege to Gödel.

• There is more than one way to model a binary relation. It is interesting to see how far we can go without invoking an equality relation given uniformly across all potential data types.

Another approach to modelling binary relations is described in 'transition systems'. The descriptions of well-foundedness we get centre round the following two predicate transformers:

                 Phi(X) = { s : S | Pi s' . s' < s => X(s') }
                 Psi(X) = { s : S | Pi t : T(s) . X(s[t]) }

Of course, Phi can be put in Psi form, by taking T(s) = (Sigma s' : S) s' < s , and for s[t] the projection \ (s',p) -> s' which throws away the proof p that s' immediately precedes s . To express Psi in Phi form, we would have to define s' < s to mean Sigma t : T(s) . s[t] = s' , and for that we would have to suppose that we are given an equality relation on S .
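The Phi-to-Psi direction can be illustrated concretely on a finite carrier, where the branching type T(s) is just a range of indices into a list of predecessors and the lookup plays the role of s[t]; no equality on the carrier is needed for this direction. All names below are mine:

```python
# Relation form: the set of immediate predecessors of each element.
preds = {0: set(), 1: {0}, 2: {0, 1}}

# Transition-system form: a branching type T(s) of indices, and a lookup
# playing the role of s[t].
branch = {s: sorted(ps) for s, ps in preds.items()}

def T(s):
    return range(len(branch[s]))

def lookup(s, t):                # s[t]
    return branch[s][t]

def Phi(X):
    return {s for s in preds if all(y in X for y in preds[s])}

def Psi(X):
    return {s for s in branch if all(lookup(s, t) in X for t in T(s))}
```

By construction the two transformers agree on every X here; the asymmetry only shows up in the Psi-to-Phi direction, which needs equality.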
• The notion "next". In Cantor's first descriptions of well-orders, he stressed that any family which is bounded above should have a definite least proper bound, or next element. If we do not insist on a linear order (which is a very strong decidability property of an order), then it is interesting to look more closely at what this amounts to, as it will be connected with extensionality, and perhaps directedness.

A binary relation R on S (say R(s',s) when s' is immediately related to s) is extensional if the following relation \subseteq is a partial order.


s1 \subseteq s2  ==  (Pi s : S) R(s,s1) -> R(s,s2)


It is obviously reflexive and transitive. The point is that it should be antisymmetric, so that states s are actually equal if they have (extensionally) the same backward image { s' : S | R(s',s) } . Then we can think of each state as 'nothing more than' the set of states immediately related to it.
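Antisymmetry of \subseteq is decidable for a finite relation. A sketch of the check (names mine), with one extensional example and one in which two states share their backward image:

```python
def extensional(S, R):
    """R is extensional iff distinct elements have distinct backward images,
    i.e. the preorder 'backward image inclusion' is antisymmetric."""
    def back(s):                     # { s' : S | R(s', s) }
        return frozenset(y for y in S if R(y, s))
    def below(s1, s2):               # s1 \subseteq s2
        return back(s1) <= back(s2)
    return all(a == b or not (below(a, b) and below(b, a))
               for a in S for b in S)

S = {0, 1, 2}
mem = lambda a, b: (a, b) in {(0, 1), (0, 2), (1, 2)}   # membership-like
dup = lambda a, b: (a, b) in {(0, 1), (0, 2)}           # 1, 2 indistinguishable
```

In the second example the states 1 and 2 have the same backward image {0}, so thinking of a state as 'nothing more than' its predecessors would force us to identify them.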

## What is it to write down an ordinal?

(Notes)

To write down an ordinal is surely to give a notation, and that is something finite, a concrete data structure, with decidable relations of syntactic equality and inclusion.

If we can write down an ordinal, then surely we can write down 'any' of its predecessors - but perhaps in a subtle sense, something like 'as many as you like', and in a delicate sense of 'predecessor'.

A notation system such as Cantor's (or one based on it) is actually a calculus, in which there is a notion of bringing expressions to normal form, and an algorithm for comparing expressions in normal form.

Part of what is meant by 'normal' form is that a (syntactical) difference in normal form should imply a difference in value.

The calculus here lets us compute a specified predecessor of a given expression in normal form. There are, as it were, three types here: Zero, Successor, and Limit, and for each a computation rule.

• If a is zero, then there is no predecessor.
• If a is a successor, there is exactly one predecessor a[0] .
• If a is a limit, there is a predecessor  a[n] for every natural number n .
We then set up computation rules (or an 'operational semantics') for non-normal expressions involving the predecessor operator a[...] .
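The three computation rules can be sketched for the fragment of Cantor normal form below omega^omega, representing a notation by its list of coefficients, cs[i] being the coefficient of omega^i. The names and the choice of fundamental sequence (e.g. omega[n] = n; some conventions use n+1) are mine:

```python
def kind(cs):
    """Classify a notation: zero, successor, or limit."""
    if not any(cs):
        return "zero"
    return "successor" if cs[0] > 0 else "limit"

def pred(cs, n):
    """The predecessor a[0] of a successor, or the n-th predecessor a[n]
    of a limit, via the fundamental sequence (omega^j)[n] = omega^(j-1)*n."""
    cs = list(cs)
    if kind(cs) == "successor":
        cs[0] -= 1                   # a = b + 1  |->  b
    else:
        j = next(i for i, c in enumerate(cs) if c)
        cs[j] -= 1                   # peel off one omega^j ...
        cs[j - 1] = n                # ... and replace it by omega^(j-1) * n
    return cs
```

So [0, 1] (i.e. omega) is a limit whose n-th predecessor is the finite ordinal n, and [0, 0, 1] (i.e. omega^2) has omega * n as its n-th predecessor; on zero the predecessor operator is simply undefined, matching the first rule above.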