Elementary arithmetic as syntactical operations

Suppose we use a `unary' notation for numbers: 0, S(0), S(S(0)), etc. Then an expression for a number is some expression in which `0' and `S' occur which, when fully evaluated, is somewhere in the series above. Here is how you can simulate addition, multiplication, and exponentiation as simple syntactical operations on expressions, with no use of recursion in any form.

I write a[x <- b] for the result of substituting b for (free occurrences of) x in a . For simultaneous substitution I write a[x, y <= b, c] .

I use the notation \ x -> a for the function whose value at x is a (ie. lambda abstraction). I write \ x y -> e instead of \ x -> \ y -> e . I regard S and 0 as bindable variable names.


To add b to a, do b, but starting with the result of doing a instead of zero.

     a + b = b [ 0 <- a ] 
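This reading can be checked with Church-style numerals, where a numeral is the function that applies a successor some number of times to a starting point; substituting a for 0 in b then literally means running b starting from a. A minimal Haskell sketch (the names `Church', `suc', `toInt' are mine, not from the text):

```haskell
-- A Church-style numeral applies a successor s some number of times to z.
type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ z = z

suc :: Church a -> Church a
suc n s z = s (n s z)

-- a + b = b[0 <- a]: do b, but start from the result of doing a
-- instead of from zero.
add :: Church a -> Church a -> Church a
add a b s z = b s (a s z)

-- Read back an ordinary Int, instantiating S as (+ 1) and 0 as 0.
toInt :: Church Int -> Int
toInt n = n (+ 1) 0
```

For example, toInt (add (suc (suc zero)) (suc (suc (suc zero)))) evaluates to 5.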


To multiply a by b, do b, but where it requires you to do S, do instead the operation you get by abstracting from the starting point of a.

     a * b = b [ S <- (\ 0 -> a) ] 
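In the Church-style reading, `abstracting from the starting point of a' is the function \x -> a applied from x, i.e. a with its successor fixed but its starting point left open. A self-contained Haskell sketch (names again my own):

```haskell
type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ z = z

suc :: Church a -> Church a
suc n s z = s (n s z)

-- a * b = b[S <- (\ 0 -> a)]: do b, but where b would do one S,
-- instead do all of a, starting from wherever we currently are.
mul :: Church a -> Church a -> Church a
mul a b s z = b (\x -> a s x) z

toInt :: Church Int -> Int
toInt n = n (+ 1) 0
```

For example, toInt (mul (suc (suc zero)) (suc (suc (suc zero)))) evaluates to 6.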


Exponentiation is trickier to explain in prose. Our system of expressions has to have a notion of application: `... applied to ...'.
     a ^ b = b [  0, S  <=  S, (\ S 0 -> a)  ] applied to 0  
As with addition and multiplication, we do something b times. But what we do is `higher order' than S : it is the operation f |-> f^a of iterating an operation of the same type as S a times. You perform this operation b times, starting with S . The exponential is formed by applying the resulting iterate of S to 0 .

Note that the substitution here does not preserve type, though it does preserve well-typedness.
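The type shift is visible if the same thing is written with Church-style numerals in Haskell: the exponent numeral must be used at type Church (a -> a), one level up from the base. A sketch (names my own):

```haskell
type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ z = z

suc :: Church a -> Church a
suc n s z = s (n s z)

-- a ^ b: the exponent b iterates, b times, the higher-order operation
-- "iterate a times" (which is just the numeral a itself, read at the
-- higher type (a -> a) -> (a -> a)), starting from S; the resulting
-- iterate of S is then applied to 0.
expo :: Church a -> Church (a -> a) -> Church a
expo a b s z = b a s z

toInt :: Church Int -> Int
toInt n = n (+ 1) 0
```

Since zero and suc are polymorphic, the same numeral expression can be used at either type: toInt (expo (suc (suc zero)) (suc (suc (suc zero)))) evaluates to 8.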

Because higher order operations are essentially involved, exponentiation represents a kind of "Pons Asinorum" in arithmetic.


What might it mean to represent subtraction, division, and logarithms as syntactical operations? ``Oleg'' someone (I don't know his full name) has an interesting collection of web pages about these and other matters. One idea of Oleg's is to use fixed points: -a is a fixed point of \x -> a + x*2, 1/a is a fixed point of \x -> x * (a+1) - 1, and log_a b is a fixed point of \x -> a ^ x + x - b. He talks about least fixed points, but I think (integer, ie. cutoff) subtraction, division, and logarithms are really adjoints to addition, multiplication, and exponentiation, ie.
           a + x  .leq.  b    <=>  x  .leq.  b - a
           a * x  .leq.  b    <=>  x  .leq.  b / a
           a ^ x  .leq.  b    <=>  x  .leq.  log_a b 
and they should be characterised by greatest fixed points. ``Oleg'' has other ideas on this topic.
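Numerically, each adjunction says the cutoff inverse of an operation is the greatest x satisfying the left-hand inequality. A Haskell sketch of that characterisation by direct search (the names and the search are my own illustration, not Oleg's; it assumes the function eventually exceeds b, so a >= 1 for division and a >= 2 for logarithm):

```haskell
-- Greatest x with f x <= b, for monotone f; 0 if even f 0 exceeds b
-- (this is the "cutoff" behaviour, e.g. b - a = 0 when a > b).
greatest :: (Integer -> Integer) -> Integer -> Integer
greatest f b
  | f 0 > b   = 0
  | otherwise = last (takeWhile (\x -> f x <= b) [0 ..])

cutoffSub :: Integer -> Integer -> Integer   -- b - a: greatest x with a + x <= b
cutoffSub b a = greatest (a +) b

cutoffDiv :: Integer -> Integer -> Integer   -- b / a: greatest x with a * x <= b
cutoffDiv b a = greatest (a *) b

cutoffLog :: Integer -> Integer -> Integer   -- log_a b: greatest x with a ^ x <= b
cutoffLog b a = greatest (a ^) b
```

For example, cutoffDiv 7 3 is 2, and cutoffLog 100 3 is 4 (since 3^4 = 81 <= 100 < 243).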

What would it take to represent the following `tower' function x ^^ y as a syntactical operation? x ^^ y = x ^ x ^ ... ^ x with y ^'s.
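As a plain numeric function (leaving the syntactical question open), the tower reads as follows, following the convention above that y counts the `^'s, so that x ^^ 0 = x:

```haskell
-- x ^^ y = x ^ x ^ ... ^ x with y occurrences of ^ (right-associated),
-- i.e. y applications of (x ^) starting from x itself.
tower :: Integer -> Integer -> Integer
tower x y = iterate (x ^) x !! fromIntegral y
```

For example, tower 2 2 is 2 ^ 2 ^ 2 = 16.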

Peter Hancock
Last modified: Sun Nov 11 13:20:04 GMT 2001