# Families

I use the word `family' in the sense of `indexed family', that is, a pair consisting of an index set and a function defined on it. A family is always a family of somethings, or `A`'s, where `A` is a type. If `A` is a type, then `Fam(A)` is a type.
```
Fam : Type -> Type
Fam A = (Sigma I : Set) I -> A
```
So if `A` is a type, a family of `A`'s has the form:
```
(I,g)   where   I is a set -- the `index' set
                g : I -> A -- the `generic' function
```
The function is an `A`-valued and set-domained function, meaning that the domain can be any set (i.e. small type).

In terms of category theory, one can extend `Fam` to a functor on the superlarge category of types and functions, by giving it the following action on arrows.

```
Fam(f : A -> B) = \ (I,g) -> (I, \ i -> f(g(i)))
                : Fam A -> Fam B
```
This action is sometimes (as in left section notation) written `(f .)`, where `.` denotes composition. (The types `Fam(A)` themselves have a categorical structure, with arrows given by functions between index sets, as with slice categories.)
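To spell out the categorical structure just mentioned, here is a sketch in the same informal notation: an arrow between two families of `A`'s is a function between their index sets that commutes with the generic functions. (Note that writing the commuting condition already presupposes some equality relation `=` on `A`, a point taken up below.)

```
-- an arrow (I,g) -> (J,h) in Fam(A) is a reindexing function
-- r : I -> J  such that the evident triangle commutes:
Hom((I,g),(J,h)) = (Sigma r : I -> J) (Pi i : I) g i = h (r i)
```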

The `alter ego' of `Fam` is the contravariant functor `Pow`, defined as follows.

```
Pow(A : Type)   = A -> Set : Type
Pow(f : A -> B) = \ P, a -> P(f(a))
                : Pow(B) -> Pow(A)
```
In this case, the action of the functor on arrows can be written as a right section: `(. f)`. An object of type `Pow(A)`, which I call a predicate over `A` (but see some terminological scruples below), is an `A`-domained and `Set`-valued function, meaning that the values of the function are sets.

Clearly, if we have `P : A -> Set`, we can get a family `((Sigma a : A) P a, \ (a,p) -> a)`: so you can get a family of `A`'s from a predicate of `A`'s. But to go the other way round, from a family to a predicate, requires a relation on `A`, such as an equivalence relation. Given a genuine relation `=`, we can form from a family `(I,g)` the predicate

```
\ a -> (Sigma i : I) g i = a
```
Until we have some such genuine relation as `=`, we cannot actually say (form the proposition) that something is a member of a family. Nor therefore can we say that one family is included in or intersects with another. On the other hand, we can say that a family is included in a predicate (`(Pi i : I) P (g i)`), or intersects with one (`(Sigma i : I) P (g i)`).

From the perspective of constructive type theory, the notion of powerset is if anything even more subtle than it is in ZF set theory. There are, as it were, genuine or `full-blooded' (`Pow`) subsets of a set, picked out comprehension-style by predicates, and virtual or `weak' (`Fam`) subsets of a set, described replacement-style as the set of values attained by a function varying over an index set. In Zermelo-Fraenkel set theory, there is a single domain of values, with an extensional epsilon relation, and one passes between these two kinds of subset almost without noticing. However, the equivalence relations that arise in nature do not seem to fall in generic form from the sky, but have to be constructed, case by domain-specific case.

In category theory, one sometimes models a family of `sets' using fibrations: that is, instead of a family `(A : Set, B : A -> Set)`, one considers a function `f : X -> A`, thought of as representing the set-valued function `a |-> (Sigma x : X) f(x) = a` that maps each element `a` of `A` to the `fibre' of `f` over `a`, consisting of the `x : X` that `f` sends to `a`. The family `(A,B)` is represented by (for example) the first projection function `\ (a,p) -> a : ((Sigma a : A) B(a)) -> A`. This works fine, of course, but only if we can assume that a set comes equipped with an equality relation.
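The two passages can be sketched in the notation used above; `fibre` presupposes an equality relation on `A`, which is exactly the point of the caveat:

```
-- from a function to its family of fibres (needs = on A):
fibre ( f : X -> A ) = \ a -> (Sigma x : X) f x = a   : A -> Set

-- from a family (A,B) to its `display map':
display (A,B) = \ (a,p) -> a   : ((Sigma a : A) B a) -> A
```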

Examples of families are:

• The empty family. The index set is the empty set. It is not really necessary to say what `A` is here, as we shall never encounter a member of the family.
• Singleton families. The index set is `the' singleton, i.e. some standard terminal object in the category of sets. If `a : A`, then there is an arrow from this singleton set whose value is `a`. (Indeed, category theorists would say that an element of `A` is nothing but an arrow with domain this singleton and codomain `A`.)
• Predicates. Any predicate is a family `(S,P)` of sets, where `S` is the base set, index set, or domain of the predicate, and `P : S -> Set`. Very often, the base set `goes without saying', and so one (harmlessly) confuses the predicate with its second component `P`, the set-valued function.
• The `successor' of a family of sets. Given an indexed family of sets `(A,B)`, we can form a new family `(A',B')` by adding one further element to the index set `A`, and mapping that new element to `A` itself.
```
A'         = A + 1
B' (inl a) = B a
B' (inr 0) = A
```
In a sense, this operator on families is akin to the von Neumann successor operation `\ x -> x \cup { x }`.

Note that the least family of sets which is closed under the successor operation is the family over the set of natural numbers, which assigns to `n : Nat` a standard finite set of cardinal `n`. It seems appropriate to call this the family of finite sets. This construction is really part of the construction of the successor of a transition system.
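As a sketch, the family of finite sets is `(Nat, Fin)`, where `Fin` arises by iterating the successor step from the empty family (writing `0` for the empty set and `S` for the successor on `Nat`):

```
Fin : Nat -> Set
Fin 0     = 0            -- the empty set
Fin (S n) = Fin n + 1    -- the successor step: one further element
```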

• Relativised families. Suppose we are given an indexed family of sets `(A,B)` - such as the finite sets. Then we can define an operator `fam` on sets, in which the family `(A,B)` plays the role of the universe `Set`. The definition is formally a `generalised polynomial'
```
fam ( S : Set ) = (Sigma a : A) B a -> S : Set
```
When `(A,B)` is the family of finite sets, `fam(S)` is the set of finite families of elements of `S`. (By playing around with the base family, one can define other cardinality-limited powerset operators.) Note that whereas `Fam : Type -> Type`, we have `fam : Set -> Set`, since the domain of the `Sigma` quantifier is now a set (the index set `A`), rather than the type of sets itself. Here, for readability, I have omitted the dependency of `fam` on `(A,B)`; the type of the construction `fam` is actually `fam : Fam Set -> Set -> Set`. Note that the least fixed point of `fam(A,B)` is the set `W(A,B)`, i.e. Martin-Lof's W-type.
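For instance, taking `(Nat, Fin)` as the base family (where `Fin n` is a standard `n`-element set), an element of `fam(S)` is a pair of a length `n` and a function `Fin n -> S`, i.e. essentially a list of elements of `S`:

```
fam ( S : Set ) = (Sigma n : Nat) Fin n -> S
-- (3, f) with f : Fin 3 -> S  represents the three-element list  f 0, f 1, f 2
```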

# Internal families

If `(A,B)` is a family of sets, then an internal family of `(A,B)` is an element of `( a : A ) * B a -> A`, i.e. a pair `(a,b)`, where `a : A` and `b : B a -> A`. An internal family `(a,b)` determines the family of sets `(B a, B . b)`. Given an internal family `(c,d)` of `(B a, B . b)`, we have an internal family of `(A,B)` that determines the same family of sets, namely `(b c, b . d)`. In other words, the internal families of `(A,B)` form a transitive transition system.
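To check the transitivity claim, one can calculate the family of sets determined by the composite internal family:

```
(c,d) internal to (B a, B . b)  determines  ( (B . b) c , (B . b) . d )
                                         =  ( B (b c)   , B . (b . d) )
-- which is exactly the family determined by (b c, b . d) internal to (A,B)
```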

If `(A,B)` is a family of sets, then the family `(A,B)^+` contains `(A,B)` as an internal family, namely `(0,S)`. The definition of `(A,B)^+` is

```
(A,B)^+ = (A^+, B^+)   where
A^+ = 1 + A
B^+ = \ m -> case m of 0   -> A
                       S a -> B a
```
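To check that `(0,S)` is indeed an internal family of `(A,B)^+` determining `(A,B)` itself:

```
B^+ 0 = A                          -- so S : A -> A^+ has the right type B^+ 0 -> A^+
( B^+ 0 , B^+ . S ) = ( A , B )    -- since B^+ (S a) = B a
```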

# Internal operators

If `(A,B)` is a family of sets, then an internal operator of `(A,B)` is a function `f : F -> F`, where `F` is the set of internal families of `(A,B)`. An internal operator `f` consists of two pieces of information `(q,qq)`, which I call a quantifier and a quantifier extension. We have
```
q  : ( a : A ) -> (B a -> A) -> A
qq : ( a : A, b : B a -> A ) -> B (q a b) -> A

f ab = (q a b, qq a b) : ( a : A ) * B a -> A
  where  ab     = (a,b)        : ( a : A ) * B a -> A
         q a b  = (f (a,b))_0  : A
         qq a b = (f (a,b))_1  : B (q a b) -> A
```
The reason for the terminology is that `q` has the same type as a (set-bounded) quantifier, if one interprets elements of `A` as codes for formulas and for domains of quantification. For then a quantifier is something expecting a code for a domain of quantification, together with a function assigning to each element of that domain a formula. It assigns to each possible domain a predicate of predicates over that domain, if by a predicate over a domain one means a function assigning a formula to each element of that domain.

If `q` is a quantifier then `qq` is a quantifier extension for `q` if for given `(a,b)` we have that `qq a b` is a predicate over the quantified "formula" `q a b`.

For an example of a quantifier/quantifier extension pair, one can consider the `W` quantifier, which to a family of sets `(A,B)` assigns the set `W = W(A,B)`, where `W = ( a : A ) * B a -> W`. This is a quantifier. (The terminology has the defect that the notion of quantity is completely absent.) For a quantifier extension, one can take `WW`, defined as follows:

```
WW ( w : W ) = case w of ( a, f ) -> W (B a) (WW . f)
```

There are a number of extremely important internal operators for which one wants fixed points in a universe: for example the successor operation, whose least fixed point is the function which to a natural number `p` assigns a certain finite set with cardinal `p`.

# Predicates, inclusion order

Given a type `S`, we can put a category structure on the type `Pow(S)`, taking the set of arrows from `P` to `Q` to be the set `(Pi s : S) P(s) -> Q(s)`, with all arrows in such a homset considered equal. By definition, there is at most one arrow between two predicates with respect to this equality relation, so we actually have a partial order: the inclusion order on predicates over `S`, up to extensional equality. A functor on the category `Pow(S)` is simply a predicate transformer which is monotone with respect to the inclusion order.

This is the classic `complete partial order' of the Knaster-Tarski theorem, which is the basis of the approach to the formal semantics of programs as monotone (not necessarily continuous) predicate transformers. By the Knaster-Tarski theorem, we can define a predicate over `S` by giving a monotone predicate transformer and saying that the predicate we mean is its least (or greatest) fixed point. The proof of the Knaster-Tarski theorem (for example in Winskel, The Formal Semantics of Programming Languages, pp. 74-75) uses universal quantification over `Pow(S)` in an essential way.
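The impredicativity shows up directly if one writes out the least fixed point of a monotone predicate transformer `F : Pow(S) -> Pow(S)` in the style of the Knaster-Tarski proof (a sketch, writing `<=` for pointwise inclusion):

```
P <= Q  =  (Pi s : S) P s -> Q s

lfp F   =  \ s -> (Pi P : Pow S) (F P <= P) -> P s
-- the intersection of all F-closed predicates; note that the outer Pi
-- quantifies over Pow(S), which is not a set
```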

Actually, another classic example of a complete partial order is the partial order of partial functions from `S` to some type, such as the type of sets. Here the order is that one partial function is an extension of another, i.e. they agree where they are both defined, and the domain of one includes that of the other. By appeal to Knaster-Tarski for this partial order, one can justify the definition of families of sets by induction-recursion.

Here is something one definitely wants -- existence of least fixed points -- which seems to be independently justified only on the basis of the impredicative notion of proposition. Is it possible to have the baby of monotone inductive definitions without the bath-water of impredicative quantification? To what extent can one turn the direction of justification around, and justify weak forms of impredicative quantification in terms of principles of inductive definition?

(A separate line of questions is: what about greatest fixed points? Is this `something one wants'?)

To try to understand the notion in its own terms, it seems to me one has to look for a way to reduce all kinds of monotone inductive definitions of predicates (and partial predicates) to a canonical central case, and then concentrate on this case. (Perhaps this is analogous to Turing's analysis of formal computation in general into the transitions of a Turing machine.) What else can one do?

A natural target area for such a reduction would seem to be Peter Aczel's rule sets. Peter Dybjer has shown a way to construct realisability models for his external scheme of inductive recursive definitions using rule sets -- his paper can be found here. The central case may be perhaps the definition of the set of derived rules of a rule set, or its `closure' under assumptions and cut. One approach to formulating Aczel's notion of rule set in type theoretical terms uses the notion of interactive system.
