
Recursive Functions

We begin by considering a problem that leads to a recursively defined data structure. Suppose we have a recursively defined integer function of the form:

f(x) = if basecond(x) then base(x)
       else g( f(a(x)), f(b(x)), f(c(x)), x )

The evaluation of the function for a particular value of x then has a tree-like structure: each computation of f(y) requires three sub-computations, f(a(y)), f(b(y)) and f(c(y)), which are combined to produce the result. The function could therefore be evaluated using a three-way tree, with the values of the function stored and updated at each node of the tree until the value at the root is computed. If we know more about the properties of the descent functions [1] a, b and c (properties known as descent conditions), then we can exploit those properties in the evaluation of the function. We will refer to this type of function as a function with commutative recursive calls.
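To fix ideas, here is a minimal Haskell sketch of a function of this shape. The descent functions a, b and c, the combiner g and the base case are hypothetical stand-ins, since they are left abstract above; they are chosen so that a and b happen to commute, which will be relevant below.

  -- A minimal sketch of a function with three recursive calls.
  -- The descent functions a, b, c, the combiner g and the base case
  -- are hypothetical stand-ins; nothing here is prescribed above.
  a, b, c :: Int -> Int
  a x = x - 1
  b x = x - 2
  c x = x `div` 2

  g :: Int -> Int -> Int -> Int -> Int
  g r1 r2 r3 x = r1 + r2 + r3 + x

  f :: Int -> Int
  f x
    | x <= 0    = 1                                  -- basecond / base
    | otherwise = g (f (a x)) (f (b x)) (f (c x)) x  -- three-way recursion

  main :: IO ()
  main = print (f 20)

Evaluated naively, each call to f spawns three further calls, giving exactly the three-way tree described above, with many arguments reached more than once.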

For example, [1] considers the problem where all the descent functions commute, i.e. a(b(x)) = b(a(x)). This can be exploited because we can predict in advance which nodes in the data structure share the same value, and so avoid re-computing f at these shared points. Another way of avoiding this re-computation is to use a memo-table: whenever f is computed, the value is stored, and can simply be looked up if it is subsequently needed. If the evaluation of the function involves a large amount of computation, it is natural to look for ways of parallelising the work. In this situation a memo-table is inefficient, as the table must be global to all the processors, lookups incur a communications overhead, and there are synchronisation problems whenever the table is updated. Instead, [1] uses a system of memo-lists, which store, local to each part of the function call graph, all the re-usable values. During the computation these lists are passed around, and the lookup and insertion operations of the memo-table scheme reduce to quicker add-head and remove-head operations on the lists. Because the lists are no longer global, this method can be parallelised to a certain extent. It relies heavily, however, on the fact that all of the descent functions commute.
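A minimal sketch of the memo-table idea (not the memo-list scheme of [1]) is given below, reusing the hypothetical a, b, c, g and base case from the previous sketch: every computed value of f is stored in a finite map keyed on the argument and looked up before recursing.

  import qualified Data.Map as Map

  -- Memo-table version of the previous sketch: the table of computed
  -- values is threaded through the three recursive calls, so any
  -- argument reached more than once is evaluated only once.
  fMemo :: Int -> Int
  fMemo x0 = fst (go x0 Map.empty)
    where
      go x table
        | Just v <- Map.lookup x table = (v, table)   -- value already known
        | x <= 0                       = (1, Map.insert x 1 table)
        | otherwise                    =
            let (r1, t1) = go (a x) table             -- thread the table
                (r2, t2) = go (b x) t1                -- through each of the
                (r3, t3) = go (c x) t2                -- three sub-calls
                v        = g r1 r2 r3 x
            in  (v, Map.insert x v t3)

The single table threaded through every call is precisely the kind of global structure that makes this scheme awkward to parallelise, which is what the memo-lists of [1] are designed to avoid.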

If we generalise to the situation where not all of the descent functions commute (say only a and b), then we arrive at a call graph like that in Fig. 1.1. It would be difficult to extend the method of [1] directly to exploit the remaining redundancy in this problem. Our aim will be to use a more general method of manipulating tree-like structures in parallel, one that still allows the redundancy in such problems to be expressed and exploited. So first we turn to the more general problem of manipulating recursive data structures in a parallel environment.


 
Figure 1.1: Function call graph of a function with redundancy (callgraph.eps)
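As a rough illustration of how much of this redundancy is predictable from the commutativity of a and b alone, the following hypothetical sketch labels each node of the call graph by its path, a word over {a, b, c} recording which descent function was applied at each step, identifies words related by a(b(x)) = b(a(x)), and counts the distinct nodes at each depth against the 3^d nodes of the unshared call tree. The encoding is purely illustrative and is not part of the method developed here.

  import Data.List (groupBy, nub, sort)

  -- Only a and b commute, so two path words denote the same argument
  -- whenever they differ by reordering letters within a maximal run of
  -- a's and b's; we canonicalise by sorting each such run.
  canonical :: String -> String
  canonical = concatMap norm . groupBy sameKind
    where
      sameKind x y = (x == 'c') == (y == 'c')   -- split the word at c's
      norm run | 'c' `elem` run = run           -- runs of c's are left alone
               | otherwise      = sort run      -- a/b order is irrelevant

  -- Distinct arguments predictable at depth d, versus 3^d tree nodes.
  distinctAtDepth :: Int -> (Int, Int)
  distinctAtDepth d =
    (length (nub (map canonical (sequence (replicate d "abc")))), 3 ^ d)

  main :: IO ()
  main = mapM_ (print . distinctAtDepth) [0 .. 6]

The count reflects only the sharing forced by the commutativity of a and b; this is the remaining redundancy referred to above, which we aim to express and exploit with a more general method.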


Timothy Lewis
11/12/1997