Chapter 7 provides an analysis of some of the algorithms and of the performance of the implementation. A number of conclusions can be drawn from this analysis.
Firstly, operations may be performed using either the dyadic or the signed binary representation. Algorithms performing the same operation on different representations behave differently: in general the signed binary algorithms are more complex and have greater lookahead requirements, but they perform better than the corresponding dyadic digit operations, which suffer when the dyadic digits involved grow large.
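To make the signed binary representation concrete, the following is a minimal sketch (not the thesis implementation): a real in [-1, 1] is modelled as a lazy stream of digits drawn from {-1, 0, 1}, with value Σ dᵢ·2⁻ⁱ. The names `neg`, `approx` and `third` are illustrative inventions; they show that some operations, such as negation, need no lookahead at all.

```python
from fractions import Fraction
from itertools import cycle, islice

# A real in [-1, 1] as a lazy stream of signed binary digits
# d1, d2, ... in {-1, 0, 1}, with value sum(d_i * 2**-i).

def neg(digits):
    """Negation needs no lookahead: flip each digit as it arrives."""
    return (-d for d in digits)

def approx(n, digits):
    """Evaluate the first n digits exactly, as a Fraction."""
    return sum(Fraction(d, 2 ** i)
               for i, d in enumerate(islice(digits, n), start=1))

def third():
    """1/3 = 0.010101... in binary (an infinite digit stream)."""
    return cycle([0, 1])
```

For example, `approx(10, third())` yields 341/1024, which is within 2⁻¹⁰ of 1/3; addition and multiplication, by contrast, must inspect several digits of each argument before they can commit to a digit of output.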
Secondly, computing certain expressions exactly (e.g. the iterated logistic map) necessarily involves examining many more digits of the input and performing significantly more digit manipulations than floating point arithmetic would require.
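The reason the iterated logistic map is so demanding can be seen with a short sketch (assuming the standard map x ↦ 4x(1−x); the function names are illustrative). A perturbation of a couple of units in the last place of a double is amplified by roughly a factor of two per iteration, so producing even one correct digit of the n-th iterate requires on the order of n extra digits of the input:

```python
def logistic(x):
    """One step of the logistic map with parameter r = 4."""
    return 4 * x * (1 - x)

def diverge_at(x, y, tol=0.1, limit=200):
    """Iterations until two nearby orbits differ by more than tol."""
    for n in range(limit):
        if abs(x - y) > tol:
            return n
        x, y = logistic(x), logistic(y)
    return limit
```

Starting from 0.3 and 0.3 + 2⁻⁵³ (adjacent up to two units in the last place of a double), the orbits become macroscopically different after a few dozen iterations; an exact implementation must therefore consume correspondingly many input digits, and perform all the digit manipulations that entails.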
Thirdly, the order in which operations are performed can greatly affect the lookahead required. Rearranging an expression can therefore significantly reduce the computation time of complex expressions.
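The effect of rearrangement can be illustrated with a toy cost model (not the thesis's algorithms): suppose each exact addition must read k = 2 extra digits of its arguments beyond the precision requested of its result, a figure typical of signed binary addition. The digits consumed from the deepest input then grow with the depth of the expression tree, so a left-nested sum is far worse than a balanced one:

```python
K = 2  # assumed extra digits read per addition (illustrative)

def lookahead(p, depth):
    """Input digits needed for p output digits through `depth` nested additions."""
    return p + K * depth

def left_depth(n):
    """Depth of the left-nested sum (((x1 + x2) + x3) + ...) of n terms."""
    return n - 1

def balanced_depth(n):
    """Depth of a balanced sum of n terms: ceil(log2 n)."""
    return (n - 1).bit_length()
```

Under this model, summing 64 terms to 10 output digits needs 136 digits of the first input when left-nested, but only 22 when the same sum is balanced, even though the two expressions denote the same real number.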
Lastly, the present implementation is slow compared with the floating point arithmetic packages in common use, even when those operations are performed to high precision in a package such as Maple. This is due in part to the fact that the algorithms themselves are more complex than the floating point operations one might otherwise use (cf. Chapter 7, especially section 7.1.5). However, the implementation is written in a functional language, whereas most floating point arithmetic is written in an imperative language or assembler, or is embedded in hardware, which makes a direct comparison unreasonable.