Previous ML Club Meetings 1997-98
Compilers for Standard ML'97
Compiler development for Standard ML has gained fresh impetus since the revision of the Definition of the language was completed. A number of compilers that compile only SML'97 have recently been released, and others are still being developed. I've used five of these compilers on the latest version of the PEPA Workbench, namely SML/NJ 110, Moscow ML 1.43, Persimmon's MLJ 0.1, SMLC and Harlequin's MLWorks 1.0r2. I'll explain the different capabilities these compilers offer and report how they performed on sample runs of the PEPA Workbench on a typical model, under different conditions. This is by no means a comprehensive benchmarking study, but it does suggest that writing portable SML'97 code is worthwhile: it lets one select the compiler which gives the best performance for one's application, traded off against other criteria.
Along the way, I'll mention what the important requirements from an SML'97 compiler are for me, the parts of SML that the PEPA Workbench uses most, and what appear to be the trends followed by the fashion-conscious young SML compiler writer.
From normalization-by-evaluation to type-directed partial evaluation
Normalization by Evaluation (NBE) is the currently favored generic name for a class of "holistic" algorithms to efficiently compute normal forms of typed lambda-terms. Unlike traditional, essentially rewriting-based normalization methods, NBE-based ones are designed to directly exploit the standard evaluation mechanism of a functional language in the normalization process. The resulting algorithms tend to be remarkably concise, efficient, and mystifying.
Type-directed partial evaluation (TDPE) is a more recent variant of these techniques, used for specialization and simplification of general functional programs, not only pure lambda-terms. As could be expected, additional complications arise because of features such as primitive functions, full recursion, and computational effects. Thus, both the exact algorithms and their formal justifications differ somewhat from their NBE relatives. But fortunately, their conciseness and efficiency remain largely unaffected.
TDPE has already proven itself as a practical partial-evaluation technique, but development of the underlying theory has lagged seriously behind. In this talk I will therefore formulate and prove a precise correctness result for TDPE of a simple PCF-like functional language, and sketch how it extends further to ML-like ones.
(This is essentially a repeat of a talk I gave at the BRICS/Chalmers APPSEM Workshop on Normalization by Evaluation, May 1998. Although much of that context will necessarily be missing, I will try to make the presentation as self-contained as possible.)
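To give the flavour of the idea, here is a minimal NBE sketch in Standard ML for the simply typed lambda-calculus. All the names (`tp`, `exp`, `sem`, `reify`, `reflect`, `gensym`) are illustrative choices of mine, not taken from the talk:

```sml
(* Simple types and lambda-terms *)
datatype tp  = Base | Arrow of tp * tp
datatype exp = Var of string | Lam of string * exp | App of exp * exp

(* Semantic values: residual syntax at base type, real functions at arrow type *)
datatype sem = Syn of exp | Fun of sem -> sem

local val n = ref 0
in fun gensym () = (n := !n + 1; "x" ^ Int.toString (!n)) end

(* reify turns a semantic value back into a normal-form term;
   reflect embeds a residual term as a semantic value *)
fun reify Base (Syn e) = e
  | reify (Arrow (a, b)) (Fun f) =
      let val x = gensym ()
      in Lam (x, reify b (f (reflect a (Var x)))) end
  | reify _ _ = raise Fail "ill-typed semantic value"
and reflect Base e = Syn e
  | reflect (Arrow (a, b)) e =
      Fun (fn v => reflect b (App (e, reify a v)))

(* Normalising the identity at Base -> Base yields Lam ("x1", Var "x1") *)
val idNorm = reify (Arrow (Base, Base)) (Fun (fn v => v))
```

Note how normalization proceeds with no rewriting at all: the term is evaluated by the host language, and `reify` reads the normal form back off the resulting value, guided only by the type.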
Restartable Exceptions: How to signal an error and then change your mind
It is well known that while exceptions are a fine and well-behaved language feature, "callcc" and explicit continuations are not so tidy. They are the goto of the functional world, the dark side of control operators. A few well-placed "'a cont" types and "throw" operations in a New Jersey ML program, and all comprehensibility is lost.
Unfortunately for language purists, callcc is just too useful to ignore, so there is good reason to look for constructions that will give us the same power in a safer form. In this talk I shall present an enhanced form of ML exceptions that could offer exactly that. I'll explain how these `restartable exceptions' work, what you can do with them, and how they can be implemented.
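As a taste of the power involved, here is a hedged sketch, in SML/NJ, of one way an exception carrying its own resumption point might be encoded with callcc. The names `signal` and `Restartable` are my own, and this is not necessarily the construct presented in the talk:

```sml
(* Sketch only: an exception that carries the continuation of the raise
   point, so a handler can supply a replacement value and resume there. *)
open SMLofNJ.Cont

exception Restartable of string * int cont

fun signal msg = callcc (fn k => raise Restartable (msg, k))

(* The error site: a zero divisor signals, but can be restarted *)
fun recip x = 1000 div (if x = 0 then signal "division by zero" else x)

(* The handler changes its mind: restart with divisor 1, so the
   interrupted division resumes and the whole expression is 1000 *)
val result = recip 0 handle Restartable (_, k) => throw k 1
```

The key point is that `throw k 1` does not merely return a default value from the handler: it re-enters the suspended computation at the point of the `signal`, exactly the "signal an error and then change your mind" behaviour of the title.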
Three Type-Inference Algorithms
I will explain my new type-checking algorithm and review two recent papers on other type-checking algorithms. All three algorithms are designed to handle programs better than the conventional algorithm. My own algorithm uses unification of substitutions to type-check application expressions symmetrically. It is designed to be more bottom-up than the conventional algorithm and to reduce the bias towards detecting errors late in the program. The second algorithm is from the paper Debugging Type Errors by Karen L. Bernstein and Eugene W. Stark. It introduces a notion that handles unbound variables and type-checks expressions in a context-free manner. In contrast, the third algorithm is highly context-sensitive (a truly top-down type-checking algorithm). It was used in an early implementation of Caml and is analysed in the paper Proofs about a Folklore Let-Polymorphic Type Inference Algorithm.
Slides and notes available (20 pages, postscript)
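As background, here is a minimal sketch of ordinary first-order unification over simple types, the basic operation that algorithms of this kind refine in different ways. The datatype and function names are illustrative, not taken from any of the three papers:

```sml
(* Simple types: variables, a base type, and function types *)
datatype ty = TVar of string | TInt | TArrow of ty * ty

(* A substitution as an association list from variable names to types *)
type subst = (string * ty) list

fun apply s (TVar a) =
      (case List.find (fn (b, _) => a = b) s of
           SOME (_, t) => apply s t
         | NONE => TVar a)
  | apply s TInt = TInt
  | apply s (TArrow (t1, t2)) = TArrow (apply s t1, apply s t2)

(* Occurs check: prevents building cyclic (infinite) types *)
fun occurs a (TVar b) = a = b
  | occurs a TInt = false
  | occurs a (TArrow (t1, t2)) = occurs a t1 orelse occurs a t2

fun unify (s : subst) (t1, t2) =
  case (apply s t1, apply s t2) of
      (TVar a, t) => if t = TVar a then s
                     else if occurs a t then raise Fail "occurs check"
                     else (a, t) :: s
    | (t, TVar a) => unify s (TVar a, t)
    | (TInt, TInt) => s
    | (TArrow (a1, b1), TArrow (a2, b2)) =>
        unify (unify s (a1, a2)) (b1, b2)
    | _ => raise Fail "type mismatch"
```

Where the algorithms differ is not in this core step but in when and where it is invoked, which in turn determines where in the program a type error is finally reported.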
When is a functional program not a functional program?
This talk is concerned with a rather surprising fact: there are ML programs whose behaviour is completely "functional" (in the sense that they give equal outputs for equal inputs), but the functions they compute cannot be written in the purely functional fragment of ML.
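For flavour, here is an illustration in the spirit of such examples (my own sketch, not necessarily one from the talk): a local reference can observe how a functional argument uses its own argument, yet remain invisible from outside.

```sml
(* Illustrative sketch: a local ref records the largest point at which
   F applies its argument f.  No caller can see the ref directly. *)
fun modulus (F : (int -> int) -> int) (f : int -> int) : int =
  let
    val m = ref 0
    fun spy n = (if n > !m then m := n else (); f n)
  in
    F spy; !m
  end
```

Functionals of this "modulus" flavour are the standard source of behaviour that is observationally functional at higher type but is known not to be definable in the pure fragment of ML.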
Experiences with the ML module system or, Why I Hate ML
At great personal risk (to minimise which, attenders will be frisked at the door and offensive weapons, rotten fruit etc. confiscated) I plan to talk about my experiences over the last three years with using Standard ML to (re)engineer the Edinburgh Concurrency Workbench. The CWB is on the one hand a showcase for concurrency theory and logic, and on the other hand a legacy system: each aspect is significant. I'll try to explain why I'm not yet completely convinced that all other languages should be abolished, illustrating what I say by talking about what I've been trying to achieve with the CWB, why, and the problems I've had.
If time permits, I may also get to the other half of the talk, Why I Love ML.
I hope that in at least some cases members of the audience may be able to say "But you could do it this way..."
Program example 1 slides
Program example 2 slide
Department of Computing & Electrical Engineering, Heriot-Watt University
Automatic assessment of elementary SML program style
A linguistic characterisation of program style is introduced and compositionality is proposed as a criterion for semantic style assessment of functional programs. Seven simple semantic style rules for an SML subset, based on program transformation, are presented and their incorporation in an automatic style assessor is discussed.
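As an illustration of the general shape of such transformation-based rules (this particular example is mine, not necessarily one of the seven), a rule might rewrite a redundant conditional into the boolean expression itself:

```sml
(* Before: a common novice pattern flagged by the style rule *)
fun isEmpty xs = if null xs then true else false

(* After: the transformed, preferred form *)
fun isEmpty xs = null xs
```

An automatic assessor can apply such rewrites and compare the student's program with its transformed version to score the submission's style.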
Dave Berry
Separate compilation (discussion)
The Definition of Standard ML says nothing about separate compilation. The result is that implementations have taken several different approaches to expressing or calculating dependencies between modules. MLWorks 1.0 used "require" declarations in files. This has not been popular with customers, so we are considering alternatives. I will discuss several possible approaches.
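For context, a sketch of the flavour of such file-level dependency declarations; this is illustrative only, the file names are made up, and the exact MLWorks syntax may differ:

```sml
(* Not standard SML: a source file starts by declaring the files it
   depends on, so the compiler can compute compilation order itself. *)
require "set";
require "parser";

structure Main =
struct
  fun run () = print "hello\n"
end
```

The attraction is that dependencies live next to the code that needs them; the drawback, as customers found, is that every file must be annotated by hand.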
Department of Artificial Intelligence
CYNTHIA: An Analogy-Based Editor for Standard ML
This talk describes and demos a novel editor for ML, called CYNTHIA. In CYNTHIA, functions are constructed by transforming previously defined functions. This is done through the use of a collection of editing commands which make changes to a definition: these can be as simple as renaming a variable and as complex as changing the type of a function. In addition, CYNTHIA uses an underlying proof representation to guarantee the correctness (syntactic, well-typedness, well-definedness, termination) of definitions. CYNTHIA is particularly suited to novice ML programmers and this talk will also report on experiences of novices using the system.
Bruce J. McAdam
Useful Programming with the Fixed-Point Combinator
I will show that sometimes it is useful to define functions in ML using the fixed-point combinator. By doing this we can add features to recursive functions and simulate some uses of other programming constructions such as exceptions and continuations.
(24 pages, postscript)
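For example (my sketch, not taken from the talk or paper): writing a function in "open recursion" style and tying the knot with an explicit fixed point lets one interpose extra behaviour at every recursive call without touching the function's body.

```sml
(* An explicit fixed-point combinator (eta-expanded for strict ML) *)
fun fix f x = f (fix f) x

(* Factorial with its recursive call abstracted out *)
fun factBody recur n = if n <= 1 then 1 else n * recur (n - 1)

val fact = fix factBody

(* A wrapper that traces each call, reusing the same body *)
fun traced name body recur x =
  (print (name ^ " called\n"); body recur x)

(* Every recursive level now goes through the tracing wrapper *)
val tracedFact = fix (traced "fact" factBody)
```

Because `tracedFact`'s recursive calls are routed back through `fix (traced "fact" factBody)`, the tracing happens at every level of the recursion, something ordinary `fun`-defined recursion cannot offer after the fact.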
Functional Programming with Effects
I will give an overview of some of my work on programming with first-class computational effects in ML-like languages. In particular, I will introduce a construct for uniformly adding arbitrary programmer-defined effects to a language, and sketch how it can be efficiently implemented in SML/NJ (using continuations).
(36 pages, postscript)
27th Nov. Informal discussion
When types get in the way
(16 pages, postscript)
6th Nov, 1997.
Using LEMMA with C++
This is another talk about subtyping: it will be long on pictures and short on technical details, so it should be accessible to the ML programmer on the Clapham Omnibus. I'll talk about a class of posets called Helly Posets, which seems to encompass all of the orderings of practical interest in type inference with subtypes. I'll sketch how constraint sets over these posets can be tested for solvability, how they can be solved, and how graph closure over Helly Posets generalises first-order unification.
GUI Toolkits for functional languages
Bruce J. McAdam
Ideas about how changing type-checkers could help you debug programs.
See the PLAN homepage for more information.
Return to the ML-Club Home Page.