CSL'02 List of Accepted Papers

On Continuous Normalization

This work aims at explaining the syntactical properties of
continuous normalization, as introduced in proof theory by Mints,
and further studied by Ruckert, Buchholz and Schwichtenberg.

In an extension of the untyped coinductive lambda-calculus by
void constructors (so-called repetition rules), a primitive
recursive normalization function is defined. Compared with other
formulations of continuous normalization, this definition is much
simpler and therefore suitable for analysis in a coalgebraic
setting. It is shown to be continuous w.r.t. the natural topology
on non-wellfounded terms with the identity as modulus of continuity.
The number of repetition rules is locally related to the number of
beta-reductions necessary to reach the normal form and to the number
of applications appearing in it, as represented by the Böhm tree.

Keywords: lambda-calculus, continuous normalization,
standardization, coinduction, non-wellfounded terms

Resolution refutations and propositional proofs with height-restrictions

Height-restricted resolution (proofs or refutations) is a natural restriction of resolution
where the height of the corresponding proof tree (in the case of sequence-like proofs) is
bounded. Height-restricted resolution does not distinguish between tree-like and
sequence-like proofs. We show that polylogarithmic-height restricted resolution is
strongly connected to the bounded arithmetic theory $S^1_2(\alpha)$. We separate
polylogarithmic-height restricted resolution from quasipolynomial size tree-like
resolution (which is known to be separated from quasipolynomial size sequence-like
resolution by results from Goerdt '93 and Alekhnovich et al. '02).

More generally, we study infinitely many sublinear-height restrictions given by
functions $n\mapsto 2_i((\log^{(i+1)} n)^{O(1)})$ for $i\ge0$. We show that resulting
resolution systems are connected to certain bounded arithmetic theories, and that
they form a strict hierarchy of resolution proof systems.

To this end we develop some proof theory for height-restricted propositional proofs.
This includes several cut elimination results, and the following so-called boundedness
theorem: Any resolution proof of the order induction principle for a well-ordering
must have height at least the ordertype of the well-ordering. On the other hand there
are resolution proofs of the order induction principle which have height linear and size
quadratic in the ordertype of the well-ordering.

All our results hold for proofs and refutations.

Keywords: Height of proofs, Length of proofs, Resolution refutation,
Propositional calculus, Frege systems, Order induction principle,
Cut elimination, Bounded arithmetic.
MSC: Primary 03F20; Secondary 03F07, 68Q15, 68R99.

A fully abstract relational model of Syntactic Control of Interference

Using familiar constructions on the category of monoids, a fully abstract model of Basic SCI is constructed. Basic SCI is a version of Reynolds's higher-order imperative programming language Idealized Algol, restricted by means of a linear type system so that distinct
identifiers are never aliases. The model given here is concretely the same as Reddy's "object spaces" model, so this work also shows that Reddy's model is fully abstract, which was not previously known.

Keywords: semantics, Algol-like languages, interference control, full abstraction, object spaces, monoids.

Local problems, planar local problems and linear time


In an effort to investigate a linear analogue of the conjecture
P $\neq$ NP, namely DLIN $\neq$ NLIN
(deterministic/nondeterministic linear time
complexity on random-access machines), this paper aims to be a
step in the precise classification of the many NP-complete problems
which are seemingly not NLIN-complete. We define and discuss the
complexity class LIN-LOCAL -- the class of problems linearly reducible
to problems defined by Boolean local constraints -- as well as its
planar restriction LIN-PLAN-LOCAL. We show that both "local" classes
are computationally robust and give some arguments for the conjecture
that the following "linear hierarchy" is strict at each level:
DLIN $\subseteq$ LIN-PLAN-LOCAL $\subseteq$ LIN-LOCAL $\subseteq$ NLIN.
After showing that SAT and PLAN-SAT are complete for the classes LIN-LOCAL
and LIN-PLAN-LOCAL, respectively, we prove that some unexpected
problems that involve some seemingly global constraints indeed belong
to those classes and are complete for them. More precisely, we show
that VERTEX-COVER and many similar problems involving cardinality
constraints are LIN-LOCAL-complete. Our most striking and most
technical result is that PLAN-HAMILTON -- the planar version of the
Hamiltonian problem, a problem that involves a connectivity
constraint of solutions -- is LIN-PLAN-LOCAL and even is
LIN-PLAN-LOCAL-complete. Further, since our linear-time reductions
also turn out to be parsimonious, they yield new DP-completeness
results.

Keywords: descriptive complexity, monadic logic, computational
complexity, linear time, planar graphs, Hamiltonian and
satisfiability problems.

Possible World Semantics for General Storage in Call-By-Value

We describe a simple denotational semantics, using possible worlds, for a call-by-value language with ML-like storage facilities, allowing the storage of values of any type, and the generation of new storage cells. We first present a criticism of traditional Strachey semantics for such a language: that it requires us to specify what happens when we read non-existent cells. We then obtain our model by modifying the
Strachey semantics to avoid this problem.

We describe our model in three stages: first with no storage of functions and no recursion (but allowing storage of cells); then we add recursion; and finally we allow storage of functions, without requiring any technology beyond the solution of simultaneous cpo equations. However, in the last case, there is an equivalent presentation where we solve equations in a functor category, and this is somewhat advantageous. We prove the model adequate using Pitts' theory of invariant relations.

We discuss similarities and differences between our model and Moggi's model of ground store. A significant difference is that our model does not use monadic decomposition of the function type.

Keywords: possible worlds, functor category, storage, state,
cell generation, dynamically generated references,
computational adequacy, invariant relations

A Fixpoint Theory for Non-monotonic Parallelism

This paper studies parallel recursions. The specification language
used in this paper incorporates sequentiality, nondeterminism,
general recursion, reactiveness (including infinite behaviours)
and parallelism. The language is a direct generalisation of
Z-style specification and is minimal of its kind; it thus
provides a context in which we can study recursions in general.
In order to use Tarski's theorem to determine the fixpoints of
recursions, we need to identify a well-founded partial order. The
bottom of the order corresponds to the empty loop, whose most
appropriate denotation should be the specification containing all
nonterminating behaviours. The Egli-Milner order has such a
bottom. However, conjunctive parallel composition is not
Egli-Milner monotonic. A theorem of this paper shows that in fact
no partial order makes all recursions in the language monotonic.
Tarski's theorem alone is not enough to determine the fixpoints
of all recursions. Instead of using Tarski's theorem directly, we
reason about the fixpoints of terminating and nonterminating
behaviours separately. Such reasoning is supported by the laws
of a new composition called partition. We propose a fixpoint
technique called the partitioned fixpoint. The surprising result
is that although a recursion may not be lexical-order monotonic,
it must have the partitioned fixpoint, which is equal to the
least lexical-order fixpoint. Since the partitioned fixpoint is
well defined in any complete lattice, the results are applicable
to various semantic models. The existing fixpoint techniques
simply become special cases of the partitioned fixpoint. For
example, an Egli-Milner-monotonic recursion has its least
Egli-Milner fixpoint, which can be shown to be the same as the
partitioned fixpoint. The new technique is more general than the
least Egli-Milner fixpoint in that the partitioned fixpoint can
be determined even when a recursion is not Egli-Milner monotonic.
Examples of non-monotonic recursions are studied. Their
partitioned fixpoints are shown to be consistent with our intuition.

Keywords: fixed point, recursion, specification, parallelism

Compactness and Continuity, Constructively Revisited

In this paper, the relationships between various classical
compactness properties, including the constructively acceptable one of total
boundedness and completeness, are examined using intuitionistic logic. Even
the Bolzano-Weierstraß principle, that every sequence in a compact
metric space has a convergent subsequence, is brought under our scrutiny;
although that principle is essentially nonconstructive, we produce a
reasonable, classically equivalent modification of it that is constructively
valid. To this end, we impose a certain condition on the sequences under
consideration, a condition whose constructive failure for arbitrary sequences
is then detected as the obstacle to any constructive relevance of the
traditional Bolzano-Weierstraß principle.

MR Classification 2000: Primary 03F60; Secondary 26E40, 54E45

Key Words and Phrases: Compact Metric Spaces, Uniform Continuity, Constructive Analysis

Implicit computational complexity for higher type functionals

In previous works we argued that second order logic with comprehension
restricted to positive formulas can be viewed as the core of
Feasible Mathematics. Indeed, the equational programs over strings
that are provable in this logic compute precisely the poly-time
computable functions.

Here we investigate the provable functionals of this logic, and
show that they are precisely Cook and Urquhart's basic feasible
functionals, BFF. We also show that BFF consists precisely of
the functionals that are lambda representable in F2 restricted to
positive type arguments (and trivially augmented with basic
constructors and destructors).

A Tag-Frame System of Resource Management for Proof Search in Linear-Logic Programming

In programming languages based on linear logic, the program can grow and shrink in a nearly arbitrary manner over the course of execution. Since the introduction of the I/O model of proof search [Hodas and Miller 1991/1994], a number of refinements have been proposed with the intention of reducing the degree of non-determinism in proof search [Hodas 1993, Cervesato, Hodas, and Pfenning 1996, Lopez and Pimentel 199X, Hodas, Tamura, Watkins, and Kang 1998]. Unfortunately, each of these systems has had some limitations. In particular, while the {\em resource management} systems of Cervesato et al. obtained the greatest degree of determinism, they required global operations on the set of clauses which were suitable only for interpreter-based implementations. In contrast, the {\em level-tag} system of Hodas, Tamura, et al. relied only on relabeling tags attached to individual formulas, and was hence appropriate as the specification of an abstract machine. However, it retained more non-determinism than the resource management systems. This led to a divergence in the operational semantics of the interpreted and compiled versions of the language Lolli. In this paper we propose a {\em tag-frame} system which recaptures the behavior of the resource management systems, while being appropriate as a foundation for a compiled implementation.

On generalizations of semi-terms of particularly simple form

It is well-known that we can elementarily bound the maximal depth of
terms occurring in cut-free proofs in Gentzen's sequent calculus
in the length of the given proof.

In this work we are especially interested in the question whether it is
possible to generalise not only terms,
but also occurring semi-terms,
which may include bound variables as well. More concisely:

Does LK admit generalisations of simple form for semi-terms?

The answer to this question is negative if we stick to the
logical form of the endsequent.

We therefore investigate the notion of a more general form
of the endformula.
We allow the replacement of single quantifiers by "blocks" of quantifiers. More
precisely, the extension A' of the formula A is obtained
by replacing the strong quantifier occurrence Q x in A
by Q x, z for a (suitably defined) string of
bound variables z. Notice that A' is logically stronger
than A.

Now we can prove the following result:
Any proof P of A(t_1,\ldots,t_n), where the t_i
are either terms or semi-terms, can be transformed
into a proof of A'(s_1,\ldots,s_n) such that (i)
the t_i are substitution instances of the s_i and
(ii) the maximal depth of the (semi-)terms s_i is elementarily
bounded in the number of steps of the proof P and
the endformulas, without considering the (semi-)terms t_i.

Keywords: Proof Theory, Generalization of Proofs, Proof Complexity

Greibach Normal Form in Algebraically Complete Semirings

We give inequational and equational axioms for semirings with a fixed-point
operator and formally develop a fragment of the theory of context-free
languages. In particular, we show that Greibach's normal form theorem
depends only on a few equational properties of least pre-fixed-points in
semirings, and that the elimination of chain and deletion rules depends on
their inequational properties (and the idempotency of addition). It follows
that these normal form theorems also hold in non-continuous semirings having
enough fixed-points.

On the Automatizability of Resolution and Related Propositional Proof Systems

We analyse the possibility that a system that
simulates Resolution is automatizable. We call this notion "weak
automatizability". We prove that Resolution is weakly automatizable
if and only if Res(2) has feasible interpolation.
In order to prove this theorem, we show that Res(2) has
polynomial-size proofs of the reflection principle of Resolution
(and of any Res(k)), which is a version of consistency. We also show
that Resolution proofs of its own reflection principle require
slightly subexponential size. This gives a better
lower bound for the monotone interpolation of Res(2) and a
better separation from Resolution as a byproduct.
Finally, the techniques for proving these results give us
a new complexity measure for Resolution that refines the
width of Ben-Sasson and Wigderson. The new measure and techniques
suggest a new algorithm to find Resolution refutations, and a way
to obtain a large class of examples that have small Resolution
refutations but require relatively large width. This answers a
question of Alekhnovich and Razborov related to whether
Resolution is automatizable in quasipolynomial-time.

Keywords: Resolution, Automatizability, Feasible Interpolation

Classical Linear Logic of Implications

We give a simple term calculus for the multiplicative exponential
fragment of Classical Linear Logic, by extending Barber and Plotkin's
system for the intuitionistic case.
The calculus has the non-linear and linear implications as the basic
constructs, and this design choice allows a technically manageable
axiomatization without commuting conversions.
Despite this simplicity, the calculus is shown to be sound and complete
for category-theoretic models given by $*$-autonomous categories with
linear exponential comonads.

keywords: linear logic, lambda calculus, type theory, category theory

Hoare Logics for Recursive Procedures and Unbounded Nondeterminism

This paper presents sound and complete Hoare logics for partial and total
correctness of recursive parameterless procedures in the context of
unbounded nondeterminism. For total correctness, the literature so far has
either restricted recursive procedures to be deterministic or has studied
unbounded nondeterminism only in conjunction with loops rather than
procedures. We consider both single procedures and systems of mutually
recursive procedures. All proofs have been checked with the theorem prover

Keywords: Hoare logic, theorem proving

Resource Tableaux (Extended Abstract)

The logic of bunched implications, BI, provides a logical analysis of
a basic notion of resource which has proved rich enough to provide a
``pointer logic'' semantics for programs which manipulate mutable data
structures. We develop a theory of semantic tableaux for BI, thereby
providing an elegant basis for efficient theorem proving tools for
BI. It is based on the use of an algebra of labels for BI's tableaux
to solve the resource-distribution problem, the labels being the
elements of resource models. In the case of BI with inconsistency, the
challenge consists in dealing with BI's Grothendieck topological
models within such a label-based proof-search method.
In this paper, we prove soundness and completeness theorems for a
resource tableaux method with respect to this semantics and provide a
way to build countermodels from so-called dependency graphs.
As consequences, we have two strong new results for BI: the
decidability for propositional BI and the finite model property
(neither of which is available for intuitionistic linear logic with !)
with respect to Grothendieck topological semantics.
In addition, we propose, by considering a class of models based on
partially defined monoids, a new resource semantics which generalizes
the semantics of BI's pointer logic and for which BI is complete.

Keywords: bunched logic, resources, semantics, decidability, finite
model property.

The Stuttering Principle Revisited

It is known that LTL formulae without the `next' operator
are invariant under the so-called stutter-equivalence of
omega-words. In this paper we extend this principle to other
subclasses of LTL formulae, designing appropriate `stutter-like'
equivalences for each subclass so that the formulae of the
subclass are invariant under the defined form of stuttering.
This allows us to prove that certain hierarchies of (fragments of)
LTL formulae are strict; we also indicate how to tackle the state-space explosion problem
with the help of the presented results.
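
The `next'-free case can be illustrated on finite words: two words are stutter-equivalent exactly when collapsing every maximal block of identical letters yields the same word. A minimal Python sketch of this idea (function names are ours; the paper works with omega-words and designs refined stutter-like equivalences per fragment):

```python
def destutter(word):
    """Collapse each maximal block of identical letters to a single letter."""
    out = []
    for a in word:
        if not out or out[-1] != a:
            out.append(a)
    return out

def stutter_equivalent(u, v):
    """Finite words are stutter-equivalent iff their destuttered forms agree."""
    return destutter(u) == destutter(v)
```

For example, "aabbbc" and "abc" are stutter-equivalent, while "ab" and "ba" are not; a `next'-free formula cannot tell the first pair apart.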

Decidability of Bounded Higher-Order Unification

It is shown that unifiability of terms in
the simply typed lambda calculus with beta and eta
rules becomes decidable if there is a bound
on the number of bound variables and lambdas
in a unifier in eta-long
beta-normal form.

Keywords: unification, simply typed lambda calculus

Solving Pushdown Games with a Sigma_3 Winning Condition

We study infinite two-player games over pushdown graphs
with a winning condition that refers explicitly to the infinity
of the game graph: A play is won by player 0 if some vertex
is visited infinitely often during the play. We show that the set
of winning plays is a proper Sigma_3-set in the Borel hierarchy,
thus transcending the Boolean closure of Sigma_2-sets which
arises with the standard automata theoretic winning conditions
(such as the Muller, Rabin, or parity condition). We also show
that this Sigma_3-game over pushdown graphs can be solved
effectively (by a computation of the winning region of player 0
and his memoryless winning strategy). This seems to be a first
example of an effectively solvable game beyond the second level of
the Borel hierarchy.

Keywords: Borel hierarchy, infinite games, pushdown automata,
Wadge game, effective winning strategies

Logical Relations for Monadic Types

Logical relations and their generalizations are a fundamental tool
in proving properties of lambda-calculi, e.g., yielding sound
principles for observational equivalence. We propose a natural
notion of logical relations able to deal with monadic types, an
essential ingredient of Moggi's computational lambda-calculus. The
treatment is categorical, and is based on notions of subsconing and
(extensions of) distributivity laws for monads. Our approach has a
number of interesting applications, including cases for
lambda-calculi with non-determinism (where being in logical relation
means being strongly bisimilar), dynamic name creation, and
probabilistic systems.

Keywords: logical relations, monadic types, semantics, typed lambda-calculus

Higher-Order Positive Set Constraints

We introduce a natural notion of positive set constraints on simply-typed
lambda-terms. We show that satisfiability of these so-called positive
higher-order set constraints is decidable in 2-NEXPTIME. We explore a
number of subcases solvable in 2-DEXPTIME, or even in NEXPTIME, resp.
DEXPTIME. This uses a first-order clause format on so-called shallow
higher-order patterns, and automated deduction techniques based on ordered
resolution with splitting. This technique is then applied to the problem
of approximating success sets for lambda-Prolog (i.e., computing
higher-order types in the style of Fruehwirth et al.), and getting upper
bounds on sets of reachable terms in Nipkow-style higher-order rewriting.

Keywords: resolution, tree automata, set constraints, typing, lambda-calculus

Open Proofs and Open Terms: a Basis for Interactive Logic

When proving a theorem, one makes intermediate claims, leaving parts
temporarily unspecified. When the proof is `finished', it is written
up in a style that corresponds loosely to natural deduction. Looking
more closely at this process of proof finding, one observes that also
in this phase, the proof-steps are intended to be correct in terms of
natural deduction. But what is the correctness criterion for
`unfinished proofs' (where some parts may be left open,
but in which the logical steps should be correct)?

Unfinished proofs appear prominently in interactive theorem proving
systems, where the user types in tactics that guide the system through the proof construction. To describe precisely what these interactive theorem provers actually operate on, we want to give a precise meaning to `unfinished proofs' and `proof states'.

We define `open higher order predicate logic', an extension of
higher order logic with unfinished (open) proofs and open
terms. Then we define a type theoretic variant of this open higher
order logic, using meta-variables for open terms. We define a formulas-as-types embedding from open higher order logic to this type theory and we show how this type-theory nicely captures the notion of `proof state', which is a type-theoretic context.

Keywords: Interactive Theorem Proving, Type Theory, Open terms, Meta-variables, Formulas-as-Types

Bijections Between Partitions by Two-Directional Rewriting Techniques

One basic activity in combinatorics is to establish combinatorial
identities by so-called `bijective proofs,'
which consist in constructing explicit bijections between two types
of the combinatorial objects under consideration.

We show how such bijective proofs can be established,
and how the bijections are computed by means of multiset rewriting,
for a variety of combinatorial problems involving partitions.
In particular, a new proof, the `bijective' one, is given for
all equinumerous classes of the partition ideals of order 1
from the classical book ``The Theory of Partitions'' by G. Andrews.

Establishing the required bijections involves novel
two-directional reductions in the sense that forward and backward
application of rewrite rules head for two different normal forms
(representing the two combinatorial types).

It is well-known that non-overlapping multiset rules are
confluent. As for termination, it generally fails even
for multiset rewriting systems that satisfy certain natural
invariant conditions.
The main technical development of the paper,
which is important for establishing that the mapping yielding
the combinatorial bijection is functional,
is that the `restricted' two-directional strong normalization
holds for these multiset rewriting systems
(which yields, in addition, bounds on the complexity of the
computation of the combinatorial bijections).

Keywords: multiset rewriting, confluence, termination,
strong normalization, integer partitions,
partition identities, Euler's partition theorem
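
As a concrete instance of such a two-directional system, Euler's theorem (partitions of n into odd parts are equinumerous with partitions into distinct parts) can be realized by the rule x, x -> 2x: forward rewriting heads for the normal forms with distinct parts, while the backward reading 2x -> x, x heads for the normal forms with odd parts. A small Python sketch of the forward direction, with a brute-force check on small n (function names are ours, not the paper's):

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def merge_to_distinct(p):
    """Apply the forward rule x, x -> 2x until all parts are distinct.
    Each step removes one part, so the rewriting terminates."""
    parts = sorted(p)
    while True:
        for i in range(len(parts) - 1):
            if parts[i] == parts[i + 1]:
                x = parts.pop(i)
                parts.pop(i)
                parts.append(2 * x)
                parts.sort()
                break
        else:
            return tuple(sorted(parts, reverse=True))

# Check that merging maps odd-part partitions bijectively onto
# distinct-part partitions, for small n.
for n in range(1, 11):
    odd = [p for p in partitions(n) if all(x % 2 for x in p)]
    distinct = {p for p in partitions(n) if len(set(p)) == len(p)}
    images = {merge_to_distinct(p) for p in odd}
    assert images == distinct and len(images) == len(odd)
```

Confluence of this merging (the result is independent of the order in which equal pairs are merged) is what makes the induced combinatorial map functional, which is the kind of property the paper establishes in general via restricted two-directional strong normalization.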

Proof Theoretical Account of Continuation Passing Style

We study the "classical proofs as programs" paradigm in the Call-By-Value
(CBV) setting. Specifically, we show that CBV normalization
for CND (Parigot '92) can be simulated by cut-elimination for LKQ
(Danos-Joinet-Schellinx '93), namely the q-protocol. We use a proof-term
assignment system to prove this fact. The term calculus for CND we use
follows Parigot's lambda-mu calculus with a new CBV normalization
procedure. A new term calculus for LKQ is presented as a variant of
the lambda calculus with a let construct. We then define a translation
from CND into LKQ and show how the CBV normalization procedure can be
simulated by the q-protocol. We also show that this relation can be thought of
as the familiar CBV CPS-translation without translation on types.

Keywords: Classical Logic, Classical Natural Deduction, LKQ, Call-By-Value, CPS-translation, classical proof theory.

Duality between Call-by-name Recursion and Call-by-value Iteration

We investigate the duality between call-by-name recursion and
call-by-value iteration in the lambda-mu-calculi and their models.
Semantically, we consider that iteration is the dual notion of
recursion. Syntactically, we extend the call-by-name
lambda-mu-calculus and the call-by-value one with a fixed-point
operator and an iteration operator, respectively. This paper shows
that the dual translations between the call-by-name lambda-mu-calculus
and the call-by-value one, which is constructed by Selinger, can be
expanded to our extended lambda-mu-calculi. Another result of this
study provides uniformity principles for those operators.

Proofnets and context semantics for the additives

We provide a context semantics for Multiplicative-Additive
Linear Logic (MALL), together with proofnets whose reduction
preserves semantics, where proofnet reduction is equated with
cut-elimination on MALL sequents. The results extend the program of
Gonthier, Abadi, and Lévy, who provided a ``geometry of optimal
lambda-reduction'' (context semantics) for lambda-calculus and
Multiplicative-Exponential Linear Logic (MELL). We integrate three
features: a semantics that uses buses to implement slicing; a
proofnet technology that allows multidimensional boxes and
generalized garbage, preserving the linearity of additive reduction;
and finally, a read-back procedure that computes a
cut-free proof from the semantics, which is closely related to full
abstraction theorems.

Keywords : linear-logic, proofnets, context semantics, local reduction

Travelling on designs

Proofs in Ludics are represented by designs.
Designs (desseins) can be seen as an intermediate syntax between sequent calculus and proof
nets, carrying advantages from both approaches, especially w.r.t. cut-elimination.
To study interaction between designs and develop a geometrical intuition,
we introduce an abstract machine which
presents normalization as a token travelling along a net of designs.
This gives us a concrete approach from which to carry on the study of issues such as:
(i) which part of a design can be interactively recognized;
(ii) how to reconstruct the agents from the traces of their interactions.

Key words: Ludics, normalization, linear logic.

A logic of probability with decidable model-checking

A predicate logic of probability, close to the logics of probability of Halpern et al., is considered. This study is motivated by the problem of verification of systems with uncertainty. Our main result concerns the following model-checking problem: deciding whether a given formula holds on a structure defined by a given Finite
Probabilistic Process. We show that this model-checking problem is decidable for a rather large subclass of formulas of a second-order monadic logic of probability. We also discuss the expressive power of nesting of probabilities and the decidability of satisfiability, and compare our logic of probability with the probabilistic temporal logic pCTL*.

Keywords: probabilistic logic, model-checking, verification

Optimal Complexity Bounds for Positive LTL Games

We prove two tight bounds for the complexity of deciding graph games
with winning conditions defined by formulas from fragments of LTL.

As our first result we show that the problem is EXPSPACE-hard if
the winning condition is from the logic
LTL_+(eventually, next, or, and). This solves an open problem
from the LICS 2001 paper of Alur and La Torre, where the authors use
the Büchi automata technique to give an EXPSPACE algorithm
deciding the more general LTL(eventually, next, and, or) games,
but do not prove optimality of this upper bound.

Our second result is that deciding LTL_+(eventually, and, or)
games is in PSPACE. This is again a tight bound: the problem is
known to be PSPACE-hard even for the much weaker logic
LTL_+(eventually, and). We use a method based on a notion of, as
we call it, a persistent strategy. The best upper bound one could
prove for our problem with the Büchi automata technique would be EXPSPACE.

Equivalence and Isomorphism for Boolean Constraint Satisfaction

A Boolean constraint satisfaction instance is a conjunction of
constraint applications, where the allowed constraints are drawn from
a fixed set C of Boolean functions. We consider the problem of
determining whether two given constraint satisfaction instances are
equivalent and prove a Dichotomy Theorem by showing that for
all sets C of allowed constraints, this problem is either
polynomial-time solvable or coNP-complete, and we give a simple
criterion to determine which case holds.

A more general problem addressed in this paper is the isomorphism
problem, the problem of determining whether there exists a renaming of the variables that makes two given constraint satisfaction instances equivalent in the above sense. We prove that this problem is coNP-hard if the corresponding equivalence problem is coNP-hard, and polynomial-time many-one reducible to the graph isomorphism problem in all other cases.

Keywords: computational complexity, propositional logic, constraint satisfaction problems, logic in computer science
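
For intuition only (this is not the paper's method, which classifies the constraint sets into polynomial-time and coNP-complete cases): equivalence of two instances over the same variables can always be checked by brute force in exponential time, by comparing their sets of satisfying assignments. A hedged Python sketch, with function names of our own choosing:

```python
from itertools import product

def models(instance, variables):
    """Return the satisfying assignments of a conjunction of constraint
    applications; an application is a pair (predicate, variable tuple)."""
    sat = set()
    for bits in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, bits))
        if all(pred(*(env[v] for v in vs)) for pred, vs in instance):
            sat.add(bits)
    return sat

def equivalent(i1, i2, variables):
    """Two instances are equivalent iff they have the same models."""
    return models(i1, variables) == models(i2, variables)

# Example constraints drawn from a fixed set C of Boolean functions.
OR = lambda a, b: a or b
XOR = lambda a, b: a != b
```

For instance, `equivalent([(OR, ('x', 'y'))], [(OR, ('y', 'x'))], ('x', 'y'))` holds, while replacing the second instance by `[(XOR, ('x', 'y'))]` does not; the isomorphism problem additionally quantifies over renamings of the variables.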

Configuration Theories


A new framework for describing concurrent systems is presented. The
rules for combining fragments of computation into legal configurations
of concurrent programs are expressed by sequents
$\Gamma\vdash_\rho\Delta$, where $\Gamma$ and $\Delta$ are sequences
of partially ordered sets (of events) and $\rho$ is a matrix of
monotone maps from the components of $\Gamma$ to the components of
$\Delta$, witnessing the allowed mergings of events. The structural
rules of Gentzen's sequent calculus are decorated with suitable matrix
expressions, where {\em cut} corresponds to product of matrices. The
calculus thus obtained is shown to be sound with respect to
interpretation in van Glabbeek and Goltz's {\em configuration
structures}. Completeness is proven for the calculus restricted to
{\em finite} posets. This restriction still allows specification of
such complex behaviour as the interaction of memory and threads in
{\em Java}. As a case study we provide a full axiomatization of the
Java memory model and prove {\em computational adequacy} with respect
to the operational semantics of \cite{cencia98b}.


Keywords: concurrency, configuration structures, sequent calculus, Java,
computational adequacy.

On the variable hierarchy of the modal mu-calculus

We investigate the structure of the modal mu-calculus L_\mu with
respect to the question of how many different fixed point variables
are necessary to define a given property. Most of the logics
commonly used in verification, such as CTL, LTL, CTL*, PDL, etc.
can in fact be embedded into the two-variable fragment of
the mu-calculus. It is also known that the two-variable fragment
can express properties that occur at arbitrarily high levels of
the alternation hierarchy. However, it is an open problem
whether the variable hierarchy is strict.

In the present paper we investigate this problem with a game-based
approach and establish the strictness of the hierarchy for the case
of existential (i.e., Box-free) formulae. It is known that these
characterize precisely the L_\mu-definable properties that are
closed under extension. We also prove that the variable hierarchy
is `orthogonal' to the alternation hierarchy and relate the
strictness of the variable hierarchy to the question of whether
the finite-variable fragments satisfy the existential preservation
theorem.
Keywords: modal mu-calculus, games, descriptive complexity

Variants of realizability for propositional formulas and logic of the weak law of excluded middle

It is unknown whether the logic of propositional formulas
that are realizable in the sense of Kleene
has a finite or recursive axiomatization.
In this paper another approach to realizability of propositional
formulas is studied. This approach is based on the following
informal idea:
a formula is realizable if it has a ``simple'' realization for
each substitution. More precisely, logical connectives are
interpreted as operations on sets of natural numbers and a formula
is interpreted as a combined operation; if some sets are substituted
for variables, then elements of the result are called realizations.
A realization is simple if it has low Kolmogorov complexity,
and a formula is called realizable if it has at least
one simple realization for every substitution of sets.
Similar definitions may be formulated in arithmetical terms.
A few ``realizabilities'' of this kind are considered and
it is proved that all of them give the same finitely axiomatizable
logic, namely, the Jankov logic of the weak law of excluded middle.
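For illustration only (this is the standard Kleene-style interpretation, assumed here, not necessarily the paper's exact definitions): with $\langle\cdot,\cdot\rangle$ a pairing function and $\{e\}$ the $e$-th partial computable function, the connectives act on sets $X, Y$ of natural numbers as

```latex
\begin{align*}
X \wedge Y &= \{\, \langle a,b\rangle \mid a \in X,\ b \in Y \,\}\\
X \vee Y   &= \{\, \langle 0,a\rangle \mid a \in X \,\} \cup \{\, \langle 1,b\rangle \mid b \in Y \,\}\\
X \to Y    &= \{\, e \mid \text{for all } a \in X,\ \{e\}(a) \text{ is defined and } \{e\}(a) \in Y \,\}
\end{align*}
```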

Keywords: Realizability; Kolmogorov complexity;
Superintuitionistic logics.

Trading Probability for Fairness

Properties of open systems can be modeled as objectives in two-player
games. Depending on the interaction between the system and its
environment, the games may be either turn-based (transitions of the
system and its environment are interleaved) or concurrent (transitions
are taken simultaneously). Depending on the properties we model, the
objectives may be achieved in a finite game (safety properties) or an
infinite game (general properties, specified with Büchi, co-Büchi, or
parity fairness conditions). Finally, winning may be required surely
(the objective should be achieved for sure) or almost surely (the
objective should be achieved with probability 1). Almost-sure winning
involves randomized strategies for the players and is harder to
analyze. For example, while sure reachability games can be solved in
linear time, the best algorithm for solving almost-sure reachability
is quadratic.
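To make the contrast concrete, sure winning for reachability in a turn-based game is computed by the classical backward attractor. A minimal sketch (the state/edge encoding and the function name are my own; this is the naive fixpoint rather than the linear-time version, which additionally keeps a counter of unprocessed successors per state):

```python
def sure_reach(owner, succ, target):
    # States from which player 1 can surely force a visit to `target`
    # in a turn-based game. `owner[s]` is 1 or 2; `succ[s]` lists the
    # successors of state s. Naive fixpoint iteration.
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in owner:
            if s in win:
                continue
            succs = succ[s]
            # Player-1 state: one winning successor suffices.
            if owner[s] == 1 and any(t in win for t in succs):
                win.add(s)
                changed = True
            # Player-2 state: all successors must be winning.
            elif owner[s] == 2 and succs and all(t in win for t in succs):
                win.add(s)
                changed = True
    return win

# Player 1 owns state a and can move to the target b or to the sink c.
owner = {"a": 1, "b": 1, "c": 2}
succ = {"a": ["b", "c"], "b": [], "c": ["c"]}
print(sorted(sure_reach(owner, succ, {"b"})))  # ['a', 'b']
```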

In this paper it is shown that probabilistic concurrent games can be
reduced to non-probabilistic turn-based games with a more general
fairness condition. For example, finding the set of states that are
almost-surely winning for player 1 in a concurrent reachability game
can be reduced to finding the set of states that are surely winning
for player 1 in a turn-based Büchi game. From a theoretical point of
view, the reductions show that it is possible to trade the
probabilistic nature of almost-sure winning for a more general
fairness condition in a game with sure winning. The reductions
improve our understanding of games and suggest alternative simple
proofs of some known results, such as determinacy of concurrent
probabilistic Büchi
games. From a practical point of view, our reductions turn solvers of
non-probabilistic turn-based games into solvers of probabilistic
concurrent games. An improvement in the well-studied algorithms for
the former would immediately carry over to the latter. In particular,
a recent improvement in the upper bound for non-probabilistic
turn-based parity games allows us to improve the bound for solving
probabilistic concurrent co-Büchi games from cubic to quadratic.

A Logic for Probabilities in Semantics

Probabilistic computation has proven to be a challenging and interesting area of research, both
from the theoretical perspective of denotational semantics and the practical perspective of
reasoning about probabilistic algorithms. On the theoretical side, the probabilistic powerdomain of
Jones and Plotkin represents a significant advance. Further work, especially by Alvarez-Manilla,
has greatly improved our understanding of the probabilistic powerdomain, and has helped clarify its
relation to classical measure and integration theory. On the practical side, many researchers such
as Kozen, Segala, Desharnais, and Kwiatkowska, among others, study problems of verification for
probabilistic computation by defining various suitable logics for the classes of processes under
study. The work reported here helps to bridge the gap between the domain theoretic and
verification (model checking) perspectives on probabilistic computation by exhibiting sound and
complete logics for probabilistic powerdomains that arise directly from given logics for the
underlying domains.

The category in which the construction is carried out generalizes Scott's Information Systems by
taking account of full classical sequents. Via Stone duality, following Abramsky's Domain Theory in
Logical Form, all known interesting categories of domains are embedded as subcategories. So the
results reported here properly generalize similar constructions on specific categories of domains.
The category offers a promising universe of semantic domains characterized by a very rich structure
and good preservation properties of standard constructions. Furthermore, because the logical
constructions make use of full classical sequents, the morphisms have a natural non-deterministic
interpretation. Thus the category is a natural one in which to investigate the relationship between
probabilistic and non-deterministic computation. We discuss the problem of integrating
probabilistic and non-deterministic computation after presenting the construction of logics for
probabilistic powerdomains.

Partial Fixed-Point Logic and Infinite Structures

We consider an alternative semantics for partial fixed-point logic
(PFP). In this semantics the fixed point of a formula consists of
those elements which, after some stage of the induction, occur in all
further induction stages.
It is shown that on finite structures, this fixed-point semantics and
the standard semantics for PFP as considered in finite model theory
are equivalent, although arguably the formalisation of properties
might even become simpler and more intuitive. This is especially true
for the logic's modal analogue, the modal partial fixed-point logic.
Contrary to the standard PFP semantics, which is only defined on finite
structures and cannot be extended, the new semantics generalises
easily to infinite structures and transfinite inductions.
In this generality we compare partial fixed-point logic, in terms of
expressive power,
with other known fixed-point logics. In the main technical result of
the paper we show that on arbitrary structures, PFP is strictly more
expressive than inflationary fixed-point logic. The interest in this
originates in the fact that a separation of these logics on finite
structures would prove PTIME different from PSPACE. However, the
method used here relies on a diagonalisation argument over infinite
structures and does not apply to finite structures.
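A minimal sketch (my own illustration, not from the paper) of how the lim-inf semantics described above can be computed on a finite universe: the stage sequence is eventually periodic, so the elements that occur in all sufficiently late stages are exactly those in every stage of the cycle.

```python
def pfp_liminf(step, universe, start=frozenset()):
    # Lim-inf fixed point: the elements that, from some stage of the
    # induction on, belong to every later stage. On a finite universe
    # the stage sequence eventually cycles, so intersect the stages
    # on the cycle.
    seen = {}      # stage -> index of its first occurrence
    stages = []
    s = frozenset(start)
    while s not in seen:
        seen[s] = len(stages)
        stages.append(s)
        s = frozenset(step(s))
    result = set(universe)
    for st in stages[seen[s]:]:   # the eventually repeating stages
        result &= st
    return frozenset(result)

# Toy stage operator: always keeps 0, toggles 1 in and out.
step = lambda s: {0} | ({1} - s)
print(sorted(pfp_liminf(step, {0, 1})))  # [0]
```

Note that the standard PFP semantics would assign the empty set here, since the stage sequence never reaches a fixed point.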

Keywords: fixed-point logics, finite model theory

Designs, disputes and strategies

Logic has undergone an evolution toward interactive and dynamical models: Geometry of Interaction and Games Semantics are important examples. Ludics, recently introduced by Girard, continues this evolution.

The fundamental artifacts of Ludics are the designs, which
can be described as the skeletons of sequent calculus derivations,
where we do not manipulate formulas, but their locations (the addresses where the formulas are stored).
An alternative presentation takes the traces of the interaction between designs (disputes) as primitive, and describes a design as the set of its possible interactions.
The approach we follow has the advantage of making precise the correspondence between the basic notions of Ludics and Games Semantics: designs, disputes and chronicles on the one side,
strategies, plays and views on the other.

Keywords: ludics, games semantics, linear logic

Extraction of Proofs from the Clausal Normal Form Transformation

We discuss methods for transforming a
first-order formula into Clausal Normal Form (CNF), and
for generating a proof that the formula is logically equivalent
to its CNF. We use type theory for
representing the proof.
We discuss the main technical problems, and their solutions:

1. Some of the operations used in the CNF-transformation
are not equivalence preserving, in particular Skolemization.
This problem can be solved by using a choice axiom,
and allowing signature extensions. Instead of full
equivalence, one can prove
that there exist functions f1,...,fn for which the formula
F is equivalent to its CNF.
This is sufficient for refutation based theorem proving.
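As a small illustration (my example, not the paper's): Skolemizing $\forall x\,\exists y\,P(x,y)$ yields $\forall x\,P(x,f(x))$, which is not equivalent to the original formula; but with a choice axiom one can prove the existentially quantified equivalence

```latex
\exists f \, \bigl( \forall x\,\exists y\,P(x,y) \;\leftrightarrow\; \forall x\,P(x, f(x)) \bigr)
```

which is the shape of statement referred to above, and is all that refutational theorem proving requires.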

2. Careless construction of proofs leads to proofs of
size $O(n^3)$. Because the input formula can be large,
this is unacceptable. The size is caused by the fact that
transforming a proof of $s \equiv t$ into a proof
of $F[s] \equiv F[t]$ causes a quadratic blowup of the proof.
We solve this problem by sharing contexts as much as possible.
We define an intermediate proof calculus, in which maximal context
sharing can be obtained by proof normalization.

3. There are different types of Skolemization in use, namely
inner-outer Skolemization, strong Skolemization and
optimized Skolemization. We want to avoid having
a choice axiom for each type of Skolemization.
(To make things worse, some of the methods are parametrized
in two dimensions.)
We show that the improved forms of Skolemization are actually
equivalent to standard Skolemization (but applied on a different
formula). Because of this, only one type of choice axiom
is needed for all forms of Skolemization.

Keywords: automated theorem proving, type theory, logical frameworks.