Abstracts


2002


S. Ferré, O. Ridoux
Abstract: A formal context associates with objects a description that combines automatically extracted (intrinsic) properties and manually assigned (extrinsic) ones. The extrinsic properties are expressed by users according to intentions that are often subjective and changing, and they determine the classification and retrieval of objects. We therefore find it important to assist users in this task through the automatic suggestion of extrinsic properties to be assigned, and even the discovery of rules to automate these assignments. The principle is to learn the extrinsic description of a new object from the descriptions of existing objects. Because of the changing nature of users' intentions, the assistance given in the incremental building of a logical context must be interactive. We present the formal principles and an application to the classification of email messages.

2001


Sébastien Ferré
Abstract: We present a generalization of the logic All I Know by casting it as an extension of standard modal logics. We study how this logic can be used to represent complete and incomplete knowledge in Logical Information Systems. In these information systems, a knowledge base is a collection of objects (e.g., files, bibliographical items) described in the same logic as used for expressing queries. We show that the usual All I Know (transitive and Euclidean accessibility relation) is convenient for representing complete knowledge, but not incomplete knowledge. For the latter, we use serial All I Know (serial accessibility relation).
S. Ferré, O. Ridoux
Abstract: Logic-based applications often use customized logics which are composed of several logics. These customized logics are also often embedded as a black-box in an application. Their implementation requires the specification of a well-defined interface with common operations such as a parser, a printer, and a theorem prover. In order to be able to compose these logics, one must also define composition laws, and prove their properties. We present the principles of logic functors and their compositions for constructing customized logics. An important issue is how the operations of different sublogics inter-operate. We propose a formalization of the logic functors, their semantics, implementations, and their composition.
Sébastien Ferré, Olivier Ridoux
Abstract: Logical Concept Analysis is Formal Concept Analysis where logical formulas replace sets of attributes. We define a Logical Information System that combines navigation and querying for searching for objects. Places and queries are unified as formal concepts represented by logical formulas. Answers can be both extensional (objects belonging to a concept) and intensional (formulas refining a concept). Thus, all facets of navigation are formalized in terms of Logical Concept Analysis. We show that the definition of refinements of a concept is a special case of Knowledge Discovery in a formal context. It can be generalized to recover more classical KD operations, such as machine learning through the computation of necessary or sufficient properties (modulo some confidence), or data mining through association rules.
T. Genet, Valérie Viet Triem Tong
Abstract: We present Timbuk - a tree automata library - which implements the usual operations on tree automata as well as a completion algorithm used to compute an over-approximation of the set of descendants R*(E) for a regular set E and a term rewriting system R, possibly non-linear and non-terminating. On several examples of term rewriting systems representing programs and systems to verify, we show how to use Timbuk to construct their approximations and then prove unreachability properties of these systems.
Thomas Genet, Valérie Viet Triem Tong
Abstract: We present Timbuk - a tree automata library - which implements the usual operations on tree automata as well as a completion algorithm used to compute an over-approximation of the set of descendants R*(E) for a regular set E and a term rewriting system R, possibly non-linear and non-terminating. On several examples of term rewriting systems representing programs and systems to verify, we show how to use Timbuk to construct their approximations and then prove unreachability properties of these systems.
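For illustration (a textbook-style example, not taken from the tool's documentation): with the rewriting system R = { f(x) -> f(s(x)) } and the regular set E = { f(0) }, the set of descendants R*(E) = { f(s^n(0)) | n >= 0 } is recognized by the tree automaton with final state qf and transitions

    0 -> q0        s(q0) -> q0        f(q0) -> qf

Completion adds such transitions until the automaton is closed under rewriting by R; when R*(E) is not regular, or when exact completion would not terminate, the added transitions only over-approximate it.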

2000


Thomas Colcombet, Pascal Fradet
Abstract: We propose an automatic method to enforce trace properties on programs. The programmer specifies the property separately from the program; a program transformer takes the program and the property and automatically produces another ``equivalent'' program satisfying the property. This separation of concerns makes the program easier to develop and maintain. Our approach is both static and dynamic. It integrates static analyses in order to avoid useless transformations. On the other hand, it never rejects programs but adds dynamic checks when necessary. An important challenge is to make this dynamic enforcement as inexpensive as possible. The most obvious application domain is the enforcement of security policies. In particular, a potential use of the method is securing mobile code upon receipt.
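A minimal sketch of the dynamic side of the approach (illustrative only, with made-up names; it is not the paper's transformer): to enforce the trace property ``no read after close'', the transformed program carries a little extra state and checks it before the sensitive operation.

    (* Hypothetical result of weaving the property "no read after close":
       the reference and the check stand for what a transformer would insert. *)
    exception Policy_violation of string

    let closed = ref false

    let close_file () = closed := true   (* original action plus inserted state update *)

    let read_file () =
      if !closed then raise (Policy_violation "read after close")  (* inserted dynamic check *)
      else print_endline "reading"

    let () =
      read_file ();
      close_file ();
      (try read_file () with Policy_violation msg -> prerr_endline ("blocked: " ^ msg))

A static analysis would remove the check at call sites where the file is provably still open, in the spirit of the static analyses mentioned above, whose role is to avoid useless transformations.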
M. Ducassé, J. Noyé
Abstract: Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is more portable, it benefits from many compiler optimizations, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The usual argument in favor of compiled code instrumentation is its supposed efficiency. We have compared the performance of two operational low-level Prolog tracers with source instrumentation. We have executed classical Prolog benchmark programs, collecting trace information without displaying it. On average, collecting trace information by program instrumentation is about as fast as using a low-level tracer in one case, and only twice as slow in the other. This is a minor penalty to pay, compared to the advantages of the approach. To our knowledge, this is the first time that a quantitative comparison of both approaches is made for any programming language.
Sébastien Ferré, Olivier Ridoux
Abstract: We propose a generalization of Formal Concept Analysis (FCA) in which sets of attributes are replaced by expressions of an almost arbitrary logic. We prove that all of FCA can be reconstructed on this basis. We show that from any logic used in place of sets of attributes one can derive a contextualized logic that takes the formal context into account and that is isomorphic to the concept lattice. We then justify the generalization of FCA by comparison with existing extensions and in the perspective of its application to information systems.
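Concretely (a standard formulation, recalled here for illustration and assuming the logic provides a deduction relation |= and a least upper bound ⊔): a logical context assigns to every object o a formula d(o), and the concept lattice arises from the Galois connection

    ext(f) = { o in O | d(o) |= f }        for a formula f
    int(X) = ⊔ { d(o) | o in X }           for a set of objects X ⊆ O

A formal concept is a pair (X, f) with X = ext(f) and f equivalent to int(X); taking formulas to be conjunctions of attributes gives back classical FCA.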
Sébastien Ferré, Olivier Ridoux
Abstract: We present the design of a file system whose organization is based on Concept Analysis ``à la Wille-Ganter''. The aim is to combine querying and navigation facilities in one formalism. The file system is supposed to offer a standard interface but the interpretation of common notions like directories is new. The contents of a file system are interpreted as a Formal Context, directories as Formal Concepts, and the sub-directory relation as Formal Concept inclusion. We present an organization that allows for an efficient implementation of such a Conceptual File System.
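A minimal sketch of the underlying idea (an illustrative OCaml fragment with made-up names, not the actual implementation): if each file is described by a set of properties, then a ``directory'' is a conjunction of properties and listing it amounts to computing an extent.

    (* A toy formal context: each file is described by a set of properties. *)
    module S = Set.Make (String)

    type context = (string * S.t) list          (* (file name, its properties) *)

    (* The extent of a "directory": the files whose description contains
       every property of the query, read as a conjunction. *)
    let extent (ctx : context) (query : S.t) : string list =
      List.filter_map
        (fun (file, props) -> if S.subset query props then Some file else None)
        ctx

    let () =
      let ctx =
        [ "report.tex", S.of_list ["tex"; "2000"; "draft"];
          "slides.tex", S.of_list ["tex"; "2000"];
          "notes.txt",  S.of_list ["txt"; "1999"] ] in
      (* Listing the directory tex/2000 prints report.tex and slides.tex. *)
      List.iter print_endline (extent ctx (S.of_list ["tex"; "2000"]))

Sub-directories then correspond to further properties that refine the current extent, which is the concept inclusion mentioned above.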
P. Fradet, V. Issarny, S. Rouvrais
Abstract: Mobile agent technology is emerging as a useful new way of building large distributed systems. The advantages of mobile agents over the traditional client-server model are mainly non-functional. We believe that one of the reasons preventing the widespread use of mobile agents is that non-functional properties are not easily grasped by system designers. Selecting the right interactions to implement complex services is therefore a tricky task. In this paper, we tackle this problem by considering efficiency and security criteria. We propose a language-based framework for the specification and implementation of complex services built from interactions with primitive services. Mobile agents, RPC, remote evaluation, or any combination of these forms of interaction can be expressed in this framework. We show how to analyze (i.e. assess and compare) complex service implementations with respect to efficiency and security properties. This analysis provides guidelines to service designers, enabling them to systematically select and combine different types of protocols for the effective realization of interactions with primitive services.
P. Fradet, J. Mallet
Abstract: We propose a specialized parallel language that ensures portable and cost-predictable implementations on parallel computers. The language is basically a first-order, recursion-less, strict functional language equipped with a collection of higher-order functions or skeletons. These skeletons apply on (nested) vectors and can be grouped in four classes: computation, reorganization, communication, and mask skeletons. The compilation process is described as a series of transformations and analyses leading to SPMD-like functional programs which can be directly translated into real parallel code. The language restrictions enforce a programming discipline whose benefit is to allow a static, symbolic, and accurate cost analysis. The parallel cost takes into account both load balancing and communications, and can be statically evaluated even when the actual size of vectors or the number of processors is unknown. It is used to automatically select the best data distribution among a set of standard distributions. Interestingly, this work can be seen as a cross-fertilization between techniques developed within the FORTRAN parallelization, skeleton, and functional programming communities.
T. Genet, F. Klay
Abstract: On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe the intruder's work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring the safety of the approximation.
T. Genet, F. Klay
Abstract: On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe the intruder's work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring the safety of the approximation.

1999


F. Besson, T. Jensen, J. P. Talpin
Abstract: We define an operational semantics for the Signal language and design an analysis which makes it possible to verify properties pertaining to the relation between values of the numeric and boolean variables of a reactive system. A distinguishing feature of the analysis is that it is expressed and proved correct with respect to the source program rather than on an intermediate representation of the program. The analysis calculates a safe approximation to the set of reachable states by a symbolic fixed point computation in the domain of convex polyhedra, using a novel widening operator based on the convex hull representation of polyhedra.
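As background (this is the classical polyhedral widening, recalled for illustration; the paper's operator is instead a novel one based on the convex hull representation): P1 ∇ P2 keeps only the constraints of P1 that are still satisfied by P2, which guarantees that an increasing iteration sequence stabilises in finitely many steps. For a single variable x:

    P1 = { 0 <= x, x <= 1 },   P2 = { 0 <= x, x <= 2 }
    P1 ∇ P2 = { 0 <= x }

The unstable upper bound on x is dropped, so the symbolic fixed point computation cannot iterate forever.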
M. Ducassé
Abstract: Opium is a system for analysing and debugging Prolog programs. Its kernel comprises an execution tracer and a programming language with a set of primitives for trace and source analysis. In this chapter we show the power of Opium for supporting abstract views of Prolog executions. Abstract views give high-level points of view about executions. They filter out irrelevant details; they restructure the remaining information; and they compact it so that the amount of information given at each step has a reasonable size. The examples of abstract views given in the following are a goal execution profile, some data abstractions, an instantiation profile, a failure analysis, a loop analysis, and a kind of explanation for an expert system written in Prolog.
M. Ducassé
Abstract: Traces of program executions are a helpful source of information for program debugging. They, however, give a picture of program executions at such a low level that users often have difficulty interpreting the information. Opium, our extendable trace analyzer, is connected to a ``standard'' Prolog tracer. Opium is programmable and extendable. It provides a trace query language and abstract views of executions. Users can therefore examine program executions at the levels of abstraction which suit them. Opium has shown its capability to build abstract tracers and automated debugging facilities. This article describes in depth the trace query mechanism, from the model to its implementation. Characteristic examples are detailed. Extensions written so far on top of the trace query mechanism are listed. Two recent extensions are presented: the abstract tracers for the LO (Linear Objects) and the CHR (Constraint Handling Rules) languages. These two extensions were specified and implemented within a few days. They show how to use Opium for real applications.
M. Ducassé
Abstract: We present Coca, an automated debugger for C, where the breakpoint mechanism is based on events related to language constructs. Events have semantics whereas the source lines used by most debuggers have none. A trace is a sequence of events. It can be seen as an ordered relation in a database. Users can specify precisely which events they want to see by specifying values for event attributes. At each event, visible variables can be queried. The trace query language is Prolog with a handful of primitives. The trace query mechanism searches through the execution traces using both control flow and data, whereas debuggers usually search according to either control flow or data. As opposed to fully ``relational'' debuggers which use plain database querying mechanisms, Coca's trace querying mechanism does not require any storage. The analysis is done on the fly, synchronously with the traced execution. Coca is therefore more powerful than ``source line'' debuggers and more efficient than relational debuggers.
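The on-the-fly flavour of the mechanism can be sketched as follows (an OCaml illustration of the general idea, with invented attribute names; Coca's actual query language is Prolog, as stated above): a query is a predicate over event attributes, applied to each event as the traced execution produces it, so the trace is never stored.

    (* A toy trace event with a few attributes (names are made up). *)
    type event = { port : string; func : string; depth : int }

    (* A query is a predicate on events; it is applied synchronously,
       event by event, so no trace needs to be kept. *)
    let query e = e.port = "call" && e.func = "insert" && e.depth > 3

    let on_event e =
      if query e then Printf.printf "%s %s at depth %d\n" e.port e.func e.depth

    let () =
      (* Stand-in for the event stream produced by the traced execution. *)
      List.iter on_event
        [ { port = "call"; func = "main";   depth = 1 };
          { port = "call"; func = "insert"; depth = 5 };
          { port = "exit"; func = "insert"; depth = 5 } ]

At a matching event the debugger can additionally query the visible variables, as described above.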
S. Ferré, O. Ridoux
Abstract: We propose a generalization of Formal Concept Analysis (FCA) in which sets of attributes are replaced by expressions of an almost arbitrary logic. We prove that all of FCA can be reconstructed on this basis. We show that from any logic used in place of sets of attributes one can derive a contextualized logic that takes the formal context into account and that is isomorphic to the concept lattice. We then compare this generalization of FCA with the extensions that have already been proposed for it. Finally, we present our perspectives for application to information systems.
P. Fradet, D. Le Métayer, M. Périn
Abstract: Consistency is a major issue that must be properly addressed when considering multiple view architectures. In this paper, we provide a formal definition of views expressed graphically using diagrams with multiplicities and propose a simple algorithm to check the consistency of diagrams. We also put forward a simple language of constraints to express more precise (intra-view and inter-view) consistency requirements. We sketch a complete decision procedure to decide whether diagrams satisfy a given constraint expressed in this language. Our framework is illustrated with excerpts of a case study: the specification of the architecture of a train control system.
P. Fradet, M. Südholt
Abstract: In this position paper, we advocate the use of an aspect language for robust programming. AOP is particularly appealing for this task because robustness crosscuts traditional structuring means. Consider, for instance, the problem of ensuring that a global index remains in a given range. The code needed to check such an invariant is typically scattered all over the program. The paper presents an example-driven introduction of the proposed aspect language for program robustness; it then discusses its semantics and implementation and suggests extensions.
V. Gouranton, D. Le Métayer
Abstract: Slicing analyses have been proposed for different programming languages. Rather than defining a new analysis from scratch for each programming language, we would like to specify such an analysis once for all, in a language-independent way, and then specialise it for different programming languages. In order to achieve this goal, we propose a notion of natural semantics format and a dynamic slicing analysis format. The natural semantics format formalises a class of natural semantics and the analysis format is a generic, language-independent, slicing analysis. The correctness of the generic analysis is established as a relation between the derivation trees of the original program and the slice. This generic analysis is then instantiated to several programming languages conforming to the semantics format (an imperative language, a logic programming language and a functional language), yielding a dynamic slicing analyser for each of these languages.
E. Jahier, M. Ducassé
Abstract: This document gathers the user manual and the reference manual of Opium-M, an analyser of execution traces of Mercury programs. Opium-M is an adaptation to Mercury of Opium, a trace analyser for Prolog. Mercury is a new logic programming language. Its type, mode and determinism declarations enable code to be generated that is at the same time more efficient and more reliable than with current logic programming languages. The deterministic parts of Mercury programs are as efficient as their C counterparts. Moreover, numerous mistakes are detected at compilation time. However, our industrial partner's experience shows that the fewer the remaining mistakes, the harder they are to diagnose. A high-level debugging tool was thus necessary. Program execution traces given by traditional debuggers provide programmers with useful pieces of information. However, using them requires analysing huge amounts of information by hand. Opium-M is connected to the traditional tracer of Mercury; it allows execution trace analyses to be automated. It provides a relational trace query language based on Prolog which enables users to specify precisely what they want to see in the trace. Opium-M then automatically filters out information irrelevant to the users.
E. Jahier, M. Ducassé
Abstract: Monitoring requires gathering data about executions. The monitoring functionalities currently available are built on top of ad hoc instrumentations. Most of them are implemented at a low level; in any case, they require in-depth knowledge of the system to be instrumented. The best people to implement these instrumentations are generally the implementors of the compiler. They, however, cannot decide which data to gather. Indeed, hundreds of variants can be useful and only end-users know what they want.

In this article, we propose a primitive which enables users to easily specify what to monitor. It is built on top of the tracer of the Mercury compiler. We illustrate how to use this primitive on two different kinds of monitoring. Firstly, we implement monitors that collect various kinds of statistics; each of them is well known, the novelty is that users can get exactly the variants they need. Secondly, we define two notions of test coverage for logic programs and show how to measure coverage rates with our primitive. To our knowledge, no definition of test coverage existed for logic programming so far. Each example is only a few lines of Mercury. Measurements show that the performance of the primitive on the above examples is acceptable for an execution of several million trace events. Our primitive, although simple, lays the foundation for a generic and powerful monitoring environment.


E. Jahier, M. Ducassé
Abstract: In this paper we show that a tracer with a trace analyser can be used to achieve more than debugging. We first illustrate how to compute coverage ratios for test cases. We also give four examples of monitoring the behavior of programs. Thus, instead of building ad hoc instrumentations, which is currently the case for such tools, one can use a uniform environment which allows a synergy between the tools to take place. As a matter of fact, while studying the test coverage measurement we enriched the trace information, to the benefit of the other tools. Moreover, ad hoc instrumentations require an in-depth knowledge of the system to be instrumented, either at a low level or by source-to-source transformation. Even if it is not technically difficult, it always requires a significant programming effort. In contrast, with our approach, the instrumentation is generic. It is done once and for all, and the specific analyses can be relatively simple. The examples in this article each consist of fewer than a dozen lines of Prolog.
E. Jahier, M. Ducassé, O. Ridoux
Abstract: In this article, we give a formal specification of Byrd's box model and we show how this specification can be extended to specify richer trace models. We also show how these specifications can be executed by a direct translation into lambda-Prolog, leading to a Prolog interpreter that produces execution traces. This interpreter can be used both to experiment with various trace models and to validate the different event specifications. Hence we have a formal framework to specify and prototype trace models.
D. Le Métayer, V. -A. Nicolas, O. Ridoux
Abstract: In this paper, we explore the testing-verification relationship with the objective of mechanizing the generation of test data. We consider program classes defined as recursive program schemes and we show that complete and finite test data sets can be associated with such classes, that is to say, these test data sets allow us to distinguish any two different functions in these schemes. This technique is applied to the verification of simple properties of programs.
S. Mallet, M. Ducassé
Abstract: Existing explanation systems for deductive databases show forests of proof trees. Although proof trees are often useful, they are only one possible interesting representation. We argue that an explanation system for deductive databases must be able to generate explanations at several levels of abstraction. One possible and well known technique to achieve this flexibility is to instrument meta-interpreters. It is, however, not often used because of its inefficiency. On the other hand, deductive databases often generate intermediate information stored in the physical database. This information can be considered as a low-level trace giving a faithful picture of what has happened at the relational level. The deductive reasoning is lost but can be very easily recovered by a meta-interpreter. In this article we describe a technique to generate explanations by integrating a relational trace and an instrumented meta-interpreter. The expensive aspects of meta-interpretation are reduced by the use of the trace which avoids many costly calculations. The flexibility of meta-interpretation is preserved, as illustrated by the generation of three different kinds of explanations: a box-oriented trace, a multi-SLD-AL tree and abstract AND trees. This technique enables powerful explanation systems to be implemented with very few modifications of the deductive database mechanism itself.
S. Mallet, M. Ducassé
Abstract: Deductive databases manage large quantities of data and, in general, in a set-oriented way. The existing systems of explanation for deductive databases do not take these constraints into account. We propose a tracing technique which consists of integrating a "relational" trace and an instrumented meta-interpreter using substitution sets. The relational trace efficiently gives precise information about data extraction from the relational database. The meta-interpreter manages substitution sets and gives explanation on the deduction. The expensive aspects of meta-interpretation are reduced by the use of the trace which avoids many calculations. The flexibility of meta-interpretation is preserved. It allows different profiles of trace to be easily produced.

1998


R. Douence, P. Fradet
Abstract: We introduce a unified framework to describe, relate, compare and classify functional language implementations. The compilation process is expressed as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. This approach also paves the way to formal comparisons by making it possible to estimate the complexity of individual transformations or compositions of them. Our study aims at covering the whole known design space of sequential functional language implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction strategies as well as environment and graph-based implementations. We describe for each compilation step the diverse alternatives as program transformations. In some cases, we illustrate how to compare or relate compilation techniques, express global optimizations or hybrid implementations. We also provide a classification of well-known abstract machines.
M. Ducassé
Abstract: We present Coca, an automated debugger for C, where the breakpoint mechanism is based on events related to language constructs. Events have semantics whereas the source lines used by most debuggers have none. A trace is a sequence of events. It can be seen as an ordered relation in a database. Users can specify precisely which events they want to see by specifying values for event attributes. At each event, visible variables can be queried. The trace query language is Prolog with a handful of primitives. The trace query mechanism searches through the execution traces using both control flow and data, whereas debuggers usually search according to either control flow or data. As opposed to fully ``relational'' debuggers which use plain database querying mechanisms, Coca's trace querying mechanism does not require any storage. The analysis is done on the fly, synchronously with the traced execution. Coca is therefore more powerful than ``source line'' debuggers and more efficient than relational debuggers.
M. Ducassé
Abstract: In January 1994, to replace a highly unpopular denotational semantics course, I undertook to set up a course on the B method at the INSA of Rennes (Institut National des Sciences Appliquées), at a Bac+4 level. I had almost no previous knowledge of formal methods. I had, however, programmed much in Prolog and felt the need for a strong programming discipline, supported if possible by methods and tools. The experience is, in my opinion, successful. The students do learn much during the course, find interesting placements where their competence is appreciated, and every occurrence of the course teaches me something. In the article, I first list the reasons for starting the experience. I then discuss the pedagogical objectives of the course. The contents of the course are given and an assessment is made.
M. Ducassé, J. Noyé
Abstract: Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is more portable, it benefits from many compiler optimizations, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The usual argument in favor of compiled code instrumentation is its supposed efficiency. We have compared the performance of two operational low-level Prolog tracers with source instrumentation. We have executed classical Prolog benchmark programs, collecting trace information without displaying it. On average, collecting trace information by program instrumentation is about as fast as using a low-level tracer in one case, and only twice as slow in the other. This is a minor penalty to pay, compared to the advantages of the approach. To our knowledge, this is the first time that a quantitative comparison of both approaches is made for any programming language.
P. Fradet, D. Le Métayer
Abstract: The Gamma language is based on the chemical reaction metaphor which has a number of benefits with respect to parallelism and program derivation. But the original definition of Gamma does not provide any facility for data structuring or for specifying particular control strategies. We address this issue by introducing a notion of structured multiset which is a set of addresses satisfying specific relations. The relations can be seen as a form of neighborhood between the molecules of the solution; they can be used in the reaction condition of a program or transformed by the action. A type is defined by a context-free graph grammar and a structured multiset belongs to a type $T$ if its underlying set of addresses satisfies the invariant expressed by the grammar defining $T$. We define a type checking algorithm that allows us to prove mechanically that a program maintains its data structure invariant. We illustrate the significance of the approach for program refinement and we describe its application to coordination.
P. Fradet, M. Südholt
Abstract: What exactly are aspects? How to weave? What are the join points used to anchor aspects into the component program? Is there a general purpose aspect language? In this position paper, we address these questions for a particular class of aspects: aspects expressible as static, source-to-source program transformations. An aspect is defined as a collection of program transformations acting on the abstract syntax tree of the component program. We discuss the design of a generic framework to express these transformations as well as a generic weaver. The coupling of component and aspect definitions can be defined formally using operators matching subtrees of the component program. The aspect weaver is simply a fixpoint operator taking as parameters the component program and a set of program transformations. In many cases, program transformations based solely on syntactic criteria are not satisfactory and one would like to be able to use semantic criteria in aspect definitions. We show how this can be done using properties expressed on the semantics of the component program and implemented using static analysis techniques. One of our main concerns is to keep weaving predictable. This raises several questions about the semantics (termination, convergence) of weaving.


V. Gouranton
Abstract: We consider specifications of analysers expressed as compositions of two functions: a semantic function, which returns a natural semantics derivation tree, and a property defined by recurrence on derivation trees. A recursive definition of a dynamic analyser can be obtained by fold/unfold program transformation combined with deforestation. A static analyser can then be derived by abstract interpretation of the dynamic analyser. We apply our framework to the derivation of a dynamic backward slicing analysis for a logic programming language.
V. Gouranton
Abstract: We consider specifications of analysers expressed as compositions of two functions: a semantic function, which returns a natural semantics derivation tree, and a property defined by recurrence on derivation trees. A recursive definition of a dynamic analyser can be obtained by fold/unfold program transformation combined with deforestation. We apply our framework to the derivation of a slicing analysis for a logic programming language.
V. Gouranton, D. Le Métayer
Abstract: Slicing analyses have been proposed for different programming languages. Rather than defining a new analysis from scratch for each programming language, we would like to specify such an analysis once for all, in a language-independent way, and then specialise it for different programming languages. In order to achieve this goal, we propose a notion of natural semantics format and a dynamic slicing analysis format. The natural semantics format formalises a class of natural semantics and the analysis format is a generic, language-independent, slicing analysis. The correctness of the generic analysis is established as a relation between the derivation trees of the original program and the slice. This generic analysis is then instantiated to several programming languages conforming to the semantics format (an imperative language, a logic programming language and a functional language), yielding a dynamic slicing analyser for each of these languages.
T. Jensen
Abstract: We define an inference system for modular strictness analysis of functional programs by extending a conjunctive strictness logic with polymorphic and conditional properties. This extended set of properties is used to define a syntax-directed, polymorphic strictness analysis based on polymorphic recursion whose soundness is established via a translation from the polymorphic system into the conjunctive system. From the polymorphic analysis, an inference algorithm based on constraint resolution is derived and shown complete for a variant of the polymorphic analysis. The algorithm deduces at the same time a property and a set of hypotheses on the free variables of an expression, which makes it suitable for the analysis of programs with module structure.
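For context, the base property that the logic refines is the standard notion of strictness (recalled here, not specific to the paper): a function f is strict when

    f ⊥ = ⊥

that is, applying f to a divergent argument diverges, so that argument can safely be evaluated before the call. The conjunctive, polymorphic and conditional properties of the paper express how such facts about different arguments and instantiations combine across module boundaries.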
T. Jensen, D. Le Métayer, T. Thorn
Abstract: We give a formal specification of the dynamic loading of classes in the Java Virtual Machine (JVM) and of the visibility of members of the loaded classes. This specification is obtained by identifying the part of the run-time state of the JVM that is relevant for dynamic loading and visibility and consists of a set of inference rules defining abstract operations for loading, linking and verification of classes. The formalisation of visibility includes an axiomatisation of the rules for membership of a class under inheritance, and of accessibility of a member in the presence of accessibility modifiers such as private and protected. The contribution of the formalisation is twofold. First, it provides a clear and concise description of the loading process and the rules for member visibility compared to the informal definitions of the Java language and the JVM. Second, it is sufficiently simple to allow calculations of the effects of load operations in the JVM.
T. Jensen, D. Le Métayer, T. Thorn
Abstract: A fundamental problem in software-based security is whether local security checks inserted into the code are sufficient to implement a given global security policy. We introduce a formalism based on a two-level linear-time temporal logic for specifying global security policies pertaining to the control-flow of the program, and illustrate its expressive power with a number of existing policies. We define a minimalistic, security-dedicated program model that only contains procedure call and dynamic security checks and propose an automatic method for verifying that an implementation using local security constructs satisfies a global security policy. For a given formula in the temporal logic we prove that there exists a bound on the size of the states that have to be considered in order to assure the validity of the formula: this reduces the problem to finite-state model checking. Finally, we instantiate the framework to the security architecture proposed for Java (JDK 1.2).
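An illustrative policy of the kind such a logic can state (a made-up example, not one from the paper): ``every call to a sensitive operation must be preceded, somewhere earlier in the execution, by a successful permission check'', which with a past-time modality reads

    G ( call(critical)  =>  P call(checkPermission) )

where G means ``always'' and P means ``at some point in the past''. Model checking the program model against such a formula then answers whether the local checks inserted in the code suffice to enforce the global policy.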
D. Le Métayer, V. -A. Nicolas, O. Ridoux
Abstract: Software development usually involves a collection of properties, programs and data as input or output documents. Putting these three kinds of documents at the vertices of a triangle, one sees that all three sides of the triangle have been exploited in formal methods, and that they have often been used in both directions. However, richer combinations have seldom been envisaged, and formal methods often amount to a strict orientation of the figure by imposing functional dependencies (e.g., inferring test cases from specifications). Moreover, undecidability problems arise when properties are expressed in full predicate logic (or similar formalisms) or programs are written in Turing-equivalent programming languages. We advocate that (1) formal methods should provide more flexible ways to exploit the developer's knowledge and offer a variety of possibilities to construct programs, properties and test data and (2) it is worth restricting the power of logic formalisms and programming languages for the benefit of mechanization. We go one step in this direction, and present a formal method for generating test cases that combines techniques from abstract interpretation (program -> property) and testing (program+property -> test data), and takes inspiration from automated learning (test generation via a testing bias). The crucial property of the test suites generated this way is that they are robust with respect to a test objective formalized as a property. In other words, if a program passes the test suite, then it is guaranteed to satisfy the property. As this process leads to decision problems in very restricted formalisms, it can be fully mechanized.
J. Mallet
Abstract: We present a skeleton-based language which leads to portable and cost-predictable implementations on MIMD computers. The compilation process is described as a series of program transformations. We focus in this paper on the step concerning the distribution choice. The problem of automatically mapping input vectors onto processors is addressed using symbolic cost evaluation. Source language restrictions are crucial since they make it possible to use powerful techniques for polytope volume computation to evaluate costs precisely. The approach can be seen as a cross-fertilization between techniques developed within the FORTRAN parallelization and skeleton communities.
J. Mallet
Abstract: The programming languages used for parallel computers are either efficient languages with explicit parallelism, which are not portable and are very complex to use, or simple and portable languages whose compilation is complex and relatively inefficient.

We propose a specialized language based on program skeletons encapsulating data and control flow, for which an accurate cost analysis of the parallel implementation exists. The compilation process deals with the automatic choice of data distributions on the processors, guided by the accurate costs guaranteed by the source language. This makes it possible to obtain automatic compilation into efficient parallel code (the distributions representing a global choice of parallel implementation).

The compilation process is described as a series of program transformations, each transformation mapping one intermediate skeleton-based language into another. The target language is an SPMD-like skeleton-based language that translates straightforwardly into a sequential language with calls to communication library routines. The main compilation steps are: size analysis, the in-place updating transformation, making communications explicit, and the choice of data distributions.


S. Mallet, M. Ducassé
Abstract: The development of deductive databases requires tools, in particular for debugging. Deductive databases manage large quantities of data and, in general, in a set-oriented way. The existing explanation systems for deductive databases do not take these constraints into account. We propose a tracing technique which consists of integrating a ``relational'' trace with an instrumented meta-interpreter using substitution sets. The relational trace efficiently gives precise information about data extraction from the relational database. The set-oriented meta-interpreter manages substitution sets and gives explanations about the deduction. The expensive aspects of meta-interpretation are reduced by the use of the trace, which avoids many computations. The flexibility of meta-interpretation is preserved. It allows traces of different profiles to be easily produced.
V. -A. Nicolas
Abstract: The problem addressed in this thesis is the automatic generation of test data sets enabling the proof of program properties. We thus place ourselves half-way between the domain of software testing and that of program verification. Work on software testing has led to easy-to-use semi-automatic tools, which however rely on hypotheses that are difficult to verify in practice. In the domain of verification, some tools based on formal methods have been developed, but they often require a user highly experienced with the proof techniques used by the tool. This is due to undecidability problems arising from the Turing completeness of the formalisms treated. Our primary contribution is a new approach to program verification, combining the techniques of software testing and static analysis.

We propose a formal method for the generation of finite test data sets, allowing one to prove that a program satisfies a given property. This method uses the program source and the property, which must belong to certain classes of programs and properties, respectively. These classes are represented by hierarchies of program schemes, which can be seen as modeling test hypotheses. Every program belonging to one of our schemes and passing the generated test data set satisfies the tested property. For a given property, our method is completely automatic and thus does not require any special competence from the user. We have implemented this method in a prototype (for a restricted functional language), for properties expressible in terms of list length.


1997


P. Fradet, D. Le Métayer
Abstract: Type systems currently available for imperative languages are too weak to detect a significant class of programming errors. For example, they cannot express the property that a list is doubly-linked or circular. We propose a solution to this problem based on a notion of shape types defined as context-free graph grammars. We define graphs in set-theoretic terms, and graph modifications as multiset rewrite rules. These rules can be checked statically to ensure that they preserve the structure of the graph specified by the grammar. We provide a syntax for a smooth integration of shape types in C. The programmer can still express pointer manipulations with the expected constant time execution and benefits from the additional guarantee that the property specified by the shape type is an invariant of the program.
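To give the flavour (an illustrative grammar written in an ad hoc notation, not one of the paper's examples): a non-empty singly-linked list reachable from a root pointer p could be described by productions such as

    List  ->  p x, L x
    L x   ->  next x null
    L x   ->  next x y, L y

where each production lists the edges (here the next fields) leaving a node. A doubly-linked list would add matching prev edges, and pointer-manipulating statements are checked statically to show that they rewrite a graph of the grammar into another graph of the same grammar.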
V. Gouranton
Abstract: We present a framework for designing dynamic and static analysers from operational specifications. We propose analyser specifications in two parts: the semantics of the programming language under consideration and the definition of the property of interest. We use natural semantics for the definition of the programming language. Such semantics have a double advantage: they are both compositional and intensional. Properties are defined as functions on the proof trees of the semantics. Those that are not specific to a given language can thus be defined once and for all and specialised for particular languages.

The composition of the semantics and the property defines an a posteriori dynamic analysis: it is a function that first computes the complete execution trace (proof tree) of a program before extracting the property of interest from it. Program transformations make it possible to obtain a self-contained recursive definition of the analyser. This function is in fact an on-the-fly dynamic analyser, in the sense that it computes the property of interest as the program executes. An abstraction of this analyser is then performed to obtain a static analyser, by applying the theory of abstract interpretation.

We illustrate our framework with examples of specific analyses such as liveness analysis for an imperative language and neededness analysis for a higher-order functional language. We also show how an analysis of general interest, namely program slicing, can be defined on a semantics format and then instantiated to different languages.


V. Gouranton
Abstract: In [Gouranton:97:DerivationD] we present a framework for designing dynamic and static analysers from operational specifications. In particular, we show how an analysis of general interest, namely program slicing, can be defined on a semantics format and then instantiated to different languages. In this article, we recall the main points of the generic framework for defining dynamic and static analysers presented in [Gouranton:97:DerivationD] and we show how our framework is applied to the case of slicing for an imperative language.
V. Gouranton, D. Le Métayer
Abstract: We propose an approach for the formal development of static analysers which is based on transformations of inference systems. The specification of an analyser is made of two components: an operational semantics of the programming language and the definition of a property by recurrence on the proof trees of the operational semantics. The derivation is a succession of specialisations of inference systems with respect to properties on their proof trees. In this paper, we illustrate the methodology with the derivation of analysers for a non strict functional language.
C. Hankin, D. Le Métayer, D. Sands
Abstract: Gamma is a minimal language based on local multiset rewriting with an elegant chemical reaction metaphor. In this paper, we study a notion of refinement for Gamma programs involving parallel and sequential composition operators, and derive a number of programming laws. The calculus thus obtained is applied in the development of a generic ``pipelining'' transformation, which enables certain sequential compositions to be refined into parallel compositions.
A. A. Holzbacher, M. Périn, M. Südholt
Abstract: In this paper we develop in three phases a railway control system. We are mainly concerned with the software architecture of the control system and its dynamic evolution; we do not discuss here the implementation details of the components forming the control system. First, we informally discuss our design proposal for the architecture of the control system: a hierarchy of controllers whose leaves are local controllers connected in a network that mimics the underlying railway topology. Second, we formally define by means of particular graph grammars a style of software architectures for the railway control system consisting of two complementary views and ensuring several desirable properties by construction. The dynamic evolution of the architecture is modelled by a set of coordination rules which define graph transformations and are verified w.r.t. the graph grammar. Third, using a coordination rule as a formal specification of a dynamic modification of the railway control system, we derive its implementation in ConCoord, a programming environment for concurrent coordinated programming. With regard to software engineering, the first two phases belong to system design while the third forms the first implementation step.
T. Jensen
Abstract: We describe how binding-time, data-flow, and strictness analyses for languages with higher-order functions and algebraic data types can be obtained by instantiating a generic program logic and axiomatization of the properties analyzed for. A distinctive feature of the analyses is that disjunctions of program properties are represented exactly. This yields analyses of high precision and provides a logical characterization of abstract interpretations involving tensor products and uniform properties of recursive data structures. An effective method for proving properties of a program based on fixed-point iteration is obtained by grouping logically equivalent formulae of the same type into equivalence classes, obtaining a lattice of properties of that type, and then defining an abstract interpretation over these lattices. We demonstrate this in the case of strictness analysis by proving that the strictness abstract interpretation of a program is the equivalence class containing the strongest property provable of the program in the strictness logic.
D. Le Métayer
Abstract: In order to play a bigger role in software engineering tools, static analysis techniques must take into account the specific needs of this application area, in particular in terms of interaction with the user and scalability. This new perspective requires a reexamination of the field of static program analysis both internally and in connection with related areas like theorem proving and debugging.
S. Mallet, M. Ducassé
Abstract: The power of deductive systems in general is that programs express what should be done and not how it should be done. Nevertheless, deductive systems need debugging and explanation facilities. Indeed, their operational semantics is less abstract than the declarative semantics of the programs. If users have to understand all the low-level details of the operational semantics, much of the benefit of using a deductive system is lost. Existing explanation systems for deductive databases produce proof trees to be shown to users. Although useful, proof trees give a fragmented view of query evaluations, and users face a forest of proof trees that is most of the time large. We propose a new data structure, called the DDB tree, which merges the information of a proof tree forest into one concise tree. A DDB tree gives a global picture of a query evaluation in a dramatically reduced structure with no loss of information. DDB trees can be shown to users or can be analyzed further by an explanation system.
P. Pepper, M. Südholt
Abstract: Parallel and distributed programming are much more difficult than the development of sequential algorithms due to data distribution issues and communication requirements. This paper presents a methodology that enables the abstract description of the distribution of data structures by means of overlapping covers that form data distribution algebras. Algorithms are formulated and derived by transformation in a functional environment using skeletons, i.e., higher-order functions with specific parallel implementations. Communication is specified implicitly through the access to overlapping parts of covers. Such specifications enable the derivation of explicit lower-level communication statements. We illustrate the concepts by a complete derivation of Wang's partition algorithm for the solution of tridiagonal systems of linear equations.
M. Südholt, C. Piepenbrock, K. Obermayer, P. Pepper
Abstract: The design and implementation of parallel algorithms for distributed memory architectures is much harder than the development of sequential algorithms. This is mainly due to the communication and synchronization that is necessary to manage distributed data correctly. This paper applies a methodology for the transformational derivation of parallel programs using data distribution algebras that enable an abstract description of data distribution issues. Algorithms are formulated using skeletons, that is, specialized higher-order functions with particular parallel implementations. The methodology is applied to the solution of a system of ordinary differential equations in which convolutions can be computed using the Fast Fourier transform. The example illustrates the practical optimization problems for a development model of the visual system that involves large-scale neural network simulations. Finally, this algorithm is compared to an implementation of the same system of equations in the programming language C* on a CM-5.
T. Thorn
Abstract: Sun's announcement of the programming language Java more than anything popularized the notion of mobile code, that is, programs travelling over a heterogeneous network and automatically executing upon arrival at their destination. We describe several classes of mobile code and we extract their common characteristics, where security proves to be one of the major concerns. With these characteristics as reference points, we examine six representative languages proposed for mobile code. The conclusion of this study leads to our recommendations for future work, illustrated by examples of ongoing research.
L. Van Aertryck, M. Benveniste, D. Le Métayer
Abstract: In this paper, we present CASTING, a Computer Assisted Software Test engineering method. The method, supported by a prototype tool, generates realistic software test suites in a formal and semi-automatic way. Based on a two-layer modular architecture, CASTING is not tied to any particular style of input. Initially designed to engineer functional test suites from formal specifications, CASTING can easily be instantiated to accept other formal description levels, seamlessly supporting the main industrial testing techniques, ranging from functional testing to structural testing. A simple process scheduler specification is used in this paper to illustrate the interaction facilities of CASTING and to provide a sample of the resulting test suites.

1996


J. -M. Andréoli, C. Hankin, D. Le Métayer
Abstract: This book focuses on a class of coordination models where multiple pieces of software, possibly concerned with a wide range of domains and not necessarily built to work together, coordinate their activities through some shared dataspace and engage in interactions which are metaphorically analogous to ``chemical'' reactions. Exemplars of this class are Gamma, LO, Linda, and Shared Prolog. They have been extensively studied in the framework of the Esprit project BRA-9102 ``Coordination'', in which all the contributors of this book have been involved to some extent.
J. -P. Banâtre, D. Le Métayer
Abstract: Gamma was originally proposed in 1986 as a formalism for the definition of programs without artificial sequentiality. The basic idea underlying the formalism is to describe computation as a form of chemical reaction on a collection of individual pieces of data. Due to the very minimal nature of the language, and its absence of sequential bias, it has been possible to exploit this initial paradigm in various directions. This paper reviews most of the work done by various groups along these lines and the current perspectives of our own research on Gamma. For the sake of clarity, we separate the contributions into three categories: (1) the relevance of the chemical reaction model for software engineering, (2) extensions of the original model and (3) implementation issues.
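A canonical example from the Gamma literature (recalled for illustration): the maximum of a non-empty multiset of numbers is computed by the single reaction

    x, y  ->  x   if  x >= y

which repeatedly replaces two elements by the larger one, in any order and possibly in parallel, until only the maximum remains.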
G. Burn, D. Le Métayer
Abstract: A substantial amount of work has been devoted to the proof of correctness of various program analyses but much less attention has been paid to the correctness of compiler optimisations based on these analyses. In this paper, we tackle the problem in the context of strictness analysis for lazy functional languages. We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler and it exposes the various optimisations made possible by a strictness analysis.
R. Douence, P. Fradet
Abstract: We present a unified framework to describe and compare functional language implementations. We express the compilation process as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices or optimizations. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. It also paves the way to formal comparisons by estimating the complexity of individual transformations or compositions of them. We focus on call-by-value implementations, describe and compare the diverse alternatives and classify well-known abstract machines. This work also aims to open the design space of functional language implementations and we suggest how distinct choices could be mixed to yield efficient hybrid abstract machines.
R. Douence, P. Fradet
Abstract: In Part I, we proposed an approach to formally describe and compare functional language implementations. We focused on call-by-value and described well-known compilers for strict languages. Here, we complete our exploration of the design space of implementations by studying call-by-name, call-by-need and graph reduction. We express the whole compilation process as a succession of program transformations in a common framework. At each step, different transformations model fundamental choices or optimizations. We describe and compare the diverse alternatives for the compilation of the call-by-name strategy in both environment and graph-based models. The different options for the compilation of beta-reduction described in Part I can be applied here as well. Instead, we describe other possibilities specific to graph reduction. Call-by-need is nothing but call-by-name with redex sharing and update. We show how sharing can be expressed in our framework and we describe different update schemes. We finally classify some well-known call-by-need implementations.
R. Douence, P. Fradet
Abstract: We propose a formal framework to describe and compare implementations of functional languages. We describe the compilation process as a sequence of program transformations within the functional framework. Fundamental implementation choices as well as optimizations are naturally expressed as different transformations. The advantages of this approach are to decompose and structure the compilation process, to simplify correctness proofs, and to allow formal comparisons by studying each transformation or their composition. Here we focus on call-by-value and describe three different implementations: the Cam, Tabac and a strict version of the Krivine machine.
M. Ducassé, J. Noyé
Abstract: Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is cheaper to develop and more portable, it benefits from most compiler optimizations, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The usual argument in favor of compiled code instrumentation is its supposed efficiency. Tolmach and Appel [1] designed and implemented a tracer for Standard ML based on automatic program source instrumentation. The resulting code runs only 3 times slower than optimized code. They conjectured that a low-level tracer would run at about the same speed. However, they had no reasonable low-level tracer at hand to actually compare their results with. We have performed such a comparison in the context of Prolog, using the ECRC ECLiPSe environment. The built-in low-level tracer of ECLiPSe is, at present, one of the most interesting tracers for Prolog. We have compared it with an instrumentation based on O'Keefe's "advice" utility [2], made compatible with the ECLiPSe tracer. We traced "standard" Prolog benchmark programs [3] with both tracing techniques and measured the resulting CPU times. On average, the performances of both implementations are equivalent: tracing Prolog programs by program instrumentation is no slower than using a low-level tracer. To our knowledge, this is the first time that a quantitative comparison of both approaches is made. Another contribution is that our source instrumentation is more complete than O'Keefe's advice package. In particular, it deals with built-in predicates, and allows predicates to be skipped/unskipped.

---------------------------------------

[1] A. Tolmach and A.W. Appel. A debugger for Standard ML. Journal of Functional Programming, 5(2):155-200, April 1995.

[2] The "advice" utility is part of the DEC10 Prolog library, available by anonymous ftp from the AIAI of the University of Edinburgh (aiai.edinburgh.ac.uk).

[3] P. Van Roy and A. M. Despain. High-performance logic programming with the Aquarius Prolog compiler. Computer, 25(1):54-68, January 1992.
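The source-instrumentation idea compared above can be sketched in Haskell (a toy analogue; the paper's setting is Prolog, and the names below are illustrative): the traced version is obtained purely by rewriting the program source, so it needs no support from the compiler or from a low-level tracer.

    import Debug.Trace (trace)

    -- Original, uninstrumented definition.
    fact :: Integer -> Integer
    fact 0 = 1
    fact n = n * fact (n - 1)

    -- Source-instrumented variant: the same definition with trace events
    -- inserted at call and exit, expressed in terms of the original program.
    factTraced :: Integer -> Integer
    factTraced n = trace ("call fact " ++ show n) (go n)
      where
        go 0 = trace "exit fact 0 = 1" 1
        go k = let r = k * factTraced (k - 1)
               in trace ("exit fact " ++ show k ++ " = " ++ show r) r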


P. Fradet, R. Gaugne, D. Le Métayer
Abstract: The incorrect use of pointers is one of the most common sources of bugs. As a consequence, any kind of static code checking capable of detecting potential bugs at compile time is welcome. This paper presents a static analysis for the detection of incorrect accesses to memory (dereferences of invalid pointers). A pointer may be invalid because it has not been initialised or because it refers to a memory location which has been deallocated. The analyser is derived from an axiomatisation of alias and connectivity properties which is shown to be sound with respect to the natural semantics of the language. It deals with dynamically allocated data structures and it is accurate enough to handle circular structures.
P. Fradet, D. Le Métayer
Abstract: We enhance Gamma, a multiset rewriting language, with a notion of structured multiset. A structured multiset is a set of addresses satisfying specific relations which can be used in the rewriting rules of the program. A type is defined by a context-free graph grammar and a structured multiset belongs to a type T if its underlying set of addresses satisfies the invariant expressed by the grammar defining T. We define a type checking algorithm which allows us to prove mechanically that a program maintains its data structure invariant.
P. Fradet, D. Le Métayer
Abstract: The Gamma language is based on the chemical reaction metaphor which has a number of benefits with respect to parallelism and program derivation. But the original definition of Gamma does not provide any facility for data structuring or for specifying particular control strategies. We address this issue by introducing a notion of structured multiset which is a set of addresses satisfying specific relations and associated with a value. The relations can be seen as a form of neighbourhood between the molecules of the solution; they can be used in the reaction condition of the program or transformed by the action. A type is defined by a context-free graph grammar and a structured multiset belongs to a type T if its underlying set of addresses satisfies the invariant expressed by the grammar defining T. We define a type checking algorithm which allows us to prove mechanically that a program maintains its data structure invariant. We illustrate the significance of the approach for program reasoning and program refinement.
D. Le Métayer
Abstract: We present a formalism for the definition of software architectures in terms of graphs. Nodes represent the individual agents and edges define their interconnection. Individual agents can communicate only along the links specified by the architecture. The dynamic evolution of the overall architecture is defined independently by a coordinator. An architecture style is a class of architectures characterised by a graph grammar. The rules of the coordinator are statically checked to ensure that they preserve the constraints imposed by the architecture style.
D. Le Métayer, D. Schmidt
Abstract: In this paper, we argue that structural operational semantics (SOS) is an appropriate style for the definition of static analyses because it is both structural and intensional. Another significant quality of the SOS framework is that it can accommodate various classes of languages, and thus forms an ideal basis for the design of a program analysis platform.
P. Louvet, O. Ridoux
Abstract: Typed Prolog and LambdaProlog are logic programming languages with a strict typing discipline which is based on simple types with variables. These variables are interpreted as denoting generic polymorphism. Experiments show that this discipline does not properly handle common logic programming practices used in Prolog. For instance, the usual transformation for computing the Clark completion of a Prolog program does not work well with some typed programs. We observe that the head-condition is at the heart of these problems, and conclude that it should be enforced. We propose a second-order scheme which is compatible with usual practices. In this scheme, type variables denote parametric polymorphism. It allows quantifying over types and terms, passing type and term parameters to goals and terms, and expressing type guards for selecting goals. We give its syntax and deduction rules, and propose a solution to keep the concrete notation of programs close to the usual one.
O. Ridoux
Abstract: An abstract representation for grammar rules that permits an easy implementation of several attributed grammar transformations is presented. It clearly separates the actions that contribute to evaluating attribute values from the circulation of these values, and it makes it easy to combine the representations of several rules in order to build the representation of new rules. This abstract form applies well to such transforms as elimination of left-recursion, elimination of empty derivation, unfolding and factorization. Finally, the technique is applied to DCGs and a LambdaProlog implementation of the abstract form and of the transforms is described.
S. Schoenig, M. Ducassé
Abstract: Slicing is a program analysis technique originally developed by Weiser for imperative languages. Weiser showed that slicing is a natural debugging tool, but it also has many other applications (program integration, optimization, etc.). In this article, we propose a definition of slicing for Prolog and an algorithm. The algorithm is applicable at least to pure Prolog extended with a few basic predicates (=/2 and arithmetic). To our knowledge, this algorithm is the first one proposed for Prolog. The specificities of Prolog (indeterminism and lack of explicit control flow) prevent existing algorithms for imperative languages from being adapted trivially.
S. Schoenig, M. Ducassé
Abstract: Slicing is a program analysis technique originally developed by Weiser for imperative languages. Weiser showed that slicing is a natural tool for debugging, but it has numerous other applications (program integration, program optimization, etc.). In this article, we describe a backward slicing algorithm for Prolog which produces executable slices. The proposed algorithm is applicable at least to pure Prolog extended by some simple built-in predicates that handle the explicit unification =/2 and arithmetic. To our knowledge, this algorithm is the first one to be proposed for Prolog. Because of the indeterminism and lack of explicit control flow of Prolog, existing algorithms cannot be trivially adapted. The two main contributions of this paper are a general definition of slicing adapted to Prolog and a slicing algorithm that produces executable programs.

1995


J. -P. Banâtre, C. Bryce, D. Le Métayer
Abstract: Information flow control mechanisms detect and prevent illegal transfers of information within a computer system. In this paper, we give an overview of a programming-language-based approach to information flow control in a system of communicating processes. The language chosen to present the approach is CSP. We give the security semantics of CSP and show, with the aid of examples, how the semantics can be used to conduct both manual and automated security proofs of application programs.
C. Belleannée, P. Brisset, O. Ridoux
Abstract: LambdaProlog is a logic programming language in which clauses and terms are more general than in Prolog. One may wonder whether all these extensions are needed simultaneously, and whether interesting intermediate languages could be defined, at least for pedagogical purposes. We answer this question by exhibiting necessity links that lead from the introduction of the new term domain to the adoption of all the extensions of the formula language. A handy heuristic for programming by induction on types, which is a convenient guide for using LambdaProlog, can be derived from these links.
S. Coupet-Grimal, O. Ridoux
Abstract: Computational Linguistics and Logic Programming have strong connections, but the former uses concepts that are absent from the most familiar implementations of the latter. We advocate that a Logic Programming language need not feature the Computational Linguistics concepts exactly, it must only provide a logical way of dealing with them. We focus on the manipulation of higher-order terms and the logical handling of context, and we show that the advanced features of Prolog II and LambdaProlog are useful for dealing with these concepts. Higher-order terms are native in LambdaProlog, and Prolog II's infinite trees provide a handy data-structure for manipulating them. The formula language of LambdaProlog can be transposed in the Logic Grammar realm to allow for a logical handling of context.
R. Douence, P. Fradet
Abstract: We express implementations of functional languages as a succession of program transformations in a common framework. At each step, different transformations model fundamental choices or optimizations. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. It also paves the way to formal comparisons by estimating the complexity of individual transformations or compositions of them. We focus on call-by-value implementations, describe and compare the diverse alternatives and classify well-known abstract machines. This work also aims to open the design space of functional language implementations and we suggest how distinct choices could be mixed to yield efficient hybrid abstract machines.
M. Ducassé
Abstract: Traces of program executions tell how programs behave in given cases. They are a helpful source of information for automated debugging. Opium is an automated trace analyser for Prolog programs. It is programmable and extendable. It provides a trace query language and abstract views of executions as a basis for automated debugging. Opium has proved capable of supporting the construction of abstract tracers and automated debugging facilities. This paper lists the extensions written so far, and describes two recent extensions: the abstract tracers for the LO (Linear Objects) language and for the CHR (Constraint Handling Rules) language.
V. Gouranton
Abstract: We define a globalization analysis for a strict functional language by successive refinements of the natural semantics of the language. We obtain an instrumented trace semantics, and then a generic inference system which is proved correct with respect to this semantics. This inference system must then be specialized for the property under consideration, which is defined by recursion over traces.
V. Gouranton, D. Le Métayer
Abstract: We advocate the use of operational semantics as a basis for specifying program analyses for functional languages. We put forward a methodology for defining a static analysis by successive refinements of the natural semantics of the language. We use paths as the abstract representation of proof trees and we provide a language for defining properties in terms of paths (neededness, absence, uniqueness, ...) and the mechanical derivation of the corresponding analyses.
C. L. Hankin, D. Le Métayer
Abstract: Approaches to static analysis based on non-standard type systems have received considerable interest recently. Most work has concentrated on the relationship between such analyses and abstract interpretation. In this paper, we focus on the problem of producing efficient algorithms from such type-based analyses. The key idea is the notion of lazy types. We present the basic notions in the context of a higher-order strictness analysis of list-processing functions. We also sketch some recent work concerning a general framework for program analysis based on these ideas. We conclude with some experimental results.
D. Le Métayer
Abstract: We present a technique for the mechanical proof of correctness properties of programs. We define a language of properties over recursive data structures and an inference system to assign properties to programs. In order to be able to design a tractable inference algorithm, we impose restrictions on both the programming language and the language of properties. We show that these restrictions do not preclude the treatment of interesting programs and properties. As an example, our analyser is able to prove automatically that a sort program returns a list of non-increasing values.
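For illustration only (the paper uses its own property language and setting, not Haskell), the property mentioned in the example and a sort program satisfying it might look as follows.

    -- The property "the result is a list of non-increasing values",
    -- written as an executable predicate.
    nonIncreasing :: Ord a => [a] -> Bool
    nonIncreasing (x:y:ys) = x >= y && nonIncreasing (y:ys)
    nonIncreasing _        = True

    -- An insertion sort producing a non-increasing list, i.e. a program for
    -- which an analyser would establish: nonIncreasing (sortDesc xs) == True.
    sortDesc :: Ord a => [a] -> [a]
    sortDesc = foldr insertDesc []
      where
        insertDesc x [] = [x]
        insertDesc x (y:ys)
          | x >= y    = x : y : ys
          | otherwise = y : insertDesc x ys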
O. Ridoux
Abstract: We study under which conditions the domain of lambda-terms and the equality theory of the lambda-calculus form the basis of a usable constraint logic programming language (CLP). The conditions are that the equality theory must contain axiom $\eta$, and the formula language must depart from Horn clauses and accept universal quantifications and implications in goals. In short, CLP-lambda must be close to LambdaProlog.

1994


J. -P. Banâtre, C. Bryce, D. Le Métayer
Abstract: We give a formal definition of the notion of information flow for a simple guarded command language. We propose an axiomatisation of security properties based on this notion of information flow and we prove its soundness with respect to the operational semantics of the language. We then identify the sources of non-determinism in proofs and we derive in successive steps an inference algorithm which is both sound and complete with respect to the inference system.
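A minimal sketch of the underlying idea, under illustrative assumptions (a two-level lattice and a toy command language, not the paper's axiomatisation): an assignment is accepted only if the level of the assigned expression, joined with the level of the enclosing guards, does not exceed the level of the target variable.

    -- Illustrative two-level information flow check.
    data Level = Low | High deriving (Eq, Ord, Show)

    lub :: Level -> Level -> Level
    lub = max

    data Expr = Var String | Lit Int | Add Expr Expr
    data Stmt = Assign String Expr
              | If Expr Stmt Stmt          -- the guard causes implicit flows
              | Seq Stmt Stmt

    -- Security environment: the level of each variable.
    type Env = String -> Level

    levelOf :: Env -> Expr -> Level
    levelOf env (Var x)   = env x
    levelOf _   (Lit _)   = Low
    levelOf env (Add a b) = lub (levelOf env a) (levelOf env b)

    -- 'pc' is the level of the guards enclosing the statement.
    secure :: Env -> Level -> Stmt -> Bool
    secure env pc (Assign x e) = lub pc (levelOf env e) <= env x
    secure env pc (If g s1 s2) = let pc' = lub pc (levelOf env g)
                                 in secure env pc' s1 && secure env pc' s2
    secure env pc (Seq s1 s2)  = secure env pc s1 && secure env pc s2

    -- Example: with env "h" = High and env "l" = Low, the statement
    --   If (Var "h") (Assign "l" (Lit 1)) (Assign "l" (Lit 0))
    -- is rejected, because the High guard flows implicitly into "l".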
P. Brisset, O. Ridoux
Abstract: See [bo92b]
M. Ducassé, J. Noyé
Abstract: Programming environments are essential for the acceptance of programming languages. This survey emphasizes that program analysis, both static and dynamic, is the central issue of programming environments. Because their clean semantics makes powerful analysis possible, logic programming languages have an indisputable asset in the long term. This survey is focused on logic program analysis and debugging. The large number of references provided shows that the field, though perhaps scattered, is active. A unifying framework is given which separates environment tools into extraction, analysis, and visualization. It facilitates the analysis of existing tools and should give some guidelines for developing new ones. Achievements in logic programming are listed; some techniques developed for other languages are pointed out, and some trends for further research are drawn. Among the main achievements are algorithmic debugging, tracing for sequential Prolog, and abstract interpretation. The main missing techniques are slicing, test case generation, and program mutation. The perspectives we see are integration, evaluation and, above all, automated static and dynamic analysis.
P. Fradet
Abstract: We present a method, adapted to polymorphically typed functional languages, to detect and collect more garbage than existing GCs. It can be applied to strict or lazy higher order languages and to several garbage collection schemes. Our GC exploits the information on utility of arguments provided by polymorphic types of functions. It is able to detect garbage that is still referenced from the stack and may collect useless parts of otherwise useful data structures. We show how to partially collect shared data structures and to extend the type system to infer more precise information. We also show how this technique can plug several common forms of space leaks.
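A small illustration of the kind of type information being exploited (the example is ours, not the paper's): a polymorphic type such as [a] -> Int guarantees that the list elements can never be inspected, so only the spine of the list needs to be retained.

    -- The type [a] -> Int shows that 'spineLength' can only use the list
    -- structure, never the elements: from its point of view the elements
    -- are garbage and could be collected.
    spineLength :: [a] -> Int
    spineLength []     = 0
    spineLength (_:xs) = 1 + spineLength xs

    -- The elements are never forced; even undefined elements are fine.
    demo :: Int
    demo = spineLength ([undefined, undefined, undefined] :: [Int])  -- 3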
P. Fradet
Abstract: Functional language compilers implement only weak-head reduction. However, there are cases where head normal forms or full normal forms are needed. Here, we study how to use cps conversion for the compilation of head and strong reductions. We apply cps expressions to a special continuation so that their head or strong normal form can be obtained by the usual weak-head reduction. We remain within the functional framework and no special abstract machine is needed. Used as a preliminary step, our method allows a standard compiler to evaluate under lambdas.
C. L. Hankin, D. Le Métayer
Abstract: The role of non-standard type inference in static program analysis has been much studied recently. Early work emphasised the efficiency of type inference algorithms and paid little attention to the correctness of the inference system. Recently, more powerful inference systems have been investigated, but the connection with efficient inference algorithms has been obscured. The contribution of this paper is twofold: first, we show how to transform a program logic into an algorithm and, second, we introduce the notion of lazy types and show how to derive an efficient algorithm for strictness analysis.
C. L. Hankin, D. Le Métayer
Abstract: We present a type inference system for the strictness analysis of lists and we show that it can be used as the basis for an efficient algorithm. The algorithm is as accurate as the usual abstract interpretation technique. One distinctive advantage of this approach is that it is not necessary to impose an abstract domain of a particular depth prior to the analysis: the lazy type algorithm will instead explore the part of a potentially infinite domain required to prove the strictness property.
C. L. Hankin, D. Le Métayer
Abstract: In this paper, we present a general framework for type-based analyses of functional programs. Our framework is parameterised by a set of types to represent properties and interpretations for constants in the language. To construct a new analysis, the user needs only to supply a model for the types (which properties they denote) and sound rules for the constants. We identify the local properties that must be proven to guarantee the correctness of a specific analysis and algorithm. We illustrate the approach by recasting Hunt and Sands' binding-time analysis in our framework. Furthermore, we report on experimental results suggesting that our generic inference algorithm can provide the basis for an efficient program analyser.
D. Le Métayer
Abstract: Gamma is a kernel language in which programs are described in terms of multiset transformations following the chemical reaction metaphor. Previous work has demonstrated its significance for the construction of massively parallel programs but also highlighted some limitations. We introduce a higher-order extension of this formalism unifying the program and data syntactic categories. As a consequence, active programs can be inserted into multisets. This generalisation has far-reaching effects and we show its relevance from both the formal and the practical point of view. In particular, we present a program defining the chemical abstract machine in higher-order Gamma. We show that more sophisticated control strategies can be expressed within the language and we put them into practice by deriving a scan vector model program suitable for execution on fine-grained parallel machines.

1993


P. Brisset, O. Ridoux
Abstract: Continuations are well known in functional programming where they have been used to transform and compile programs. Some languages provide explicit manipulations of the continuation for the user: the user can catch and modify the current continuation. Continuations have also been used in the logic programming context to give a denotational semantics for Prolog, to generate Prolog compilers and to transform Prolog programs. In this paper, we propose to introduce new built-ins in a logic programming language to enable the user to explicitly replace the continuations. These built-ins give the user new control over the execution. We choose LambdaProlog because of its higher-order syntax and implications in the goals which are necessary for the definition and use of these built-ins. In order to define the built-ins, we extend to LambdaProlog the Prolog semantics based on continuations. Then, we show that an exception mechanism can be easily implemented using these new built-ins. The proposed semantics is also used to prove equivalence of goals changing the continuations.
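The claim that an exception mechanism falls out of explicit continuation manipulation has a standard Haskell analogue, sketched here with the Cont monad from the mtl package rather than the paper's LambdaProlog built-ins: callCC captures the current continuation, and invoking it discards the rest of the computation, which is exactly an exception-style escape.

    -- Requires the mtl package.
    import Control.Monad.Cont (Cont, runCont, callCC)

    -- Division that "throws" by jumping to a captured continuation.
    safeDivide :: Int -> Int -> Cont r (Either String Int)
    safeDivide x y = callCC $ \throw ->
      if y == 0
        then throw (Left "division by zero")   -- abort the rest of the computation
        else return (Right (x `div` y))

    demo1, demo2 :: Either String Int
    demo1 = runCont (safeDivide 10 2) id   -- Right 5
    demo2 = runCont (safeDivide 10 0) id   -- Left "division by zero"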
P. Brisset, O. Ridoux
Abstract: See [bo92a]
M. Ducassé
Abstract: This article proposes a structuring view of the area of automated debugging. Nineteen automated debugging systems are analyzed. Thirteen existing automated debugging techniques are briefly evaluated from a pragmatic point of view. The three underlying strategies are identified, namely verification with respect to specification, checking with respect to language knowledge and filtering with respect to symptom. The verification strategy compares the actual program with some formal specification of the intended program. The checking strategy looks for suspect places which do not comply with some explicit knowledge of the programming language. The filtering strategy assumes correct parts of the code which cannot be responsible for the error symptom. Assertion evaluation and algorithmic debugging are the most promising verification techniques. Some intrinsic limitations of the checking strategy make it only a complementary, though helpful, debugging support. The slicing technique should be included in any debugger.
S. Le Huitouze, P. Louvet, O. Ridoux
Abstract: A logic grammar formalism called DCG (Definite Clause Grammars), which has proved to be useful, is part of most Prolog implementations. We develop two new logic grammar formalisms called DCG' and lambda-HHG (higher-order Hereditary Harrop Grammars) that can be used in LambdaProlog implementations. The relations between DCG, DCG', and lambda-HHG, and Prolog and LambdaProlog can be summarized as follows: (1) The language Prolog, the DCG formalism, and the translation of DCG into Prolog by Prolog are classical. (2) The evolution from Prolog to LambdaProlog is due to Miller and the advantage of using LambdaProlog for doing natural language analysis is shown by Pereira, and Pareschi and Miller. (3) We propose a strongly typed variant of DCG (called DCG') for its translation into LambdaProlog by LambdaProlog. It is a first stage towards a more elaborate formalism. (4) A formalism that is to DCG what LambdaProlog is to Prolog is still missing, and also the way to translate it into LambdaProlog. Such a formalism combines the advantage of being grammatical and hiding the house-keeping operations (like DCG) and of having higher-order terms as attributes and providing a logical approach to context (like LambdaProlog). lambda-HHG is such a formalism.
S. Le Huitouze, P. Louvet, O. Ridoux
Abstract: Most Prolog systems provide a logic grammar formalism called DCG (Definite Clause Grammars), whose usefulness is well established. We present two new logic grammar formalisms, called DCG' and lambda-HHG (higher-order Hereditary Harrop Grammars), intended for use in LambdaProlog systems. The relations between DCG, DCG' and lambda-HHG on the one hand, and between Prolog and LambdaProlog on the other hand, can be summarized as follows. (1) Prolog, DCG and the translation of DCG into Prolog are classical. (2) Miller proposed the evolution from Prolog to LambdaProlog, and Pereira, Pareschi and Miller showed the interest of using LambdaProlog for natural language processing. (3) We propose a strongly typed variant of DCG (called DCG') so that it can be translated into LambdaProlog within the LambdaProlog system. This is a first step towards a more elaborate formalism. (4) lambda-HHG is to DCG what LambdaProlog is to Prolog. This formalism combines the advantages of being grammatical and hiding the house-keeping operations of an analyser (like DCG), and of having higher-order terms as attributes and offering a logical approach to context representation (like LambdaProlog).

1992


Y. Bekkers, O. Ridoux, L. Ungaro
Abstract: Logic programming languages are becoming more complex with the introduction of new features such as constraints or terms with an equality theory. With this increase in complexity, they require more and more sophisticated memory management. This survey gives an insight into the memory management problems in sequential logic programming language implementations; it also describes the presently known solutions. It is meant to be understood by non-specialists in logic programming with good knowledge of memory management in general. We first describe a "usefulness logic" for run-time objects. Usefulness logic defines non-garbage objects. Next, memory management systems are presented from the most trivial original run-time system, with no real concern for memory problems, to elaborated run-time systems with memory management closely observing the usefulness logic. Finally, the choice of a garbage collection technique is discussed in relation to logic programming specificities.
P. Brisset, O. Ridoux
Abstract: We present a compiled implementation of LambdaProlog that uses the abstract memory MALI for representing the execution state. LambdaProlog is a logic programming language allowing a more general clause form than Standard Prolog's (namely hereditary Harrop formulas instead of Horn formulas) and using simply typed lambda-terms as a term domain instead of first order terms. The augmented clause form causes the program (a set of clauses) and the signature (a set of constants) to be changeable in a very disciplined way. The new term domain has a semi-decidable and infinitary unification theory, and it introduces the need for a beta-reduction operation at run-time. MALI is an abstract memory that is suitable for storing the search-state of depth-first search processes. Its main feature is its efficient memory management. We have used an original LambdaProlog-to-C translation in which predicates are transformed into functions operating on continuations for handling failure and success in unifications, and changes in signatures and programs. Two keywords of this implementation are "sharing" and "folding" of representations. Sharing amounts to recognising that some representation already exists and reusing it. Folding amounts to recognising that two different representations represent the same thing and replacing one by the other.
P. Brisset, O. Ridoux
Abstract: LambdaProlog is a logic programming language accepting a more general clause form than standard Prolog (namely hereditary Harrop formulas instead of Horn formulas) and using simply typed lambda-terms as a term domain instead of first order terms. Despite these extensions, it is still amenable to goal-directed proofs and can still be given procedural semantics. However, the execution of LambdaProlog programs requires several departures from the standard resolution scheme. First, the augmented clause form causes the program (a set of clauses) and the signature (a set of constants) to be changeable, but in a very disciplined way. Second, the new term domain has a semi-decidable and infinitary unification theory, and it introduces the need for a beta-reduction operation at run-time. MALI is an abstract memory that is suitable for storing the search-state of depth-first search processes. Its main feature is its efficient memory management. We have used an original LambdaProlog-to-C translation: predicates are transformed into functions operating on several continuations. The compilation scheme is sometimes an adaptation of the standard Prolog scheme, but at other times it has to handle new features such as types, beta-reduction and delayed unification. Two keywords of this implementation are "sharing" and "folding" of representations. Sharing amounts to recognising that some representation already exists and reusing it. Folding amounts to recognising that two different representations represent the same thing and replacing one by the other. We assume a basic knowledge of Prolog and LambdaProlog.
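A sketch of the continuation-based compilation scheme mentioned above, transposed into Haskell for readability (the actual translation targets C, and the encoding and names below are illustrative): a predicate becomes a function over a success continuation and a failure continuation; failure calls the failure continuation, and backtracking passes a failure continuation that resumes with the remaining alternatives.

    -- Hypothetical rendering of "predicates as functions over continuations".
    type Failure r = r               -- what to do when everything fails
    type Success r = Failure r -> r  -- given a way to backtrack, continue

    -- Membership of a known element: succeed once per matching list element,
    -- backtrack (try the rest of the list) otherwise.
    memberK :: Eq a => a -> [a] -> Success r -> Failure r -> r
    memberK _ []     _  fk = fk
    memberK x (y:ys) sk fk
      | x == y    = sk (memberK x ys sk fk)  -- succeed; retrying resumes here
      | otherwise = memberK x ys sk fk

    -- Count the number of solutions by making success "add one and retry".
    solutions :: Eq a => a -> [a] -> Int
    solutions x ys = memberK x ys (\retry -> 1 + retry) 0
    -- solutions 1 [1,2,1,3]  ==  2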
M. Ducassé
Abstract: We present a general trace query language which is a solution to the ever-growing command sets of other tracers. It provides all the required generality while being very simple and efficient. We model a program execution into a trace which is a stream of events. Execution events have a uniform representation, and can be analysed by Prolog programs. With this approach and thanks to the expressive power of Prolog, two high-level primitives plus Prolog are enough to provide a general trace query language. With a few optimizations, this language can work on large executions without any loss of performance compared to traditional tracers. This paper describes the trace query mechanism from its high level specification down to some implementation details. The proposed model of trace query depends only on the sequentiality of the execution, and the principles behind the design of the optimizations do not depend on the traced language.
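A toy model of the trace-as-stream idea (the event type and the queries below are illustrative, not Opium's actual primitives): the execution trace is a stream of uniform events, and a trace query is an ordinary program that filters or folds this stream.

    -- Illustrative model of a trace as a stream of uniform events.
    data Port = Call | Exit | Fail | Redo deriving (Eq, Show)

    data Event = Event
      { chrono    :: Int      -- event number
      , depth     :: Int      -- depth in the call tree
      , port      :: Port
      , predicate :: String   -- name of the traced predicate
      } deriving Show

    type Trace = [Event]

    -- A "trace query" is just an ordinary function over the stream.
    failuresOf :: String -> Trace -> [Event]
    failuresOf p = filter (\e -> port e == Fail && predicate e == p)

    -- How many times each port occurs for a given predicate.
    portProfile :: String -> Trace -> [(Port, Int)]
    portProfile p t = [ (q, length [ e | e <- t, predicate e == p, port e == q ])
                      | q <- [Call, Exit, Fail, Redo] ]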
M. Ducassé
Abstract: The data used by program analysis in general is often restricted to the source code of the analysed programs. However, there is a complementary source of information, namely traces of program executions. Usual tracers, which extract this trace information, do not allow for general trace analysis. Opium, our debugger for Prolog, sets up a framework where program sources and traces of program executions can be jointly analysed. As the debugging process is heuristic and not all the debugging strategies have been identified so far, Opium is programmable. In particular, its trace query language gives more flexibility and more power than the hard coded command sets of usual tracers. This trace query language is based on Prolog. Opium is therefore both a helpful tool for Prolog and a nice application of Prolog. The most innovative extensions of Opium compute abstract views of Prolog executions to help users understand the behaviours of programs. In particular they help them understand how error symptoms have been produced. This article briefly recalls some general information about Opium. A debugging session is then commented in detail.
M. Ducassé
Abstract: The result of a Prolog execution can simply be "no" when the programmer is expecting something else. This symptom is typical of Prolog, and it especially requires the help of an execution tracer to get clues about what the problem might be. We present a solution which helps programmers to understand how unexpected failures have occurred. We first propose a hierarchy of failing goals. We argue that there is one kind of leaf failure which is interesting to track in the first place. Then we give the algorithm for our leaf failure tracking and two examples illustrating its use.
M. Ducassé
Abstract: Automated debugging and expert system explanations share the aim of helping people understand executions. We have designed an extendable trace analyser for the purpose of automated debugging, which we propose to use to prototype expert system explanations. Two examples illustrate how simple it is to implement abstract tracing of executions and how easy it is to play with them.
M. Ducassé
Abstract: The dissertation describes the innovative features of Opium, a high-level debugging environment for Prolog, designed and implemented at ECRC between 1985 and 1991. Debugging is a costly process, and automating it would significantly reduce the cost of software production and maintenance. However, it is unrealistic to aim at fully automating the task. In particular, programmers have to understand rapidly changing situations, examining large amounts of data. In the current state of the art, it is beyond the capabilities of computers to take the place of the programmer's understanding. Nevertheless, computers can significantly help programmers to select the data to be analysed. The data used by program analysis in general is often restricted to the source code of the analysed programs. However, there is a complementary source of information, namely traces of program executions. An execution trace contains less general information than the program source, but it tells how the program behaves in a particular case. Furthermore, there are intrinsically dynamic aspects in a program which are best analysed at execution time, for example uses of read/write. These remarks suggested building the automated debugging functionalities of Opium on top of an existing tracer, extending it to a general trace and source analyser. The most important features of Opium, not to be found in other debuggers, are as follows. - It provides a trace query language which is a solution to the ever-growing command sets of other tracers. With two primitives plus Prolog, users can already specify more precise trace queries than with the hard-coded commands of other tracers. - Opium is programmable and extendable. It is thus an environment where debugging strategies can be easily programmed and integrated. Some strategies are already implemented. - Abstract views of executions are proposed as a basis for automated debugging. They help users to understand the behaviours of programs by browsing through executions at a higher level than single steppers. Opium is fully implemented. More than 20 academic sites have recently requested the Opium prototype, and some are actually implementing new abstract views.

1991


P. Brisset, O. Ridoux
Abstract: We propose a new implementation of logic programming with higher-order terms. In order to illustrate the properties of our implementation, we apply the coding of lists as functions to the context of logic programming. As a side-effect, we show that higher-order unification is a good tool for manipulating the function-lists. It appears that the efficiency of the program thus obtained relies critically upon the implementation of higher-order operations (unification and reduction). In particular, we show that a good choice for data-structures and reduction strategy yields a linear naïve reverse.
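The coding of lists as functions has a direct Haskell counterpart, the standard difference-list encoding (shown here only to illustrate why naïve reverse becomes linear; the paper's setting is higher-order unification in LambdaProlog): appending function-lists is function composition, which takes constant time.

    -- Lists coded as functions: a list is the function that prepends it.
    type FList a = [a] -> [a]

    fromList :: [a] -> FList a
    fromList xs = (xs ++)

    toList :: FList a -> [a]
    toList f = f []

    -- Appending function-lists is composition: constant time.
    appendF :: FList a -> FList a -> FList a
    appendF = (.)

    -- The "naïve reverse" recursion, now linear because appendF is O(1).
    reverseF :: [a] -> FList a
    reverseF []     = id
    reverseF (x:xs) = reverseF xs `appendF` fromList [x]

    -- toList (reverseF [1,2,3])  ==  [3,2,1]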
M. Ducassé
Abstract: Opium is a system for analysing and debugging Prolog programs. Its kernel comprises an execution tracer and a programming language with a full set of primitives for trace and source analysis. In this paper we show the power of Opium for supporting abstract views of Prolog executions. Abstract views give high-level points of view about the execution. They filter out irrelevant details; they restructure the remaining information; and they compact it so that the amount of information given at each step has a reasonable size. The examples of abstract views given in the following are a goal execution profile, some data abstractions, an instantiation profile, a failure analysis and a kind of explanation for an expert system written in Prolog.
M. Ducassé, A. -M. Emde
Abstract: Opium is an extensible debugging environment for PROLOG providing high-level debugging facilities for programmers and debugging experts. In the design of debuggers, there are two tasks which are often mixed: extraction and analysis of debugging information. The aim of the extraction task is to collect the whole debugging information so that users do not miss any important information about their program. On the other hand, the aim of the analysis task is to restrict in an accurate way the amount of debugging information shown to the user so that the latter has to examine only the relevant parts. This task clearly depends on the debugging situation and, in our view, there is no general restriction that can be applied a priori. However, the two tasks are usually mixed and hard-coded; the result is that too little relevant information and too much useless information are displayed. In Opium the two tasks are clearly separated. The extraction module collects the whole debugging information (execution trace and program source) which is then available for the analysis module. The presentation concentrates on the analysis module, discussing the main aspects of Opium: programmability, high-level debugging, extensibility mechanisms, meta-debugging, support for end-users and debugging experts.
P. Fradet
Abstract: We tackle the problem of detecting global variables in functional programs. We present syntactic criteria for single-threading which improve upon previous solutions (both syntactic and semantics-based) in that they apply to higher-order languages and to most sequential evaluation strategies. The main idea of our approach lies in the use of continuations. One advantage of continuation expressions is that evaluation ordering is made explicit in the syntax of expressions. So, syntactic detection of single-threading is simpler and more powerful on continuation expressions. We present the application of the analysis to the compilation of functional languages, semantics-directed compiler generation and globalization-directed transformations (i.e. transforming non-single-threaded expressions into single-threaded ones). Our results can also be exploited to obtain single-threading criteria on regular lambda-expressions for different sequential evaluation orders.
P. Fradet, D. Le Métayer
Abstract: One of the most important issues concerning functional languages is the efficiency and the correctness of their implementation. We focus on sequential implementations for conventional von Neumann computers. The compilation process is described in terms of program transformations in the functional framework. The original functional expression is transformed into a functional term that can be seen as a traditional machine code. The two main steps are the compilation of the computation rule by the introduction of continuation functions and the compilation of the environment management using combinators. The advantage of this approach is that we do not have to introduce an abstract machine, which makes correctness proofs much simpler. As far as efficiency is concerned, this approach is promising since many optimizations can be described and formally justified in the functional framework.