
Publications of year 2005

Thesis

  1. David Pichardie. Interprétation abstraite en logique intuitionniste : extraction d'analyseurs Java certifiés. PhD thesis, Université Rennes 1, Rennes, France, December 2005.
    @PhdThesis{PicThe05,
    author = {David Pichardie},
    title = {Interprétation abstraite en logique intuitionniste : extraction d'analyseurs Java certifiés},
    school = {Université Rennes 1},
    year = 2005,
    address = {Rennes, France},
    month = {dec} 
    }
    


Articles in journal or book chapters

  1. F. Besson, T. de Grenier de Latour, and T. Jensen. Interfaces for stack inspection. Journal of Functional Programming, 15(2):179-217, 2005. [WWW]
    Stack inspection is a mechanism for programming secure applications in the presence of code from various protection domains. Run-time checks of the call stack allow a method to obtain information about the code that (directly or indirectly) invoked it in order to make access control decisions. This mechanism is part of the security architecture of Java and the .NET Common Language Runtime. A central problem with stack inspection is to determine to what extent the local checks inserted into the code are sufficient to guarantee that a global security property is enforced. A further problem is how such verification can be carried out in an incremental fashion. Incremental analysis is important for avoiding re-analysis of library code every time it is used, and permits the library developer to reason about the code without knowing its context of deployment. We propose a technique for inferring interfaces for stack-inspecting libraries in the form of secure calling context for methods. By a secure calling context we mean a pre-condition on the call stack sufficient for guaranteeing that execution of the method will not violate a given global property. The technique is a constraint-based static program analysis implemented via fixed point iteration over an abstract domain of linear temporal logic properties.

    @Article{Besson:05:Interfaces,
    author = "F. Besson and T. de~Grenier~de~Latour and T. Jensen",
    title = "Interfaces for stack inspection",
    journal = "Journal of Functional Programming",
    pages = "179--217",
    volume = "15(2)",
    year = "2005",
    url = "http://www.irisa.fr/lande/fbesson/fbesson.html",
    abstract = {Stack inspection is a mechanism for programming secure applications in the presence of code from various protection domains. Run-time checks of the call stack allow a method to obtain information about the code that (directly or indirectly) invoked it in order to make access control decisions. This mechanism is part of the security architecture of Java and the .NET Common Language Runtime. A central problem with stack inspection is to determine to what extent the local checks inserted into the code are sufficient to guarantee that a global security property is enforced. A further problem is how such verification can be carried out in an incremental fashion. Incremental analysis is important for avoiding re-analysis of library code every time it is used, and permits the library developer to reason about the code without knowing its context of deployment. We propose a technique for inferring interfaces for stack-inspecting libraries in the form of secure calling context for methods. By a secure calling context we mean a pre-condition on the call stack sufficient for guaranteeing that execution of the method will not violate a given global property. The technique is a constraint-based static program analysis implemented via fixed point iteration over an abstract domain of linear temporal logic properties.},
    
    }
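
    A hypothetical OCaml sketch of the general technique named at the end of this abstract: constraint solving by fixpoint iteration over an abstract domain. The toy domain of permission-name sets and the two-unknown constraint system below are invented for illustration; the paper's actual domain is one of linear temporal logic properties.

    (* Toy constraint solver: Kleene fixpoint iteration over an array of
       unknowns, each a set of permission names. [rhs] gives the monotone
       right-hand side of the constraint attached to unknown [i]. *)
    module StringSet = Set.Make (String)

    let solve (n : int) (rhs : StringSet.t array -> int -> StringSet.t) =
      let x = Array.make n StringSet.empty in
      let changed = ref true in
      while !changed do
        changed := false;
        for i = 0 to n - 1 do
          let v = StringSet.union x.(i) (rhs x i) in
          if not (StringSet.equal v x.(i)) then (x.(i) <- v; changed := true)
        done
      done;
      x

    let () =
      (* Invented system: unknown 0 grants FileRead; unknown 1 inherits
         everything unknown 0 grants. *)
      let rhs x = function
        | 0 -> StringSet.singleton "FileRead"
        | _ -> x.(0)
      in
      Array.iteri
        (fun i s ->
          Printf.printf "x%d = {%s}\n" i
            (String.concat ", " (StringSet.elements s)))
        (solve 2 rhs)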
    


  2. David Cachera, Thomas Jensen, David Pichardie, and Vlad Rusu. Extracting a Data Flow Analyser in Constructive Logic. Theoretical Computer Science, 2005.
    We show how to formalise a constraint-based data flow analysis in the specification language of the Coq proof assistant. This involves defining a dependent type of lattices together with a library of lattice functors for modular construction of complex abstract domains. Constraints are expressed in an intermediate representation that allows for both efficient constraint resolution and correctness proof of the analysis with respect to an operational semantics. The proof of existence of a correct, minimal solution to the constraints is constructive which means that the extraction mechanism of Coq provides a provably correct data flow analyser in Ocaml. The library of lattices together with the intermediate representation of constraints are defined in an analysis-independent fashion that provides a basis for a generic framework for proving and extracting static analysers in Coq.

    @ARTICLE{CaJePiRu05TCSExtractAnalyser,
    AUTHOR = {David Cachera and Thomas Jensen and David Pichardie and Vlad Rusu},
    TITLE = {{Extracting a Data Flow Analyser in Constructive Logic}},
    JOURNAL = {Theoretical Computer Science},
    YEAR = {2005},
    ABSTRACT = {We show how to formalise a constraint-based data flow analysis in the specification language of the Coq proof assistant. This involves defining a dependent type of lattices together with a library of lattice functors for modular construction of complex abstract domains. Constraints are expressed in an intermediate representation that allows for both efficient constraint resolution and correctness proof of the analysis with respect to an operational semantics. The proof of existence of a correct, minimal solution to the constraints is constructive which means that the extraction mechanism of Coq provides a provably correct data flow analyser in Ocaml. The library of lattices together with the intermediate representation of constraints are defined in an analysis-independent fashion that provides a basis for a generic framework for proving and extracting static analysers in Coq.} 
    }
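
    The library of lattice functors described above lends itself to a small illustration. The sketch below is ordinary OCaml rather than Coq, and the names (LATTICE, Prod, Bool2) are invented; it shows the shape of a lattice interface and a product functor, the kind of modular construction of abstract domains the abstract mentions.

    (* A signature for lattices and a functor building product lattices. *)
    module type LATTICE = sig
      type t
      val bottom : t
      val join : t -> t -> t
      val leq : t -> t -> bool
    end

    (* Product of two lattices, ordered componentwise. *)
    module Prod (A : LATTICE) (B : LATTICE) :
      LATTICE with type t = A.t * B.t = struct
      type t = A.t * B.t
      let bottom = (A.bottom, B.bottom)
      let join (a1, b1) (a2, b2) = (A.join a1 a2, B.join b1 b2)
      let leq (a1, b1) (a2, b2) = A.leq a1 a2 && B.leq b1 b2
    end

    (* A tiny instance: the two-point lattice of booleans under "or". *)
    module Bool2 : LATTICE with type t = bool = struct
      type t = bool
      let bottom = false
      let join = ( || )
      let leq a b = (not a) || b
    end

    module BP = Prod (Bool2) (Bool2)

    let () =
      let top = BP.join (true, false) (false, true) in
      assert (BP.leq BP.bottom top)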
    


  3. David Cachera and Katell Morin-Allory. Verification of safety properties for parameterized regular systems. ACM Transactions on Embedded Computing Systems, 4(2):228-266, 2005.
    We propose a combination of heuristic methods to prove properties of control signals for regular systems defined by means of affine recurrence equations (AREs). We benefit from the intrinsic regularity of the underlying polyhedral model to handle parameterized systems in a symbolic way. Our techniques apply to safety properties. The general proof process consists in an iteration that alternates two heuristics. We are able to identify the cases when this iteration will stop in a finite number of steps. These techniques have been implemented in a high level synthesis environment based on the polyhedral model.

    @article{CacMorTECS05,
    author = {David Cachera and Katell Morin-Allory},
    title = {Verification of safety properties for parameterized regular systems},
    journal = {ACM Transactions on Embedded Computing Systems},
    volume = {4},
    number = {2},
    year = {2005},
    pages = {228--266},
    publisher = {ACM Press},
    address = {New York, NY, USA},
    ABSTRACT = {We propose a combination of heuristic methods to prove properties of control signals for regular systems defined by means of affine recurrence equations (AREs). We benefit from the intrinsic regularity of the underlying polyhedral model to handle parameterized systems in a symbolic way. Our techniques apply to safety properties. The general proof process consists in an iteration that alternates two heuristics. We are able to identify the cases when this iteration will stop in a finite number of steps. These techniques have been implemented in a high level synthesis environment based on the polyhedral model.} 
    }
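
    A minimal sketch of the proof-process shape described above: an iteration that alternates two heuristics until neither makes progress or a step bound is reached. Both heuristics and the integer state below are placeholders; the paper's heuristics operate on polyhedral domains.

    (* Alternate two strengthening heuristics; each returns [Some s'] on
       progress and [None] when stuck. Stop when both are stuck in a row
       or after [max_steps] successful steps. *)
    let alternate ~(h1 : 's -> 's option) ~(h2 : 's -> 's option)
        ~(max_steps : int) (s0 : 's) : 's =
      let rec go s steps use_h1 =
        if steps >= max_steps then s
        else
          match (if use_h1 then h1 s else h2 s) with
          | Some s' -> go s' (steps + 1) (not use_h1)
          | None -> if use_h1 then go s steps false else s
      in
      go s0 0 true

    let () =
      (* Toy run: h1 bumps even counters, h2 bumps odd ones, both stop at 10. *)
      let h1 n = if n < 10 && n mod 2 = 0 then Some (n + 1) else None in
      let h2 n = if n < 10 && n mod 2 = 1 then Some (n + 1) else None in
      Printf.printf "reached %d\n" (alternate ~h1 ~h2 ~max_steps:100 0)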
    


Conference articles

  1. David Cachera, Thomas Jensen, David Pichardie, and Gerardo Schneider. Certified Memory Usage Analysis. In Proc. of the 13th International Symposium on Formal Methods (FM'05), Lecture Notes in Computer Science, 2005. Springer-Verlag.
    We present a certified algorithm for resource usage analysis, applicable to languages in the style of Java byte code. The algorithm verifies that a program executes in bounded memory. The algorithm is destined to be used in the development process of applets and for enhanced byte code verification on embedded devices. We have therefore aimed at a low-complexity algorithm derived from a loop detection algorithm for control flow graphs. The expression of the algorithm as a constraint-based static analysis of the program over simple lattices provides a link with abstract interpretation that allows to state and prove formally the correctness of the analysis with respect to an operational semantics of the program. The certification is based on an abstract interpretation framework implemented in the Coq proof assistant which has been used to provide a complete formalisation and formal verification of all correctness proofs.

    @INPROCEEDINGS{CaJePiSc05MemoryUsage,
    AUTHOR = {David Cachera and Thomas Jensen and David Pichardie and Gerardo Schneider},
    TITLE = {{Certified Memory Usage Analysis}},
    BOOKTITLE = {Proc. of the 13th International Symposium on Formal Methods (FM'05)},
    YEAR = {2005},
    SERIES = {Lecture Notes in Computer Science},
    PUBLISHER = {Springer-Verlag},
    ABSTRACT = {We present a certified algorithm for resource usage analysis, applicable to languages in the style of Java byte code. The algorithm verifies that a program executes in bounded memory. The algorithm is destined to be used in the development process of applets and for enhanced byte code verification on embedded devices. We have therefore aimed at a low-complexity algorithm derived from a loop detection algorithm for control flow graphs. The expression of the algorithm as a constraint-based static analysis of the program over simple lattices provides a link with abstract interpretation that allows to state and prove formally the correctness of the analysis with respect to an operational semantics of the program. The certification is based on an abstract interpretation framework implemented in the Coq proof assistant which has been used to provide a complete formalisation and formal verification of all correctness proofs.} 
    }
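
    The abstract mentions a low-complexity algorithm derived from loop detection on control flow graphs. The sketch below illustrates the underlying idea under a simplifying assumption: a program whose CFG has no cycle through an allocating instruction allocates bounded memory. The graph encoding and the alloc flags are invented for the example.

    (* CFG as adjacency lists; node [i] allocates iff [alloc.(i)]. A DFS
       finds back edges; for each, the cycle is the segment of the current
       DFS path from the back edge's target, and we check whether some
       node on that cycle allocates. *)
    let cycle_through_alloc (succ : int list array) (alloc : bool array) : bool =
      let n = Array.length succ in
      let state = Array.make n 0 in          (* 0 new, 1 on path, 2 done *)
      let path = ref [] in                   (* current DFS path, top first *)
      let found = ref false in
      let rec dfs u =
        state.(u) <- 1;
        path := u :: !path;
        List.iter
          (fun v ->
            if state.(v) = 1 then begin
              (* Back edge u -> v: collect the path segment down to v. *)
              let rec upto = function
                | [] -> []
                | w :: _ when w = v -> [ v ]
                | w :: rest -> w :: upto rest
              in
              if List.exists (fun w -> alloc.(w)) (upto !path) then
                found := true
            end
            else if state.(v) = 0 then dfs v)
          succ.(u);
        path := List.tl !path;
        state.(u) <- 2
      in
      for u = 0 to n - 1 do if state.(u) = 0 then dfs u done;
      !found

    let () =
      (* Nodes 1 -> 2 -> 1 form a cycle and node 2 allocates: flagged. *)
      let succ = [| [ 1 ]; [ 2 ]; [ 1 ] |] in
      let alloc = [| false; false; true |] in
      Printf.printf "may allocate unboundedly: %b\n"
        (cycle_through_alloc succ alloc)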
    


  2. David Cachera, Thomas Jensen, and Pascal Sotin. Estimating Cache Misses with Semi-Modules and Quantitative Abstraction. In Proc. of IST-APPSEM II Workshop on Applied Semantics, 2005.
    @inproceedings{Sotin05,
    author = {David Cachera and Thomas Jensen and Pascal Sotin},
    title = {Estimating Cache Misses with Semi-Modules and Quantitative Abstraction},
    booktitle = {Proc. of IST-APPSEM II Workshop on Applied Semantics},
    year = {2005} 
    }
    


  3. T. Denmat, M. Ducassé, and O. Ridoux. Data mining and cross-checking of execution traces. A re-interpretation of Jones, Harrold and Stasko test information visualization. In T. Ellman and A. Zisman, editors, Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, November 2005. ACM Press. Note: See RR-5661 for a long version of this article. Keyword(s): Software Engineering, Debugging, Artificial Intelligence, Learning, Knowledge acquisition.
    The current trend in debugging and testing is to cross-check information collected during several executions. Jones et al., for example, propose to use the instruction coverage of passing and failing runs in order to visualize suspicious statements. This seems promising but lacks a formal justification. In this paper, we show that the method of Jones et al. can be re-interpreted as a data mining procedure. More particularly, they define an indicator which characterizes association rules between data. With this formal framework we are able to explain intrinsic limitations of the above indicator.

    @InProceedings{denmat05b,
    Author={T. Denmat and M. Ducass\'{e} and O. Ridoux},
    Title={Data mining and cross-checking of execution traces. A re-interpretation of {Jones, Harrold and Stasko} test information visualization},
    BookTitle={Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering},
    Year={2005},
    Editor={T. Ellman and A. Zisman},
    Publisher={ACM Press},
    Month={November},
    Note={See RR-5661 for a long version of this article},
    Keywords={Software Engineering, Debugging, Artificial Intelligence, Learning, Knowledge acquisition},
    Abstract={The current trend in debugging and testing is to cross-check information collected during several executions. Jones et al., for example, propose to use the instruction coverage of passing and failing runs in order to visualize suspicious statements. This seems promising but lacks a formal justification. In this paper, we show that the method of Jones et al. can be re-interpreted as a data mining procedure. More particularly, they define an indicator which characterizes association rules between data. With this formal framework we are able to explain intrinsic limitations of the above indicator.}
    }
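
    For concreteness, here is one common formulation of the Jones et al. coverage-based indicator that the paper re-interprets: a statement is suspicious when the fraction of failing runs covering it is high relative to the fraction of passing runs covering it. The coverage data below are made up.

    (* Suspiciousness of a statement from pass/fail coverage counts. *)
    let suspiciousness ~passed ~failed ~total_passed ~total_failed =
      let p = float_of_int passed /. float_of_int (max 1 total_passed) in
      let f = float_of_int failed /. float_of_int (max 1 total_failed) in
      if p +. f = 0.0 then 0.0 else f /. (p +. f)

    let () =
      (* statement, #passing runs covering it, #failing runs covering it *)
      let cov = [ ("s1", 10, 1); ("s2", 2, 9); ("s3", 10, 10) ] in
      List.iter
        (fun (s, p, f) ->
          Printf.printf "%s: %.2f\n" s
            (suspiciousness ~passed:p ~failed:f ~total_passed:10
               ~total_failed:10))
        cov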
    


  4. T. Denmat, A. Gotlieb, and M. Ducassé. Proving or Disproving Likely Invariants with Constraint Reasoning. In A. Serebrenik, editor, Proceedings of the 15th Workshop on Logic-based Methods for Programming Environments, Sitges, Spain, October 2005. Note: Satellite event of the International Conference on Logic Programming (ICLP'2005). Published in the Computing Research Repository as cs.SE/0508108. [WWW] Keyword(s): Software Engineering, Testing and Debugging, Program verification, Constraint and logic languages.
    A program invariant is a property that holds for every execution of the program. Recent work suggests inferring likely-only invariants via dynamic analysis. A likely invariant is a property that holds for some executions but is not guaranteed to hold for all executions. In this paper, we present work in progress addressing the challenging problem of automatically verifying that likely invariants are actual invariants. We propose a constraint-based reasoning approach that is able, unlike other approaches, to either prove or disprove likely invariants. In the latter case, our approach provides counter-examples. We illustrate the approach on a motivating example where automatically generated likely invariants are verified.

    @InProceedings{DGD05,
    Author={T. Denmat and A. Gotlieb and M. Ducassé},
    Title={Proving or Disproving Likely Invariants with Constraint Reasoning},
    BookTitle={Proceedings of the 15th Workshop on Logic-based Methods for Programming Environments},
    Address = {Sitges, Spain},
    Year={2005},
    Editor={A. Serebrenik},
    Month={October},
    Note={Satellite event of the International Conference on Logic Programming (ICLP'2005). Published in the Computing Research Repository as cs.SE/0508108},
    url={http://arxiv.org/abs/cs.pl/0508108},
    keywords={Software Engineering, Testing and Debugging, Program verification, Constraint and logic languages},
    Abstract={A program invariant is a property that holds for every execution of the program. Recent work suggests inferring likely-only invariants via dynamic analysis. A likely invariant is a property that holds for some executions but is not guaranteed to hold for all executions. In this paper, we present work in progress addressing the challenging problem of automatically verifying that likely invariants are actual invariants. We propose a constraint-based reasoning approach that is able, unlike other approaches, to either prove or disprove likely invariants. In the latter case, our approach provides counter-examples. We illustrate the approach on a motivating example where automatically generated likely invariants are verified.}
    }
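
    A stand-in illustration of the check discussed above: attempt to refute a likely invariant by finding a counter-example. The paper does this with constraint reasoning; the sketch brute-forces a bounded integer domain instead, which is enough to show how a likely invariant inferred from biased test data (here, even inputs only) gets disproved.

    (* Try to refute a candidate invariant on a bounded domain. *)
    let refute ~(lo : int) ~(hi : int) (inv : int -> bool) : int option =
      let rec go x =
        if x > hi then None else if inv x then go (x + 1) else Some x
      in
      go lo

    let () =
      (* Likely invariant "x / 2 * 2 = x", plausible if every observed run
         used even inputs; it fails on any odd input. *)
      let inv x = x / 2 * 2 = x in
      match refute ~lo:(-100) ~hi:100 inv with
      | Some x -> Printf.printf "counter-example: x = %d\n" x
      | None -> print_endline "no counter-example on the tested domain"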
    


  5. A. Gotlieb, T. Denmat, and B. Botella. Constraint-based test data generation in the presence of stack-directed pointers. In 20th IEEE/ACM International Conference on Automated Software Engineering (ASE'05), Long Beach, CA, USA, Nov. 2005. Note: 4 pages, short paper. Keyword(s): Constraint-Based Test data generation, Constraint satisfaction, stack-directed pointers.
    Constraint-Based Test data generation (CBT) exploits constraint satisfaction techniques to generate test data able to kill a given mutant or to reach a selected branch in a program. When pointer variables are present in the program, aliasing problems may arise and may lead to the failure of current CBT approaches. In our work, we propose an overall CBT method that exploits the results of an intraprocedural points-to analysis and provides two specific constraint combinators for automatically generating test data able to reach a selected branch. Our approach correctly handles multi-level stack-directed pointers that are mainly used in real-time control systems. The method has been fully implemented in the test data generation tool INKA and first experiences in applying it to a variety of existing programs tend to show the interest of the approach.

    @InProceedings{GDB05b,
    author = {Gotlieb, A. and Denmat, T. and Botella, B.},
    title = {Constraint-based test data generation in the presence of stack-directed pointers},
    booktitle = {20th IEEE/ACM International Conference on Automated Software Engineering (ASE'05)},
    address = {Long Beach, CA, USA},
    month = {Nov.},
    note = {4 pages, short paper},
    year = 2005,
    abstract = {Constraint-Based Test data generation (CBT) exploits constraint satisfaction techniques to generate test data able to kill a given mutant or to reach a selected branch in a program. When pointer variables are present in the program, aliasing problems may arise and may lead to the failure of current CBT approaches. In our work, we propose an overall CBT method that exploits the results of an intraprocedural points-to analysis and provides two specific constraint combinators for automatically generating test data able to reach a selected branch. Our approach correctly handles multi-level stack-directed pointers that are mainly used in real-time control systems. The method has been fully implemented in the test data generation tool INKA and first experiences in applying it to a variety of existing programs tend to show the interest of the approach.},
    keywords = {Constraint-Based Test data generation, Constraint satisfaction, stack-directed pointers} 
    }
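
    The core problem in this and the following entry is finding inputs that drive execution to a selected branch. The sketch below shows the shape of that problem with a naive generate-and-test search; the branch condition and input domain are invented, and the paper's actual method solves constraints (with points-to information) rather than enumerating.

    (* Branch to reach in:  if (x > 5 && y = 2 * x) { ... target ... } *)
    let reaches_target (x, y) = x > 5 && y = 2 * x

    (* Naive generate-and-test over a small integer box. *)
    let generate ?(lo = -20) ?(hi = 20) (goal : int * int -> bool) =
      let rec scan x y =
        if x > hi then None
        else if y > hi then scan (x + 1) lo
        else if goal (x, y) then Some (x, y)
        else scan x (y + 1)
      in
      scan lo lo

    let () =
      match generate reaches_target with
      | Some (x, y) -> Printf.printf "test datum: x = %d, y = %d\n" x y
      | None -> print_endline "branch unreachable on the tested domain"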
    


  6. A. Gotlieb, T. Denmat, and B. Botella. Goal-oriented test data generation for programs with pointer variables. In 29th IEEE Annual International Computer Software and Applications Conference (COMPSAC'05), Edinburgh, Scotland, pages 449-454, July 2005. Note: 6 pages. Keyword(s): Goal-oriented test data generation, Constraint Logic Programming, Static Single Assignment form, pointer variables.
    Automatic test data generation leads to the identification of input values on which a selected path or a selected branch is executed within a program (path-oriented vs goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. This paper proposes to extend an existing goal-oriented test data generation technique to deal with multi-level pointer variables. The approach exploits the results of an intraprocedural flow-sensitive points-to analysis to automatically generate goal-oriented test data at the unit testing level. Implementation is in progress and a few examples are presented.

    @InProceedings{GDB05a,
    author = {Gotlieb, A. and Denmat, T. and Botella, B.},
    title = {Goal-oriented test data generation for programs with pointer variables},
    booktitle = {29th IEEE Annual International Computer Software and Applications Conference (COMPSAC'05)},
    address = {Edinburgh, Scotland},
    month = {July},
    pages = {449-454},
    note = {6 pages},
    year = 2005,
    abstract = {Automatic test data generation leads to the identification of input values on which a selected path or a selected branch is executed within a program (path-oriented vs goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. This paper proposes to extend an existing goal-oriented test data generation technique to deal with multi-level pointer variables. The approach exploits the results of an intraprocedural flow-sensitive points-to analysis to automatically generate goal-oriented test data at the unit testing level. Implementation is in progress and a few examples are presented.},
    keywords = {Goal-oriented test data generation, Constraint Logic Programming, Static Single Assignment form, pointer variables} 
    }
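
    This entry leans on an intraprocedural flow-sensitive points-to analysis. As a minimal illustration of what such an analysis computes, the sketch below tracks points-to sets through straight-line code with two statement forms; real C pointers (heap, fields, multi-level updates through *p) are far richer.

    module SS = Set.Make (String)
    module SM = Map.Make (String)

    type stmt =
      | AddrOf of string * string   (* p = &x *)
      | Copy of string * string     (* p = q  *)

    (* One transfer step per statement; assigning [p] strongly updates
       its points-to set (flow-sensitive). *)
    let step env = function
      | AddrOf (p, x) -> SM.add p (SS.singleton x) env
      | Copy (p, q) ->
          let pts =
            match SM.find_opt q env with Some s -> s | None -> SS.empty
          in
          SM.add p pts env

    let () =
      let prog = [ AddrOf ("p", "x"); Copy ("q", "p"); AddrOf ("p", "y") ] in
      let final = List.fold_left step SM.empty prog in
      SM.iter
        (fun v pts ->
          Printf.printf "%s -> {%s}\n" v
            (String.concat ", " (SS.elements pts)))
        final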
    


  7. S.D. Gouraud and A. Gotlieb. Utilisation des CHRs pour générer des cas de test fonctionnel pour la Machine Virtuelle Java Card. In Premières Journées Francophones de Programmation par Contraintes (JFPC'05), Lens, France, June 8-10, 2005.
    Specification-based functional testing consists in deriving test cases from a formal model in order to detect faults in its implementation. In our work, we study the use of Constraint Handling Rules (CHRs) to automate the generation of functional test cases from a formal model. Our case study is a model of the Java Card Virtual Machine (JCVM) specification written in a subset of the Coq language. In this paper, we define an automatic translation of this model into CHR rules with the aim of generating test cases for the JCVM. The key point of our approach lies in the use of deep guards to faithfully model the semantics of our formal model of the JCVM. We then propose a global approach to CHR-based functional test case generation that can be applied to other formal models.

    @InProceedings{GG05,
    author = {Gouraud, S.D. and Gotlieb, A.},
    title = {Utilisation des CHRs pour générer des cas de test fonctionnel pour la Machine Virtuelle Java Card},
    booktitle = {Premières Journées Francophones de Programmation par Contraintes (JFPC'05)},
    year = {2005},
    address = {Lens, FRANCE},
    month = {8-10 juin},
    abstract = {Le test fonctionnel basé sur une spécification formelle consiste à dériver des cas de test à partir d'un modèle formel pour détecter des fautes dans son implémentation. Dans nos travaux, nous étudions l'utilisation des \emph{Constraint Handling Rules} (CHRs) pour automatiser la génération de cas de test fonctionnel basée sur un modèle formel. Notre étude de cas est un modèle de la spécification de la Machine Virtuelle Java Card (JCVM) écrite dans un sous ensemble du langage Coq. Dans cet article, nous définissons une traduction automatique de ce modèle sous forme de règles CHR dans le but de générer des cas de test pour la JCVM. Le point clé de notre approche réside dans l'utilisation des \emph{deep guards} pour modéliser fidèlement la sémantique de notre modèle formel de la JCVM. Ensuite, nous proposons une approche globale pour la génération de cas de test fonctionnel basée sur les CHRs qui peut \^etre appliquée à d'autres modèles formels.} 
    }
    


  8. G. Le Guernic and T. Jensen. Monitoring Information Flow. In Andrei Sabelfeld, editor, Proceedings of the Workshop on Foundations of Computer Security (FCS'05), pages 19-30, June 2005. DePaul University. Note: LICS'05 Affiliated Workshop. [PDF]
    We present an information flow monitoring mechanism for sequential programs. The monitor executes a program on standard data that are tagged with labels indicating their security level. We formalize the monitoring mechanism as a big-step operational semantics that integrates a static information flow analysis to gather information flow properties of non-executed branches of the program. Using the information flow monitoring mechanism, it is then possible to partition the set of all executions in two sets. The first one contains executions which \emph{are safe} and the other one contains executions which may be unsafe. Based on this information, we show that, by resetting the value of some output variables, it is possible to alter the behavior of executions belonging to the second set in order to ensure the confidentiality of secret data.

    @INPROCEEDINGS{LeGuernic2005mif,
    AUTHOR = {G. Le~Guernic and T. Jensen},
    TITLE = {{M}onitoring {I}nformation {F}low},
    BOOKTITLE = {Proceedings of the Workshop on Foundations of Computer Security (FCS'05)},
    YEAR = 2005,
    EDITOR = {Sabelfeld, Andrei},
    MONTH = JUN,
    PUBLISHER = {DePaul University},
    NOTE = {LICS'05 Affiliated Workshop},
    PAGES = {19--30},
    PDF = {http://hal.inria.fr/docs/00/06/50/99/PDF/le-guernic05monitoringInformationFlow.pdf},
    ABSTRACT = {We present an information flow monitoring mechanism for sequential programs. The monitor executes a program on standard data that are tagged with labels indicating their security level. We formalize the monitoring mechanism as a big-step operational semantics that integrates a static information flow analysis to gather information flow properties of non-executed branches of the program. Using the information flow monitoring mechanism, it is then possible to partition the set of all executions in two sets. The first one contains executions which \emph{are safe} and the other one contains executions which may be unsafe. Based on this information, we show that, by resetting the value of some output variables, it is possible to alter the behavior of executions belonging to the second set in order to ensure the confidentiality of secret data.} 
    }
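
    A toy version of the monitoring idea described above: values carry security labels, an assignment under a High guard is tainted through the pc label, and a small static pass over the non-executed branch taints the variables it would have written. The language, labels, and example program are deliberately minimal.

    type label = Low | High
    let join a b = if a = High || b = High then High else Low

    type expr = Const of int | Var of string
    type stmt =
      | Assign of string * expr
      | If of expr * stmt list * stmt list

    module SM = Map.Make (String)

    let eval env = function
      | Const n -> (n, Low)
      | Var x -> (match SM.find_opt x env with Some v -> v | None -> (0, Low))

    (* Variables assigned anywhere in a statement list (static pass). *)
    let rec assigned stmts =
      List.concat_map
        (function
          | Assign (x, _) -> [ x ]
          | If (_, t, e) -> assigned t @ assigned e)
        stmts

    let rec exec pc env = function
      | Assign (x, e) ->
          let v, l = eval env e in
          SM.add x (v, join pc l) env
      | If (g, t, e) ->
          let gv, gl = eval env g in
          let taken, untaken = if gv <> 0 then (t, e) else (e, t) in
          let env = List.fold_left (exec (join pc gl)) env taken in
          (* Taint the variables the other branch would have written;
             for brevity only already-defined variables are updated. *)
          List.fold_left
            (fun env x ->
              match SM.find_opt x env with
              | Some (v, l) -> SM.add x (v, join l (join pc gl)) env
              | None -> env)
            env (assigned untaken)

    let () =
      (* secret is High; low depends on it only via control flow. *)
      let env = SM.singleton "secret" (1, High) in
      let prog =
        [ If (Var "secret",
              [ Assign ("low", Const 1) ],
              [ Assign ("low", Const 0) ]) ]
      in
      let v, l = SM.find "low" (List.fold_left (exec Low) env prog) in
      Printf.printf "low = %d, label = %s\n" v
        (if l = High then "High" else "Low")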
    


  9. G. Le Guernic and J. Perret. FL-system's Intelligent Cache. In Alexandre Vautier and Sylvie Saget, editors, Proceedings of Majecstic 2005, pages 79-88, November 2005. [PDF]
    This article presents an application of software engineering techniques to the modelling of virtual environments. As soon as such environments become complex, they represent particularly large volumes of data and are therefore difficult to manipulate. Moreover, exhaustive, that is, explicit, descriptions of these environments do not evolve easily. To address these problems, there exist methods based on rewriting systems that describe the objects of the environment generically. Each object is described by an axiom that is rewritten when the environment is generated. During this phase, two rewriting sequences frequently lead to the same object. A cache mechanism can then improve performance by reusing computations that have already been carried out. However, merely matching the names and parameters of axioms is not sufficient to identify all identical rewritings: rewritings of the same axiom with different effective parameters can lead to the same object. The system proposed in this article performs a dynamic analysis of the rewritings in order to improve cache performance by detecting such cases. The first part of this article presents the rewriting system, the second the analysis used, and the last the proposed mechanism.

    @INPROCEEDINGS{leguernic05flic,
    AUTHOR = {G. Le Guernic and J. Perret},
    TITLE = {{FL}-system's {I}ntelligent {C}ache},
    BOOKTITLE = {Proceedings of Majecstic 2005},
    EDITOR = {Vautier, Alexandre and Saget, Sylvie},
    PAGES = {79--88},
    YEAR = 2005,
    MONTH = NOV,
    PDF = {http://hal.inria.fr/docs/00/04/36/98/PDF/43.pdf},
    ABSTRACT = {Cet article présente une application de techniques issues du génie logiciel à la modélisation d'environnements virtuels. Ces derniers, dès lors qu'ils sont complexes, constituent des masses de données particulièrement volumineuses et, de ce fait, difficiles à manipuler. De plus, les descriptions exhaustives de ces environnements, c'est-à-dire explicites, sont peu évolutives. Pour résoudre ces problèmes, il existe des méthodes basées sur des systèmes de réécriture permettant de décrire de façon générique les objets de l'environnement. Chaque objet est ainsi décrit par un axiome qui est réécrit lors de la génération de l'environnement. Lors de cette phase, il est fréquent de retrouver deux séquences de réécriture aboutissant à un même objet. Il est alors possible d'utiliser un mécanisme tel que le cache afin d'améliorer les performances en profitant des calculs déjà effectués. Cependant, se contenter de mettre en correspondance les noms et paramètres des axiomes n'est pas suffisant pour établir l'ensemble des réécritures identiques. En effet, les réécritures d'un même axiome avec des paramètres effectifs différents peuvent aboutir au même objet. Le système proposé dans cet article effectue une analyse dynamique des réécritures afin d'améliorer les performances du cache en détectant de tels cas. La première partie de cet article présente le système de réécriture, la seconde l'analyse utilisée, et la dernière le mécanisme proposé.} 
    }
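
    The caching idea summarised in the abstract above can be illustrated with plain memoisation: keying the cache on only those parameters the rewriting result actually depends on produces hits that a key on all parameters would miss. The axiom, its dependency projection, and the string "object" below are all invented.

    (* Cache from (axiom name, relevant parameters) to generated object. *)
    let table : (string * int list, string) Hashtbl.t = Hashtbl.create 16

    (* [deps] projects out the parameters the result depends on; in the
       paper this information comes from a dynamic analysis of the
       rewritings. *)
    let cached_rewrite ~name ~(deps : int list -> int list) rewrite args =
      let key = (name, deps args) in
      match Hashtbl.find_opt table key with
      | Some obj -> obj
      | None ->
          let obj = rewrite args in
          Hashtbl.add table key obj;
          obj

    let () =
      (* A "house" object whose geometry depends only on its first
         parameter (a size); the second (a random seed) is ignored. *)
      let rewrite args = Printf.sprintf "house(size=%d)" (List.hd args) in
      let deps args = [ List.hd args ] in
      let a = cached_rewrite ~name:"house" ~deps rewrite [ 3; 42 ] in
      let b = cached_rewrite ~name:"house" ~deps rewrite [ 3; 7 ] in
      Printf.printf "same object reused: %b\n" (a == b)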
    


  10. Katell Morin-Allory and David Cachera. Proving Parameterized Systems: The Use of Pseudo-Pipelines in Polyhedral Logic. In Proc. of 13th CHARME conference, Lecture Notes in Computer Science, pages 376-379, 2005. Springer-Verlag.
    @InProceedings{MorCacCHARME05,
    author = {Katell Morin-Allory and David Cachera},
    title = {{Proving Parameterized Systems: {The} Use of Pseudo-Pipelines in Polyhedral Logic}},
    booktitle = {Proc. of 13th CHARME conference},
    pages = {376--379},
    year = 2005,
    series = {Lecture Notes in Computer Science},
    publisher = {Springer-Verlag},
    
    }
    


  11. David Pichardie. Modular proof principles for parameterized concretizations. In Proc. of 2nd International Workshop on Construction and Analysis of Safe, Secure and Interoperable Smart Devices (CASSIS 2005), number 3956 of Lecture Notes in Computer Science, pages 138-154, 2005. Springer-Verlag.
    @InProceedings{Pich05ParamConcr,
    author = "David Pichardie",
    title = {Modular proof principles for parameterized concretizations},
    booktitle = "Proc.\ of 2nd International Workshop on Construction and Analysis of Safe, Secure and Interoperable Smart Devices (CASSIS 2005)",
    year = "2005",
    number = "3956",
    pages = "138--154",
    series = "Lecture Notes in Computer Science",
    publisher = "{Springer-Verlag}",
    
    }
    


Internal reports

  1. F. Besson, T. Jensen, and D. Pichardie. A PCC Architecture based on Certified Abstract Interpretation. Research Report RR-5751, INRIA, November 2005. Note: Also Publication Interne IRISA PI-1764. [PDF]
    @TechReport{RR-5751,
    Author={F. Besson and T. Jensen and D. Pichardie},
    Title={A PCC Architecture based on Certified Abstract Interpretation},
    Institution={INRIA},
    Year={2005},
    Type={Research Report},
    Number={RR-5751},
    Month={November},
    pdf={ftp://ftp.inria.fr/INRIA/publication/publi-pdf/RR/RR-5751.pdf},
    Note={Also Publication Interne IRISA PI-1764} 
    }
    


  2. David Cachera and Katell Morin-Allory. Proving Parameterized Systems: the use of a widening operator and pseudo-pipelines in polyhedral logic. Technical report TIMA-RR--05/04-01--FR, TIMA-IMAG, 2005.
    We propose proof techniques and tools for the formal verification of regular parameterized systems. These systems are expressed in the polyhedral model, which combines affine recurrence equations with index sets of polyhedral shape. We extend a previously defined proof system based on a polyhedral logic with the detection of pseudo-pipelines, that are particular patterns in the variable definitions generalizing the notion of pipeline. The combination of pseudo-pipeline detection with the use of a simple widening operator greatly improves the effectiveness of our proof techniques.

    @TechReport{CacMorRR05,
    author = {David Cachera and Katell Morin-Allory},
    title = {Proving Parameterized Systems: the use of a widening operator and pseudo-pipelines in polyhedral logic},
    institution = {TIMA-IMAG},
    year = 2005,
    number = {TIMA-RR--05/04-01--FR},
    ABSTRACT = {We propose proof techniques and tools for the formal verification of regular parameterized systems. These systems are expressed in the polyhedral model, which combines affine recurrence equations with index sets of polyhedral shape. We extend a previously defined proof system based on a polyhedral logic with the detection of pseudo-pipelines, that are particular patterns in the variable definitions generalizing the notion of pipeline. The combination of pseudo-pipeline detection with the use of a simple widening operator greatly improves the effectiveness of our proof techniques.} 
    }
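
    The report combines pseudo-pipeline detection with "a simple widening operator". As background, the sketch below shows a textbook widening on integer intervals: a bound that is not stable between two iterates is pushed to infinity, forcing ascending chains to stabilise. This is the standard abstract-interpretation notion, not the report's polyhedral setting.

    type bound = NegInf | Int of int | PosInf
    type itv = Bot | Itv of bound * bound

    (* Comparisons on lower and upper bounds. *)
    let le_lo a b = match a, b with
      | NegInf, _ -> true | _, NegInf -> false
      | Int x, Int y -> x <= y | _, PosInf -> true | PosInf, _ -> false
    let le_hi a b = match a, b with
      | _, PosInf -> true | PosInf, _ -> false
      | Int x, Int y -> x <= y | NegInf, _ -> true | _, NegInf -> false

    (* Widening: keep a bound only if it is stable, else jump to infinity. *)
    let widen old next = match old, next with
      | Bot, i | i, Bot -> i
      | Itv (l1, h1), Itv (l2, h2) ->
          Itv ((if le_lo l1 l2 then l1 else NegInf),
               (if le_hi h2 h1 then h1 else PosInf))

    let () =
      (* Iterating i <- i + 1 from [0,0]: [0,0] widened with [0,1]. *)
      match widen (Itv (Int 0, Int 0)) (Itv (Int 0, Int 1)) with
      | Itv (Int 0, PosInf) -> print_endline "widened to [0, +inf)"
      | _ -> print_endline "unexpected"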
    


  3. T. Denmat, M. Ducassé, and O. Ridoux. Data Mining and Cross-checking of Execution Traces. A re-interpretation of Jones, Harrold and Stasko test information visualization (Long version). Research Report RR-5661, INRIA, August 2005. Note: Also Publication Interne IRISA PI-1743. [WWW]
    @TechReport{denmat05c,
    Author={T. Denmat and M. Ducass\'{e} and O. Ridoux},
    Title={Data Mining and Cross-checking of Execution Traces. {A} re-interpretation of {Jones}, {Harrold} and {Stasko} test information visualization ({Long} version)},
    Institution={INRIA},
    Year={2005},
    Type={Research Report},
    Number={RR-5661},
    Month={August},
    url={http://www.inria.fr/rrrt/rr-5661.html},
    Note={Also Publication Interne IRISA PI-1743} 
    }
    


Miscellaneous

  1. G. Le Guernic. Roles & Security, 2005. Note: Dagstuhl Seminar 03411 (October 2003): Language-Based Security. [WWW]
    @MISC{LeGuernic2003ras_misc,
    AUTHOR = {G. Le~Guernic},
    TITLE = {{R}oles \& {S}ecurity},
    NOTE = {Dagstuhl Seminar 03411 (October 2003): Language-Based Security},
    BOOKTITLE = {03411 Abstracts Collection -- Language-Based Security},
    YEAR = 2005,
    EDITOR = {Banerjee, Anindya and Mantel, Heiko and Naumann, David A. and Sabelfeld, Andrei},
    NUMBER = 03411,
    SERIES = {Dagstuhl Seminar Proceedings},
    ADDRESS = {Dagstuhl, Germany},
    PUBLISHER = {Internationales Begegnungs- und Forschungszentrum (IBFI), Schloss Dagstuhl, Germany},
    URL = {http://drops.dagstuhl.de/opus/volltexte/2005/173 [date of citation: 2005-01-01]} 
    }
    




This document was translated from BibTeX by bibtex2html