Keynote speakers

Moshe Vardi
Department of Computer Science, Rice University, Houston, TX (USA)
Bio Moshe Y. Vardi is the George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the co-recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, and the EATCS Distinguished Achievements Award. He is the author or co-author of over 400 papers, as well as two books: "Reasoning about Knowledge" and "Finite Model Theory and Its Applications". He is a Fellow of the Association for Computing Machinery, the American Association for Artificial Intelligence, the American Association for the Advancement of Science, and the Institute of Electrical and Electronics Engineers. He is a member of the US National Academy of Engineering, the American Academy of Arts and Sciences, the European Academy of Sciences, and Academia Europaea. He holds honorary doctorates from Saarland University in Germany and the University of Orléans in France. He is the Editor-in-Chief of the Communications of the ACM.

Branching vs. Linear Time: Semantical Perspective

The discussion of the relative merits of the linear-time versus branching-time frameworks goes back to the early 1980s. One of the beliefs dominating this discussion has been that the linear-time framework is not expressive enough semantically, and that linear-time logics therefore lack expressive power. In this talk we examine the branching-versus-linear issue from the perspective of process equivalence, one of the most fundamental notions in concurrency theory. Defining a notion of process equivalence essentially amounts to defining a semantics for processes. Over the last three decades numerous notions of process equivalence have been proposed. Researchers in this area no longer try to identify the "right" notion of equivalence; rather, the focus has shifted to providing taxonomic frameworks, such as "the linear-branching spectrum", for the many proposed notions, and to determining their suitability for different applications.

We revisit this issue here from a fresh perspective. We postulate three principles that we view as fundamental to any discussion of process equivalence. First, we borrow from research in denotational semantics and take contextual equivalence as the primary notion of equivalence. This eliminates many testing scenarios as either too strong or too weak. Second, we require the description of a process to specify all relevant behavioral aspects of the process. Finally, we require the observable behavior of a process to be reflected in its input/output behavior. Under these principles, linear-time semantics emerges as the right approach. As an example, we apply these principles to the framework of transducers, a classical notion of state-based process that dates back to the 1950s and is well suited to reactive systems. We show that our principles yield a unique notion of process equivalence, which is trace based rather than tree based.
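
To make the contrast concrete, here is a minimal sketch (our illustration, not part of the talk) in Python: the textbook processes P = a.(b + c) and Q = a.b + a.c have the same traces, so a linear-time semantics identifies them, while a tree-based semantics distinguishes them.

    # Transition systems as {state: [(label, next_state), ...]}.
    # P = a.(b + c): after 'a', P can still choose between b and c.
    # Q = a.b + a.c: the choice is already made at the initial 'a'.
    P = {"p0": [("a", "p1")], "p1": [("b", "p2"), ("c", "p3")],
         "p2": [], "p3": []}
    Q = {"q0": [("a", "q1"), ("a", "q2")], "q1": [("b", "q3")],
         "q2": [("c", "q4")], "q3": [], "q4": []}

    def traces(ts, state):
        # All complete traces (sequences of labels) from `state`.
        outs = ts[state]
        if not outs:
            return {()}
        return {(lbl,) + t for lbl, nxt in outs for t in traces(ts, nxt)}

    # Trace-based (linear-time) semantics identifies P and Q ...
    assert traces(P, "p0") == traces(Q, "q0") == {("a", "b"), ("a", "c")}
    # ... while a tree-based semantics does not: Q's 'a'-move to q1 cannot
    # be matched by P, since p1 still offers both b and c.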

(Joint work with Sumit Nain)


Kurt Jensen
Department of Computer Science, Aarhus University, Denmark
Bio Kurt Jensen is professor and head of the Department of Computer Science at Aarhus University, Denmark. Kurt Jensen is the "father" of Coloured Petri Nets. He made the initial definition of the CPN language, including the hierarchy constructs that allow a Petri net model to consist of a set of cooperating sub-models. He played a key role in the development of analysis methods and tool support for high-level Petri Nets, in particular place invariants and state spaces. He was the first to exploit symmetry to reduce state spaces, and he also conceived the basic idea behind the more recent sweep-line method. Kurt Jensen is the founder of the CPN group at Aarhus University, Denmark. The group was considered to be the world leader in the practical application of high-level Petri Nets. The group made the Design/CPN tool, which was licensed to 1,000 organisations in 60 countries. The group has also developed CPN Tools, the "successor" of Design/CPN, which has more than 10,000 licenses in 150 countries.

Reflections on the History and Impact of High Level Petri Nets

Concurrent systems are difficult to design. They possess non-determinism, and their execution may proceed in many different ways, e.g. depending on whether messages are lost during transmission, on the scheduling of the individual processes, and on the time at which input is received from the environment. This means that concurrent systems have an astronomical number of possible executions, and it is easy for the designer to miss important interaction patterns. This may lead to gaps or malfunctions in the system design. For many concurrent systems it is essential that they work correctly from the very beginning. Hence, it is crucial to provide methods that enable debugging and testing of central parts of system designs prior to implementation and deployment.

This talk describes the role of High-level Petri Nets in the design and validation of concurrent systems. One way to design a complex concurrent system is to build a model: an abstract mathematical representation that can be manipulated by means of a computer tool. We use the model to investigate how the system will behave and the properties it will have. We make models to gain insight into the system, to get ideas for improving the design, and to locate design errors and other shortcomings. Models help us to ensure the completeness and the correctness of the design.

In principle, we could build our models by means of elementary nets or place/transition nets. However, computer systems (and many other kinds of systems) contain complex data which influences the behaviour of the system. This means that we need a modelling language which makes it possible to represent data in an adequate and succinct way. This is offered by high-level Petri Nets.

The first successful type of high-level Petri Nets was called Predicate/Transition Nets. This net class was developed by Hartmann Genrich and Kurt Lautenbach from Petri’s group at Schloss Birlinghoven. The first paper was presented at a conference on Semantics of Concurrent Computation in 1979. The work was partly based on earlier work by Kurt Lautenbach and M. Schiffers (1977) and Robert Shapiro (1979). In Predicate/Transition Nets tokens can be distinguished from each other. They are said to be coloured – in contrast to the tokens of place/transition nets, which are indistinguishable and drawn as black dots. Transitions can occur in many different ways, depending on the colours of the available input tokens. Arc expressions and guards are used to specify the enabling conditions and the effects of transition occurrences.

The next step forward was the development of Coloured Petri Nets at Aarhus University, Denmark, in 1979. They allowed the use of a number of different colour sets. It gradually turned out to be convenient to define the colour sets by means of the data types known from programming languages, such as products, records, lists, enumerations, etc. Token colours became structured (and hence much more powerful), type checking became possible (making it much easier to locate modelling errors), and colour sets, arc expressions and guards could be specified with the syntax and semantics known from programming languages.
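
As a small illustration of these ideas (our sketch, not taken from the talk or from any CPN tool), the following Python fragment mimics one CPN-style transition: tokens carry typed data ("colours"), a guard restricts enabling, and an arc expression computes the produced token.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Packet:            # a structured colour set, defined like a record
        src: str
        dst: str
        payload: str

    # A marking maps each place to a collection of coloured tokens.
    marking = {"in": [Packet("A", "A", "loop"), Packet("A", "B", "hello")],
               "out": []}

    def fire_forward(marking):
        # Fire the 'forward' transition once, if it is enabled.
        # Guard: src != dst. The output arc expression rewrites the payload,
        # so the effect of firing depends on the colours of the consumed token.
        for tok in marking["in"]:
            if tok.src != tok.dst:                      # guard
                marking["in"].remove(tok)               # consume input token
                marking["out"].append(                  # produce output token
                    Packet(tok.src, tok.dst, tok.payload.upper()))
                return True
        return False                                    # transition not enabled

    fire_forward(marking)
    # marking["out"] now holds Packet(src='A', dst='B', payload='HELLO')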

High-level Petri Nets have made a significant impact. The most popular computer tool for Coloured Petri Nets has more than 10,000 licenses in nearly 150 countries (cpntools.org/). The books and research papers defining Coloured Petri Nets have close to 10,000 citations (www.cs.au.dk/CPnets/cpnbook/). High-level Petri Nets have become an international ISO/IEC standard and have been used in numerous industrial projects, a considerable number of which have been documented in published peer-reviewed papers.


Stéphane Lafortune
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, USA
Bio Stéphane Lafortune is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, USA. He obtained his degrees from École Polytechnique de Montréal (B.Eng.), McGill University (M.Eng.), and the University of California at Berkeley (PhD), all in electrical engineering. Dr. Lafortune is a Fellow of the IEEE (1999). His research interests are in discrete event systems and include multiple problem domains: modeling, diagnosis, control, optimization, and applications to computer systems. He is co-developer of the software packages DESUMA and UMDES. He co-authored, with C. Cassandras, the textbook Introduction to Discrete Event Systems (2nd Edition, Springer, 2008). He received the Presidential Young Investigator Award from the U.S. National Science Foundation in 1990 and the George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society in 1994 (for a paper co-authored with S. L. Chung and F. Lin) and in 2001 (for a paper co-authored with G. Barrett).

Eliminating Concurrency Bugs in Multithreaded Software: A New Approach Based on Discrete-Event Control

This talk will describe the Gadara project, a research effort involving the University of Michigan, the Georgia Institute of Technology, and HP Laboratories, whose goal is to eliminate certain classes of concurrency bugs in concurrent software by controlling the execution of programs at runtime. The Gadara process involves four stages: modeling of the source code at compile time, offline analysis of its properties, feedback control synthesis, and implementation of the control logic in the source code. Concurrent programs are modeled as a special class of resource allocation Petri nets, called Gadara nets, that capture control flow together with lock allocation and release operations. Deadlock-freeness of a concurrent program is captured in terms of liveness of its Gadara net model. We will describe two optimal control synthesis methodologies for Gadara nets that have been developed in our project; optimality in this context refers to the elimination of deadlocks in the program with minimally restrictive control logic. These methodologies exploit the structural properties of the nets via siphon analysis or via classification theory. We will describe important properties of the proposed control synthesis methodologies and present experimental results that illustrate their scalability. Finally, we will discuss the application of our results to real-world concurrent software. The presentation will focus on the case of circular-wait deadlocks in multithreaded programs employing mutual exclusion locks for shared data; the application of the Gadara methodology to other classes of concurrency bugs will be discussed briefly.
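
For readers unfamiliar with the bug class, the following Python fragment is a minimal sketch of the circular-wait pattern the talk targets (a textbook illustration, not Gadara itself): two threads acquire two mutexes in opposite orders, so each may end up holding one lock while waiting forever for the other.

    import threading

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker_1():
        with lock_a:         # holds A ...
            with lock_b:     # ... then requests B
                pass

    def worker_2():
        with lock_b:         # holds B ...
            with lock_a:     # ... then requests A
                pass

    # Running worker_1 and worker_2 in parallel may or may not deadlock,
    # depending on the schedule. A Gadara-style runtime controller would
    # instead postpone one of the first acquisitions whenever the Petri net
    # model predicts that granting it could lead to a circular wait.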

This is collaborative work with the members of the Gadara team: http://gadara.eecs.umich.edu


Kees van Hee
Department of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands
Bio Kees van Hee has been full professor of Computer Science at Eindhoven University of Technology (TU/e) since 1984, although between 1994 and 2004 he was a part-time professor. He retired on 11 November 2011. He studied Mathematics, together with Physics and Economics, in Leiden. In 1971 he became assistant professor of Computer Science in Leiden. At the end of 1973 he moved to TU/e, where he received a PhD in 1978. The subject of his thesis was Markov decision processes with incomplete information. After his PhD he became director of AKB, a consultancy firm in Rotterdam specializing in operations research and statistics. In 1984 he was appointed professor of Computer Science, in particular the theory of information systems. He focused on methods for modeling information systems and on intelligent information systems. During the academic year 1991-1992 he was visiting professor in Waterloo, Ontario. In 1994 he became director of Bakkenist Management Consultants. When this firm merged with Deloitte in 1999, he became a managing partner of Deloitte Consultancy. In 2004 he returned full-time to TU/e and focused on analysis methods for information systems using Petri nets. From 2005 until 2009 he was Dean of the Department of Mathematics and Computer Science. Since 2007 he has been director of the Stan Ackermans Institute, which provides professional doctorate programs in design and engineering. He has published over 140 papers. He was first supervisor of 15 and second supervisor of 10 PhD students; eight of them are full professors today. He supervised ca. 130 master's thesis projects.

The Right Timing: Reflections on the Modeling and Analysis of Time

Authors: Kees van Hee and Natalia Sidorova

Over the last 25 years the modeling and analysis of time have been studied extensively in the context of Petri nets, as well as for similar models of concurrency. There are many different approaches to modeling time, such as delays on transitions, durations of transitions, delays in tokens, and clocks. We are interested in time for two reasons: the analysis of safety properties and the analysis of the performance of systems. For the latter reason, time is related to stochastics, because performance analysis is not concerned with the extremes of behavior but with the ‘normal’ behavior, i.e. behavior within bounds with a specified probability. Each non-deterministic choice should then be endowed with a probability. The extensive research has shown that the many different approaches all have their own merits and weaknesses. Some of the approaches are questionable from a modeling point of view but handy for analysis purposes, while others are good for modeling but bad for analysis. The application domain also influences the right choice of time model. Two extreme application domains are logistic systems (e.g. the business chain from cows to milk products) and embedded control systems (e.g. robots): not only are the time units different, but so are the types of questions asked. In this presentation we first classify the different approaches to time in relation to their applications. Then we focus on discrete time, which we define as finite (or countable) time domains with discrete probability distributions over the choices. We show how several approaches can be unified by transformations from one into another, which is also useful for the application of software tools. In the last 25 years model checking methods have become feasible because of the enormous increase in computing power. However, analytical methods for analyzing behavior are still preferable, because they often have lower complexity and give better insight. For the discrete case we present such a method for the time analysis of workflow nets. Finally, we show how classical reduction techniques can be used to analyze large models in a compositional way.
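
As a small illustration of the discrete-time setting (our sketch, not the method presented in the talk), attaching probabilities to the choices in a tiny workflow yields a Markov chain whose expected completion time can be computed analytically rather than by exploring the state space.

    import numpy as np

    # States: 0 = task, 1 = rework, 2 = done (absorbing); each step takes
    # one time unit, and each row gives the distribution over next states.
    P = np.array([[0.0, 0.3, 0.7],   # task: 30% sent to rework, 70% done
                  [1.0, 0.0, 0.0],   # rework always returns to the task
                  [0.0, 0.0, 1.0]])  # done stays done

    # Expected steps to absorption from the transient states solve the
    # linear system t = 1 + Q t, i.e. (I - Q) t = 1, where Q is the
    # transient part of P -- analytical, with no state-space exploration.
    Q = P[:2, :2]
    t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    print(t[0])   # expected completion time from the start state (about 1.86)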


Catuscia Palamidessi
INRIA Saclay and LIX, École Polytechnique, France
Bio Catuscia Palamidessi is Director of Research at INRIA Saclay, where she leads the team Comète. She received her PhD from the University of Pisa in 1988. She worked as Full Professor at the University of Genova, Italy (1994-1997) and at the Pennsylvania State University, USA (1998-2002). She has been Program Committee Chair of several conferences, including CONCUR’00, ICALP’05 track B, SOFSEM’09, MFPS XXV and QEST’11. She has given invited talks at various conferences, including CONCUR’99, PPDP’02, MFPS’06, AMAST’10, LICS’10 and ICALP’11. She has served as a PC member for more than 60 conferences. She is on the editorial boards of the CUP journals MSCS (Mathematical Structures in Computer Science) and TPLP (Theory and Practice of Logic Programming), and of Elsevier’s ENTCS (Electronic Notes in Theoretical Computer Science). She is on the Steering Committees of EATCS (the European Association for Theoretical Computer Science) and of ETAPS (European Joint Conferences on Theory and Practice of Software).

Quantitative Aspects in Information Protection

One of the concerns in the use of computer systems is to avoid the leakage of secret information through public outputs. Ideally we would like systems to be completely secure, but in practice this goal is often impossible to achieve. Therefore, it is important to devise methods to define and reason about the amount of leakage. In this talk, we illustrate various quantitative approaches to information leakage which have recently been developed in the area of security. We then make a connection with differential privacy, a very successful notion of privacy that emerged from the area of statistical databases. Finally, we generalize the notion of differential privacy so as to make it applicable to domains other than databases. We start from the observation that the standard notion of differential privacy relies on the Hamming distance on the set of databases, and we extend it to arbitrary metric spaces. We show various examples, and we revisit some of the fundamental results of differential privacy in this extended setting. As particular case studies, we consider applications to location-based services and to smart meters.
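
In symbols (our paraphrase of the generalization sketched above, not verbatim from the talk), standard epsilon-differential privacy requires, for a mechanism K, every pair of adjacent databases x and x' (Hamming distance at most 1), and every set S of outputs:

    \Pr[K(x) \in S] \;\le\; e^{\epsilon} \cdot \Pr[K(x') \in S]

Replacing adjacency by an arbitrary metric d on the inputs yields the extended notion, in which the allowed ratio degrades gracefully with the distance between the two inputs:

    \Pr[K(x) \in S] \;\le\; e^{\epsilon \, d(x,x')} \cdot \Pr[K(x') \in S]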

