The discussion of the relative merits of linear- versus branching-time frameworks goes back to the early 1980s. One belief that has dominated this discussion is that the linear-time framework is not expressive enough semantically, and that linear-time logics consequently lack expressive power. In this talk we examine the branching-versus-linear issue from the perspective of process equivalence, one of the most fundamental notions in concurrency theory. Defining a notion of process equivalence essentially amounts to defining a semantics for processes. Over the last three decades numerous notions of process equivalence have been proposed. Researchers in this area no longer try to identify the "right" notion of equivalence. Rather, the focus has shifted to providing taxonomic frameworks, such as "the linear-branching spectrum", for the many proposed notions, and to determining their suitability for different applications.
We revisit this issue here from a fresh perspective. We postulate three principles that we view as fundamental to any discussion of process equivalence. First, we borrow from research in denotational semantics and take contextual equivalence as the primary notion of equivalence. This eliminates many testing scenarios as either too strong or too weak. Second, we require the description of a process to specify all relevant behavioral aspects of the process. Finally, we require the observable behavior of a process to be reflected in its input/output behavior. Under these principles, linear-time semantics emerges as the right approach. As an example, we apply these principles to the framework of transducers, a classical notion of state-based process that dates back to the 1950s and is well suited to reactive systems. We show that our principles yield a unique notion of process equivalence, which is trace based rather than tree based.
(Joint work with Sumit Nain)
Concurrent systems are difficult to design. They are non-deterministic: execution may proceed in many different ways, depending, for example, on whether messages are lost during transmission, on the scheduling of the individual processes, and on the time at which input is received from the environment. Concurrent systems therefore have an astronomical number of possible executions, and it is easy for the designer to miss important interaction patterns, leading to gaps or malfunctions in the system design. For many concurrent systems it is essential that they work correctly from the very beginning. Hence, it is crucial to provide methods that enable central parts of the system design to be debugged and tested prior to implementation and deployment.
This talk describes the role of High-level Petri Nets in the design and validation of concurrent systems. One way to design a complex concurrent system is to build a model: an abstract mathematical representation that can be manipulated by means of a computer tool. We use the model to investigate how the system will behave and the properties it will have. We make models to gain insight into the system, to get ideas for improving the design, and to locate design errors and other shortcomings. Models help us to ensure the completeness and the correctness of the design.
In principle, we could build our models by means of elementary nets or place/transition nets. However, computer systems (and many other kinds of systems) contain complex data which influences the behaviour of the system. This means that we need a modelling language which makes it possible to represent data in an adequate and succinct way. This is offered by high-level Petri Nets.
The first successful type of high-level Petri Nets was Predicate/Transition Nets, a net class developed by Hartmann Genrich and Kurt Lautenbach from Petri's group at Schloss Birlinghoven. The first paper was presented at a conference on Semantics of Concurrent Computation in 1979, building in part on earlier work by Kurt Lautenbach and M. Schiffers (1977) and by Robert Shapiro (1979). In Predicate/Transition Nets tokens can be distinguished from each other. They are said to be coloured – in contrast to the tokens of place/transition nets, which are indistinguishable and drawn as black dots. Transitions can occur in many different ways, depending on the colours of the available input tokens. Arc expressions and guards are used to specify the enabling conditions and the effects of transition occurrences.
The next step forward was the development of Coloured Petri Nets at Aarhus University, Denmark, in 1979. They allowed a number of different colour sets to be used. It gradually turned out to be convenient to define the colour sets by means of data types known from programming languages, such as products, records, lists, and enumerations. Token colours became structured (and hence much more powerful), type checking became possible (making it much easier to locate modelling errors), and colour sets, arc expressions, and guards could be specified using the syntax and semantics known from programming languages.
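The interplay of coloured tokens, guards, and arc expressions can be sketched in a few lines (all place names, colour sets, and the guard below are invented for illustration; this is not the CPN Tools modelling language):

```python
from collections import Counter

# Marking: each place holds a multiset of coloured tokens.
marking = {
    "waiting": Counter({("alice", 3): 1, ("bob", 7): 1}),  # colour set: name * job_id
    "cpu":     Counter({"unit1": 1}),
}

def enabled_bindings(m):
    """Bindings of the variables (job, res) that satisfy the guard job_id > 5."""
    return [(job, res)
            for job in m["waiting"].elements()
            for res in m["cpu"].elements()
            if job[1] > 5]                                 # guard

def fire(m, binding):
    """Consume the bound input tokens; the output arc expression builds a pair."""
    job, res = binding
    m["waiting"][job] -= 1
    m["cpu"][res] -= 1
    m.setdefault("running", Counter())[(job, res)] += 1    # arc expression: (job, res)
    return m

binding = enabled_bindings(marking)[0]   # only ("bob", 7) passes the guard
fire(marking, binding)
```

The same transition can thus occur in different ways for different token colours, and the structured output token records the chosen binding.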
High-level Petri Nets have made a significant impact. The most popular computer tool for Coloured Petri Nets has more than 10,000 licenses in nearly 150 countries (cpntools.org/). The books and research papers defining Coloured Petri Nets have close to 10,000 citations (www.cs.au.dk/CPnets/cpnbook/). High-level Petri Nets have become an international ISO/IEC standard and have been used in numerous industrial projects, a considerable number of which have been documented in published peer-reviewed papers.
This talk will describe the Gadara project, a research effort involving the University of Michigan, the Georgia Institute of Technology, and HP Laboratories, whose goal is to eliminate certain classes of concurrency bugs in concurrent software by controlling the execution of programs at runtime. The Gadara process involves four stages: modeling of the source code at compile time, offline analysis of its properties, feedback control synthesis, and implementation of the control logic in the source code. Concurrent programs are modeled as a special class of resource allocation Petri nets, called Gadara nets, that capture control flow and lock allocation and release operations. Deadlock-freeness of a concurrent program is captured in terms of liveness of its Gadara net model. We will describe two optimal control synthesis methodologies for Gadara nets that have been developed in our project. Optimality in this context refers to eliminating deadlocks in the program with minimally restrictive control logic. These methodologies exploit the structural properties of the nets via siphon analysis or via classification theory. We will describe important properties of the proposed control synthesis methodologies and present experimental results that illustrate their scalability. Finally, we will discuss the application of our results to real-world concurrent software. The presentation will focus on the case of circular-wait deadlocks in multithreaded programs employing mutual exclusion locks for shared data; the application of the Gadara methodology to other classes of concurrency bugs will be briefly discussed.
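As a much-simplified illustration of circular-wait deadlock and its elimination by an added control place (a single-token monitor standing in for synthesized control logic; this is not the Gadara synthesis itself), consider a reachability analysis of two threads that acquire two locks in opposite orders:

```python
def successors(state, gated):
    """Enabled moves for two threads (stages 0..3: none/first/both/done).
    Thread 1 takes lock A then B; thread 2 takes B then A. With gated=True,
    a single-token control place g must be taken before the first acquisition
    and is returned on release."""
    s1, s2, g = state
    a_free = s1 not in (1, 2) and s2 != 2   # A held by t1 in stages 1,2; by t2 in 2
    b_free = s2 not in (1, 2) and s1 != 2   # B held by t2 in stages 1,2; by t1 in 2
    nxt = []
    if s1 == 0 and a_free and (not gated or g > 0):   # t1 acquires A
        nxt.append((1, s2, g - gated))
    if s1 == 1 and b_free:                            # t1 acquires B
        nxt.append((2, s2, g))
    if s1 == 2:                                       # t1 releases both, done
        nxt.append((3, s2, g + gated))
    if s2 == 0 and b_free and (not gated or g > 0):   # t2 acquires B
        nxt.append((s1, 1, g - gated))
    if s2 == 1 and a_free:                            # t2 acquires A
        nxt.append((s1, 2, g))
    if s2 == 2:                                       # t2 releases both, done
        nxt.append((s1, 3, g + gated))
    return nxt

def deadlocks(gated):
    """Exhaustive search for reachable states with no enabled move."""
    seen, todo, dead = set(), [(0, 0, 1)], set()
    while todo:
        st = todo.pop()
        if st in seen:
            continue
        seen.add(st)
        succ = successors(st, gated)
        if not succ and st[:2] != (3, 3):             # stuck, but not both done
            dead.add(st)
        todo.extend(succ)
    return dead

uncontrolled = deadlocks(gated=False)   # {(1, 1, 1)}: each thread holds one lock
controlled = deadlocks(gated=True)      # set(): the control place removes the deadlock
```

In the uncontrolled net the marking where each thread holds its first lock is reachable and dead; the monitor place prevents exactly that interleaving while still letting both threads complete.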
This is collaborative work with the members of the Gadara team: http://gadara.eecs.umich.edu
In the last 25 years the modeling and analysis of time has been studied extensively in the context of Petri nets, as well as for similar models of concurrency. There are many different approaches to modeling time, such as delaying transitions, durations of transitions, delays in tokens, and clocks. We are interested in time for two reasons: the analysis of safety properties and the analysis of system performance. For the latter reason, time is related to stochastics, because performance analysis is not concerned with the extremes of behavior but with the ‘normal’ behavior, i.e. behavior within bounds with a specified probability. Each non-deterministic choice should then be endowed with a probability. The extensive research has shown that the many different approaches all have their own merits and weaknesses. Some approaches are questionable from a modeling point of view but handy for analysis purposes, while others are good for modeling but bad for analysis. The application domain also has an impact on the right choice of time model. Two extreme application domains are logistic systems (e.g. the business chain from cows to milk products) and embedded control systems (e.g. robots). Not only the time units but also the types of questions differ. In this presentation we first classify the different approaches to time in relation to their applications. Then we focus on discrete time, which we define as finite (or countable) time domains with discrete probability distributions over the choices. We show how several approaches can be unified by transformations from one into the other, which is also useful for the application of software tools. In the last 25 years model checking methods have become feasible because of the enormous increase in computing power. However, analytical methods for analyzing behavior are still preferable, because they often have lower complexity and give better insight.
For the discrete case we present such a method for the time analysis of workflow nets. Finally we show how classical reduction techniques can be used to analyze large models in a compositional way.
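Under a discrete-time reading where each task has a finite delay distribution, sequential composition amounts to convolution and an XOR-split to a weighted mixture. A compositional sketch (our own encoding, not the specific method of the talk) might look like:

```python
# A delay distribution is a dict {delay: probability} over a finite time domain.
def seq(d1, d2):
    """Two tasks in sequence: total delay is the convolution of the distributions."""
    out = {}
    for t1, p1 in d1.items():
        for t2, p2 in d2.items():
            out[t1 + t2] = out.get(t1 + t2, 0.0) + p1 * p2
    return out

def choice(branches):
    """XOR-split: branches is a list of (probability, distribution) pairs."""
    out = {}
    for w, d in branches:
        for t, p in d.items():
            out[t] = out.get(t, 0.0) + w * p
    return out

task_a = {1: 0.5, 2: 0.5}    # takes 1 or 2 time units, each with probability 0.5
task_b = {3: 1.0}            # always takes 3 time units
# With probability 0.7 do A then B; with probability 0.3 do only A.
net = choice([(0.7, seq(task_a, task_b)), (0.3, task_a)])
mean = sum(t * p for t, p in net.items())   # expected completion time
```

Reducing subnets to such distributions one at a time is what makes the analysis compositional: a whole sequential or alternative block collapses into a single equivalent delay distribution.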
One of the concerns in the use of computer systems is to avoid the leakage of secret information through public outputs. Ideally we would like systems to be completely secure, but in practice this goal is often impossible to achieve. Therefore, it is important to devise methods to define and reason about the amount of leakage. In this talk, we illustrate various quantitative approaches to information leakage that have recently been developed in the area of security. We then make a connection with differential privacy, a very successful notion of privacy that emerged from the area of statistical databases. Finally, we generalize the notion of differential privacy so as to make it applicable to domains other than databases. We start from the observation that the standard notion of differential privacy relies on the Hamming distance on the set of databases, and we extend it to arbitrary metric spaces. We show various examples, and we revise some of the fundamental results of differential privacy in this extended setting. As particular case studies, we consider applications to location-based services and to smart meters.
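The metric generalization can be illustrated with a one-dimensional sketch (the setting and parameter names are ours): replacing the Hamming distance by the Euclidean distance on the reals, a Laplace mechanism with scale 1/ε satisfies the metric-privacy bound density(z | x) ≤ e^(ε·|x−x′|) · density(z | x′) for all outputs z.

```python
import math, random

def laplace_mechanism(x, eps):
    """Report x plus Laplace(0, 1/eps) noise (a random sign times an exponential)."""
    return x + random.choice([-1, 1]) * random.expovariate(eps)

def density(z, x, eps):
    """Output density of the mechanism at z when the true value is x."""
    return eps / 2 * math.exp(-eps * abs(z - x))

# Metric-privacy inequality, with d the Euclidean distance on the reals:
#   density(z, x) <= exp(eps * |x - x'|) * density(z, x')
eps, x1, x2, z = 0.5, 10.0, 13.0, 11.0
ratio = density(z, x1, eps) / density(z, x2, eps)
bound = math.exp(eps * abs(x1 - x2))                 # here ratio <= bound holds
```

Inputs that are close under the metric (nearby locations, similar meter readings) thus remain nearly indistinguishable from the output, while the guarantee degrades gracefully with distance.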