PSI'09
A.P. Ershov Institute of Informatics Systems
Seventh International Andrei Ershov Memorial Conference «PERSPECTIVES OF SYSTEM INFORMATICS»
 

15-19 June, 2009, Novosibirsk, Akademgorodok, Russia


Invited Talks: Abstracts

Samson Abramsky

Games, Interaction and Computation
Samson Abramsky
Oxford University, UK


Our current understanding of computation has widened enormously beyond the original "closed world" picture of numerical calculation in isolation from the environment. In the age of the Internet and the Web, and now of pervasive and ubiquitous computing, it has become clear that interaction and information flow between multiple agents are essential features of computation. The standard unit of description or design, whether at a micro-scale of procedure call-return interfaces or hardware components, or a macro-scale of software agents on the Web, becomes a process or agent, the essence of whose behaviour is how it interacts with its environment across some defined interface.

These developments have required the introduction of novel mathematical models of interactive computation. The traditional view whereby a program is seen as computing a function or relation from inputs to outputs is no longer sufficient: what function does the Internet compute? One of the compelling ideas which has taken root is to conceptualize the interaction between the System and the Environment as a two-person game. A program specifying how the System should behave in the face of all possible actions by the Environment is then a strategy for the player corresponding to the System.
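This idea can be made precise in miniature (a standard formulation from the game-semantics literature, sketched here only for orientation). A game $G$ fixes a set of moves $M_G$, each labelled as belonging to Player (the System) or Opponent (the Environment), together with a set $P_G$ of legal plays: alternating sequences of moves beginning with an Opponent move. A strategy for the System is then a partial function

    $\sigma : \{ s \in P_G : |s| \text{ is odd} \} \rightharpoonup M_G$

assigning to each position in which the Environment has just moved a response for the System, subject to the constraint that $s\,\sigma(s)$ is again a legal play.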

Over the past 15 years, there has been an extensive development of Game Semantics in Computer Science. One major area of application has been the semantics of programming languages, where it has led to major progress in the construction of fully abstract models for programming languages embodying a wide range of computational effects, starting with the first semantic construction of a fully abstract model for PCF, which addressed a famous open problem in the field. It has been possible to give crisp characterizations of the "shapes" of computations carried out within certain programming disciplines, including purely functional programming, stateful programming, general references, programming with non-local jumps and exceptions, non-determinism, probability, concurrency, names, and more.

More recently, there has been an algorithmic turn, with some striking applications to verification and program analysis. We shall give an introduction and overview of some of these developments.

Dines Bjørner

FME Invited Lecture
Rôle of Domain Engineering in Software Development. Why Current Requirements Engineering is Flawed!
Dines Bjørner
Technical University of Denmark, Denmark


We introduce the notion of domain descriptions (D) in order to ensure that software (S) is right and is the right software, that is, that it is correct with respect to written requirements (R) and that it meets customer expectations (D).

That is, before software can be designed (S) we must make sure that we understand the requirements (R), and before we can express the requirements we must make sure that we understand the application domain (D): the area of activity of the users of the required software, before and after installation of such software.
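This dependency is often rendered in a single formula (a common notation in the domain-engineering literature; the talk's own formulas may differ):

    $\mathcal{D}, \mathcal{S} \models \mathcal{R}$

read: under the assumptions captured in the domain description $\mathcal{D}$, the software design $\mathcal{S}$ satisfies the requirements $\mathcal{R}$. Without an explicit $\mathcal{D}$, this obligation cannot even be stated, let alone discharged.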

We shall outline what we mean by informal, narrative and formal domain description, and how one can systematically, albeit not (in fact, never) automatically, go from domain descriptions to requirements prescriptions.

As domain engineering seems to be a relatively new discipline within software engineering, we shall mostly focus on domain engineering and discuss its necessity.

The talk will show some formulas, but they are not really meant to be read aloud by the speaker, let alone understood by the listeners, during the talk. They are merely there to bring home the point: professional software engineering, like other professional engineering branches, relies on and uses mathematics. And it is all very simple to learn and practise anyway!

We end this paper with remarks that some may find controversial: requirements engineering, as researched, taught and practised today, is outdated and thus fundamentally flawed. We shall justify this claim.

Kim Guldstrand Larsen

Compositional and Quantitative Model Checking
Kim G. Larsen
Dept. of Computer Science, Aalborg University, Denmark


Model checking refers to the problem of verifying whether a simplified model of a system satisfies a given specification. For more than twenty years, substantial effort has been made toward the design of heuristic algorithmic methods allowing efficient automatic model-checking tools to be developed for finite-state systems. Pioneering work in model checking of temporal logic formulae was done by E. M. Clarke, E. A. Emerson and J. Sifakis, who shared the 2007 Turing Award for their work on model checking.
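In symbols (the standard formulation, in the usual notation): given a finite-state model $M$, typically a Kripke structure or labelled transition system, and a specification $\varphi$ in a temporal logic, the model-checking problem asks whether $M \models \varphi$, i.e. whether every behaviour of $M$ satisfies $\varphi$.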

A major obstacle is the so-called state-space explosion problem, referring to the fact that the size of the state space grows exponentially in the number of components of the model to be analysed. Thus, effort has been focused on the development of a variety of heuristics trying to overcome this problem, at least for the analysis of a large range of realistic system models. The heuristics developed include symbolic model checking, on-the-fly techniques, guided model checking, bounded model checking and partial-order techniques.
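A back-of-the-envelope illustration (our numbers, not the talk's): a model built from $n$ parallel components, each with $k$ local states, has up to

    $|S| = k^n$

reachable global states; already $k = 10$ and $n = 500$ gives the $10^{500}$ states mentioned below, far beyond explicit enumeration.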

In the talk we shall present two particularly successful such methods that we have developed, both exploiting possible (in)dependencies between components of a composite system. The Compositional Backwards Reachability (CBR) method allows for reachability analysis of systems with thousands of components and more than 10^{500} states. The method is by now patented and used in the commercial tool visualSTATE. The second compositional method, the Partial Model Checking (PMC) method, enables efficient model checking of general temporal logic properties (expressed in the modal mu-calculus) using a so-called quotienting technique, which allows the components of the composite system to be removed iteratively from the model-checking problem.
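The quotienting idea behind PMC can be stated in one line (the standard formulation of partial model checking, with notation assumed here): for a composite system $A \parallel B$ and a mu-calculus formula $\varphi$,

    $A \parallel B \models \varphi \iff B \models \varphi / A$

so the component $A$ is absorbed into the formula, leaving a model-checking problem over $B$ alone; the technique pays off when the quotient formula $\varphi / A$ can be kept small by simplification.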

More recently, driven by the needs of the embedded-systems area, substantial effort has been made towards quantitative model checking, where the model (as well as the properties to be checked) includes quantitative aspects such as time, cost, and hybrid, probabilistic or stochastic information.

In the talk we shall reveal the "secrets" behind the success of the tool UPPAAL, the most efficient tool for verification of real-time systems, with extensions towards performance analysis and controller synthesis. We will also highlight the efforts made towards extending the compositional methods of CBR and PMC to the timed-automata formalisms of UPPAAL.

Wolfram Schulte

VCC: Contract-based Modular Verification of Concurrent C
Wolfram Schulte
Microsoft Research, USA


Most system-level software is written in C and executed concurrently. Because such software is often critical for system reliability, it is an ideal target for formal verification. Annotated C and the Verified C Compiler (VCC) form the first sound, modular verification methodology for concurrent C that scales to real-world production code. VCC's verification methodology rests on two-state invariants that span multiple objects without sacrificing thread- or data-modularity. To capture knowledge about the system state, VCC introduces claims, which are derived first-class objects.
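To give a flavour of the methodology, here is a minimal sketch in the style of the published VCC tutorials; the annotation syntax has varied between VCC releases, so treat the exact keywords (_(invariant ...), _(atomic ...), \old, claims) as illustrative rather than definitive:

    #include <vcc.h>

    // A shared counter that concurrent threads may only increment.
    struct Counter {
        volatile unsigned v;
        // Two-state invariant: relates the state before (\old) and after
        // any atomic update; v may stay the same or grow by exactly one.
        _(invariant v == \old(v) || v == \old(v) + 1)
    };

    void increment(struct Counter *c _(ghost \claim cl))
        // The claim cl is a ghost first-class object certifying that c
        // stays allocated and its invariant holds while this runs.
        _(always cl, c->\closed)
    {
        _(atomic cl, c) {
            c->v = c->v + 1;   // admissible: satisfies the two-state invariant
        }
    }

VCC checks each atomic update against the two-state invariant, and the claim cl is exactly the kind of derived first-class object the abstract refers to.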

VCC is integrated into Microsoft Visual Studio and comes with support for verification debugging: an explorer for counterexamples of failed proofs helps to find errors in code or specifications, and a prover-log analyzer helps to debug proof attempts that exhaust available resources (memory, time). VCC is currently used to verify the core of Microsoft Hyper-V, consisting of 50,000 lines of system-level C code.

Lothar Thiele

Distributed Embedded Systems: Reconciling Computation, Communication and Resource Interaction
Lothar Thiele
Swiss Federal Institute of Technology Zurich, Zurich, Switzerland


Embedded systems are typically reactive systems that are in continuous interaction with their physical environment, to which they are connected through sensors and actuators. Examples are applications in multimedia processing, automatic control, automotive and avionics systems, and industrial automation. As a result, many embedded systems must meet real-time constraints, i.e. they must react to stimuli within a time interval dictated by the environment.

The embedding into a technical environment and the constraints imposed by a particular application domain often require a distributed implementation of embedded systems, where a number of hardware components communicate via some interconnection network. The hardware components in such systems are often specialized and aligned to their local environment and their functionality. The interconnection networks, too, are often not homogeneous, but may instead be composed of several interconnected sub-networks, each with its own communication protocol and topology. In more recent embedded systems, the architectural concepts of heterogeneity, distribution and parallelism can even be observed on single hardware components themselves: they become system characteristics that can be observed on several abstraction layers.

It becomes apparent that heterogeneous and distributed embedded real-time systems as described above are inherently difficult to design and to analyze. During the system-level design process of an embedded system, a designer is typically faced with questions such as whether the timing properties of a certain system design will meet the design requirements, which architectural element will act as a bottleneck, or what the memory requirements will be. Consequently, one of the major challenges in the design process is to analyze specific characteristics of a system design, such as end-to-end delays, buffer requirements, or throughput, at an early design stage, in order to support important design decisions before much time is invested in detailed implementations. This analysis is generally referred to as system-level performance analysis.
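Modular Performance Analysis addresses exactly these questions; its foundation, Real-Time Calculus, bounds the two central quantities in closed form (standard formulas from that literature, in the usual notation: $\alpha$ an upper arrival curve, $\beta$ a lower service curve):

    $\mathrm{delay} \le \sup_{t \ge 0} \inf\{\tau \ge 0 : \alpha(t) \le \beta(t+\tau)\}, \qquad \mathrm{backlog} \le \sup_{t \ge 0} (\alpha(t) - \beta(t))$

i.e. the maximal horizontal and vertical distances between the arrival and service curves, yielding end-to-end delay and buffer bounds without simulating the system.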

Based on the above discussion we can summarize that embedded systems are characterized by a close interaction between computation, communication, the associated resources and the physical environment. The solution of the above complex analysis and design problems relies on our ability to properly deal with some of the following challenges:

  • Challenge 1: Designing component models whose interfaces talk about extra-functional properties like time, energy and resource interaction.
  • Challenge 2: Designing models of computation that talk about functional component properties and resource interaction.
  • Challenge 3: Developing system design methods that lead to timing-predictable and efficient embedded systems.

It will be necessary to (re)think the classical separation of concerns, which very successfully removed physical aspects from the concept of computation. It will be necessary to (re)combine the computational and physical views of embedded software.

The presentation will cover the following aspects:

  • Component-based performance analysis of distributed embedded systems (Modular Performance Analysis): basic principles, methods and tool support.
  • Real-time Interfaces: from real-time components to real-time interfaces, adaptivity and constraint propagation.
  • Application examples that show the applicability of the concepts and their use in embedded system design.

 
