Invited Speakers


Thomas A. Henzinger (IST Austria)

Simulation Games.

Milner's simulation relation is a fundamental concept for comparing the behaviors of two discrete dynamical systems. While originally defined for safety properties of state transition graphs, its game-theoretic formulation allows a natural generalization to liveness and quantitative properties. The resulting games are implication games on product graphs, i.e., the derived (simulation) objective takes the form of a logical implication between primary (system) objectives. We summarize the hardness of such implication games for important classes of system objectives: in some cases the implication game is no harder to solve than the corresponding primary game; in other cases the implication game is open even though we know how to solve the primary game.
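For finite state transition graphs, the simulation relation mentioned above can be computed by a simple fixpoint refinement: start from all state pairs and repeatedly discard a pair whenever a move of the first state cannot be matched by the second. The following sketch (the two example systems are my own illustration, not from the talk) shows the classical case where an early branching choice destroys simulation:

```python
# Greatest simulation between two finite labeled transition systems,
# computed by fixpoint refinement: start from all pairs (s, t) and
# remove a pair whenever some move of s cannot be matched by t.
# The example systems below are illustrative, not from the talk.

def simulation(states_a, trans_a, states_b, trans_b):
    """Return {(s, t) : t simulates s}; transitions are sets of (src, act, dst)."""
    rel = {(s, t) for s in states_a for t in states_b}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            for (src, act, s2) in trans_a:
                if src != s:
                    continue
                # t must answer the act-move of s by an act-move staying in rel
                if not any(u == t and b == act and (s2, t2) in rel
                           for (u, b, t2) in trans_b):
                    rel.discard((s, t))
                    changed = True
                    break
    return rel

# System A does 'a' and then chooses between 'b' and 'c';
# system B commits to that choice already at its first 'a'-move.
A = ({'p0', 'p1', 'p2', 'p3'},
     {('p0', 'a', 'p1'), ('p1', 'b', 'p2'), ('p1', 'c', 'p3')})
B = ({'q0', 'q1', 'q2', 'q3', 'q4'},
     {('q0', 'a', 'q1'), ('q0', 'a', 'q2'),
      ('q1', 'b', 'q3'), ('q2', 'c', 'q4')})

sim = simulation(*A, *B)
print(('p0', 'q0') in sim)  # False: B's early choice cannot simulate A
```

This illustrates only the classical safety setting; the talk's point is precisely that the game-theoretic reformulation of this fixpoint generalizes to liveness and quantitative objectives.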


Wiebe van der Hoek (University of Liverpool)

Two Themes in Modal Logic.

Modal logics form the basis for knowledge representation languages in AI, enabling us to reason about time, knowledge, beliefs, desires and obligations of agents. In my talk, I will address two contemporary research themes in this field.
A good old-fashioned line of research in modal logic is that of "correspondence theory", which establishes a direct link between first-order properties on Kripke models (basically, graphs) and modal sentences. Standard results have a typical global flavour: in terms of beliefs for instance, reflexive models guarantee that the agent's beliefs are correct, and inclusion of the doxastic relation of agent a in that of agent b guarantees that agent a believes whatever b believes. However, such results cannot cater for cases where we want to express that such properties only hold locally, as in "agent a believes his beliefs are correct, but this is not the case", or in "agent a believes anything agent b believes, but this will cease to hold as soon as b reads the letter". I will present a logic that can deal with such local cases.
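The global correspondence result mentioned above (reflexivity guarantees correct beliefs, i.e. validates box p -> p) can be checked concretely on small Kripke models. The encoding and the two example frames below are my own illustration:

```python
# Minimal Kripke-model evaluator illustrating the classical correspondence:
# on a reflexive frame, "box p -> p" (the agent's beliefs are correct)
# holds at every world; on a non-reflexive frame it can fail locally.
# The model encoding and example frames are illustrative assumptions.

def box_holds(world, relation, valuation, prop):
    """box prop holds at `world` iff prop is true in every accessible world."""
    return all(valuation[v].get(prop, False)
               for (u, v) in relation if u == world)

def correct_beliefs(worlds, relation, valuation, prop):
    """Check the instance of the T-schema, box prop -> prop, at every world."""
    return all((not box_holds(w, relation, valuation, prop))
               or valuation[w].get(prop, False)
               for w in worlds)

worlds = {'w1', 'w2'}
valuation = {'w1': {'p': False}, 'w2': {'p': True}}

reflexive = {('w1', 'w1'), ('w2', 'w2'), ('w1', 'w2')}
non_reflexive = {('w1', 'w2'), ('w2', 'w2')}  # w1 sees only w2

print(correct_beliefs(worlds, reflexive, valuation, 'p'))      # True
print(correct_beliefs(worlds, non_reflexive, valuation, 'p'))  # False
```

On the non-reflexive frame the failure is local: at w1 the agent believes p although p is false there - exactly the kind of situation ("a's beliefs are correct here, but not everywhere") that the logic presented in the talk is designed to express.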
The second theme concerns the question of how we compare different logics. Standard ways to compare L1 with L2 address their expressivity, or the computational complexity of reasoning problems one can perform in each. In many cases, two logics are comparable on both measures. Only recently has the field of knowledge representation started to address the issue of succinctness: how economically can one express properties in each logic? I give a working definition of what it means for L1 to be exponentially more succinct than L2, and then I present a tool that can be used to prove succinctness results, the so-called Formula Size Games. Such games are played on two sets of models, and they establish a relation between the number of moves needed to win the game and the length of a formula that discriminates between the sets. I will present some examples of succinctness results.


Alessio R. Lomuscio (Imperial College London)

Model Checking Systems Against Epistemic Specifications.

Twenty years after the publication of the influential article "Model checking vs theorem proving: a manifesto" by Halpern and Vardi, the area of model checking systems against agent-based specifications is flourishing.
In this talk I will present some of the approaches I have developed with collaborators. I will begin by discussing BDD-based model checking for epistemic logic combined with ATL operators and then move to abstraction techniques including symmetry reduction. I will then highlight how, in our experience, bounded model checking can also successfully be used in this context, particularly in combination with BDDs, and how synthesis problems can be formulated and solved in an epistemic setting.
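The epistemic specifications mentioned above are interpreted over indistinguishability relations: an agent knows a fact at a state iff the fact holds in every state the agent cannot distinguish from it. MCMAS checks this symbolically with BDDs; the explicit-state toy below (states and relation are made up for illustration) only shows the underlying semantics:

```python
# Toy explicit-state check of an epistemic formula K_a p: agent a knows p
# at a state iff p holds in every state a cannot distinguish from it.
# (MCMAS does this symbolically with BDDs; this explicit sketch merely
# illustrates the semantics. States and the relation are invented.)

states = {'s0', 's1', 's2'}
p_states = {'s0', 's1'}                     # states where proposition p holds
indist = {('s0', 's0'), ('s0', 's1'), ('s1', 's0'),
          ('s1', 's1'), ('s2', 's2')}       # agent a's indistinguishability

def knows(s, phi_states):
    """K_a phi at s: phi holds at every state indistinguishable from s."""
    return all(t in phi_states for (u, t) in indist if u == s)

print(knows('s0', p_states))  # True: s0 ~ s1 and p holds in both
print(knows('s2', p_states))  # False: p already fails at s2 itself
```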
The talk will include examples in the context of security protocols and a brief demo of MCMAS, an open-source model checker implementing some of these techniques.
(This talk was meant to feature in the SR 2013 programme but could not be given due to ill health of the speaker).


Wolfgang Thomas (RWTH Aachen)

What are "Good" Winning Strategies in Infinite Games?

Infinite games were invented in descriptive set theory, where the dominating question was determinacy - the mere existence of a winning strategy for one of the two players. In computer science the problem was put into an algorithmic setting: Can one decide who wins, and can one effectively construct a winning strategy? In this talk we address quantitative refinements of the problem, reflecting a major current trend of research: How to construct winning strategies that are "good" or even "optimal" in some sense? The size of memory of finite-state machines executing winning strategies is a well-known criterion. Other criteria refer to the "efficient behavior" of strategies, as captured by solutions of mean-payoff games. A third approach aims at novel formats of winning strategies, e.g. as "programs" (rather than state machines). We survey old and recent work on these topics, spanning the literature from the beginnings (Büchi-Landweber 1969) to recent results obtained in the Aachen research group, among them the study of winning strategies as Boolean programs (Brütsch 2013) and the Turing-machine-based model of "strategy machines" (Gelderie 2014).
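The simplest instance of "effectively constructing a winning strategy" is the attractor construction for a two-player reachability game on a finite graph, which yields a positional (memoryless) strategy - the cheapest strategy format by the memory criterion mentioned above. A sketch, on a made-up game graph:

```python
# Attractor construction for a two-player reachability game on a finite
# graph: it decides from which nodes player 0 can force a visit to the
# target and yields a positional (memoryless) winning strategy - the
# simplest "good" strategy by the memory-size criterion.
# The game graph below is a made-up illustration.

def attractor(nodes0, nodes1, edges, target):
    """Return (player-0 winning set, positional strategy) for reaching target."""
    attr = set(target)
    strategy = {}
    changed = True
    while changed:
        changed = False
        for v in nodes0 | nodes1:
            if v in attr:
                continue
            succs = [w for (u, w) in edges if u == v]
            if v in nodes0 and any(w in attr for w in succs):
                # player 0 commits to one successor inside the attractor
                strategy[v] = next(w for w in succs if w in attr)
                attr.add(v)
                changed = True
            elif v in nodes1 and succs and all(w in attr for w in succs):
                attr.add(v)
                changed = True
    return attr, strategy

nodes0 = {'a', 'c'}          # player 0 moves at these nodes
nodes1 = {'b', 'goal'}       # player 1 moves at these nodes
edges = {('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'goal'), ('c', 'goal')}

win, strat = attractor(nodes0, nodes1, edges, {'goal'})
print('a' in win, strat['a'])  # True c: from 'a', player 0 wins by moving to 'c'
```

For objectives beyond reachability - parity, mean-payoff, or the quantitative refinements surveyed in the talk - positional strategies no longer always suffice, which is precisely where the question of "good" strategy formats becomes interesting.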