Computational Game Theory

Course dates – HT – week beginning Monday 4th February 2019 – for Year 1 students
Michael Wooldridge

Introduction

Game theory is the mathematical theory of strategic interactions between self-interested agents. Game theory provides a range of models for representing strategic interactions and, associated with these, a family of solution concepts, which attempt to characterise the rational outcomes of games. Game theory is important to computer science for several reasons. First, interaction is a fundamental topic in computer science, and if it is assumed that system components are self-interested, then the models and solution concepts of game theory seem to provide an appropriate framework with which to model such systems. Second, the problem of computing with the solution concepts proposed by game theory raises important challenges for computer science, which test the boundaries of current algorithmic techniques. This course aims to introduce the key concepts of game theory for a computer science audience, emphasising both the applicability of game theoretic concepts in a computational setting and the role of computation in game theoretic problems.

The course assumes no prior knowledge of game theory.

Objectives

The aims of this module are threefold:

  1. to introduce the key models and solution concepts of non-cooperative and cooperative game theory;
  2. to introduce the issues that arise when computing with game theoretic solution concepts, and the main approaches to overcoming these issues, and to illustrate the role that computation plays in game theory;
  3. to introduce a research-level topic in computational game theory.

Learning Outcomes

Upon completing this module, a student will:

  1. understand the key concepts of preferences, utility, and decision-making under certainty and uncertainty, and the key computational issues in representing and manipulating representations of preferences and utility;
  2. understand and be able to apply the key models and solution concepts of non-cooperative game theory, including both strategic form and extensive form games, and the key computational issues that arise when applying these models;
  3. understand and be able to apply the key models and solution concepts of cooperative game theory, including TU and NTU games, and the key computational issues that arise when applying these models;
  4. understand a contemporary research-level topic at the intersection between game theory and computer science.

Outline Syllabus

1. Preferences, Utility, and Goals:

  • Preference relations and their interpretation; utility as a numeric model of preference.
  • Decision-making under uncertainty: preferences over lotteries; von Neumann and Morgenstern utility functions; expected utility and expected utility maximisation (see the sketch after this list).
  • Paradoxes of expected utility maximisation; framing effects and prospect theory.
  • Compact representations for preference relations (e.g., CP-nets).
  • Dichotomous preferences and goals. Representations for specifying goals (e.g., weighted formula representations for combinatorial domains); expressiveness and computational issues.
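
As a small illustration of the expected utility material above, the following Python sketch computes the expected utility of two lotteries and selects the maximising one. The outcomes, utility values, and function names are illustrative assumptions, not part of the course material.

    def expected_utility(lottery, utility):
        # A lottery is a list of (probability, outcome) pairs.
        return sum(p * utility[outcome] for p, outcome in lottery)

    # Illustrative utility function over three outcomes.
    utility = {"nothing": 0.0, "small prize": 0.6, "large prize": 1.0}

    # A sure small prize versus a risky shot at the large prize.
    safe = [(1.0, "small prize")]
    risky = [(0.5, "large prize"), (0.5, "nothing")]

    # An expected utility maximiser chooses the lottery with the higher score.
    best = max([safe, risky], key=lambda l: expected_utility(l, utility))
    print(expected_utility(safe, utility), expected_utility(risky, utility), best)

Representing a lottery as an explicit list of (probability, outcome) pairs keeps the sketch self-contained; the compact representations listed above exist precisely because such explicit enumerations do not scale to combinatorial domains.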

2. Strategic Form Non-Cooperative Games:

  • The basic model; solution concepts: pure strategy Nash equilibrium; dominant strategies; notable games (e.g., Prisoner's Dilemma; Game of Chicken; Stag Hunt); coordination games and focal points; complexity of pure strategy Nash equilibrium (see the sketch after this list).
  • Measuring social welfare; utilitarian social welfare; egalitarian social welfare.
  • Mixed strategies; Nash's theorem; ε-Nash equilibrium.
  • Computing mixed strategy Nash equilibria: the Lemke-Howson algorithm.
  • Zero sum games; the Minimax Theorem.
  • Compact representations for strategic form games; Boolean games; congestion games.
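
As referenced in the list above, here is a minimal Python sketch of the pure strategy Nash equilibrium concept: a brute-force check over all action profiles of a two-player strategic form game, using standard illustrative Prisoner's Dilemma payoffs (the particular numbers are an assumption, not prescribed by the course).

    from itertools import product

    # Payoffs indexed by (row action, column action) -> (row payoff, column payoff).
    payoffs = {
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }
    actions = ["C", "D"]

    def is_pure_nash(row, col):
        # A profile is a pure strategy Nash equilibrium if neither player
        # can gain by a unilateral deviation.
        u_row, u_col = payoffs[(row, col)]
        row_ok = all(payoffs[(r, col)][0] <= u_row for r in actions)
        col_ok = all(payoffs[(row, c)][1] <= u_col for c in actions)
        return row_ok and col_ok

    equilibria = [p for p in product(actions, actions) if is_pure_nash(*p)]
    print(equilibria)  # [('D', 'D')]: mutual defection, although (C, C) is better for both

The exhaustive check runs in time polynomial in the size of the explicit payoff table; the complexity questions noted above become interesting mainly once compact representations such as Boolean games or congestion games are used.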

3. Iterated Games:

  • Finitely repeated games and backward induction.
  • Infinitely repeated games; measuring utility over infinite plays; modelling strategies as finite state machines with output (Moore machines; see the sketch after this list); the folk theorems; implications of using bounded automata to model strategies.
  • Iterated Boolean games.
  • Axelrod’s tournament; the Hawk-Dove game; evolutionary game theory; evolutionarily stable strategies.
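
The sketch referred to above: a repeated-game strategy represented as a Moore machine, i.e. a finite state machine that outputs an action in each state and transitions on the opponent's last action. Tit-for-Tat, familiar from Axelrod's tournament, is used purely as an illustration, with the same assumed Prisoner's Dilemma payoffs as before.

    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    # A Moore machine: an initial state, an output (action) for each state,
    # and a transition function keyed by (state, opponent's last action).
    tit_for_tat = {
        "initial": "coop",
        "output": {"coop": "C", "punish": "D"},
        "next": {("coop", "C"): "coop", ("coop", "D"): "punish",
                 ("punish", "C"): "coop", ("punish", "D"): "punish"},
    }
    always_defect = {
        "initial": "defect",
        "output": {"defect": "D"},
        "next": {("defect", "C"): "defect", ("defect", "D"): "defect"},
    }

    def play(m1, m2, rounds):
        # Run the two machines against each other for a fixed horizon,
        # accumulating undiscounted payoffs.
        s1, s2, u1, u2 = m1["initial"], m2["initial"], 0, 0
        for _ in range(rounds):
            a1, a2 = m1["output"][s1], m2["output"][s2]
            p1, p2 = payoffs[(a1, a2)]
            u1, u2 = u1 + p1, u2 + p2
            s1, s2 = m1["next"][(s1, a2)], m2["next"][(s2, a1)]
        return u1, u2

    print(play(tit_for_tat, always_defect, 10))  # (9, 14): exploited once, then mutual defection

For infinitely repeated games the running sum above is replaced by a limit-of-means or discounted measure, which is what "measuring utility over infinite plays" refers to.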

4. Extensive Form Non-Cooperative Games:

  • Extensive form games of perfect information; Zermelo's algorithm and backward induction (see the sketch after this list); P-completeness of Zermelo's algorithm; subgame perfect equilibrium.
  • Win-lose games; Zermelo’s theorem.
  • Compact representations for extensive form games; the PEEK games and EXPTIME-completeness results; the Game Description Language (GDL).
  • Imperfect information games; information sets; solution concepts for imperfect information games.
  • Compact representations for imperfect information games; PEEK games with incomplete information; undecidability results.
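
The sketch referred to above: backward induction, the core of Zermelo's algorithm, on a perfect information game tree. The tree, its payoffs, and the dictionary encoding are illustrative assumptions.

    def backward_induction(node):
        # Leaves carry a payoff vector. An internal node names the player to
        # move and is solved by letting that player choose the child whose
        # solved value maximises their own payoff component.
        if "payoffs" in node:
            return node["payoffs"]
        values = [backward_induction(child) for child in node["children"]]
        return max(values, key=lambda v: v[node["player"]])

    # A tiny two-player tree: player 0 moves at the root, player 1 below.
    tree = {
        "player": 0,
        "children": [
            {"player": 1, "children": [{"payoffs": (3, 1)}, {"payoffs": (0, 2)}]},
            {"player": 1, "children": [{"payoffs": (2, 2)}, {"payoffs": (1, 0)}]},
        ],
    }
    print(backward_induction(tree))  # (2, 2): the backward induction outcome of this tree

The procedure visits each node of the explicit tree once; the PEEK games and GDL items above concern what happens when the game is presented succinctly rather than as an explicit tree.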

5. Cooperative Games:

  • Transferable utility (TU) characteristic function games; the basic model; stability and fairness solution concepts: the core; the kernel; the nucleolus; the cost of stability; the Shapley value; the Banzhaf index (the Shapley value is illustrated in a sketch after this list).
  • Compact representations for TU games; induced subgraph representation; marginal contribution nets.
  • Simple TU games; swap and trade robustness; weighted voting games; vector weighted voting games; network flow games.
  • NTU games and representations for them; hedonic games.
  • Coalition structure formation; exact and approximation algorithms.
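
The sketch referred to above: the Shapley value computed directly from its definition as a player's average marginal contribution over all orderings, for a small weighted voting game. The quota and weights are an illustrative assumption; for more than a handful of players this brute-force enumeration is exactly the kind of computation that motivates the compact representations listed above.

    from itertools import permutations

    # Weighted voting game [q; w1, w2, w3]: a coalition wins (value 1)
    # iff its total weight meets the quota q.
    weights, quota = {1: 3, 2: 2, 3: 1}, 4

    def v(coalition):
        return 1 if sum(weights[i] for i in coalition) >= quota else 0

    players = list(weights)
    orders = list(permutations(players))
    shapley = {i: 0.0 for i in players}
    for order in orders:
        coalition = set()
        for i in order:
            before = v(coalition)
            coalition.add(i)
            # This player's marginal contribution in this ordering.
            shapley[i] += (v(coalition) - before) / len(orders)

    print(shapley)  # roughly {1: 0.67, 2: 0.17, 3: 0.17}

Players 2 and 3 receive the same value despite having different weights: power in a weighted voting game is a property of the induced characteristic function, not of the weights themselves.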

Recommended Reading

  • Michael Maschler, Eilon Solan, and Shmuel Zamir. Game Theory. Cambridge UP, 2013.

The best contemporary overview of game theory.

  • Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.

An excellent introduction to game theory, freely available from:

http://books.osborne.economics.utoronto.ca

  • Y. Shoham and K. Leyton-Brown. Multiagent Systems. Cambridge UP, 2009.

Freely available from:

http://www.masfoundations.org/

  • G. Chalkiadakis, E. Elkind, and M. Wooldridge. Computational Aspects of Cooperative Game Theory. Morgan & Claypool, 2011.

The book for cooperative games.

Assessment Mode

Mini-project