Attacker-Defender Games: An Introduction

I had the pleasure of lecturing “Game Theory” this semester. Although I only prepared the module about a day in advance (it was a busy semester), I think it went down pretty well. But let’s face it, you can’t really go wrong with game theory; everyone loves it. More importantly though, I loved it. It’s always good to go back to the basics and, for whatever reason, concepts and theories tend to sink in a lot more when you’re forced to stand in front of students and talk about them for hours on end.

So over the next few posts I want to talk a bit about game theory, but I also want to talk about a new project I will be working on from January 2016. The project applies game-theoretic concepts, but also draws on econometrics and some machine learning. It’s an exciting project, I promise.

What is game theory?

Before I go into any form of detail I should probably first explain what game theory actually is. In short, game theory provides a toolkit for studying situations of interactive decision-making. It’s a general theory that helps us analyse how people, firms, armies, or governments (generally just called ‘players’) make decisions in social situations where there is an element of strategic interdependence. Strategic interdependence refers to situations in which a player’s utility (or ‘payoff’) depends not only on their own actions (or ‘strategies’), but also on the actions of the other players in the game.

Game theory can be defined as the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.

Therefore, all games have three main elements: players, strategies for each player, and utilities allocated to each player for each potential outcome of the game. Game theory provides a set of tools to analyse games; these are essential tools for economists but are now frequently used in computer science, law, psychology, and politics.

There are a number of different game formats. The two main formats are games in which players move simultaneously (think of rock-paper-scissors) and games in which players move sequentially (think of chess). Simultaneous games are typically represented by payoff matrices and sequential games by decision trees.

There are a number of different ways to solve a game, and the solution concept that should be used largely depends on the game form being analysed. A famous and commonly used solution concept is the Nash equilibrium, which occurs when no player has an incentive to unilaterally deviate from their strategy. A refinement of the Nash equilibrium concept is the subgame perfect Nash equilibrium, which is based on the notion of backward induction, and Nash equilibria can also exist in mixed strategies, where players randomise over their pure strategies.
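
To make the ‘no profitable unilateral deviation’ idea concrete, here is a minimal sketch in Python (my own illustrative code, not taken from any particular library) that finds the pure strategy Nash equilibria of a two-player game simply by checking every cell of the payoff matrices for a profitable deviation. The coordination-game matrices at the bottom are just an example.

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """Return all pure strategy Nash equilibria of a bimatrix game.

    A[i, j] is the row player's payoff and B[i, j] the column player's
    payoff when row plays strategy i and column plays strategy j.
    """
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_ok = A[i, j] >= A[:, j].max()   # no profitable row deviation
            col_ok = B[i, j] >= B[i, :].max()   # no profitable column deviation
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# A simple 2x2 coordination game: both players just want to match.
A = np.array([[2, 0],
              [0, 1]])
B = np.array([[2, 0],
              [0, 1]])
print(pure_nash_equilibria(A, B))  # [(0, 0), (1, 1)]
```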

Seriously though, if you’re new to game theory just check out the Wikipedia page for more information (here)… it will tell you everything you need to know!

The project

The project investigates ‘attacker-defender games’. These are games in which a number of defenders attempt to defend a resource or piece of territory whilst a number of attackers attempt to destroy or capture it. These games can be applied to a lot of different scenarios such as terrorism & counter-terrorism, cybersecurity, industrial organisation, competition & firm entry, penalty shootouts, and international & civil wars. The games are typically represented in the form of payoff matrices and decision trees, but can take more complex forms when different tools are applied.

As noted above, all games have three basic elements: a set of players, a set of strategies for each player, and a set of payoffs corresponding to each strategy tuple. Attacker-defender games are therefore composed of a number of players; specifically, a set of ‘attackers’ and a set of ‘defenders’. Each player has a set of strategies, determined by the type of player that is acting, either attacker or defender. Each player’s payoffs are determined by their success in either attacking or defending the piece of territory or resource. Due to the interdependent nature of games, each player’s payoffs depend both on the strategies that they execute and on the strategies that all other players execute as well.
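
As a rough sketch, those three elements can be written down directly in code; the class and field names below are my own and purely illustrative.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class AttackerDefenderGame:
    """Bare-bones container for the three elements of a game."""
    defender_strategies: Tuple[str, ...]
    attacker_strategies: Tuple[str, ...]
    # Maps (defender strategy, attacker strategy) -> (defender payoff, attacker payoff).
    payoffs: Dict[Tuple[str, str], Tuple[float, float]]

    def outcome(self, defender_move: str, attacker_move: str) -> Tuple[float, float]:
        """Payoffs when the defender and attacker play the given strategies."""
        return self.payoffs[(defender_move, attacker_move)]
```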

Attacker-defender games: A simple example

I will assume that most people know a little bit of game theory and therefore know how attacker-defender games are traditionally constructed. For completeness, here is a simple attacker-defender game to jog your memory. For any game we need three main elements: players, strategies, and payoffs. Let’s go through each one below.

Players. The attacker-defender game we will initially analyse has just two players: an ‘attacker’ army and a ‘defender’ army.

Strategies. The defender army is defending a town that has two main entrances: one entrance is via an easy path and the other is via a hard path. The defender army can therefore defend either the easy path or the hard path. Likewise, the attacker army can attack the town via either the easy path or the hard path. In all, both players have identical strategy sets containing two strategies each: the easy path and the hard path. To keep the terminology simple we will denote the easy path by ‘easy’ and the hard path by ‘hard’.

In all, the strategic situation is effectively shown by the simple diagram in Figure 1 below.

Figure 1: Attackers can attack via either the easy path (easy) or the hard path (hard) and the defender can also defend either the easy path (easy) or the hard path (hard).

Outcomes & payoffs. Given that there are two players, both of whom have two strategies each, there are four unique outcomes of the game. These are: (easy, easy); (easy, hard); (hard, easy); and (hard, hard), where the defender’s executed strategy is the first element in the parentheses and the attacker’s executed strategy is the second element.

Each player’s payoff depends on the strategy implemented by the other player. The payoffs allocated to each outcome are as follows (they are also written out as payoff matrices in the short code sketch after this list).

  • If the attacker attacks via the easy path and the defender defends the easy path then both armies will meet and fight. Given that both armies choose the easy path, they are equally matched and there is a 50-50 chance that either army wins. The payoffs to the outcome (easy, easy) are (1, 1).
  • If the attacker attacks via the easy path and the defender defends the hard path then the armies will not meet and the attacker can position themselves to capture the town without serious confrontation. The defending army loses control of the town and loses the assault. The payoffs to the outcome (hard, easy) are (0, 2).
  • If the attacker attacks via the hard path and the defender defends the easy path then, again, the armies will not meet. However, the attacker army traverses the hard path, meaning that they lose half (or at least a substantial number) of the army to casualties, starvation, etc. Given the casualties endured, the attackers are no longer in a position to take over the town, and the defending army can regroup and defend it. Therefore, the payoffs to the outcome (easy, hard) are (1, 1).
  • Finally, if the attacker attacks via the hard path and the defender defends the hard path then both armies meet. However, since the attacker has endured heavy casualties, they are easily defeated by the defender. The payoffs to the outcome (hard, hard) are (2, 0).
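
For reference, here is the same information written out as payoff matrices in a small Python sketch (the defender is the row player, so the entries follow the (defender, attacker) ordering used above).

```python
import numpy as np

# Rows: defender's strategy, columns: attacker's strategy.
# Order of strategies along both dimensions: [easy, hard].
STRATEGIES = ["easy", "hard"]

# Defender's payoffs for each (defender, attacker) pair.
DEFENDER = np.array([[1, 1],    # defend easy vs attack easy / attack hard
                     [0, 2]])   # defend hard vs attack easy / attack hard

# Attacker's payoffs for the same pairs.
ATTACKER = np.array([[1, 1],
                     [2, 0]])

for d, d_name in enumerate(STRATEGIES):
    for a, a_name in enumerate(STRATEGIES):
        print(f"({d_name}, {a_name}) -> ({DEFENDER[d, a]}, {ATTACKER[d, a]})")
```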

So, we have defined all of the elements we need for a simple game. Specifically, we have a nice asymmetrical game which we can set up in a number of different ways. In this example we’ll look at two different ways: one in which the defender sees the attacker coming and selects a route to defend, and the other in which the attacker and defender have no idea where each other are going to attack or defend. The first game is one of perfect information and is described with the use of a decision tree, and the other is a game of imperfect information and is described with the use of a payoff matrix.

Game 1: Perfect information

Let’s say that the players move sequentially: the attacker moves first, then the defender sees the attacker coming and chooses which path to defend. This game is represented by the decision tree below.

Figure 2: Decision tree of the attacker-defender game with perfect information where the attacker moves first and the defender responds.

The unique subgame perfect Nash equilibrium (SPNE) of this game is given by (easy, easy / hard), where the first element is the attacker’s move and the second element lists the defender’s planned response to each possible attack (easy if the attacker goes easy, hard if the attacker goes hard). This is indicated by the red arrows. Regardless of what the attacker chooses, the defender always wants to coordinate their actions with the attacker. Knowing this, the attacker will rationally choose to attack via the easy path and the defender will defend the easy path. The result isn’t all that interesting.

More interesting is when the moves are reversed. If the defender were to move first and the attacker were to respond, then the defender should always defend the easy path and the attacker would be indifferent between attacking via the easy or the hard path. Therefore there are two equally valid subgame perfect Nash equilibria: (easy, easy / easy) and (easy, hard / easy). Either way, the defender always has an incentive to defend the easy path if they know that the attacker will respond optimally to them.

In all of the subgame perfect Nash equilibria discussed, the payoffs are (1, 1).
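
To see how backward induction arrives at these outcomes, here is a small illustrative Python sketch (my own code, using the same payoffs as above) that solves the two-stage game for either move order. Ties for the second mover are broken arbitrarily in favour of the first strategy, which is why it reports only one of the two equally good equilibrium outcomes in the defender-first case.

```python
STRATEGIES = ["easy", "hard"]
# PAYOFFS[d][a] = (defender payoff, attacker payoff); rows are the defender's
# strategy and columns the attacker's, in the order [easy, hard].
PAYOFFS = [[(1, 1), (1, 1)],
           [(0, 2), (2, 0)]]

def backward_induction(first_mover):
    """Return (first move, second move, payoffs) of the subgame perfect outcome."""
    best = None
    for i, first in enumerate(STRATEGIES):
        if first_mover == "attacker":
            # Defender observes the attack and best-responds.
            j = max(range(2), key=lambda d: PAYOFFS[d][i][0])
            payoff = PAYOFFS[j][i]
            first_payoff = payoff[1]          # attacker cares about the second entry
        else:
            # Attacker observes the defence and best-responds.
            j = max(range(2), key=lambda a: PAYOFFS[i][a][1])
            payoff = PAYOFFS[i][j]
            first_payoff = payoff[0]          # defender cares about the first entry
        if best is None or first_payoff > best[0]:
            best = (first_payoff, first, STRATEGIES[j], payoff)
    return best[1:]

print(backward_induction("attacker"))  # ('easy', 'easy', (1, 1))
print(backward_induction("defender"))  # ('easy', 'easy', (1, 1))
```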

Game 2: Imperfect information

A more realistic scenario is one in which the attacker doesn’t know where the defender is going to defend, and the defender doesn’t know where the attacker is going to attack. This is a game of imperfect information and is represented, again, by a decision tree, as below. The only difference is that there is now a dotted line between the decision nodes of the defender: the defender does not know what move the attacker has taken.

Figure 3: Decision tree of the attacker-defender game with imperfect information.

The attacker and defender have limited information; specifically, the defender does not know where the attacker will approach from. The game can be transformed into a payoff matrix of the following form.

Figure 4: Payoff matrix representing the attacker-defender game with imperfect information.

In this situation the defender again wants to coordinate actions with the attacker, but the attacker’s strategy of attacking via the easy path weakly dominates their strategy of attacking via the hard path. Therefore, there is a single pure strategy Nash equilibrium in which the attacker attacks via the easy path and the defender defends the easy path, leading to a payoff of (1, 1).
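
Both of these claims (the weak dominance and the unique pure strategy equilibrium) can be double-checked in a few lines; again this is just an illustrative NumPy sketch with the defender as the row player.

```python
import numpy as np

DEFENDER = np.array([[1, 1], [0, 2]])   # rows: defend easy/hard, cols: attack easy/hard
ATTACKER = np.array([[1, 1], [2, 0]])

# 'easy' weakly dominates 'hard' for the attacker: at least as good against every defence.
print(np.all(ATTACKER[:, 0] >= ATTACKER[:, 1]))   # True

# Pure Nash equilibria: cells where neither player gains from a unilateral deviation.
equilibria = [(d, a)
              for d in range(2) for a in range(2)
              if DEFENDER[d, a] >= DEFENDER[:, a].max()
              and ATTACKER[d, a] >= ATTACKER[d, :].max()]
print(equilibria)   # [(0, 0)], i.e. (easy, easy) with payoffs (1, 1)
```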

There are (technically) an infinite number of mixed strategy Nash equilibria. Let p be the probability that the defender will select the easy path and q be the probability that the attacker will select the easy path.

If the defender believes that q > 0.5 then they should choose the easy path. Conversely, if the defender believes that q < 0.5 then they should select the hard path. If q = 0.5 then the defender is indifferent between the two paths.

Likewise, if the attacker believes that the defender will definitely select the easy path (p = 1) then they are indifferent between the easy and hard paths. If, on the other hand, p < 1 then the attacker should definitely select the easy path. The best response curves can be seen in the figure below; all Nash equilibria are shown in purple, where the best response curves intersect.

Figure 5: Best response functions for the attacker and defender and full set of mixed strategy Nash equilibria.
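
If you want to verify these thresholds yourself, the expected payoffs are straightforward to write down; the illustrative sketch below (same payoffs as before) evaluates each player’s best response for a given belief.

```python
def defender_best_response(q):
    """Defender's best response given belief q = P(attacker plays easy)."""
    easy = q * 1 + (1 - q) * 1          # defending the easy path always pays 1
    hard = q * 0 + (1 - q) * 2          # the hard path pays 2 only if the attacker goes hard
    if easy > hard:
        return "easy"
    if hard > easy:
        return "hard"
    return "indifferent"                # happens exactly at q = 0.5

def attacker_best_response(p):
    """Attacker's best response given belief p = P(defender plays easy)."""
    easy = p * 1 + (1 - p) * 2
    hard = p * 1 + (1 - p) * 0
    if easy > hard:
        return "easy"
    if hard > easy:
        return "hard"
    return "indifferent"                # happens exactly at p = 1

for q in (0.25, 0.5, 0.75):
    print(q, defender_best_response(q))   # hard, indifferent, easy
for p in (0.5, 1.0):
    print(p, attacker_best_response(p))   # easy, indifferent
```

The intersection of the two best response curves is the purple set in Figure 5: with these payoffs it amounts to the defender choosing the easy path for certain (p = 1) while the attacker attacks via the easy path with any probability q of at least 0.5.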

Conclusions & future posts

I like game theory and I have recently been asked to undertake a project regarding attacker-defender games. Over the next few posts I want to elaborate on the problem, potential solutions, and applications of attacker-defender games. In doing so I will look more closely at minimax strategies, games of incomplete information, Bayesian games, machine learning, and network games with Byzantine players.

The next post will be on games of incomplete information and Bayesian reasoning… I’ll try not to leave it two years until my next post.