A key assumption in game theory is that the game being played is *common knowledge* to all players. Specifically, each player knows who is participating, what their options are, and how they evaluate outcomes. But what if this assumption of common knowledge didn’t hold? What if some player had private information that wasn’t known to other players?

Going back to the initial example from the previous post: what if, say, a defender didn’t know the identity of the attacker? Or the attacker’s agenda? Or the attacker’s strength (the potential threat it poses)? These are scenarios in which some information is *private* to the players, i.e., the identity, agenda, and strength of the attacker are private information, known to the attacker but not to the defender.

There are many real-world settings in which people have some information that is known only to them. How can games of *private information* be solved? Why may these games of private information be important to attacker-defender games?

### Example I: The Munich Agreement

In order to motivate our discussion of games with private information, let’s first turn the clock back to 1938. Nazi Germany had just annexed Austria, and it was believed that Adolf Hitler was considering a similar action against Czechoslovakia’s Sudetenland. With the Great War (now known as World War I) a recent memory, Europeans feared a repeat of such misery and horror. In an effort to preserve the peace, Prime Minister Neville Chamberlain of Great Britain traveled to Munich, Germany, to reach an agreement with Hitler. On September 30, 1938, Chamberlain and Hitler signed the Munich Agreement, giving Germany the Sudetenland in exchange for Hitler’s promise that he would go no further. A chunk of Czechoslovakia had been delivered as a concession to forestall war. Of course, peace proved to be an illusion. Germany would enter Prague the following spring and invade Poland that September, starting World War II.

In deciding whether to propose and then sign this agreement, Chamberlain was uncertain as to the ultimate intentions of Hitler. Was Hitler only seeking additional lebensraum (“living space”) for the German people? If so, then perhaps a concession such as the Sudetenland would placate him and indeed avoid war. Or was Hitler concocting a more grandiose plan to invade much of Europe?

The situation in Munich can be described by the extensive-form game below.

Chamberlain moves first by deciding whether to offer *concessions* or *stand firm*. The presumption is that Hitler will accept the concessions, and our attention will focus on the decision regarding the *pursuit of war*. The preferences of Chamberlain are clear: His most preferred outcome is to stand firm whereupon Hitler avoids war, while his least preferred outcome is to provide concessions but then Hitler goes to war. Having been offered concessions, Hitler is given more time to prepare his war machine; thus, we shall suppose that Chamberlain finds that outcome less desirable than standing firm and going to war.

The challenge with analysing this situation lies with Hitler’s payoffs. While Hitler is presumed to know them, Chamberlain does not. The unknown payoffs of Hitler in the decision tree are given by question marks ‘?’.

Without knowing Hitler’s payoffs, how can Chamberlain determine what Hitler will do?

**Determining Hitler’s payoffs**

Let’s contemplate the possibilities that might have been racing through Chamberlain’s mind. One thought is that Hitler is *amicable*, as reflected in the payoffs presented in the first decision tree below. We refer to Hitler as amicable because his most preferred outcome is to gain concessions and avoid war. Note, however, that if Chamberlain stands firm, Hitler will go to war in order to gain additional land. Thus, if Chamberlain really did face an amicable Hitler and knew this fact, then he ought to provide concessions.

The other possibility is that Hitler is *belligerent*, as summarised by the payoffs in the second decision tree below. Here, Hitler has a dominant strategy of going to war, although he prefers to do so after receiving concessions. If this is the game Chamberlain is playing, then he would do better to stand firm.

In actuality, Chamberlain was uncertain as to whether he was playing the game described in the ‘amicable decision tree’ on the left or the ‘belligerent decision tree’ on the right. This situation is known as a game of *incomplete information*.

**Introducing Nature**

The trick to solving a game of incomplete information is to convert it to a game of imperfect information—that is, transform it from something we don’t know how to solve into something we do know how to solve!

This is done by introducing a new player referred to as *Nature*. Nature is not intended to refer to trees, fleas, and bees, but rather to the random forces in the players’ environment.

Nature takes the form of exogenously specified probabilities over various actions and is intended to represent players’ beliefs about random events. In the context at hand, Nature determines Hitler’s preferences (or payoffs) and thus the game that is being played, as is shown in the decision tree below.

Nature is modelled as moving first by choosing whether Hitler is amicable or belligerent. This move by Nature is not observed by Chamberlain—thereby capturing his lack of knowledge as to what Hitler’s payoffs are—but is observed by Hitler—since Hitler knows his own preferences. It is important to assume that the probabilities assigned by Nature to these two possibilities are common knowledge, and here we assume that there is a 60% chance that Hitler is amicable and a 40% chance that he is belligerent.

**Determining the unique equilibrium**

What should Chamberlain do given this strategy for Hitler?

Given Chamberlain’s uncertainty as to Hitler’s preferences, he isn’t sure how Hitler will respond to his action. Thus, Chamberlain will need to calculate *expected payoffs* in evaluating his two strategies.

Recall that if Chamberlain offers concessions, the amicable Hitler (probability 0.6) avoids war, giving Chamberlain a payoff of 3, while the belligerent Hitler (probability 0.4) goes to war, giving a payoff of 1. Chamberlain’s expected payoff from providing concessions is therefore: **0.6 * 3 + 0.4 * 1 = 2.2**.

If, instead, Chamberlain stands firm, both types of Hitler go to war and he receives a payoff of 2 either way, so his expected payoff is: **0.6 * 2 + 0.4 * 2 = 2**.

Since 2.2 > 2, Chamberlain does better by appeasing Hitler with the Sudetenland than by standing firm.
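These expected payoffs can be reproduced with a short Python sketch. The payoff numbers (3, 1, and 2) and the 60/40 type probabilities are taken directly from the calculation above; the dictionary encoding of each type’s response is an implementation choice, not part of the original model.

```python
# Chamberlain's payoff for each (his action, Hitler's response) outcome,
# using the numbers from the expected-payoff calculation above.
PAYOFF = {
    ("concessions", "no war"): 3,
    ("concessions", "war"): 1,
    ("stand firm", "war"): 2,  # standing firm leads to war against either type
}

# How each Hitler type responds to each of Chamberlain's actions.
RESPONSE = {
    "amicable":    {"concessions": "no war", "stand firm": "war"},
    "belligerent": {"concessions": "war",    "stand firm": "war"},
}

P_TYPE = {"amicable": 0.6, "belligerent": 0.4}  # Nature's move

def expected_payoff(action):
    """Chamberlain's expected payoff, averaging over Hitler's possible types."""
    return sum(p * PAYOFF[(action, RESPONSE[t][action])] for t, p in P_TYPE.items())

assert abs(expected_payoff("concessions") - 2.2) < 1e-9  # 0.6*3 + 0.4*1
assert abs(expected_payoff("stand firm") - 2.0) < 1e-9   # 0.6*2 + 0.4*2
```

Since 2.2 exceeds 2, the function confirms that offering concessions is Chamberlain’s best strategy under these beliefs.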

**The equilibrium.** In sum, a solution to this game has Chamberlain offer concessions, in which case Hitler avoids war if he is amicable and goes to war if he is belligerent. Of course, we now know that Hitler was belligerent, which led to World War II.

### So, what are Bayesian games?

We saw in The Munich Agreement example above that the idea is to *convert a game of incomplete information into a game of imperfect information*, which is known as a **Bayesian game**.

A Bayesian game modifies a standard game by having an initial stage at which Nature determines the private information held by players.

A commonly used solution concept for Bayesian games is the **Bayes–Nash (or Bayesian) equilibrium**: a strategy profile that prescribes optimal behaviour for every type of every player, given the other players’ strategies. From this we can use expected payoffs to determine the equilibrium strategy of each player.

### Example II: Another attacker-defender game

Consider the attacker-defender game that was described in the previous post. This game consisted of two players, an attacker and a defender, where the attacker was trying to capture a town that is being defended. Let’s specifically consider the situation where the attacker and defender make their moves simultaneously, but now let’s add a little twist to include some private information.

The twist is that the defender does not know the experience and strength of the attacker’s army. For simplicity we assume that the attacker’s army can be either ‘*weak*’ or ‘*strong*’, and that there is only one entrance into the town.

Now the strategies for both the attacker and the defender are to either ‘engage in conflict’ or ‘wait’. Given this set-up, the defender would rather engage in conflict if they feel that the attacker is going to engage, and would rather wait if they feel that the attacker is going to wait. The rationale is that the defender does not want to waste resources on engaging with a benign attacker.

A strong attacker would rather engage in conflict if the defender does not, as the attacker would be neutralised if they do not. A weak attacker would rather not engage even if the defender engages.

The payoff matrices that illustrate these incentives are shown below:

From the matrices we can note that the attacker’s strictly dominant strategy is to ‘wait’ if they are weak, regardless of what the defender does. Likewise, if the attacker is strong, then the attacker always has an incentive to engage in conflict. This makes sense, and because these strategies are strictly dominant they form part of the Bayesian equilibrium.

For generality (and unlike The Munich Agreement example above) let’s just assume that the attacker is strong with a probability *p* and that the attacker is weak with a probability *1 – p*.

Given this, working through the defender’s expected payoffs shows that in the Bayes–Nash equilibrium: if *p > 1/3* the defender will engage in conflict; if *p < 1/3* the defender will wait; and if *p = 1/3* the defender is completely indifferent as to which strategy they choose.
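Since the actual payoff matrices live in the figures above, the numbers in the sketch below are hypothetical: they are chosen only so that the defender’s indifference point lands at *p = 1/3*, given that (per the dominant strategies) a strong attacker engages and a weak attacker waits. Any payoffs with the same ratio between the gain from engaging a strong attacker and the gain from waiting alongside a weak one would give the same threshold.

```python
from fractions import Fraction

# Illustrative defender payoffs (hypothetical numbers, not the post's matrices).
# In equilibrium a strong attacker engages and a weak attacker waits.
ENGAGE_VS_STRONG = 1   # engaging a strong, engaging attacker
WAIT_VS_STRONG = -1    # waiting while a strong attacker engages
ENGAGE_VS_WEAK = 0     # wasting resources engaging a waiting, weak attacker
WAIT_VS_WEAK = 1       # waiting alongside a weak attacker

def best_response(p):
    """Defender's best response given P(attacker is strong) = p."""
    e_engage = p * ENGAGE_VS_STRONG + (1 - p) * ENGAGE_VS_WEAK
    e_wait = p * WAIT_VS_STRONG + (1 - p) * WAIT_VS_WEAK
    if e_engage > e_wait:
        return "engage"
    if e_engage < e_wait:
        return "wait"
    return "indifferent"

assert best_response(Fraction(1, 2)) == "engage"      # p > 1/3
assert best_response(Fraction(1, 4)) == "wait"        # p < 1/3
assert best_response(Fraction(1, 3)) == "indifferent" # p = 1/3 exactly
```

Using exact rationals via `fractions.Fraction` avoids floating-point noise precisely at the indifference point.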

**Adding probabilities to the mix**

So, we finished up the previous example with some probabilities. Specifically, the defender’s optimal strategy depended on the probability of the attacker being either ‘weak’ or ‘strong’. So much depended on *p*. But what is *p*? What is the probability of the attacker being strong? Can we estimate it? Well, that’s the key. In fact, two things become important here.

The first thing is **signalling**: maybe the attacker is wearing strong armour, has daunting weaponry, looks coordinated in their activities, or has built a reputation for being strong against other defenders. Such signals allow us to more accurately estimate the probability of the attacker being strong.

The second thing is the use of **statistics**: by acquiring data on how these signals relate to the strength of the attacker, we can build models to estimate the probability of the attacker being strong and better inform the actions of the defender.
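As a minimal sketch of how signalling and statistics combine, the defender can update *p* with Bayes’ rule after observing a signal. The prior and the two likelihoods below (how often strong and weak attackers display, say, heavy armour) are invented for illustration; in practice they would be estimated from data on past attackers.

```python
def posterior_strong(prior, p_signal_given_strong, p_signal_given_weak):
    """Bayes' rule: P(strong | signal observed)."""
    numerator = p_signal_given_strong * prior
    denominator = numerator + p_signal_given_weak * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: before any signal, P(strong) = 0.2; strong attackers
# display heavy armour 90% of the time, weak attackers only 20% of the time.
p = posterior_strong(prior=0.2, p_signal_given_strong=0.9, p_signal_given_weak=0.2)
# numerator = 0.18, denominator = 0.18 + 0.16 = 0.34, so p is roughly 0.53 --
# above the 1/3 threshold, so the signal tips the defender toward engaging.
```

The same update applies to any observable signal for which the two likelihoods can be estimated.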

### Conclusions & next post

We considered a common and crucial feature of many strategic settings: A person may know something about him- or herself that others do not know. This scenario frequently arises in the form of a player’s payoffs being private information. As originally cast, the game is not common knowledge, because, for example, one player doesn’t know another player’s payoffs. We refer to such a game as having incomplete information. The trick to solving a game of incomplete information is to convert it into a game of imperfect information. The initial move in the game is now made by random forces, labeled Nature, that determine each player’s type, where a type encompasses all that is privately known to a player. This Nature-augmented game, which is known as a Bayesian game, is common knowledge, since, at its start, no player knows his type and thus has no private information. What is commonly known is that Nature will determine players’ types. What also is commonly known are the probabilities used by Nature in assigning a player’s type.

When solving Bayesian games the probabilities that Nature enforces become important when informing the payoffs and equilibrium strategies of players. We noted that these probabilities are typically unknown, but can be estimated to some extent by two elements: signalling and statistical analysis.

**Next post.** Signalling and statistical analysis are two elements that we will discuss in more detail in future posts. Specifically, in the next post we will look at *signalling games* and how they apply to *Bayesian games* and our project regarding attacker-defender games.
