By Sonia Choy 蔡蒨珩
People are, in a nutshell, quite depressing. Even when we know we need to cooperate with each other to achieve something, from a group assignment to stopping global warming, we just can’t help but act out of self-interest and prevent the best-case scenario from happening. But why is this the case? Game theory, a branch of mathematics, might give you a few answers.
We first look at a one-off game which goes like this: imagine you are a prisoner, and a prison guard is baiting you to tell on your good friend (who is also in prison). If you both stay silent, the guards do not have enough evidence, and you each serve a sentence of one year; if only one of you confesses, the other is punished with three years in prison, while the teller walks free; if both of you confess, you each go to prison for two years. The catch: you cannot communicate with your friend throughout this process. In this scenario, what would you do?
This is the infamous prisoner’s dilemma, in which acting to protect your self-interest prevents the best-case scenario. Any sane person will betray their friend: if your friend stays silent and you confess, you walk free; if your friend confesses, it is still better for you to confess, since you at least get out of prison a year earlier. The best outcome here, however, is for both of you to stay silent and leave prison together after a year. But your fear of being betrayed (or rather, your desire to walk free) prevents this from happening.
| | They cooperate | They cheat |
| You cooperate | You get: 1 year; opponent gets: 1 year | You get: 3 years; opponent gets: freedom |
| You cheat | You get: freedom; opponent gets: 3 years | You get: 2 years; opponent gets: 2 years |
These games always have an “equilibrium point”, known as the Nash equilibrium (footnote 1) – a point where neither player can improve their own outcome by unilaterally switching to another strategy, so neither has any reason to deviate [1]. The outcome at that point is known as the value of this particular game. So here we actually have a certain “best” solution to the game – we say that the dominant strategy is to cheat, since confessing is your better move no matter what your friend does. However, it does not give the best collective outcome.
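To make the equilibrium concrete, here is a small illustrative check (not from the article) that mutual confession is the one-shot game’s only Nash equilibrium. The payoffs are the prison terms from the table above, and the move names “silent” and “confess” are labels chosen for this sketch:

```python
# Years in prison for (you, opponent) under each pair of moves,
# taken from the prisoner's dilemma table above.
years = {("silent", "silent"): (1, 1), ("silent", "confess"): (3, 0),
         ("confess", "silent"): (0, 3), ("confess", "confess"): (2, 2)}
moves = ("silent", "confess")

def is_nash(mine, theirs):
    my_years, their_years = years[(mine, theirs)]
    # Nash equilibrium: neither player can cut their own sentence
    # by switching moves alone.
    return (all(years[(m, theirs)][0] >= my_years for m in moves) and
            all(years[(mine, t)][1] >= their_years for t in moves))

equilibria = [(m, t) for m in moves for t in moves if is_nash(m, t)]
print(equilibria)  # [('confess', 'confess')]
```

Note that (silent, silent), the best collective outcome, fails the check precisely because either player could walk free by switching to confess.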
But in life, we constantly make decisions; what happens to this game if it is played more than once? When the game is repeated, the optimal strategy might not be to always cheat. For example, you could alternate between cheating and cooperating, choose at random, or repeat what your opponent did to you in the previous round. You have the choice of pure-strategy tactics (sticking to a fixed plan) or mixed-strategy tactics (choosing between moves with certain probabilities) [2]. For the sake of our sanity, we’ll reword the game by asking players to bet points and awarding points to each player instead – prison terms don’t really add up properly, and points can also be deducted.
| | They cooperate (bet 1) | They cheat (bet 0) |
| You cooperate (bet 1) | You get: +2; opponent gets: +2 | You get: -1; opponent gets: +3 |
| You cheat (bet 0) | You get: +3; opponent gets: -1 | You get: 0; opponent gets: 0 |
If you don’t want to dive into the math, we can throw this scenario into a computer simulation (footnote 2) and repeat it many times to see what happens: what ultimately emerges as the victor is the strategy of repeating your opponent’s last play, commonly known as “tit for tat”. The old wisdom of “do unto others as you would have them do unto you” seems to hold up here – if you want to win, cooperate, since you want your opponent to cooperate as well; and if your opponent also plays tit for tat, the two of you will always cooperate and achieve the best outcome.
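A minimal round-robin tournament in the spirit of the simulator (a sketch, not the article’s actual program) shows this. The population below – one unconditional cooperator, one unconditional cheater, and three tit-for-tat “copycats” – is an assumption chosen for illustration; the payoffs are the point table above:

```python
# Iterated prisoner's dilemma round-robin: every pair of players
# plays 10 rounds using the point table above (True = cooperate).
from itertools import combinations

PAYOFF = {(True, True): (2, 2), (True, False): (-1, 3),
          (False, True): (3, -1), (False, False): (0, 0)}

def always_cooperate(my_hist, opp_hist):
    return True

def always_cheat(my_hist, opp_hist):
    return False

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's last move.
    return True if not opp_hist else opp_hist[-1]

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

players = [("cooperator", always_cooperate), ("cheater", always_cheat),
           ("copycat 1", tit_for_tat), ("copycat 2", tit_for_tat),
           ("copycat 3", tit_for_tat)]
totals = {name: 0 for name, _ in players}
for (na, sa), (nb, sb) in combinations(players, 2):
    pa, pb = play(sa, sb)
    totals[na] += pa; totals[nb] += pb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

With this population the copycats come out on top: they earn full cooperation from each other and from the cooperator, and lose only a single point to the cheater before retaliating.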
This, though, is quite an idealized model. In the real world, people make mistakes, and blunders occur. What happens when you follow the theoretically winning strategy of repeating your opponent’s last play, but players occasionally make mistakes in choosing their move? Back in the simulator – among a crowd of generally distrusting opponents who each have a 5% chance of making a mistake (50% always cheat, and the rest play a mix of strategies) – the winner turns out to be the strategy that cheats only if its opponent cheats twice in a row: tit for tat with a little forgiveness. However, as distrust increases, the eventual winner is the player who always cheats no matter what – the sad truth. This shows the importance of clear and accurate communication: a small amount of miscommunication can be forgiven, but more and more mistakes breed widespread distrust [3].
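To see why forgiveness helps, here is a self-contained sketch (in the same spirit as, but not taken from, the article’s simulator). Instead of random noise, it forces a single mistake in round three and compares a plain copycat against a forgiving copycat that cheats only after two cheats in a row:

```python
# Two copies of the same strategy play each other for 10 rounds using
# the point table above; player A is forced to cheat once by mistake.
PAYOFF = {(True, True): (2, 2), (True, False): (-1, 3),
          (False, True): (3, -1), (False, False): (0, 0)}

def run(strategy, rounds=10, slip_round=2):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for r in range(rounds):
        a = strategy(hist_b)
        b = strategy(hist_a)
        if r == slip_round:
            a = False          # player A's one accidental cheat
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

def copycat(opp_hist):
    return True if not opp_hist else opp_hist[-1]

def forgiving_copycat(opp_hist):
    # Cheat only if the opponent cheated twice in a row.
    return not (len(opp_hist) >= 2
                and not opp_hist[-1] and not opp_hist[-2])

print(run(copycat))            # the slip echoes back and forth forever
print(run(forgiving_copycat))  # the slip is forgiven; cooperation resumes
```

Plain copycats end up trading retaliations for the rest of the game, while the forgiving pair loses only one round to the mistake – which is exactly why a little forgiveness wins when errors are rare.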
You might think life is quite far from being a confession or point-betting game, but game theory is remarkable in that it straddles the boundary between math and the social sciences – theorists have used it to study decisions made throughout history. A notable example is, unsurprisingly, from World War II. Between enemies at war there is no room for collaboration, which makes war the perfect textbook example for finding an equilibrium. The result is a zero-sum game, in which your opponent’s gain is your loss, and vice versa. Here, unlike in the prisoner’s dilemma above, cooperation is simply not an option.
In the Battle of the Bismarck Sea, a Japanese admiral was forced to choose between two convoy routes, North and South [4]. The American general, George Kenney, tried to predict which route the Japanese would take, so that his forces could concentrate a more sustained bombing attack on the Japanese convoy. In short, the Japanese aimed to minimize the number of days under bombardment, while the Americans wanted to maximize the duration of the attack. Both routes would take three days, but the Americans’ options were restricted by various limitations, such as poor visibility on the North route. The table for the scenario looks like this:
Possible days for attack | Japanese: North | Japanese: South |
American: North | 2 days | 2 days |
American: South | 1 day | 3 days |
From this table, we know that the Japanese will take the North route to minimize the days of possible attack from the Americans (footnote 3); given that inference, the better route for the American forces is also the North route, which guarantees two days of attack rather than risking only one. In fact, this is exactly what happened: the Allied forces sustained an air attack on the Japanese convoy over two days, thwarting the Japanese attempt to reinforce New Guinea. Game theory is powerful in this way – it extends far beyond numbers, into disciplines such as history and biology.
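The reasoning above is the classic minimax argument for a zero-sum game, and it can be sketched in a few lines (an illustration of the logic, not a historical reconstruction):

```python
# Rows: American route choice; columns: Japanese route choice;
# entries: days of possible bombing, from the table above.
days = {("North", "North"): 2, ("North", "South"): 2,
        ("South", "North"): 1, ("South", "South"): 3}
routes = ("North", "South")

# The Japanese minimize their worst case: pick the route whose
# maximum days of attack is smallest.
japanese = min(routes, key=lambda j: max(days[(a, j)] for a in routes))

# The Americans maximize their worst case: pick the route whose
# minimum days of attack is largest.
american = max(routes, key=lambda a: min(days[(a, j)] for j in routes))

print(japanese, american)  # North North
```

Both sides’ cautious best choices meet at (North, North) with a value of two days – the equilibrium of this zero-sum game, and what actually happened in the battle.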
Then, you might ask, what should we do in situations like this? In truth, life is almost never a zero-sum game, even though the zero-sum mindset is a common one [3]. You don’t have to win at the expense of others – there is usually a win-win solution. As much as I sound like an old person: look for these win-win situations. There is almost always room to compromise, and you don’t need to put people down in order to pull yourself up.
1 The Nash equilibrium is named after John Nash (1928–2015), a mathematician who made important contributions to game theory and geometry; the former won him the Nobel Prize in Economics in 1994. He was also portrayed in the film A Beautiful Mind.
2 You may want to try out the simulator itself; the prisoner’s dilemma simulator that inspired this article is https://ncase.me/trust/.
3 If we think from the Japanese perspective, both routes are equally risky if the Americans choose the North route, but the North route becomes less dangerous if the Americans pick the South route. As a result, the North route is more favorable for the Japanese.
References:
[1] Pilkington, A. (2016). Optimal Mixed Strategy for Zero-Sum Games. Personal Collection of A. Pilkington, University of Notre Dame, Notre Dame, IN, USA. Retrieved from https://www3.nd.edu/~apilking/math10170/information/Lectures/16%20Optimal%20Mixed%20Strategy.pdf
[2] Manea, M. (2016). Strategic-Form Games: Dominated Strategies, Rationalizability, and Nash Equilibrium; Epistemic Foundations. Personal Collection of M. Manea, Massachusetts Institute of Technology, Cambridge, MA, USA. Retrieved from https://ocw.mit.edu/courses/economics/14-126-game-theory-spring-2016/lecture-notes/MIT14_126S16_gametheory.pdf
[3] Case, N. (2017, July). The Evolution of Trust: Feetnotes. Retrieved from https://ncase.me/trust/notes/
[4] Cornell University. (2016, September 16). Game Theory in World War 2. Retrieved from https://blogs.cornell.edu/info2040/2016/09/16/game-theory-in-world-war-2/