{"id":427,"date":"2022-04-18T11:10:00","date_gmt":"2022-04-18T11:10:00","guid":{"rendered":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/?p=427"},"modified":"2022-05-06T14:07:05","modified_gmt":"2022-05-06T14:07:05","slug":"the-prisoners-dilemma-on-groundhog-day","status":"publish","type":"post","link":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/2022\/04\/18\/the-prisoners-dilemma-on-groundhog-day\/","title":{"rendered":"The Prisoner&#8217;s Dilemma on Groundhog Day"},"content":{"rendered":"\n<p><\/p>\n\n\n\n<p>The <a href=\"https:\/\/en.wikipedia.org\/wiki\/Prisoner%27s_dilemma\">prisoner\u2019s dilemma<\/a> is a famous problem in game theory. The situation is as follows: you and an accomplice have been arrested on suspicion of a serious crime. The prosecutors have sufficient evidence to convict both of you on a lesser charge but offer both of you a bargain in the hopes of a conviction on the serious charge. If you betray your accomplice and testify that they committed the crime then you will get off with a lesser sentence. You must make your decisions in isolation without communicating, but you are aware that your accomplice has been offered the same bargain. If only you take the bargain then you will serve no time in prison while your accomplice serves 3 years. If both of you stay silent then you will both serve 1 year on the lesser charge, and if you both testify against each other then you will both serve 2 years. What do you do?<\/p>\n\n\n\n<p>The context and exact numbers in this formulation are unimportant &#8211; the key features are that mutual co-operation is better than mutual betrayal, while the best and worst outcomes come on either side of a unilateral betrayal. 
You could reformulate the problem in many contexts \u2013 for example, two rival companies might have to decide between spending either a high or low amount on advertising, given that they will only get a greater market share if they spend more than their competitor. While the collective \u201cbest\u201d option might be to both spend a small amount and share the demand equally, each company might be tempted to spend a higher amount in the hopes of dominating the market and pocketing a greater profit. If both do so, then they will each have a similar number of customers as if they had both spent the smaller amount, but will be out the extra sum of advertising money.<\/p>\n\n\n\n<p>All formulations of the dilemma have one thing in common: no matter what your fellow \u201cplayer\u201d chooses, you are always better off betraying them than co-operating \u2013 if they choose co-operation, you walk away with the best possible outcome if you opt to betray, and if they choose to betray then you can avoid the worst by betraying them in return. Unfortunately, the other player can follow the same line of reasoning themselves, so that if both of you act \u201crationally\u201d you will both choose to betray the other despite the fact that mutual co-operation is better for both of you. The story doesn\u2019t end here, however \u2013 the situation becomes much more interesting when you might have the opportunity to play again against the same person. Does having the chance to punish your competitor for breaking co-operation change the situation? This blog post will show that in the case that the game is repeated indefinitely, the answer is <strong>yes<\/strong>.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Nash Equilibrium<\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>It turns out that when the prisoner&#8217;s dilemma is repeated indefinitely, there is no longer a clear strategy that dominates any other. 
To compare different strategies it will be useful to consider the game-theoretic concept of a <strong>Nash equilibrium<\/strong>.<\/p>\n\n\n\n<p>In a 2-player game, player 1 can select any action <span class=\"wp-katex-eq\" data-display=\"false\">A^1<\/span> in an action set <span class=\"wp-katex-eq\" data-display=\"false\">\\mathcal{A}^1<\/span>. Similarly, player 2 can choose an action <span class=\"wp-katex-eq\" data-display=\"false\">A^2<\/span> from the set <span class=\"wp-katex-eq\" data-display=\"false\">\\mathcal{A}^2<\/span>. In the prisoner\u2019s dilemma both players have the same action set, <span class=\"wp-katex-eq\" data-display=\"false\">\\mathcal{A} = \\{\\text{co-operate}, \\text{betray}\\}<\/span>. Once both players have made their decisions, they receive rewards <span class=\"wp-katex-eq\" data-display=\"false\">R^1(A^1, A^2)<\/span> and <span class=\"wp-katex-eq\" data-display=\"false\">R^2(A^1, A^2)<\/span> depending on the actions chosen. The key objective in solving such games is to identify <strong>Nash equilibrium<\/strong> policies, defined as pairs of strategies where neither player can guarantee a better expected outcome by unilaterally switching to a different strategy.<\/p>\n\n\n\n<p>Nash equilibria always exist for matrix games (and there may be more than one), but unless the game is <strong>zero-sum<\/strong> (<span class=\"wp-katex-eq\" data-display=\"false\">R^1 = -R^2<\/span>), players may have different payoffs in different Nash equilibria. Note that Nash equilibrium strategies are often <strong>mixed<\/strong>, meaning that they specify a distribution over possible actions rather than identifying one action as optimal. 
For example, consider the two-player zero-sum game with the following payoffs for player 1:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td><\/td><td><strong>Player 1 chooses A1<\/strong><\/td><td><strong>Player 1 chooses A2<\/strong><\/td><\/tr><tr><td><strong>Player 2 chooses A1<\/strong><\/td><td>1<\/td><td>-1<\/td><\/tr><tr><td><strong>Player 2 chooses A2<\/strong><\/td><td>-1<\/td><td>1<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Since the game is zero-sum, the payoffs for player 2 are the above multiplied by -1.<\/p>\n\n\n\n<p>Here, if player 1 always chooses action 1 (or always chooses action 2), then a player 2 who can guess their strategy can hold them to a payoff of -1. However, if player 1 randomly chooses either option with probability 1\/2 then their expected payoff is 0 regardless of player 2&#8217;s strategy. By using this strategy, player 1 can guard against any potential extra knowledge or scheming on player 2\u2019s part. In fact, if both players choose this strategy then we have a Nash equilibrium.<\/p>\n\n\n\n<p>In the prisoner\u2019s dilemma, your personal aim is to minimise the time you spend in prison, and you suppose that the same is true of your accomplice. 
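The mixed-strategy claim in the zero-sum example above is easy to check numerically. A minimal sketch in Python (the function name and the probabilities tried are illustrative choices, not from the original post):

```python
# Payoff matrix for player 1 in the zero-sum example above:
# rows = player 1's action, columns = player 2's action.
R1 = [[1, -1],
      [-1, 1]]

def expected_payoff(p1_probs, p2_probs):
    """Expected payoff to player 1 when both players randomise."""
    return sum(p1_probs[i] * p2_probs[j] * R1[i][j]
               for i in range(2) for j in range(2))

# A pure strategy is exploitable: player 1 is held to -1 if guessed.
assert expected_payoff([1, 0], [0, 1]) == -1

# The uniform mixture earns exactly 0 against every opposing strategy,
# so neither player can gain by deviating unilaterally.
for q in (0.0, 0.25, 0.5, 1.0):
    assert expected_payoff([0.5, 0.5], [q, 1 - q]) == 0.0
```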
We can specify the \u201cpayoffs\u201d of the prisoner\u2019s dilemma by the following table:<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td><\/td><td><strong>You choose co-operate<\/strong><\/td><td><strong>You choose betray<\/strong><\/td><\/tr><tr><td><strong>Opponent chooses co-operate<\/strong><\/td><td>-1<\/td><td>0<\/td><\/tr><tr><td><strong>Opponent chooses betray<\/strong><\/td><td>-3<\/td><td>-2<\/td><\/tr><\/tbody><\/table><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>Here, the strategy pair <span class=\"wp-katex-eq\" data-display=\"false\">(\\text{betray}, \\text{betray})<\/span> forms the only Nash equilibrium of the game.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">The Iterated Prisoner&#8217;s Dilemma<\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>If we repeat the prisoner\u2019s dilemma game a fixed number of times, then mutual betrayal remains the only rational choice and hence the only equilibrium strategy. We can see this by considering what happens in the last time period and working backwards \u2013 in the final round, the situation is exactly the same as when you play only once, since there is no future strategy to consider and your opponent will never get to retaliate against a betrayal. Given this, in the penultimate round there is also no incentive to co-operate, since you know that a rational opponent will betray you next even if you co-operate now. By backwards induction, it remains rational to betray in every round of the game.<\/p>\n\n\n\n<p>The key issue here is that there is a fixed termination time, a point beyond which there are no consequences to consider beyond the immediate payoffs awarded by the game. 
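The claim that mutual betrayal is the only equilibrium of the one-shot game can be verified by brute force. A small sketch using the payoffs from the table above (function names are illustrative):

```python
from itertools import product

# Payoffs (years of prison, negated) from the table above:
# R[my_action][their_action], with actions 0 = co-operate, 1 = betray.
R = [[-1, -3],   # I co-operate: -1 if they do too, -3 if betrayed
     [0, -2]]    # I betray:      0 against a co-operator, -2 mutually

def best_responses(their_action):
    """Actions maximising my payoff given the opponent's action."""
    payoffs = [R[a][their_action] for a in (0, 1)]
    return {a for a in (0, 1) if payoffs[a] == max(payoffs)}

# Betray is the unique best response whatever the opponent does...
assert best_responses(0) == {1} and best_responses(1) == {1}

# ...so (betray, betray) is the only pure Nash equilibrium: no player
# can improve their payoff by unilaterally deviating.
equilibria = [(a1, a2) for a1, a2 in product((0, 1), repeat=2)
              if a1 in best_responses(a2) and a2 in best_responses(a1)]
print(equilibria)  # [(1, 1)]
```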
This disappears if the game is played infinitely many times, or if neither player knows which round will be the last. As long as both players believe there is a sufficiently large chance of playing again, the potential future rewards will matter more than the outcome of any single game. It remains true that mutual betrayal is the only <strong>stationary<\/strong> Nash equilibrium strategy (a stationary strategy is one that doesn\u2019t depend on previous outcomes). However, if both players remember past events then there is an incentive to co-operate, and it turns out that there are many possible Nash equilibrium strategies.<\/p>\n\n\n\n<p>Take the so-called <strong>grim trigger<\/strong> strategy, for example \u2013 in this strategy, you co-operate in the first game and continue to do so until your opponent betrays you, after which you never co-operate again. If both players choose this strategy, then we have a Nash equilibrium: clearly, if you know that your opponent will choose this strategy, then you will not benefit from betraying them unprompted, as you will collect a good reward once and then be stuck in mutual betrayal forevermore. Your best bet is any strategy (including grim trigger) which co-operates in response to a co-operative opponent, as then you will consistently get the better mutual co-operation reward.<\/p>\n\n\n\n<p>Another Nash equilibrium strategy that aims for mutual co-operation is the <strong>tit-for-tat<\/strong> strategy \u2013 here, you co-operate in the first game and then always choose the action your opponent played last. This would probably serve better than grim trigger in practice against an opponent who doesn\u2019t know exactly which strategy you are playing, since it offers the possibility of reconciliation without being too forgiving. 
It is a little harder to see that this is a Nash equilibrium strategy \u2013 notice that if your opponent chooses betrayal unprompted then they can only leave the resulting cycle of mutual betrayal by co-operating once, despite knowing they will be betrayed. As long as the reward for betraying a co-operative opponent and then co-operating knowing you will be betrayed is less than that for mutually co-operating both times instead (as is the case in most formulations of the problem), there is nothing to be gained by betraying a tit-for-tat player.<\/p>\n\n\n\n<p>So, what is the \u201coptimal\u201d way to play in the indefinitely iterated prisoner\u2019s dilemma? The answer to this is not actually known, and may well depend on what you know about your opponent. Clearly the two strategies suggested above are good options, and have the potential to get you a much better payoff than always betraying, even though that is also a Nash equilibrium strategy. If you aren\u2019t sure of your opponent\u2019s strategy, then tit-for-tat might be the better option, since you\u2019d rather end up in mutual co-operation than mutual betrayal even if you are betrayed at some point early on. Indeed, this strategy generally does well in iterated prisoner\u2019s dilemma competitions. 
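The strategies discussed above are easy to experiment with in simulation. A minimal sketch, using the payoffs from the earlier table and a fixed horizon of 100 rounds purely for illustration (in the indefinitely repeated setting, no such horizon would be known to the players):

```python
# Payoffs from the table above: C = co-operate, B = betray.
PAYOFF = {("C", "C"): (-1, -1), ("C", "B"): (-3, 0),
          ("B", "C"): (0, -3), ("B", "B"): (-2, -2)}

def tit_for_tat(my_hist, their_hist):
    # Co-operate first, then mirror the opponent's last action.
    return their_hist[-1] if their_hist else "C"

def grim_trigger(my_hist, their_hist):
    # Co-operate until betrayed once, then betray forever.
    return "B" if "B" in their_hist else "C"

def always_betray(my_hist, their_hist):
    return "B"

def play(strat1, strat2, rounds=100):
    """Total payoffs when the two strategies play `rounds` games."""
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = strat1(h1, h2), strat2(h2, h1)
        r1, r2 = PAYOFF[(a1, a2)]
        h1.append(a1); h2.append(a2)
        total1 += r1; total2 += r2
    return total1, total2

print(play(tit_for_tat, grim_trigger))   # mutual co-operation: (-100, -100)
print(play(tit_for_tat, always_betray))  # one betrayal, then mutual betrayal: (-201, -198)
```

Note how tit-for-tat loses only slightly to always-betray while doing far better against any co-operative strategy, which is the intuition behind its success in tournaments.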
If there is a chance of miscommunication, you might instead play tit-for-tat with a small probability of co-operating even when you believe your opponent has just betrayed you, so as to avoid becoming stuck in mutual betrayal or in alternating betrayal and co-operation.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Further Reading<\/h3>\n\n\n\n<p><\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.nytimes.com\/1986\/06\/17\/science\/prisoner-s-dilemma-has-unexpected-applications.html\">Prisoner&#8217;s Dilemma Has Unexpected Applications<\/a> &#8211; James Gleick<\/li><li><a href=\"https:\/\/www.jstor.org\/stable\/4235437\">Nice Strategies Finish First: A Review of &#8220;The Evolution of Cooperation&#8221;<\/a> &#8211; Nicholas R. Miller<\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>The prisoner\u2019s dilemma is a famous problem in game theory, where it is always rational to choose betrayal despite the fact that mutual co-operation is better for both players. 
Does co-operation ever make sense in problems like this?<\/p>\n","protected":false},"author":43,"featured_media":429,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"slim_seo":{"title":"The Prisoner's Dilemma on Groundhog Day - Connie Trojan","description":"The prisoner\u2019s dilemma is a famous problem in game theory, where it is always rational to choose betrayal despite the fact that mutual co-operation is better fo"},"footnotes":""},"categories":[1],"tags":[],"class_list":["post-427","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/posts\/427","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/users\/43"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/comments?post=427"}],"version-history":[{"count":17,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/posts\/427\/revisions"}],"predecessor-version":[{"id":480,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/posts\/427\/revisions\/480"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/media\/429"}],"wp:attachment":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/media?parent=427"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lancaster.ac.u
k\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/categories?post=427"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/connie-trojan\/wp-json\/wp\/v2\/tags?post=427"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}