We ran the Harvard Program on Negotiation case whose dull title (“Oil Pricing Exercise”) belies the fireworks that tend to erupt in a Prisoner’s Dilemma simulation. This was my first time using this version, an extended take on my preferred one-hour version, with the benefit of drawing out the negotiations for up to three hours, although the students didn’t take it as seriously as we had hoped.
The case is historically exemplified by the Cuban Missile Crisis (another case I run midway through the semester). A few key issues that always emerge in discussions and case debriefs include:
- trust, and the dissolution of it as the exercise progresses
- conflict styles and strategies
- dealing with escalation (e.g., self-fulfilling prophecy, entrapment)
- “defecting” (e.g., backstabbing–with its ethical and tactical implications)
The case debriefing introduced me to Robert Axelrod at the Ford School (University of Michigan), who ran computer simulations of the dilemma, something I hadn’t heard about previously.
In 1980, Robert Axelrod, professor of political science at the University of Michigan, held a tournament of various strategies for the prisoner’s dilemma. He invited a number of well-known game theorists to submit strategies to be run by computers. In the tournament, programs played games against each other and themselves repeatedly. Each strategy specified whether to cooperate or defect based on the previous moves of both the strategy and its opponent.
The winning entry was “Tit for Tat” (submitted by Anatol Rapoport), the strategy recommended in the Harvard case debriefing notes: cooperate on the first move, then mirror the opponent’s previous move for the remainder of the game. Much was made of Axelrod’s computer simulation, even though his conclusions remain the source of ongoing discussion.
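To make the mechanics concrete, here is a minimal sketch of an iterated Prisoner’s Dilemma match in the spirit of Axelrod’s tournament. The payoff values (3, 0, 5, 1) are the standard textbook ones, and the `always_defect` opponent is an illustrative stand-in, not one of Axelrod’s actual entrants.

```python
# Payoffs to (row player, column player) for each pair of moves:
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward)
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment)
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """An illustrative hostile opponent: defect every round."""
    return "D"

def play_match(strat_a, strat_b, rounds=10):
    """Play an iterated game; return total scores for each side."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Tit for Tat loses only the first round to a defector, then punishes:
print(play_match(tit_for_tat, always_defect))  # (9, 14)
# Two Tit for Tat players cooperate throughout:
print(play_match(tit_for_tat, tit_for_tat))    # (30, 30)
```

The match results show why the strategy did so well in a round-robin: it can be exploited only once per opponent, yet it earns the full cooperative payoff whenever it meets another cooperator.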
The strategy has its critics. Ken Binmore, author of Playing Fair: Game Theory and the Social Contract, takes issue with any strategy that discounts the human proclivity for evil:
> In brief, the simulation data on which Axelrod supposedly bases his conclusions about the evolution of norms is woefully inadequate, even if one thought that his Norms Game were a good representation of the Game of Life in which real norms actually evolve. One simply cannot get by without learning the underlying theory. Without any knowledge of the theory, one has no way of assessing the reliability of a simulation and hence no idea of how much confidence to repose in the conclusions that it suggests. It does not follow that the conclusions on norms and other issues which Axelrod offers in his Complexity of Cooperation are without value. He is, after all a clever man who knows the literature of his own subject very well. But I do not think one can escape the conclusion that the evidence from computer simulations that he offers in support of his ideas has only rhetorical value. His methodology may table some new conjectures that are worth exploring. But such conjectures can only be evaluated in a scientific manner by running properly controlled robustness tests that have been designed using a knowledge of the underlying theory.
Even so, Axelrod’s strategy has been useful and has attracted attention from evolutionary biologists (Joshua Plotkin) and the team at RadioLab, who wondered about altruism and global strategy. (Listen to the entire story, below, for an amusing retelling.) The takeaway? An Old Testament, “eye for an eye” mentality may have a biological, evolutionary (and mathematical) basis, which offers one lens for understanding human experience.
The Khan Academy has its own Prisoner’s Dilemma lecture, if you prefer an old-school, MOOC-style refresher.