by Citizen Warrior
In the 1970s the political scientist Robert Axelrod created a computer “world” using the famous Prisoner’s Dilemma as a game computer programs could play against each other. He wanted to find out which program would fare best.
The Prisoner’s Dilemma is a hypothetical situation used to test whether someone will cooperate or compete, and how well the strategies work in the long run.
The game is played by two people. If one cooperates and the other competes, the one who cooperated loses and the competitive one (the selfish one) wins. If they both compete, they both lose, but not as badly. If they both cooperate, they both win. That’s how the game is set up.
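The setup above can be sketched as a simple payoff table. The point values below are illustrative assumptions on my part (the classic ordering where exploiting beats mutual cooperation, which beats mutual competition, which beats being exploited); the article itself gives no numbers.

```python
COOPERATE, DEFECT = "C", "D"

# payoffs[(my_move, their_move)] -> my score (assumed values)
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # both cooperate: both win
    (COOPERATE, DEFECT):    0,  # I cooperate, they take advantage: I lose badly
    (DEFECT,    COOPERATE): 5,  # I take advantage: I win
    (DEFECT,    DEFECT):    1,  # both compete: both lose, but not as badly
}

def score(move_a, move_b):
    """Return (points for A, points for B) for a single round."""
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]
```

Note that each player does better by defecting no matter what the other does, yet mutual cooperation beats mutual defection; that tension is the dilemma.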
If you were one of the prisoners, what would you do? That’s the dilemma. How much can you count on the cooperative nature of the other person?
The game is often played repeatedly with the same two people, each of them choosing to cooperate or take advantage of the other through successive rounds of the game.
The Prisoner’s Dilemma game is designed to parallel real life. If two people in real life cooperate with each other, it very often works to their mutual advantage. But if one person cooperates and the other takes advantage, it often works out very well for the selfish one and very poorly for the cooperative one.
On the other hand, if you go around preempting people — trying to take advantage of them before they take advantage of you — you will miss out on the advantages of cooperation, people will resent you, and you might get people working against you.
What is the best long-term strategy? This is the dilemma we are faced with every day, personally as well as culturally.
Robert Axelrod, the man who created the computer world, invited computer programmers to create a program to play the Prisoner’s Dilemma with other programs. The question was: which program would fare best?
In a game that resembles the real dilemma we all face, what strategy is the most effective?
The program that proved the best was named TIT FOR TAT. It was designed by Anatol Rapoport and it was one of the simplest programs submitted. For the first interaction, it would cooperate. After that, it would repay in kind whatever the other did. That was the whole strategy.
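The whole strategy is short enough to write out. Here is a minimal sketch of TIT FOR TAT as the article describes it, plus a tiny loop to play it against another strategy; the function names and the opponent strategy are my own illustrations, not part of the original tournament code.

```python
def tit_for_tat(opponent_history):
    """Cooperate first; after that, repay in kind whatever the other did."""
    if not opponent_history:
        return "C"               # first interaction: cooperate
    return opponent_history[-1]  # then mirror the opponent's last move

def always_defect(opponent_history):
    """An example opponent that always takes advantage."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play successive rounds, returning each side's move history."""
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b
```

Against a steady defector, TIT FOR TAT loses only the first round and then cuts its losses; against a cooperator, it cooperates forever.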
If the other cooperated, TIT FOR TAT benefited. So did the other. If the other took advantage, TIT FOR TAT cut its losses immediately.
As the game went on, TIT FOR TAT gained more (and lost less) than any other program. In The Moral Animal, Robert Wright wrote, “More than the steadily mean, more than the steadily nice, and more than various ‘clever’ programs whose elaborate rules made them hard for other programs to read, the straightforwardly conditional TIT FOR TAT was, in the long run, self-serving.”
And it’s the most fair to everyone involved.
I suggest we in the West use the same program when dealing with other countries and other cultures. We should begin with tolerance and cooperation, and then be as tolerant and cooperative as the other is from that point on.