7: Tit for Tat
The notion of reciprocation, discussed earlier in terms of trust, applies to negative behavior as well. In the game of "tit for tat," players typically begin cooperatively, but when one player attempts to cheat, the other retaliates in kind. In some instances, the players then continue to play cooperatively (for fear of retaliation if they cheat); in others, the game degenerates into a vicious cycle of escalation.
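(EN: The strategy is simple enough to express in a few lines of code. Below is a minimal sketch of an iterated prisoner's dilemma pitting tit for tat against an opponent who cheats once; the payoff values are the standard ones from the game-theory literature, but the opponent's behavior and the round count are my own illustrative assumptions.)

```python
# Minimal iterated prisoner's dilemma: tit for tat vs. an occasional cheat.
# "C" = cooperate, "D" = defect. Payoffs are the standard PD values.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate on the first round; thereafter mirror the opponent's last move."""
    return "C" if not their_hist else their_hist[-1]

def occasional_cheat(my_hist, their_hist):
    """Cooperates, but tries to cheat on round 4 (an arbitrary assumption)."""
    return "D" if len(my_hist) == 3 else "C"

def play(strat_a, strat_b, rounds=8):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b), score_a, score_b

# Tit for tat retaliates exactly once, then returns to cooperation.
print(play(tit_for_tat, occasional_cheat))  # ('CCCCDCCC', 'CCCDCCCC', 23, 23)
```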
The negative effect is linked to the notion of "punishing" the other party, an example of which was given earlier. A second example involves a nanny and a child: the nanny disciplines the child in hopes of encouraging him to do better, but the child continues or escalates the behavior to "punish" the nanny.
The notion of reciprocation - or more aptly, avoiding reciprocation - is common in numerous religions. The Christian notion of the "golden rule" and forgiving those who have harmed you is echoed in Islam, Confucianism, Hinduism, and other world religions. Even secular philosophy (Kant's "categorical imperative") espouses benevolence toward others. (EN: To be fair, the biblical standard of "an eye for an eye" has its equivalents as well - hence the argument that religious morality has historically done more harm than good.)
The author mentions an example from the animal kingdom: the cowbird will lay an egg in the nest of another species of bird. If the other bird incubates both eggs, all is well; but if it rejects the cowbird's egg, the cowbird will return to the nest and destroy the other bird's eggs. Likewise, an experiment with rats demonstrates the same notion of reciprocity: placed in adjacent cages, the rats were provided with levers that would release food into their neighbor's cage (but not their own). The test subjects were more inclined to feed their neighbors if their neighbors fed them as well. The same food-sharing behavior has been observed in the wild, among chimps and bats, who feed others in their group.
The willingness to trust is closely related to the evaluation of risk and reward: when undertaking an action that benefits another, a person is taking a calculated risk that others will act to their benefit as well - either directly (the person I helped will help me) or indirectly (I will be helped, but not necessarily by the same individual). In the cell phone experiment of the previous chapter, people who returned phones commonly remarked that they hoped someone else would do as much for them if they ever lost their own phone.
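(EN: That calculus can be made concrete with a back-of-the-envelope expected-value calculation. The numbers below are purely illustrative assumptions on my part, not figures from the text.)

```python
def expected_value_of_helping(p_return, future_benefit, cost):
    """Expected payoff of a good deed, assuming it is reciprocated -
    directly or indirectly - with probability p_return."""
    return p_return * future_benefit - cost

# Returning a lost phone: a small certain cost, set against a modest
# chance of a large benefit (someone returning *your* phone one day).
print(expected_value_of_helping(p_return=0.3, future_benefit=10.0, cost=1.0))  # 2.0
```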
(EN: It's also worth noting that games of ethics that revolve around "good deeds" often escalate the level of risk: the greater the chance of suffering harm, or the greater the degree of harm that would be suffered, the less likely the person is to claim a willingness to help. My sense is that this principle would bear out in practice, even though such games are hypothetical and unreliable.)
In practice, the cycle of retaliation and counter-retaliation can lead to serious consequences. Often it results in escalating violence between two parties, be they neighbors, gangs, or nations - and the outcome is often that nobody "wins," i.e., ends up better off than before the cycle began. As such, the goal should be to discover ways to prevent such cycles from starting, or to stop them once they have.
Breaking the Cycle
The most obvious way to stop the cycle of retaliation is for one side simply to stop. It may be sufficient simply to forgo "revenge" for the last action, or some act of conciliation may be necessary.
However, the inherent problem with this is that failure to retaliate may be seen as a sign of weakness. This is often what happens when a person ends up being taken advantage of: they silently "forgave" the other party for doing something once, for the sake of keeping the peace, and the other party took their silence as consent to continue doing it.
On a more widespread basis, failure to retaliate decreases the likelihood that individuals will act cooperatively at all. In this way, it is very similar to a law that is "on the books" but never enforced - people pay it no heed at all.
The solution the author suggests, by way of a rather lengthy example, is communication. That is, the party that wishes to break the cycle must refrain from retaliation, but must also make sure the other person is aware of this restraint - and aware that if he fails to desist, the retaliation cycle will resume. This provides the opportunity for the other side to choose to cooperate - and while it risks giving them a "free hit," it most often succeeds in breaking the cycle.
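(EN: The dynamic is easy to demonstrate in code. In the sketch below - my own construction, with the "noise" defection in round 3 and the act of restraint in round 7 as arbitrary assumptions - two tit-for-tat players fall into an endless alternating cycle after a single accidental defection, and one unilateral refusal to retaliate is enough to restore cooperation.)

```python
def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]

def run(forgive_at=None, rounds=12):
    hist_a, hist_b = [], []
    for r in range(rounds):
        a = tit_for_tat(hist_a, hist_b)
        b = tit_for_tat(hist_b, hist_a)
        if r == 2:
            a = "D"   # a single accidental defection by A...
        if r == forgive_at:
            a = "C"   # ...and, later, a single act of restraint
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

print(run())              # ('CCDCDCDCDCDC', 'CCCDCDCDCDCD') - endless cycle
print(run(forgive_at=6))  # ('CCDCDCCCCCCC', 'CCCDCDCCCCCC') - cycle broken
```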
Other Strategies for Cooperation
One strategy for breaking the cycle without complete surrender is called "Pavlov" - it counterattacks once, but then attempts to cooperate the following round. This generally results in cooperation if both opponents use the strategy, but it does not defuse an opponent who will retaliate every time he is attacked.
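(EN: Pavlov is better known in the game-theory literature as "win-stay, lose-shift": repeat your last move if the opponent cooperated, switch if he defected. A minimal sketch, with the noise round as my own illustrative assumption:)

```python
def pavlov(my_hist, their_hist):
    """Win-stay, lose-shift: keep the last move after a good outcome
    (opponent cooperated); switch after a bad one (opponent defected)."""
    if not my_hist:
        return "C"
    if their_hist[-1] == "C":
        return my_hist[-1]                     # win: stay
    return "D" if my_hist[-1] == "C" else "C"  # lose: shift

def run(rounds=8):
    hist_a, hist_b = [], []
    for r in range(rounds):
        a = pavlov(hist_a, hist_b)
        b = pavlov(hist_b, hist_a)
        if r == 2:
            a = "D"   # accidental defection
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

# Two Pavlov players re-establish cooperation two rounds after the slip.
print(run())  # ('CCDDCCCC', 'CCCDCCCC')
```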
Another strategy is called the "grim trigger" - making it clear to the other side that if they fail to cooperate, even once, you will never cooperate with them in the future. This is the equivalent of threatening divorce over a single affair, or threatening a nuclear counterstrike in response to even a minor attack. It seems stern, but it is generally effective in convincing the other side not to test the waters.
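(EN: The grim trigger is the simplest of these strategies to express - a minimal sketch, using the same history conventions as above:)

```python
def grim_trigger(my_hist, their_hist):
    """Cooperate until the opponent defects even once; then defect forever."""
    return "D" if "D" in their_hist else "C"

# Once triggered, there is no path back to cooperation:
print(grim_trigger([], ["C", "C", "D", "C", "C"]))  # 'D'
```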
Another strategy is "generous TFT" - it permits one round of attach without retaliation, but retaliates after two successive rounds of attack. This generally results in cooperation if the opposing force uses a strategy of retaliating if attacked in the previous round. This largely conforms to the prescriptions of relationship therapists, who insist that partners must be firm, but prepared to forgive.
Incentive to Cooperate
(EN: There are a couple of sections that explore the notion of cooperation, asking why people bother - or why they should bother - to help others rather than constantly acting in self-interest. The answers presented are not entirely satisfactory: in effect, it's just our nature to be social, and cooperation achieves better results than each person could achieve alone. I don't disagree with these conclusions, but the evidence presented by the author adds nothing to the argument that would change the mind of someone who disagreed on an axiomatic level.)
Evolution of Cooperation in the Real World
Present trends in global societies seem to be leading us in a direction where cooperation will be more difficult to achieve. A mobile society means that we lack established connections to a local community; people of different cultures must interact without a common platform of ritual; people are becoming more anonymous, making it more difficult to "know" a person well enough to trust them, and at the same time easier for a cheat to remain anonymous; and so on.
However, this does not mean that the motivation to cooperate is being eliminated. The author provides a number of reasons (EN: the proofs of which are unfathomable to me) that the benefit to be gained by cooperating still outweighs the risk of betrayal, even if this means we are interacting with strangers with whom we have no past history or future expectations, and about whom we have a higher level of uncertainty.