If, like me, you are interested in understanding how human cooperation works, you quickly find yourself confronted with two extremes. On the one hand, humans surpass every other species in both the quantity and quality of their cooperation. No other animal is able to cooperate globally, anonymously, in large groups, and with long delays between giving and receiving.
On the other hand, many collaborations fail even though the groups involved would only have needed to change small things to succeed. As mentioned in another article, with communally managed resources such as forests, fisheries, irrigation systems and the like, this leads to the so-called tragedy of the commons. A whole village may refuse to cooperate because its new leader uses his plans to enrich himself, and the community cannot find a way out of this dilemma. A farmer's jealousy of his neighbour leads him to consume more than he actually needs, just to deprive the other of the resources he depends on and to gain an advantage over him; it is easy to lose sight of the fact that in the end he, too, is a loser. And these examples could, of course, be continued.
Indeed, successful, efficient and sustainable management of natural resources is the exception rather than the rule in many projects around the world. The question is obvious: what is the ultimate reason for this? To find out, researchers try to replicate the important framework conditions in laboratory experiments: under what conditions are people more likely to cooperate, and under what conditions do egoists get the upper hand? These experiments range from computer simulations and anonymous games to manipulated situations in the real world.
There have been many studies of this, the most famous being the prisoner's dilemma and the so-called public goods games. In both games, each player faces a dilemma because selfishness is individually the most successful strategy. In the prisoner's dilemma, a selfish strategy, called defection, usually leads to the other player also defecting, i.e. striking back. In other words, very few people are so trusting and good-natured that they are willing to get fleeced round after round by cooperating even though their partner does not. As a result, both players get caught in a downward spiral of mistrust, selfishness, and the worst possible outcome, from which they can hardly find a way out on their own.
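Why defection wins in a single round can be seen directly in the payoff structure. The following sketch uses illustrative payoff numbers (they are assumptions, chosen only to satisfy the standard ordering temptation > reward > punishment > sucker; the article itself gives no numbers):

```python
# Prisoner's dilemma with illustrative payoffs (T=5 > R=3 > P=1 > S=0).
# Each entry maps (my move, opponent's move) to (my payoff, opponent's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual reward R
    ("cooperate", "defect"):    (0, 5),  # sucker S vs temptation T
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual punishment P
}

def best_response(opponent_move):
    """Return the move that maximises my payoff against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Defection is a dominant strategy: it pays best whatever the other does.
print(best_response("cooperate"))  # -> defect
print(best_response("defect"))     # -> defect
```

Because defecting is the best response to both of the opponent's moves, two purely self-interested players end up at mutual defection, the worst collective outcome described above.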
In public goods games, where a group of players contribute to a common pot, the situation is similar. If everyone behaves selfishly and pays nothing (comparable to fare dodgers or a country in which no one would sort the rubbish), no public good is produced or the public good is destroyed, and thus the worst possible outcome is achieved. On the other hand, if everyone puts their full amount into the pot, then the social optimum is reached and the efficiency gains from cooperation due to altruistic behaviour are maximised – everyone benefits!
Hence the dilemma – if no one contributes, everyone does badly; but if I don’t contribute and everyone else does, I do best.
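The same tension can be made concrete with numbers. This sketch uses an assumed standard public goods setup (four players, an endowment of 20, and a multiplication factor of 1.6, all illustrative; the key property is that the factor is greater than 1 but smaller than the group size):

```python
# Public goods game sketch: each player holds an endowment; the common pot
# is multiplied by a factor r (1 < r < group size) and shared equally.
def payoff(my_contribution, others_contributions, endowment=20, r=1.6):
    pot = my_contribution + sum(others_contributions)
    share = r * pot / (1 + len(others_contributions))
    return endowment - my_contribution + share

# Everyone contributes fully: the social optimum.
all_in = payoff(20, [20, 20, 20])   # 20 - 20 + 1.6 * 80 / 4 = 32
# I free-ride while the rest contribute: I do even better.
free_ride = payoff(0, [20, 20, 20])  # 20 - 0 + 1.6 * 60 / 4 = 44
# Nobody contributes: everyone is stuck at the bare endowment.
all_out = payoff(0, [0, 0, 0])       # 20

print(all_in, free_ride, all_out)  # 32.0 44.0 20.0
```

Full cooperation (32 each) beats universal selfishness (20 each), yet the individual free-rider pockets 44 – exactly the dilemma stated above.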
So what do you find when you play these games in the lab? That depends a lot on the framework conditions mentioned above. But two things can be said with certainty. First, by default the majority of people are altruistic rather than selfish (warning: a bold generalisation!). Second, this is not a stable state.
Is that good news or bad news? Personally, I would classify it as positive in principle, because this basic friendliness, willingness to cooperate or altruism can be built on with appropriate mechanisms. But these mechanisms have to be right, otherwise cooperation will break down. A typical problem is that there are always some egoists. When altruistic players observe the egoists' contributions and strategy, they usually adjust their own contributions downwards – and this leads to the downward spiral mentioned above.
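This unravelling can be sketched with a toy simulation. The dynamics here are an assumption for illustration (conditional cooperators who match last round's average, plus a few egoists who never contribute), not a model from the article, but they reproduce the decay typically reported in repeated public goods games:

```python
import statistics

# Assumed dynamics: conditional cooperators match the previous round's
# average contribution; egoists always contribute zero.
def simulate(rounds=10, cooperators=8, egoists=2, endowment=20):
    contributions = [endowment] * cooperators + [0] * egoists
    history = []
    for _ in range(rounds):
        avg = statistics.mean(contributions)
        history.append(avg)
        # Cooperators adjust down to the observed average; egoists stay at zero.
        contributions = [avg] * cooperators + [0] * egoists
    return history

history = simulate()
print([round(c, 1) for c in history])  # the average contribution falls every round
```

With two egoists among ten players, the group average shrinks by a fifth each round: the handful of free-riders is enough to drag everyone else down.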