In a previous posting, I ruminated on the New York ban on trans fats in restaurants and coupled it with an examination of enforced altruism in social insects. The lesson there was quite clear: Altruism confers a selective advantage on the community, and an individual who disobeys the rules of altruistic behavior gets punished by the members of society.
Now, we know that this is not completely true in human societies. Witness the opposition to gun control in the face of incontrovertible evidence that rates of suicide, accidental shootings, and criminal activity involving guns would go down significantly if gun use were regulated. Polls show that 60-70% of the public favors some form of control. That leaves 30-40% who are not prepared to act altruistically; obviously, there are additional factors at play here.
A paper published in Nature (vol. 444, pp.718-723, 2006) by Bettina Rockenbach of the Dept. of Economics at the University of Erfurt, Germany and Manfred Milinski of the Dept. of Evolutionary Ecology at the Max-Planck Institute, Plön, Germany sheds some surprising light on the question.
The authors begin by stating that “Human cooperation is a paradox”. They go on to describe the paradox:
“We obviously overuse public resources, for example, by over-fishing oceans, driving pension and health insurance systems to bankruptcy, and risking the collapse of global climate through unlimited use of fossil energy.”
How do you unravel the complexity of human behavior?
VERY slowly, if ever.
It may be a hopelessly complex proposition, but that does not prevent scientists from taking a stab at it. In this study, they used the “public goods game”, which is widely used to study social dilemmas. For example, four players are given an endowment that they are free to keep for themselves or to contribute to a common pool. “The content of the pool is then doubled and redistributed among the players irrespective of their contribution. If all contribute $1, everybody receives $2 (net gain of $1). If all players but one contribute, the defector has a net gain of $1.50 and the contributors $0.50 each. Individual self-interest results in defection and is at odds with group interest.” The investigators can introduce more complexity to the game. “For instance, by allowing players to punish others costly (that is, to reduce the others’ income at their own cost, for example, by investing $1 to reduce another player’s income by $3), defectors are heavily punished and typically increase their contributions in future rounds. Such altruistic punishment leads to a large increase in cooperation.”
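The payoff arithmetic in the quoted example is easy to verify. Here is a minimal sketch in Python (the function names and defaults are my own rendering, not the paper’s):

```python
def public_goods_payoff(contributions, endowment=1.0, multiplier=2.0):
    """Each player's total payoff: what they kept of their endowment,
    plus an equal share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

def costly_punishment(payoffs, punisher, target, cost=1.0, penalty=3.0):
    """The punisher pays `cost` to reduce the target's payoff by `penalty`."""
    payoffs = list(payoffs)
    payoffs[punisher] -= cost
    payoffs[target] -= penalty
    return payoffs

# All four contribute $1: the $4 pool doubles to $8, so everyone
# ends with $2 (net gain of $1).
print(public_goods_payoff([1, 1, 1, 1]))   # [2.0, 2.0, 2.0, 2.0]

# One player defects: the defector ends with $2.50 (net gain $1.50),
# the contributors with $1.50 each (net gain $0.50).
print(public_goods_payoff([1, 1, 1, 0]))   # [1.5, 1.5, 1.5, 2.5]
```

Note that a single $1 act of punishment drops the defector from $2.50 to below the contributors’ payoff, which is why, in repeated rounds, defectors “typically increase their contributions”.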
There is an alternative to the “costly punishment” model: “reputation formation”. The two strategies differ in that direct punishment incurs costs for both the punisher and the punished, while reputation mechanisms save costs for the punisher.
Now, imagine a society where reputation building is effective. We don’t have to go very far. In Japanese and Chinese cultures, loss of face is an unforgivable sin; in tribal societies, disobedience to the elders is shameful behavior; and in the financial world trust, believe it or not, is still considered indispensable to a business transaction (notwithstanding the rash of “perp walks” we have been treated to recently).
From a purely economic point of view, in environments where reputation building is important and effective, costly punishment should disappear altogether. After all, reputation is free to all and punishment is costly to the punished and the punisher. And yet…
In the “public goods” game, the subjects preferred a combination that included costly punishment, with free-riders punished more severely under that combination. Furthermore, the more effective the reputation building, the more severe the punishment. Another finding: acts of punishment were drastically reduced by increasing the value of reputation. Stated differently, the more successful the reputation regime, the less frequent the punishment and the more severe the penalty. It is as if the altruists, who play by the rules, have more to lose the more successful they are, and are willing to mete out more severe punishment to protect their gains.
Does it work in the real world?
You bet! I just watched an interview on CNBC with the CEO of Halliburton, of Dick Cheney reputation. To my amazement, the interviewer, Jim Cramer, no bleeding heart liberal and no slouch when it comes to interpreting the markets, suggested that many hardened, profit-maximizing companies would not award drilling contracts to Halliburton because of the tainted reputation of its ex-CEO. Furthermore, he pointed out the reluctance among investors to buy Halliburton stock, as attested by its lagging performance compared to its peer group. Hard to believe in this cynical age.
In a more positive vein, in old European Jewish communities it was the Talmudic scholar in the Yeshiva, not the wealthiest man, who enjoyed the highest reputation: he was the one invited to the wealthiest home to celebrate the Sabbath and the Holidays, and he was considered the most desirable catch for the wealthiest girls. Did this “strategy” confer a genetic and selective advantage? Was the punishment for not being learned severe enough to induce scholarly pursuit? The answer is a resounding yes! No need for academic studies here; just read the Yiddish literature of the period.
What is the biological/evolutionary perspective?
Here we get into the realm of speculation, which is the most fun.
My first thought is: If punishment is so costly, and reputation-building is so cost-effective, why did punishment survive the evolutionary selection process? It must have been useful. I think that punishment made “softer” strategies feasible. Let’s look at the bees (my favorite creatures this week). The reason workers continue working and nurturing the eggs of their sister-queen is that they are programmed to be altruistic. But the occasional “defectors” who decide to pursue their own egg laying are punished by the community (they get their eggs eaten); they very quickly fall in line. But consider the nature of the punishment: it is not injuring or eliminating the offender, since it would be costly to the community to lose a worker. It is rather consuming the eggs, thus “reimbursing” the community for the lost energy and foodstuffs involved in laying the “illegal” eggs, and making further attempts by the rebellious bee rather pointless.
Back to the original question
Is public health regulation (and its accompanying penalties) social engineering (as the Cato Institute and the Wall Street Journal editorial page would have us believe), or is it enforced altruism?
Admittedly, the line separating the two is very thin. But if you examine the history of Social Engineering, it is always associated with a despot and an ideology purporting to create “a better person”, be it a purer master race (Hitler), a “new Soviet man” (Lenin and Stalin), or “a proletariat in permanent revolution” (Mao and Pol Pot). Invariably, these ideologies degenerated into the most gruesome massacres in the name of the “common good”.
This characteristic of unmitigated cruelty is simply not present in advocacy for a better lifestyle and health. It is rather more akin to the evolutionary solution for building societies on a degree of altruism: reward the altruist, but, just in case, make sure there is some punishment for breaking the rules.
Are we pre-determined to be altruistic?
Last thought. Does the reward of reputation have a biological basis? I think yes. When people donating money or time to various community causes are queried as to their motivation, they give many reasons but always mention the great satisfaction as their greatest reward; in fact, a common refrain is “I get more out of it than…(fill in the blanks).” Is it conceivable that our reward system, hard-wired in our brain, perceives altruism with the same pleasure-inducing sensation (or, shall we call it, satisfaction) as, say, having a good glass of wine or great sex? I once asked a crusty Federal judge why he works in his spare time with ex-cons to find them jobs. His answer was astonishing: It is so satisfying it is almost orgasmic.