What does ‘rational’ mean from a gene’s perspective?

It is often claimed that “humans are irrational”. While true in many ways, the statement requires some clarification. A similar issue arises when people argue that politicians are incompetent idiots. You cannot assess an agent’s ability to achieve goals without knowing what its goals are. It may not be a politician’s ambition to implement fair, happiness-generating policies. Their core aim is to be elected and not get fired mid-term. This need not be their explicit, conscious goal; nevertheless, the population of politicians is selected according to success on this criterion. One can be elected on the strength of a history of successful policies. Another way to get chosen is to lie about your past (inflate), about your opponents’ past (derogate) and about the future (empty promises).

Thus, I refrain from calling politicians idiots even when their policies are short-sighted or inefficient. They choose policies that increase their re-electability, and on that measure they do very well. Viewed through this lens, neither the politicians nor their populist policies seem glaringly irrational.

Similarly, we judge humans as irrational when they subconsciously distort statistics or engage in motivated reasoning via numerous biases. Or when we want to stay in shape but decide to eat a pack of doughnuts. From a number of perspectives this is the right way to assess human rationality. However, to understand human morality we have to appreciate our behaviour from the genes’ point of view.

The genes did not build us so that we could achieve our own goals with maximal efficiency. In fact, as we get closer to completing our dream, like building a successful company or finding love, the goalposts shift. We are happy for a moment, but our genes do not want us to rest on our laurels. A new goal is generated and we continue the chase. More wealth, status, partners.

We are built to achieve the genes’ goal: successful replication. The feeling of happiness is only instrumental to creating more offspring; it is not of terminal value in itself. That is, evolution does not build agents so that they can satisfy themselves. (Genes that build wire-heading agents are weeded out.) And so, we should not expect every action of an agent to be an attempt to make the agent happier. Genes equip us with mechanisms for self-delusion, inaccurate reasoning, and changing, context-sensitive desires. Our moods and the resulting behaviours are slaves to chemistry, be it hormones or drugs. Presumably, most of these mechanisms exist because humans without them were not as successful at reproducing.

We could imagine an evolutionary alternative to us: a rational being, free from biases, born with a fixed, life-long goal of generating the maximum number of copies. For whatever reason, history took a different path. It is possible that this option was not available, or maybe it was not the best way to create a reliable replicator.

Apparently, genes prefer to give us dynamic emotions that motivate us as if we were trying to maximise replication, rather than giving us a single explicit goal of multiplying. Parents think that they protect their children because they love them (and they are right). Genes ‘want’ their organism’s offspring protected because it probably contains copies of themselves. Thus, genes build organisms that love their children. It is not necessary for the genes to explain the theory of replicators to parents. This does not diminish love, but the causation arrow is clear. A non-selfish feeling of love evolved from the selfish ‘desires’ of genes.

From a selfish individual’s perspective some morality-driven behaviours, like risking one’s life for a stranger or committing honour suicide, seem irrational. Could they be rational from the genes’ point of view?

In order not to be too idealistic, and to appreciate the weaknesses of this approach, I need to mention that humans today can seem extremely irrational from a gene’s perspective, the most obvious example being birth control. We need to keep this in mind and half-heartedly justify it with the following points. We are adaptation-executers, not fitness maximisers. When giving a reason for a behaviour, think about the environment in which the behaviour was established, not about the modern world. Genes’ actions are rational only when viewed across many generations; locally, actions are just random mutations.

To conclude, genes make us irrational whenever that helps to achieve their goals in the long run.

Genes as decision-making agents

Genes have no goals in the sense that humans have. They are not rational consequentialists: they do not predict the outcomes of actions and they do not select actions according to their preferences over outcomes. Still, random mutations and natural selection shape the population of genes in such a way that those most successful at replicating are most prevalent. Evolution promotes the genes that enable their organisms to live and replicate competitively. Looking back across millennia of evolution, we can choose to view sets of genes changing their composition as if they had a goal: maximum replication rate.
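To make the “as if” framing concrete, here is a minimal sketch (in Python, with hypothetical replication rates) of a population whose variants replicate at different rates. No variant plans anything, yet after enough generations the composition looks as if the population were being optimised for replication.

```python
import random

# Hypothetical replication rates: expected copies left per generation by each variant.
rates = {"A": 1.05, "B": 1.00, "C": 0.95}

# Start with an equal mix of the three variants.
population = ["A"] * 100 + ["B"] * 100 + ["C"] * 100

for generation in range(300):
    offspring = []
    for gene in population:
        # Each copy leaves a noisy number of descendants; no foresight is involved.
        n = max(0, round(rates[gene] + random.gauss(0, 0.5)))
        offspring.extend([gene] * n)
    # Keep the population size bounded by sampling survivors at random.
    population = random.sample(offspring, min(300, len(offspring)))

print({variant: population.count(variant) for variant in rates})
# Variant "A" typically dominates, as if the population "wanted" to maximise replication.
```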

The idea of genes as actors that collectively decide how an organism (or processes inside an organism) should behave to maximise their replication was popularised by Dawkins in The Selfish Gene (though he wished he had called the book The Cooperative Gene). The concept lent itself to a new perspective on evolution. For example, it allows one to ask: why would the genes choose to give some animals wings? This is not that different from the more accurate question: what were the selection pressures that allowed for more successful replication of animals with wings? Still, reasoning from the agent-y perspective is sometimes easier. Bear in mind that there are pitfalls one should stay away from. It is important to remember that genes do not plan ahead. Thus, an answer like “because flying enables one to find food quickly and evade predators” is not great. A fuller answer has to explain why genes would give their animal a set of small wings that do not yet enable it to fly (before the proper large wings are designed).

Not all the traits that genes cause are rational actions towards the goal of replication maximisation. In fact, most mutations are detrimental to an animal’s reproductive success. However, over large time scales the noisy, random actions are weeded out and we are left with clear signals: seemingly purposeful actions of genes. Since the feelings associated with morality are present in most humans, rather than in a small set of mutants, I expect that our genes have rational reasons to imbue us with those feelings.

Capturing real-life complexities

The farmers’ scenario included simplifications, some of which are not too important for our discussion, and some of which do matter, making our deeper answers incomplete. Let’s list the cosmetic ones first.

1) Utility cannot be measured in potatoes. We could try to come up with better measures of happiness. I think the model holds for reasonable choices of human preferences. An example of a preference that would break it is one that dictates that people care about the happiness of others as much as about their own. I am afraid that assuming this would be insufficiently cynical.

2) Humans are not completely selfish. But I think they are sufficiently selfish for the reasoning to be valid. It may feel monstrous to imagine killing Bob just to get his potatoes, but scenarios of this kind have played out too many times in human history. On that note, killing animals is pretty monstrous and most people are fine with it. (Though they may spend some energy justifying why it is okay to be monstrous towards animals.)

To sum up our model so far: we know that even completely selfish societies have reasons to establish moral codes and punish those who break the rules. The resulting morality is effective because following it is actually beneficial to the agents who negotiated the contract. The code and its enforcement modify the pay-offs for different actions. Stealing potatoes is no longer a high-reward action if you have to spend years in jail when caught.

Let’s move on to the simplifications in the scenario that make a big difference, that is, those whose presence in our model leads to inaccurate predictions about morality. Each point is a correction of an assumption made previously.

1) Moral codes cannot be enforced perfectly.

Some attacks on other players are difficult to detect. As a result, it is challenging to generate incentives that discourage those attacks. Think of a skilled thief or the problem of internet piracy. With the punishment removed, rational agents are less inclined to act morally.

This tells us that we probably will not be able to solve all of the Prisoner’s Dilemma-like coordination problems. If some crimes are difficult to detect, the law enforcer may not be able to generate a sufficient disincentive. An agent may calculate that breaking the law has positive expected reward because the likelihood of getting caught is low. Similarly, some criminals may simply be very skilful at avoiding detection. Thus, even though we have established rules that prevent most people from acting anti-socially, those particular players will keep breaking them because we cannot disincentivise them.
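As a toy illustration (all numbers hypothetical), the calculation such an agent performs is a simple expected-value comparison: when the gain is large and the probability of detection low, even a harsh punishment leaves the crime profitable in expectation.

```python
def expected_reward(gain, p_caught, punishment):
    """Expected utility of committing a crime, versus abstaining (utility 0)."""
    return (1 - p_caught) * gain - p_caught * punishment

# A hard-to-detect crime: even a harsh punishment leaves the expected reward positive.
print(expected_reward(gain=100, p_caught=0.01, punishment=1000))  # 89.0
# An easy-to-detect crime: the same punishment makes it clearly unprofitable.
print(expected_reward(gain=100, p_caught=0.50, punishment=1000))  # -450.0
```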

Should a rational agent adhere to such rules nonetheless? No. By definition, it should grab any extra utility whenever possible. Do humans adhere to such unenforceable rules? Some do and some don’t. Overall, I think people are much more lax about their moral codes when they think no one will catch them. Think about society’s leniency towards internet piracy compared to physical theft.

We often like to view criminal behaviour as a form of disease, something people resort to only when they are damaged. However, many crimes, for example robberies, are committed with a profit in mind and involve deliberation about risks. This points to the rationality of the perpetrators. Acting against moral codes may be the optimal action.

2) Real life is an iterated game.

Game-theoretic problems in real life are rarely one-shot. Typically, the actions a player takes affect his reputation and the dynamics of future games. This slightly decreases the value of having the rules enforced by centralised institutions. If a merchant does not pay his suppliers, it may not be necessary to threaten him with police force. If he knows that not paying can hurt his reputation to the point where nobody trades with him, he may feel compelled to be a fair player.
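A minimal sketch of why repetition changes the calculus (hypothetical pay-offs, not a full treatment): against a partner who stops trading after being cheated, a one-off defection buys a single large pay-off at the cost of every future round.

```python
# Pay-offs per round (hypothetical): a fair trade gives each side 3; cheating a fair
# partner gives the cheater 5; once cheated, the partner refuses to trade, leaving
# only an outside option worth 1 per round.
ROUNDS = 20

def merchant_total(cheat_at=None):
    """Total pay-off for a merchant who cheats at round `cheat_at` (None = never)."""
    total = 0
    for r in range(ROUNDS):
        if cheat_at is None or r < cheat_at:
            total += 3   # fair trade
        elif r == cheat_at:
            total += 5   # one-off gain from cheating
        else:
            total += 1   # partner no longer trades
    return total

print(merchant_total(None))  # 60: staying fair for all 20 rounds
print(merchant_total(0))     # 24: cheat immediately, then lose all future trade
```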

Reputation plays an important role in interactions between agents. People do not trust those who are considered morally corrupt. If someone lied to one person today, he will likely lie to you tomorrow. With this in mind, we can expect that people will try to inflate their reputations. If an agent acted immorally, he will hide this information from others, even from those who would not report him to the law enforcer. If someone is caught stealing at a supermarket, they will try to diminish the gravity of the crime: “This is the first time!” or “The corporation earns so much that it wouldn’t even notice!” At other times people may dishonestly advertise a willingness to help others when they expect no one to ask for it.

In a society where reputation is a resource that can be converted into allies and, thus, greater political power, we should expect rational agents to try to convince others to credit them with a high reputation. However, it is in the interest of the listeners not to get fooled. Usually, to maintain a high reputation the agent has to convince a large number of agents who freely exchange information and judge it. In a situation where listeners expect attempts at deception, societies may discover signalling. We will return to this concept later, when considering the evolutionary origins of morality.

We should expect that in societies where reputation is important, not all of the rules need to be explicitly stated in the code and enforced by a centralised institution. Potential damage to reputation may be a sufficient deterrent from immoral behaviour. For example, in the US it is perfectly legal to object to and advocate against the legalisation of gay marriage. However, the reputation of someone who publicly takes such a political position may be attacked by those who consider such an act a moral transgression.

3) The game is not symmetric.

In the real world, the competing players are not clones. Some are better at assassinations, others have many friends to call to arms, others still are not great at anything. Thus, the resulting game is asymmetric, and so we should not expect symmetric moral contracts. Each of the farmers decided to fund the police force for selfish reasons; if there is a better utility-maximising strategy, the players will try to implement it. If the society is non-uniform, gangs may form to dominate the weaker farmers. More generally, the moral contracts established by a society reflect the preferences of agents with more political power (those in possession of armies, resources and strong allies).

Citizens of an invaded country or slaves from far-away colonies may cry “Unfair!” as their oppressors mistreat them. Without the backing of political power, their voices go unheard. Somewhat puzzling is the fact that those in power typically present a moral excuse for their acts. It need not be very believable. Perhaps the invaders are the true heirs of the lands? Perhaps the slaves are not truly human but a different, lower species? Why do the oppressors make the effort to find excuses? Are they trying to fool the other players into believing that they are actually noble and deserve a high reputation?

These acts sound truly atrocious, and we tend to think that humanity is now far above them; that our societies would not abuse the weak only because they cannot fight back. Well, how do we treat animals? As equals? Clearly not. Still, justice is on our side. You see, animals are not as intelligent as we are and, thus, their pain does not matter. Or they are not conscious in the relevant way, because the ability to self-model is the only cosmically important kind of computation. I doubt we would be happy to grant validity to those excuses should an advanced alien civilisation show up on Earth tomorrow.

Again, why go to the trouble of explaining ourselves? Do we do it to trick the vegetarian 3% of the population? It does not seem to be working; they usually seem convinced that it is they who hold the moral high ground.

Let’s restate the correction applied to our model in this step. Moral contracts established by a society reflect the preferences of the agents with political power rather than those of the entire society. If that is most of the story, then we should expect morality to change as the power balance shifts. Yet over the course of history we observe something closer to continual moral progress than to an oscillation. One can argue that this is because power has spread over a higher proportion of society. But the direction of causality is not so obvious.

Gay rights have been on the rise recently. Does this imply that homosexuals have recently gained significant political power? Did they suddenly get richer or make lots of friends? Or rather, did many non-homosexuals start including gay rights in their preferences and truly fight for them? Pretending to care about others to boost one’s reputation may be a reasonable strategy for rational agents. Actually changing one’s preferences seems more far-fetched.

Our current model does not explain this behaviour too well.

4) Morality enforcers are selfish agents as well.

Whoever enforces the law is not free from selfish desires. And so we see authoritarian regimes which bend the laws to jail political opponents, or drunk-driving rich kids whose criminal problems can be straightened out by powerful parents bribing the authorities. In these settings acting selfishly may be misaligned with acting morally. It seems that our civilisation keeps improving on this problem by increasing institutional transparency and by never putting all of the enforcement power in a limited number of hands.

Incorporating this complication means recognising that enforcement institutions are made of agents, and are thus as selfish as any agent. Other agents may take advantage of their egoism and, for example, try to bribe them.

In dictatorships all of the political power is concentrated in the hands of a few agents who can easily coordinate to dominate the society. Modern democracies, on the other hand, are run by thousands of politicians with conflicting goals. Additionally, since those who govern are selected by the population, democracies have high transparency, because acting transparently can serve as a good signal of trustworthiness.

5) Humans are not perfectly rational.

While rationality dictates a specific (though not necessarily easy to predict) behaviour, there are many ways to be irrational. It is informative to consider our evolutionary history to understand what kind of decision-making algorithms are run by our brains. This is a bigger issue, so I’ll explain it in a separate post. For now, let’s assume humans are mostly rational, i.e. they understand the consequences of actions up to some complexity and use this knowledge to select actions which guide them towards their goals.
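For the purposes of this series, “mostly rational” can be read as something like the following sketch (purely illustrative; the numbers and action names are made up): the agent evaluates the consequences it can foresee, weighs them by probability, and picks the action with the highest expected utility.

```python
# Purely illustrative: a "mostly rational" agent considers only the consequences
# it can foresee and picks the action with the highest expected utility.
def choose_action(actions, foreseen_outcomes, utility):
    def expected_utility(action):
        # foreseen_outcomes(action) returns (probability, outcome) pairs
        return sum(p * utility(outcome) for p, outcome in foreseen_outcomes(action))
    return max(actions, key=expected_utility)

# Hypothetical choice: farm potatoes (a certain 10) or steal them (risky).
outcomes = {
    "farm":  [(1.0, 10)],
    "steal": [(0.7, 20), (0.3, -50)],  # big gain, but a 30% chance of punishment
}
best = choose_action(["farm", "steal"], lambda a: outcomes[a], lambda potatoes: potatoes)
print(best)  # "farm": stealing has expected utility 0.7*20 - 0.3*50 = -1
```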

***

We still have a couple of missing parts in our model. Nonetheless, let’s revisit the core question: “Why should a rational agent, that only cares about maximising his own utility, be moral?”

“In large part, he should behave morally because a society of rational agents sets up the world in such a way that acting immorally is punished. Thus it is in the agent’s self-interest to act morally. An agent who earns a high reputation can reap long-term rewards. However, a good reputation can be gained both by truly being charitable and by merely looking charitable. The second option is usually cheaper. Thus, the agent may loudly promote moral rules to which he is not himself committed.”

“Moreover, sometimes the society may not have enough power to align the agent’s moral and selfish behaviours. Maybe the agent can conceal his suffering-generating actions (e.g. a skilled thief), or he has an army which renders him untouchable (e.g. a dictator), or maybe the society does not care enough about those who suffer to confront the agent (e.g. PETRL). In such cases the rational agent should not act morally, because it is against his self-interest.”

Such an agent appears to be a perfect Machiavellian. He can be nice to others, but all of it is just a mask. The moment you take your eyes off him, he lies, manipulates, backstabs and does whatever can bring him closer to his goal, all with zero honest remorse. While this is similar to some fictional characters and brings to mind the dark triad traits, it does not feel very human. People usually do not engage in such completely selfish calculations. They seem to genuinely care about others, not just pretend. On the other hand, whom they care about can be extremely selective and self-serving.

However, do not let this overshadow how well the model explains contracts between different groups of people, now and in the past. I realise that, for example, utilitarianism was not devised to predict human behaviour. But you can question how well it approximates our moral intuitions if for years we had no regrets about enslaving, torturing and killing others. And wherever dictators rise to power, they remind us that mass cruelty is not as inhuman as one would like to think.

Morality of selfish agents

Contrast the previous responses with the answers we get when the problem is considered from an outside view.

Why do humans agree not to kill each other?

The relatively recent field of game theory helps to answer this question, but Hobbes understood the problem much earlier. Humans have conflicting goals. Alice likes having lots of food, so that her children do not starve. Normally she gets food from farming, but that takes a lot of time. Potentially, she could sneak up on another farmer, call him Bob, kill him and take all of his stuff, saving herself lots of time. But if she can engage in such ruthless calculation, so can Bob. Assuming Alice and Bob have roughly equal assassination skills, they find themselves in a very uncomfortable situation. They spend a lot of their resources on preventing the other from killing them.

Let’s make this a bit more numerical. Also, let’s assume Bob and Alice are totally selfish and their happiness is measured in potatoes. Both Bob and Alice can produce 10 potatoes a day. Bob can take 7 of his potatoes and hire mercenaries, whom he can use to protect himself or attack Alice. If Alice chooses not to hire her own mercenaries, she is killed and robbed by Bob, who then has (10 – 7) + 10 = 13 potatoes. If she does hire them, she is relatively safe, as, let’s say, defence is easier than offence. So Bob and Alice end up in a situation where they have invested in mercenaries but have not really gained anything. They are left with only 10 – 7 = 3 potatoes a day each. Their total utility is only 6 potatoes.

Ideally, Alice and Bob would like to agree that neither of them is allowed to hire mercenaries or try to kill the other farmer. Then each of them would enjoy 10 potatoes, a total of 20. But how do you agree with someone who was just considering killing you? If Bob has a reliable way of convincing Alice not to hire mercenaries, he will use this method to kill her (13 potatoes) rather than to cooperate with her (a meagre 10 potatoes).

The above problem is the famous (one-shot) Prisoner’s Dilemma, where two agents are stuck in a low-utility equilibrium because they cannot coordinate their actions.
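The arithmetic above can be laid out as a standard pay-off matrix (a sketch using the numbers from the scenario; assigning 0 potatoes to a farmer who is killed is my assumption). Whatever the other player does, hiring mercenaries pays better, so both players end up with 3 potatoes.

```python
# Pay-offs (Alice, Bob) in potatoes per day, using the numbers from the scenario.
# "trust" = hire no mercenaries, "hire" = spend 7 potatoes on mercenaries.
# A farmer who is killed is assigned 0 potatoes (an assumption).
payoffs = {
    ("trust", "trust"): (10, 10),
    ("trust", "hire"):  (0, 13),
    ("hire",  "trust"): (13, 0),
    ("hire",  "hire"):  (3, 3),
}

for bob in ("trust", "hire"):
    alice_trust = payoffs[("trust", bob)][0]
    alice_hire = payoffs[("hire", bob)][0]
    better = "hire" if alice_hire > alice_trust else "trust"
    print(f"If Bob plays {bob}, Alice prefers to {better}")
# Hiring dominates for both players, so they land on (3, 3) instead of (10, 10).
```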

A problem more relevant to our moral considerations is one where many more players than just Alice and Bob take part in the game, all plotting attacks and worrying about being attacked.

The dilemma can be solved if a number of farmers decide to chip in (1 potato a day each) and build a police force that responds to all contributing farmers equally (and is resilient to bribery…). The police are asked to severely punish anyone who attacks one of the contributing farmers. Since no single farmer can have an army greater than the entire community, hiring mercenaries is no longer profitable: 1) you cannot steal from anyone, and 2) police protection is offered for the competitive price of 1 potato.

Thus, by establishing a moral rule (no killing) and implementing a mechanism to enforce the norm (police), everyone’s utility went up from 3 to 9 potatoes a day. This process can be used to establish various norms that we observe today: bans on theft, rape or drunk driving. Notice that all of the players are still totally egoistic, not at all compassionate, yet they construct something that, at least superficially, looks like a primitive moral system. Thanks to these rules, the agents have a rational reason to behave morally. Killing others is no longer a high-reward action.
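The same kind of calculation shows what the police force does to the numbers (again a sketch; the size of the punishment is an arbitrary stand-in for years in jail). With every contributor paying 1 potato and attacks reliably punished, attacking stops being the dominant move and everyone settles on 9 potatoes a day.

```python
# With the police in place, every farmer pays 1 potato a day, and an attack on a
# contributing farmer is punished. The punishment size is an arbitrary stand-in
# for years in jail.
PUNISHMENT = 100

def daily_payoff(attacks):
    payoff = 10 - 1                 # produce 10 potatoes, pay 1 for the police
    if attacks:
        payoff -= 7                 # mercenaries still cost 7 potatoes
        payoff -= PUNISHMENT        # and the police punish the attack
    return payoff

print(daily_payoff(attacks=False))  # 9: farm peacefully under police protection
print(daily_payoff(attacks=True))   # -98: attacking is no longer worth it
```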

The question was: “Why is it immoral to kill another man?”

The answer so far is: “Because in the absence of such a moral contract people are stuck in a low-value equilibrium. The society establishes this rule and a method of punishment, so that the total(?) happiness is greater (is maximised?).”

The deeper question was: “Why should a rational agent, that only cares about maximising his own utility, be moral?”

Currently, the answer appears to be: “Because a collection of rational agents shapes the societal rules and their enforcement in such a way that acting morally is equivalent to acting selfishly.” Given that for some reason we have distinct words for acting morally and acting selfishly, this is not a very satisfying result. But at least we see that acting morally does not have to be entirely rooted in compassion. And that, perhaps, why one should act morally is not self-evident but has some rational foundations. Given that evolutionary pressures are applied to individuals rather than to groups, one would expect that moral behaviour, presumably dictated by genes, is beneficial to the individual animal rather than to the group of animals.

Confused foundations

After millennia of discussing the problem, people have not agreed on how to determine whether a given act is moral. We used to be similarly confused about free will, and we are still, perhaps even more, perplexed about consciousness. When people disagree so reliably, there is usually something confusing at the heart of the problem.

Let’s interrogate a couple of moralists.

Me: Abraham, why is it immoral to kill another man?
Abraham: God spoke: ‘Thou shalt not kill’. Should you do it, God will punish you and you will burn in hell forever.
Me: If a god does exist and cares, that is a pretty good reason not to kill others. What do you think, Thomas Aquinas?
Thomas: It is not truly moral to do good just to receive rewards or avoid punishment. One should do good to be virtuous. This is the only way to happiness.
Me: Right… I think I could find other ways to happiness… Peter Singer, what do you think?
Peter: Thomas is perhaps not too far off, but we need a more formal way of calculating the goodness of actions. I say we should value the happiness of all conscious beings. Killing people, and animals, brings suffering. It decreases the value we would calculate and, thus, is immoral.
Me: Mhm… I mean, what you say *sounds* nice. It seems that most of the deontological rules I know approximate your method for decision-making, so you are certainly onto something. But… Why should I be moral? I know I will not be punished by gods. I do not believe Thomas is right, in that this is necessarily the easiest or only way to happiness. Why should a rational agent, that only cares about maximising his own utility, be moral? Pre-contractarian Scott Alexander, why should we assign a nonzero value to other people?
Scott: I was kind of hoping this would be one of those basic moral intuitions that you’d already have. That to some degree, no matter how small, it matters whether other people live or die, are happy or sad, flourish or languish in misery.
Me: This presupposes what we are trying to prove. If everyone already cared for everyone else, no one would need to be converted to consequentialism. Peter, why act morally?
Peter: Well, people sometimes do good deeds for self-serving reasons, say, they might be looking for praise. Society should take advantage of this, so that we have more utility. I do think that Thomas is naive when he says that acting morally is required for happiness; if that were true, there would be no difference between acting morally and acting rationally. However, it does seem true that acting ethically correlates with being happy. I agree that we should think about reasons why sociopaths who do not care about others should behave morally. I have no answer.

Me: You see the shortcomings of moral philosophy. You have popularised a moral framework which convinced lots of people to do lots of good. I applaud you for both. But something is missing.

Asking moralists questions about the foundations of their morality always leads to confusion and uncovers arbitrary rules. “Be moral because a god said so”, “Should-questions cannot be asked without a moral framework in place”, “It is good for you”. It seems that in their considerations moralists have to reason from inside their framework, that is, they always answer why it is bad to kill people in their particular society. They give reasons why they will not start killing tomorrow. What forces them to take this perspective should become clear soon.

Preamble

For reasons partially discussed in this series, people are very attached to their ethics. It is grossly offensive to claim that someone’s moral system is corrupt or inconsistent. People often do it to discredit their adversaries and to prove how praiseworthy they are themselves. In this sequence of notes I will be highlighting shortcomings of the contemporary view of morality. I do this not to diminish anyone’s work or attack their character; in fact, the scholars I will refer to are among the people I respect most. Knowing one’s weaknesses helps in the long run. I hope to further our understanding of morality, so that the world we build is nicer, stabler and more cooperative.

At some points you might think I am too cynical, or egoistic, or focused on unimportant practicalities. The world I aim for should ultimately satisfy most utilitarians, even though the agents inhabiting it may not be fully selfless and inequalities may still exist. I think this is the best one can hope for. However, let’s remember that the point of model building is not to find a model that predicts a nice world, but to find a model that corresponds to reality.

I attempt to construct a model of human morality that will allow me to answer questions like:

  • Should we expect rational agents to behave morally? How about populations of such agents?
  • Given a particular society what moral rules will this society develop? Will most of the agents adhere to the constructed rules? How will the rules tend to change over time?
  • Why do humans advocate the moralities they advocate? Why do they sometimes say one thing and do another?
  • Why is there still no agreement about morality? Why is the topic so confusing? Why do we have such strong feelings about it?

The writing is based on materials on evolution, morality and game theory, on lessons from history, and on my interactions with other people, both in life and in strategic games. No previous knowledge is required, but it is certainly useful to have considered concepts like utilitarianism or effective altruism in the past.