- Rationality – an example
- Misconceptions: what rationality is not
- Rationality is for everyone
- Epistemic and instrumental rationality
- Optimal decision-making
- Humans: systematically irrational
- Why we have biases
- Doing the best we can
Rationality – an example
On September 26, 1983, the Soviet early-warning system for nuclear attacks reported a missile being launched from the United States. Shortly after, the system reported four more missiles underway. Stanislav Petrov, the officer on duty, was confronted with a monumental decision: should he follow proper procedure and raise the alarm, alerting the highest military officials and thereby quite possibly setting a retaliatory nuclear attack in motion?
Petrov, who only had minutes to make his decision, decided to classify the incident as a false alarm. He reasoned that an attack would most likely consist of hundreds of missiles, enough to render the Soviets incapable of hitting back. He further considered that the detection system was relatively new and might therefore be liable to make mistakes.
Indeed: the warning system had picked up sunlight reflecting off clouds and mistaken it for intercontinental missiles. Petrov was right, and he didn’t simply get lucky. Remaining clear-headed, he correctly judged that the prior probability of a nuclear first strike was very low, and that a real first strike was unlikely to look like the evidence the warning system was producing. He also factored in that warning systems, however big and fancy, have a track record of giving false alarms. Petrov’s rationality quite possibly saved hundreds of millions of lives that day by preventing a nuclear war.
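Petrov’s reasoning can be cast as a Bayesian update. The sketch below uses invented numbers purely for illustration; the probabilities are assumptions, not historical estimates:

```python
# A toy Bayesian reconstruction of Petrov's reasoning.
# All numbers are invented for illustration; they are not historical data.

p_attack = 1e-5                  # prior: probability of a US first strike that day
p_alarm_given_attack = 0.1       # a real first strike producing exactly this alarm
                                 # (five missiles, not hundreds) is itself unlikely
p_alarm_given_no_attack = 1e-3   # a new, unproven system giving a false alarm

# Bayes' theorem: P(attack | alarm) = P(alarm | attack) * P(attack) / P(alarm)
p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))
posterior = p_alarm_given_attack * p_attack / p_alarm

print(f"P(attack | alarm) = {posterior:.4%}")  # roughly 0.1%: still very unlikely
```

Even after the alarm, the posterior probability of a real attack stays tiny, because false alarms are so much more likely than a five-missile first strike.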
Misconceptions: what rationality is not
Properly understood, rationality is always to a person’s advantage. It is important to note that, on this view of rationality, the stereotypical “rational character” in popular TV shows is not always rational.
Consider Mr. Spock from Star Trek, who is judgmental, never relies on intuitions and whose emotional state is always the same: neutral. Or consider Sheldon Cooper from The Big Bang Theory, who admittedly becomes emotional quite often, but who is seemingly unable to make any decision without drawing a complicated chart.
Does being rational imply that we should constantly suppress all emotions, like Spock? Or that we’re not allowed to rely on intuitions or make decisions spontaneously? Fortunately, none of that follows. There is no general rule about how slowly or quickly rational decisions ought to be taken, or to what extent emotions are allowed to contribute to their outcome. It always depends on the specifics of the situation and, importantly, on the specific goal of a person.
With time and resources being limited, it can be quite bad for someone to spend too much time on a single decision. Similarly, emotions, although they certainly lead us astray sometimes, are often useful, and most if not all humans value them intrinsically as a central aspect of their ideal life.
Another misconception is that rationality must imply selfishness. When simplified models in economics or game theory assume that people are rational and choose the outcome that gives them the highest utility, “utility” is defined not as personal well-being, but rather as the sum of everything the person in question cares about – something that might perfectly well also include the well-being of others.
Rationality is for everyone
So, as a rational person, you are allowed to have emotions, be nice, use your intuitions and value non-quantifiable things like love and happiness. Indeed, you would probably be quite irrational if you were always cold, never trusted your intuitions and only cared about quantifiable things like the money in your bank account.
Rationality is goal-neutral: it’s about best achieving your goals (whatever they may be). By definition, everyone has an interest in becoming more rational, because everyone wants to achieve their goals as well as possible.
Epistemic and instrumental rationality
We can distinguish two forms of rationality: epistemic and instrumental. Epistemic rationality focuses on finding truth, i.e. on getting the most accurate model of reality (true beliefs). It deals with questions such as: Why do I believe what I believe? Are these reasons sound? Are the cognitive methods and mechanisms that led me to my convictions “truth-tracking”?
Instrumental rationality means acting in the way that best achieves one’s goals. The two forms are intertwined: epistemic rationality can be seen as instrumental rationality with the goal of acquiring true beliefs, while instrumental rationality in turn requires epistemically acquiring true beliefs about which means best achieve one’s goals.
Perhaps the misconception that rationality is associated with e.g. robot-like Mr. Spock comes from having unrealistic standards. The normative model of rationality describes ideal decision-making. Humans are far from ideal thinkers, so it would be foolish to try to follow the very same decision-procedures a perfect reasoner would use.
Nevertheless, studying these normative models is important because it can help us understand what improvements to our own thinking could look like. Ideal decision-making follows the laws of logic, Bayesian probability theory and rational choice theory.
- Logic: A rational agent avoids holding contradictory beliefs and correctly deduces conclusions.
- Bayesian probability theory: A rational agent’s beliefs always come with a corresponding credence function – how convinced one is that the beliefs are true. Degrees of credence are coherent and updated according to Bayes’ theorem whenever new relevant information comes in.
- Rational choice theory: A rational agent always chooses the option that gives her the highest expected degree of personal goal-fulfillment (utility).
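The last norm can be illustrated with a toy expected-utility calculation. The scenario, probabilities and utilities below are all invented for the sake of the example:

```python
def expected_utility(probs, utils):
    """Expected utility: sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * utils[outcome] for outcome, p in probs.items())

# Invented example: should I carry an umbrella today?
probs = {"rain": 0.3, "dry": 0.7}              # credences over outcomes
utilities = {
    "take umbrella": {"rain": 5, "dry": 3},    # dry either way, slight hassle
    "leave it home": {"rain": -10, "dry": 4},  # soaked if it rains
}

# A rational agent picks the option with the highest expected utility.
best = max(utilities, key=lambda option: expected_utility(probs, utilities[option]))
print(best)  # "take umbrella": EU of 3.6 beats EU of -0.2
```

Note that the utilities here can encode anything the agent cares about, including the comfort of others; nothing in the formalism forces selfishness.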
Humans: systematically irrational
It should come as no surprise that humans don’t always act rationally. Our brains, admittedly huge for an animal of our size, are only finite. Compared to an ideal decision-maker, we lack the intelligence and computing power to make decisions equally well.
This part is trivial. Much more interesting is something that was only discovered as recently as the 1970s: humans aren’t just irrational, they are often systematically irrational. That is, we tend to make the same kinds of mistakes over and over in the same kinds of situations. These systematic deviations from ideal decision-making are called cognitive biases. Some examples include:
- Status quo bias: We tend to irrationally favour the current state of affairs, even when we’d have good reasons to prefer specific changes. This leads to inertia in domains where new policies could have tremendous benefits.
- Confirmation bias: We tend to weigh supporting evidence more strongly than evidence that contradicts our current beliefs. This leads to people changing their minds less often than they rationally should.
- Scope insensitivity: At some point, when the stakes get higher and higher, everything intuitively feels the same to us. Something that could affect millions of people doesn’t remotely feel a thousand times as important as something that could affect thousands of people. This leads to worst-case scenarios being comparatively ignored.
The full list of cognitive biases is much longer.
The discovery of biases was revolutionary because it implies that there is easily accessible room for improvement. If we were irrational in a random way, we’d have a hard time fixing anything – we’d have to redesign the entire brain in order to make progress. On the other hand, if we’re predictably irrational, we can try to learn under which conditions human decision-making goes astray and then come up with methods to correct it.
The cognitive science of rationality is still in its infancy, but hopefully there will come a day when knowledge about biases, and ways to overcome them, is taught at every educational institution.
Why we have biases
We have biases because our brain design dates back to the Stone Age. Our intuitive decision-making relies on shortcuts, heuristics, that led to successful gene-copying more often than not in our ancestral environment. If you held a true belief that differed from the cherished beliefs of your group, for instance, you risked ostracism. Our belief-acquiring mechanisms were not selected for producing accurate beliefs, but for producing beliefs that paid rent in terms of reproductive success. Reproductive success, however, is not what we personally care about; upon reflection, we would hopefully settle on quite different goals. This mismatch between the (metaphorical) goals of our genes and our (very real) personal goals is one reason cognitive biases exist.
The other reason is that our environment has changed drastically, so that shortcuts which once tracked our goals are now misleading. Our ancestors only ever lived in small groups, not in a globalized world where they could positively or negatively affect the lives of future generations or of people on other continents. Our intuitions fail to keep track of large numbers because, until very recently, we never played for stakes this high. Similarly, exponential processes and low-probability, high-impact scenarios are intuitively neglected because such scenarios did not come up often enough in our evolutionary past.
Doing the best we can
With our thinking prone to biases, we start out in a suboptimal position. Nevertheless, this is no reason to be discouraged: rationality is completely pragmatic. It is about making the best of the situations we find ourselves in, biases, shortcomings and personal weaknesses included. When two people share the same goals, their strategies for pursuing them might still differ greatly if they have different skills and personal preferences.
If we set our standards too high while trying to become more rational, we might fail and end up discouraged, making less progress toward our goals than before. It is therefore important to factor in personal limitations when forming expectations about how much of our goal(s) we should be able to achieve.