In an article titled “We are more rational than those who nudge us”, Steven Poole writes:
Kahan’s argument about the woman who does not believe in global warming is a surprising and persuasive example of a general principle: If we want to understand others, we can always ask what is making their behaviour ‘rational’ from their point of view. If, on the other hand, we just assume they are irrational, no further conversation can take place.
That, in a nutshell, is the problem of the practical application of behavioural economics to modern governance, in the form of nudge politics. Kahan argues against what he calls the ‘public irrationality thesis’: the idea that ordinary citizens act irrationally most of the time. He thinks this thesis is ungrounded, but the liberal-paternalist architects of nudge policy simply assume it – in, so they claim, our best interests.
Paradoxically (or not), there seems to be a lot of irrationality in this passage (or in the views reported):
1) It’s not true that if we assume irrationality – which we should also be prepared to assume in ourselves (thereby beating overconfidence bias) – conversation is pointless. Quite the contrary! The existence of irrationality means that systematic mistakes are being made, which we may be able to spot and correct – in order to then have more accurate views about the world and make better decisions. Spotting and correcting mistakes is one crucial point and value of conversation. If we assume rationality, we’re assuming that no reasoning mistakes are being made and that, therefore, far fewer opportunities for correction and improved decision-making exist. There’s an important sense in which conversation loses value the more rationality already exists.
2) The example used by Kahan is that of a conservative climate change skeptic who holds her opinion for reasons of social conformity – not because there is rational evidence supporting climate skepticism, but because abandoning it would expose her to bad social consequences (from shunning to potential loss of employment). So this conservative person’s belief formation is not evidence-driven but socially driven: it’s rational/goal-achieving for her to hold climate-skeptical beliefs in order to achieve her social goals. – But note what’s going on here: in order to stick with the rationality hypothesis, you need to assume a) that the person basically does not care at all about potentially saving the world/many others from the consequences of climate change (if climate change is/were real), i.e. that the person’s motivation for thinking about climate change is exclusively selfish, and b) that if the person doesn’t merely fake climate skepticism but really believes it, she’s unable to spot the elementary reasoning mistakes in the arguments adduced for climate skepticism. Given b), the rationality hypothesis isn’t even logically consistent. And with regard to a), you’d probably have to say that the person is deeply confused about what’s going on inside her: she (falsely) thinks that in forming beliefs about climate change, she’s driven by an interest in adequate answers to questions that are important for the world and for political decision-making. It may even be true that she would like to be driven by such an interest. But in fact her belief formation processes are just chasing selfish social benefits. – Kahan’s example is thus a paradigm case of irrationality: System 1, i.e. the intuitive and automatic cognitive and behavioural pulls inside the person, is out of sync with and dominates System 2, i.e. what the person would believe and would like to do upon reflection.
System 1 prevents System 2 – which is the person’s truer self – from achieving its goals. This example and the comment that “If we want to understand others, we can always ask what is making their behaviour ‘rational’ from their point of view” seem to be driven by a desire to “respect” people. But “I’m just pursuing selfish social goals, and thus climate skepticism makes perfect sense” is not the perspective of the person in question. Suggesting that it is her perspective is what’s actually disrespectful and condescending.
3) If we want to understand others (including ourselves and our future selves, which we should try to influence in ways no different from how we would recommend influencing others; we’re biased towards thinking we’re special when – as a matter of statistics – we usually aren’t), the hypothesis of quite pervasive irrationality has a lot going for it: cognitive psychology has amassed a lot of evidence in its favour.
4) More evidence, from relevant everyday experience and systematic studies:
We all know we often don’t act the way we would, upon reflection, like to act. Weakness of will is one main source of behavioural irrationality, i.e. of behaviours that are not goal-achieving. Whether you’re procrastinating on exam preparations, never getting around to learning a new programming language or setting up a planned project, not managing to do more sports or eat less, suffering from social anxiety or from too little courage to approach people you haven’t met before, or struggling to act according to altruistic principles you’ve come to endorse upon reflection (with regard to donations or meat-eating, say) – behavioural mistakes are a very prevalent phenomenon. The above video explains how self-nudges can help us reduce them. A very simple example would be teaming up with someone to do sports and fixing dates and times, which exploits commitment and social expectations in order to nudge one’s own future self towards the better decision of actually doing sports.
5) There is no reason to expect mistakes to be less prevalent in the domain of highly societally/altruistically relevant decisions, which liberal politics legitimately regulates (because your freedom ends where someone else’s begins). The alternative to nudges in this domain is laws enforced by punishment, i.e. nudges much stronger than what is usually meant by “nudges”. Is that what the people denouncing (liberal) paternalism prefer? Also, thinking that there can be a nudge-free decisional context is an illusion: if a certain nudge environment statistically shifts individual decisions in some direction, then its absence shifts the decisions in the other direction. A decisional context where only chocolate is offered influences my decisions/nudges me just as much as a context offering chocolate and fruit does. There’s no reason to consider the addition of fruit to a context as more of a nudge than its absence/non-addition. – Not only are we being nudged by decision contexts anyway, but given e.g. profit-oriented marketing, we should expect to find that we’re currently being nudged in many bad directions. Political counter-nudging can correct for the resulting behavioural biases.
6) Nudging does not require paternalism at all. In a democracy, people can decide to nudge themselves in preferable directions. This is especially true if we think a “public rationality hypothesis” (referring more to reasoning skills and less to will-power) is accurate: the public can then be expected to come up with smart ideas about how to self-nudge in preferable directions. Examples: people recognise that they would like more organs to be donated in order to prevent people from dying of the current lack of donor organs, and that their reluctance to register primarily stems from the active effort required to become an organ donor – so they move from an opt-in to an opt-out organ donation policy. Or say people agree that meat consumption should be reduced in order to avoid its undesirable consequences, in which case they could politically decide to modify their food environments so as to nudge themselves more towards vegetarian foods (and less towards meat). As explained in 5), the result wouldn’t be more of a nudge, but just a better nudge.
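The default effect behind the opt-in/opt-out example can be sketched with a toy model. All parameter values below are hypothetical assumptions for illustration, not empirical donation data; the point is only that with preferences and inertia held fixed, changing the default alone shifts the aggregate outcome.

```python
# A minimal sketch of the default effect: the same population, with the same
# preferences and the same tendency to stick with defaults, ends up with very
# different registration rates under opt-in vs opt-out policies.
# (All numbers are illustrative assumptions, not real donation statistics.)

def registered_share(prefer_donating, follows_default, default_is_donor):
    """Fraction of the population registered as donors.

    prefer_donating:  fraction who actively want to be donors
    follows_default:  fraction who simply stay with whatever the default is
    default_is_donor: True under opt-out, False under opt-in
    """
    deciders = 1.0 - follows_default            # people who act on their preference
    active = deciders * prefer_donating         # deciders who end up registered
    passive = follows_default if default_is_donor else 0.0
    return active + passive

# Only the default changes between the two policies:
opt_in = registered_share(prefer_donating=0.6, follows_default=0.7,
                          default_is_donor=False)   # -> 0.18
opt_out = registered_share(prefer_donating=0.6, follows_default=0.7,
                           default_is_donor=True)   # -> 0.88
```

Under these assumed numbers, the share of registered donors jumps from 18% to 88% without any individual changing their mind – which is also why, as argued in 5), neither policy is “nudge-free”: each default pushes the outcome somewhere.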
7) Unfortunately, there are grounds for doubting the “reasoning skills” aspect of the “public rationality hypothesis”, too. The second link in 3) presents the solid evidence from cognitive psychology about the many biases affecting our reasoning. The popularity of right-wing political parties (and of non-scientific beliefs such as climate skepticism, frequently associated with them) is one contemporary piece of societal evidence. And there’s historical evidence that crucial societal progress was often caused by elites being persuaded by new ethical and empirical arguments and exerting a disproportionate influence on society. Steven Pinker mentions many examples of this process and uses the abolition of capital punishment for more detailed illustration: