Have you ever wondered who was more rational, Mother Teresa or Milton Friedman? If yes, great! No, just me? Okay, suppose you had, and bear with me. The answer to this question isn’t obvious; indeed, there isn’t really a right answer.
We can start by asking, what is rationality or how is it defined? This seemingly simple question belies a deep epistemological debate that has bedeviled philosophers and social scientists since the mid-1800s, at least in its modern form (vestiges of this question date back to the classical Greek philosophers). Though the history of thought on rationality is fascinating, for the sake of space I’ll focus on the contemporary state of understanding, casually outlining theoretical developments.
The modern view of rationality and rational behavior has its roots in early modern moral philosophy and political economy, most famously expressed in Adam Smith’s Invisible Hand theory and Jeremy Bentham’s Utilitarianism. Fast forward to the 1940s and 1950s and we find economists and mathematicians like John von Neumann, Paul Samuelson, and Leonard Savage formalizing the notion of rationality into what came to be known as Axiomatic (Consumer) Choice theory. The Axioms of Choice are a collection of mathematical statements defining how rational preferences, and the choices that follow from them, must be arrayed to be ‘consistent’ and ‘optimizing’. These axioms, which describe how preferences must be structured to satisfy the tenets of rationality, are then translated into observable choices and behaviors through what are known as revealed preferences and (expected) utility maximization. These constructs basically state that, given properly structured preferences, rationality translates into choices or behaviors that have the greatest likelihood of maximizing an individual’s utility – utility being a conceptual measure of a person’s sense of wellbeing, happiness, or satisfaction. The plausibility of pure rationality as a behavioral assumption – as formalized in Axiomatic Choice theory – was reinforced in the 1960s and 1970s by certain streams of evolutionary biology and ecology research, most famously articulated in Richard Dawkins’s Selfish Gene theory.
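For readers who like to see the machinery, expected utility maximization can be written compactly. This is my notation for the standard textbook sketch, not anything drawn from the original axiomatizations:

```latex
% A rational agent facing actions a and uncertain states s, with
% probabilities p(s) and outcomes x(a, s), chooses the action
a^{*} \;=\; \arg\max_{a} \; \mathbb{E}\!\left[u(a)\right]
      \;=\; \arg\max_{a} \sum_{s} p(s)\, u\big(x(a, s)\big)
% where u(.) is the utility function over outcomes.
```

The axioms (completeness, transitivity, and so on) are what guarantee such a u(·) exists in the first place; given one, ‘rational’ just means picking the action with the highest expected utility.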
At around the same time, the strict rationality assumption came under greater scrutiny. In addition to recognizing the implausibility of the most stringent axioms of rationality, social psychologists and decision theorists began documenting consistent biases and imperfect heuristics that seemed to violate rational decision-making and behavior. A catalogue of all the decision-making biases and heuristics that people are prone to is beyond the scope of this short blog, but some of the most influential work is synthesized in Prospect theory and the so-called framing effect, developed by Daniel Kahneman and Amos Tversky in the late 1970s and early 1980s.
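To give a flavor of Prospect theory, its value function is typically sketched like this (the functional form and the parameter estimates come from Kahneman and Tversky’s later empirical work, not from this post):

```latex
% Value is defined over gains and losses relative to a reference
% point, not over final wealth:
v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \quad \text{(gains)} \\
  -\lambda\,(-x)^{\beta} & \text{if } x < 0 \quad \text{(losses)}
\end{cases}
% Typical estimates: alpha ~ beta ~ 0.88 and lambda ~ 2.25, i.e.
% losses loom roughly twice as large as equivalent gains.
```

The loss-aversion parameter λ > 1 is what makes framing matter: describing the same choice as a loss or as a gain shifts which side of the kink people evaluate it on.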
These findings didn’t lead to the wholesale abandonment of rationality as a model for decision-making. Instead, theorists devised more realistic models like Bounded Rationality and Subjective Expected Utility that relaxed the stronger assumptions posited in Axiomatic (Consumer) Choice theory. One softer take on rationality, articulated by Herbert Simon (who also coined the term Bounded Rationality), describes a mental and behavioral process of satisficing, rather than strict maximizing, which entails choosing the best available means to achieve the most desirable goal, after weighing all the known possibilities arrayed in a means-end hierarchy. Put another way, rationality implies choosing the most effective actions and behaviors that help achieve operational goals, based on currently available knowledge and knowhow and a higher-order purpose, mission, or value. Here, hierarchy can mean prioritizing some goals over others, or that some goals are subsets of more encompassing goals. Simon also recognized that while humans are not unfailingly rational, people strive for rationality in many circumstances, especially in their economic affairs.
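Simon’s satisficing-versus-maximizing distinction is easy to sketch in code. This is an illustrative toy – the options, utility numbers, and aspiration level are invented for the example, not drawn from Simon:

```python
# Toy contrast between strict maximizing and Simon-style satisficing.
# All values below are hypothetical, chosen purely for illustration.

def maximize(options):
    """Strict rationality: evaluate every option, pick the best."""
    return max(options, key=lambda o: o["utility"])

def satisfice(options, aspiration):
    """Satisficing: accept the first option that meets the aspiration level."""
    for option in options:  # options are examined in the order encountered
        if option["utility"] >= aspiration:
            return option
    # If nothing clears the bar, fall back to the best seen
    # (one common variant; Simon also discussed lowering the aspiration).
    return maximize(options)

jobs = [
    {"name": "offer A", "utility": 6},
    {"name": "offer B", "utility": 8},
    {"name": "offer C", "utility": 9},
]

print(maximize(jobs)["name"])                  # evaluates all three -> offer C
print(satisfice(jobs, aspiration=7)["name"])   # stops at first 'good enough' -> offer B
```

The maximizer must evaluate every option; the satisficer stops at the first one that clears its aspiration level, which is exactly the economy of effort Simon had in mind.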
While many people feel uneasy about acknowledging their selfish proclivities, it is self-deceptive not to recognize that most of us look out for numero uno much of the time. But that isn’t to say we don’t also pay attention to, or oblige, the needs of other people, whether they be close friends, loose acquaintances, or even total strangers. On the contrary, acts of altruism, self-sacrifice, benevolence, charity, and cooperation with others are also part of our human nature, coexisting with self-interest.
Researchers have theorized and empirically shown that humans engage in (strong) reciprocity – sometimes strategically, and sometimes for purely altruistic reasons – as part of a “gene–culture co-evolution(ary)” strategy that theorists trace back to norms that emerged in small hunter-gatherer groups during the late Pleistocene period. These theories posit that cooperation can proliferate at the societal level (i.e. beyond immediate family members) when the benefits to most individuals-as-group-members exceed the individual costs of behaving altruistically. Moreover, numerous lab experiments in social psychology and economics have documented that people are also predisposed to other types of prosocial behaviors like strategic punishment of social norm violators, free riders, and non-cooperators; warm-glow giving (also known as impure altruism, which argues that people engage in altruistic acts not only to help others, but also to cultivate a self-image of being a ‘good person’); volunteering; and giving to charity. And of course, people engage in many of these altruistic acts simultaneously, like volunteering as well as making charitable donations, both of which aim to benefit other people.
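The benefit-versus-cost logic behind cooperation is often illustrated with a linear public-goods game, a staple of the lab experiments mentioned above. A minimal sketch, with an endowment and multiplier I’ve chosen for illustration:

```python
# Illustrative linear public-goods game (a standard lab setup; the
# parameter values are mine, not from the post). Each of n players keeps
# whatever they don't contribute; pooled contributions are multiplied
# by `multiplier` and split equally among everyone.

def payoff(contributions, endowment=10, multiplier=1.6):
    n = len(contributions)
    pool_share = multiplier * sum(contributions) / n
    return [endowment - c + pool_share for c in contributions]

# Everyone contributing makes the whole group better off...
print(payoff([10, 10, 10, 10]))   # each earns 16
# ...but any single contributor does worse than a free rider, which is
# why norms and punishment of non-cooperators matter for sustaining it.
print(payoff([10, 0, 0, 0]))      # contributor earns 4, free riders earn 14
```

With a multiplier above 1 but below the group size, full cooperation beats universal defection for the group, yet each individual is still tempted to free ride – the tension that reciprocity and punishment help resolve.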
So, where does this leave us in terms of our understanding of rationality? Is rationality characterized by our more selfish desires, or by our ability to reciprocate and cooperate as gregarious members of cohesive dyads, groups, or societies? Many scholars would now argue that rationality encapsulates both self-interest and cooperation with others, with each tendency predominating depending on the context. Ultimately, nuance is in order, which I’ll illustrate by referencing two exemplars.
Milton Friedman, a Nobel laureate in economics, incisively asked talk show host Phil Donahue, “is there some society you know that doesn’t run on greed?” and then wryly joked to the audience, “of course none of us are greedy, it’s only the other fellow who’s greedy”. To a greater or lesser extent, we are all greedy. Are you willing to give nearly 100 per cent of your income to a stranger, keeping only enough to rent a single-occupancy room and subsist on three meager meals a day? Understandably not, and that’s not a moral failing but a simple human reality. At the same time, though, we humans have also internalized a mix of group norms and specific sociocultural adaptations, also part of our human nature, that encourage prosocial behaviors like productive cooperation, benevolence, and altruism, and predispose us towards considering the well-being of others, including total strangers. Mother Teresa received the Nobel Peace Prize and was canonized for establishing the Missionaries of Charity and for her tireless work on behalf of poor, disadvantaged, and downtrodden individuals around the world.
We want for ourselves but we’re also willing to share. We’re all a mix – in varying proportions – of Milton and Mother Teresa.