
What It Means to Be Rational

Don’t rely on economic analysis to learn about human rationality. If we want to build just and prosperous societies, we must look elsewhere for guidance.

Imagine yourself with a newly minted MBA, in the fortunate position of having four job offers to choose from. One offers a great salary, but is located in a city that would leave you far from family and friends. The second offers a decent salary but great opportunity for advancement. The third provides what looks to be a wonderfully collegial work environment, but modest opportunity for career development. And the fourth offers really interesting and challenging work, but the prospect of very long hours. Which job would you choose? And more important, which job should you choose?

The norms of economic rationality tell us how you, as a rational decision maker, would go about picking a job. You'd create a spreadsheet, with all relevant attributes of the various jobs (e.g., salary, colleagues, advancement opportunities, location) arrayed. You would then assign an importance weight to each attribute (after all, the various characteristics are not equally important to you). Then, you would assign a value to each job on each attribute (say, 10 is great and 1 is poor). Finally, since the world is an uncertain place and you are making a set of predictions about what each of the jobs will be like (your colleagues may not be as nice as they seemed; the firm may go bust before you've had a chance to advance), you would assign a probability that you would realize each of the values of each of the attributes at each of the jobs. Now, your work is done except for some multiplying and adding. Do the math and out pops the answer to the question, "Which job should I take?"
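The "multiplying and adding" described above can be sketched in a few lines of code. Every number here (the weights, the 1-to-10 values, and the probabilities) is invented for illustration, as are the job labels; the point is only to show the mechanics of the weighted expected-utility calculation.

```python
# A hypothetical spreadsheet for the four job offers described above.
# Weights say how much each attribute matters; each job gets a
# (value, probability) pair per attribute: how good that attribute
# looks (1 = poor, 10 = great) and how likely it is to pan out.

weights = {"salary": 0.30, "advancement": 0.25, "colleagues": 0.15,
           "interest": 0.20, "location": 0.10}

jobs = {
    "big salary, far away":       {"salary": (9, 0.9), "advancement": (5, 0.6),
                                   "colleagues": (5, 0.5), "interest": (5, 0.6),
                                   "location": (2, 1.0)},
    "decent salary, advancement": {"salary": (6, 0.9), "advancement": (9, 0.5),
                                   "colleagues": (6, 0.5), "interest": (6, 0.6),
                                   "location": (6, 1.0)},
    "collegial, modest growth":   {"salary": (5, 0.9), "advancement": (3, 0.7),
                                   "colleagues": (9, 0.7), "interest": (6, 0.6),
                                   "location": (6, 1.0)},
    "interesting, long hours":    {"salary": (6, 0.9), "advancement": (6, 0.6),
                                   "colleagues": (6, 0.5), "interest": (9, 0.8),
                                   "location": (6, 1.0)},
}

def expected_utility(job):
    """Sum over attributes of weight * value * probability."""
    return sum(weights[a] * v * p for a, (v, p) in job.items())

scores = {name: expected_utility(job) for name, job in jobs.items()}
best = max(scores, key=scores.get)
```

On these made-up numbers the arithmetic picks "interesting, long hours"; change any weight or probability and a different job pops out, which is exactly the model's promise: given the inputs, the answer is mechanical.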

This model of rational choice makes several assumptions. It assumes that our goal is to maximize self-interest. It assumes that different dimensions of our decisions are "commensurable" (that is, comparable on a common scale—say a "utility" scale). And it assumes that we act with complete information and can meaningfully assign probabilities to every outcome.

Now, almost nobody I know has actually chosen a job (or a house, or a place to go on vacation, or where to go to university, or a car, or almost anything else) in this way, and the lesson that is usually drawn from this fact is that people are imperfectly rational creatures who make sub-optimal decisions. An entire discipline, called "behavioral economics" (a misnomer, since it is really nothing but psychology applied to decisions that often have economic consequences), has developed to document the many respects in which our decisions are imperfectly rational. One of the founding fathers of the field, Nobel Prize winner Daniel Kahneman, has recently published Thinking, Fast and Slow, a beautiful review of his forty years of research on suboptimal decision making. This book caps a series of others, by Richard Thaler and Cass Sunstein, Dan Ariely, Dan Gilbert, and even me, all of which have captured the attention of the public.

The findings of this discipline are new and sexy, but the basic insights behind it have been with us a long time. Almost a century ago, the great economist John Maynard Keynes wrote that we couldn’t understand the economy without understanding what he called “animal spirits”—a vivid term for psychology. And ever since then, few recommendations to fix macroeconomic problems fail to make reference to “consumer confidence” or investor “optimism.” More psychology. So we’ve known for a long time that the pristine models from economics are importantly incomplete.

What modern research tells us is that we are imperfectly rational in two different respects. One is that we do the “math” wrong; we are bad at thinking about uncertainty and at creating relevant and accurate spreadsheets in our heads. The second is more important: we often want the wrong things. We mispredict how much satisfaction, or utility, a given outcome will give us. We discount future consequences of decisions too steeply (and thus eat and spend too much, and exercise and save too little). And we mispredict how long a given decision will satisfy us. That is, we tend to ignore the fact that we get used to good things so that they provide us with satisfaction for a much shorter time than we imagine.
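The "too steep" discounting mentioned above is often modeled in behavioral economics as hyperbolic discounting, which produces preference reversals that the standard exponential model cannot. The discount parameter and dollar amounts below are invented for illustration; the sketch only shows the shape of the phenomenon.

```python
# Hyperbolic discounting: a delayed reward's present value is
# amount / (1 + k * delay). The steepness parameter k = 0.2 and the
# reward amounts are hypothetical, chosen to make the reversal visible.

def hyperbolic_value(amount, delay_days, k=0.2):
    """Subjective present value of a reward received after a delay."""
    return amount / (1 + k * delay_days)

# Choice: $100 in 30 days vs. $110 in 31 days.
# Viewed from today, most people patiently prefer the larger, later reward...
today_small  = hyperbolic_value(100, 30)   # $100 in a month
today_large  = hyperbolic_value(110, 31)   # $110 a day later

# ...but when day 30 arrives, the very same discount curve makes the
# immediate $100 loom larger than $110 tomorrow -- a preference reversal.
day30_small = hyperbolic_value(100, 0)     # $100 right now
day30_large = hyperbolic_value(110, 1)     # $110 tomorrow
```

This reversal is one concrete way of "wanting the wrong things": the decision we endorse a month in advance is not the one we make when the moment comes.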

It is important that we know these characteristics of our decision making. This knowledge will help institutions to develop policies that nudge, if not compel, their citizens to make better decisions. But it is also important to note that all of this research has left the normative model of what a rational decision is unchallenged. The model of rational choice is fine; it’s just that people fall short. I want to suggest that the model is not fine. Virtually all of the assumptions built into it about human beings and the world are false:

It assumes that people are self-interested. Well, yes and no. Self-interest is certainly part of what moves us, but we are also interested in the welfare of others—our families, our communities, our nations, and even the world. And we are also interested in doing what’s right—even at considerable personal sacrifice and when no one is looking.

It assumes that there is a common scale of value on which everything can be compared. There isn’t. Sure, we can assign value numbers to things like salary, colleagues, being close to our families, and the like, but in doing so, we are only kidding ourselves that these numbers actually represent a common underlying metric. Tradeoffs are hard to make, and often can’t be made formulaically. And one big reason why is that it’s so hard to compare different dimensions of our decisions to each other.

It assumes that we can attach meaningful probabilities to outcomes. Sometimes we can, but life is not a roulette wheel or a series of coin flips, in which probabilities are well defined. The world is a radically uncertain place, and we deceive ourselves if we think we can always attach numbers to our uncertainty. This is a big point, not a minor technical one, because you can't calculate how much utility to expect from a decision if you can't quantify the likelihood of the outcome you are hoping for.

We protest against gross income inequality, and we should. We complain about unequal access to quality education, and we should. We call the actions of some financial operators greedy, and we should. There is no place within notions of economic rationality for any of these concerns, but each of them is supremely rational. As the language of economics increasingly becomes the unofficial language of our social and political institutions, the consequences of excluding concerns like the ones I just raised loom larger and larger. If we are to move toward societies of greater opportunity and justice, we need a more expansive notion of what it means to be rational than we will ever get from economics.


