Nearly every student who'd shown up to the Princeton University lecture that day was clacking away on a laptop - I'd come non-ironically armed with a legal pad and a Bic pen.
And while I'd done all (okay, most) of the assigned readings for that day, the material was still hard to understand. When I looked over my notes later that day, there were more than a few question marks indicating where I'd gotten totally lost.
That sunny November morning, I was sitting in on a session of "The Psychology of Decision Making and Judgment," a Princeton University psychology course open to juniors and seniors and taught by professor Eldar Shafir. I was there to interview Shafir, and I'd asked if I could sit in on one of his classes to see firsthand what the academic experience there is like. After all, Business Insider ranked Princeton as the top college in the US.
The lecture that day focused on prospect theory, which describes how people make decisions under risk.
Shafir explained that prospect theory emerged about 40 years ago and was developed by Nobel Prize-winning psychologist Daniel Kahneman and the late psychologist Amos Tversky. Other researchers have since contributed to the theory.
The main thing to know about prospect theory is that it poses a challenge to the longstanding assumption that humans are "rational agents." For example, a rational agent wouldn't change her preference or belief when the same exact decision is described in a slightly different way. (I'll say more about that below.)
But a human would, and prospect theory aims to describe what everyday humans do, at least most of the time.
One of the best ways to understand prospect theory - at least in my opinion - is to think about what you would do in a given situation and then think about why. Here are three such thought experiments Shafir discussed during the lecture.
Risk aversion and risk seeking
1a. Would you prefer a sure gain of $100 or an equal chance of winning $200 or nothing?
1b. Would you prefer a sure loss of $100 or an equal chance of losing $200 or nothing?
As for 1a, most people prefer the sure gain of $100 - even though, technically, the expected value of both options is the same $100. That's because people are risk-averse when it comes to gains.
And as for 1b, most people prefer the gamble - the equal chance of losing $200 or nothing - even though the expected loss is again the same $100 in both options. That's because people are risk-seeking when it comes to losses.
This finding contrasts with "expected utility theory," or standard economic theory, which posits that people make decisions based on their final states of wealth. If that were true, people wouldn't show a preference between a sure gain (or loss) and a gamble with the same expected value. Instead, people make these decisions based on gains and losses relative to their reference point - usually, the status quo they start from.
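To make the arithmetic concrete, here's a quick sketch - my own illustration, not something from the lecture - comparing the expected values of the sure outcomes and the gambles above:

```python
# Quick check: the sure thing and the 50/50 gamble have the same
# expected value in both thought experiments.
sure_gain = 100
gamble_gain = 0.5 * 200 + 0.5 * 0     # = 100.0

sure_loss = -100
gamble_loss = 0.5 * -200 + 0.5 * 0    # = -100.0

print(sure_gain, gamble_gain)   # 100 100.0
print(sure_loss, gamble_loss)   # -100 -100.0
```

On expected value alone the options are indistinguishable - yet most people take the sure thing in 1a and the gamble in 1b.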
The endowment effect
2a. You're given a mug emblazoned with the name of your university. You're asked if you would sell it for a price between $0 and $9.50. How much would you sell it for?
2b. You're asked to choose between a mug emblazoned with the name of your university and a sum of cash somewhere between $0 and $9.50. How much is the mug worth to you?
These were two of the conditions in an experiment published by Kahneman, along with the behavioral economists Richard Thaler and Jack Knetsch. As it turns out, the median selling price was about $7 and the median "choosing" price was $3.50. In other words, participants valued the mug twice as much when they owned it.
This psychological phenomenon is known as the endowment effect. As Shafir wrote in one of his slides for the lecture: "People do not value having a mug. They value getting or giving up 'their' mug."
The endowment effect is one example of loss aversion: People are more sensitive to losing something they own than to gaining something new. In real life, having a $100 bill fall out of your pocket would probably upset you more than finding a $100 bill on the street would delight you.
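Researchers often formalize this asymmetry with a value function that is steeper for losses than for gains. The sketch below is my own illustration, not something from the lecture; the curvature (0.88) and loss-aversion (2.25) parameters are the median estimates Tversky and Kahneman reported in 1992, plugged in here only to show the shape of the idea:

```python
def value(x, alpha=0.88, lam=2.25):
    """Toy prospect-theory value function: concave for gains,
    steeper (by a factor of lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

print(round(value(100), 1))    # ~57.5   -- the pleasure of gaining $100
print(round(value(-100), 1))   # ~-129.5 -- the pain of losing $100
```

With these assumed parameters, losing $100 feels roughly twice as bad as gaining $100 feels good - the same asymmetry behind the dropped-versus-found $100 bill.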
Framing effects
3a. Imagine that an unusual disease will kill 600 people. You can enact program A, and 200 people will be saved. Or you can enact program B, and there will be a one-third chance that 600 people will be saved and a two-thirds chance that nobody will be saved. Which program do you choose?

3b. Imagine that an unusual disease will kill 600 people. You can enact program C, and 400 people will die. Or you can enact program D, and there will be a one-third chance that nobody will die and a two-thirds chance that 600 people will die. Which program do you choose?
When faced with the situation in 3a, most people choose program A. But when faced with the situation in 3b, most people choose program D. Logically, this doesn't make a lot of sense: programs A and C describe exactly the same outcome (200 of the 600 people survive), and programs B and D describe the same gamble (a one-third chance that everyone survives).
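To see the equivalence concretely, here's a quick check of the arithmetic - my own sketch, not part of the problem as Shafir presented it:

```python
# Expected number of survivors, out of 600, under each program.
saved_A = 200                              # 200 saved for certain
saved_B = (1/3) * 600 + (2/3) * 0          # = 200.0 on average
saved_C = 600 - 400                        # 400 die for certain -> 200 saved
saved_D = (1/3) * 600 + (2/3) * 0          # 1/3 chance nobody dies -> 200.0 on average

print(saved_A, saved_B, saved_C, saved_D)  # 200 200.0 200 200.0
```

Every program leaves 200 survivors on average; only the wording changes.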
The fact that people choose differently depending on how the decision is described is evidence of what psychologists call a framing effect. As Kahneman and Tversky explain in a 1984 paper, the reference points in problems 3a and 3b are different.
In 3a, the reference state is that 600 people die of the disease, and the outcomes are possible gains measured by the number of lives saved. In 3b, the reference state is that no one dies of the disease and the alternatives are possible losses measured by the number of deaths.
Kahneman and Tversky write that this effect "is as common among sophisticated respondents as among naive ones, and it is not eliminated even when the same respondents answer both questions within a few minutes."
That observation calls to mind something Shafir told me when we met after the lecture: Even people who know everything there is to know about human decision-making still fall prey to cognitive biases. Being smart and educated doesn't necessarily make you immune.
If it didn't quite inoculate me against flawed decision-making processes, learning about these psychological phenomena did help me appreciate how relatively little we understand about ourselves - and how much we have left to learn.