MHPE 494: Medical Decision Making
This session we enter the strange and fascinating world of preferences, utilities, and feelings.
Discussion of situations
What are some situations in which patient preferences are more important to the decision than diagnostic probabilities? For example, when Alan had his wisdom teeth extracted, he had the choice of general anesthesia or conscious IV sedation. Both options have a very good track record -- low probabilities of mortality or morbidity. Which should he choose?
Measurement methods
The Froberg & Kane article discusses a number of different ways to measure preferences. In practice, three are most common:
Rating scales
Rating scales (visual or category) are fast and easy to understand (and therefore inexpensive to administer). Unfortunately, they can be hard to interpret: the rating scale is arbitrary, ratings are highly context-dependent (the range and spacing of the set of things that you rate affects your rating), and it's difficult to compare them across people.
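To make the arbitrariness concrete, here is a minimal Python sketch of how a category or visual-analogue rating is often rescaled to the 0-1 interval for use as a utility-like score; the anchors and the example value are illustrative, not taken from any particular instrument:

    def rescale_rating(rating, worst=0.0, best=100.0):
        """Rescale a category/visual-analogue rating to the 0-1 interval.

        Assumes the scale is anchored at `worst` (e.g., death) and `best`
        (e.g., full health). The anchors themselves are arbitrary, which is
        one reason rating-scale values are hard to compare across people.
        """
        return (rating - worst) / (best - worst)

    # Example: a respondent marks a health state at 65 on a 0-100 scale.
    print(rescale_rating(65))  # 0.65

The rescaling itself is trivial; the interpretive problems come from the anchors and from the context in which the rating was made.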
Time tradeoffs
The time tradeoff method is more time-consuming than rating scales and slightly more difficult to understand. It has the advantage of providing utilities that are easily understood and comparable across people, because the utility is simply the proportion of life expectancy, lived in the better health state, that a person considers equivalent to full life expectancy in the worse health state. Some people, however, are unwilling to trade off any amount of life because they find the idea repugnant.
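As a rough illustration of the arithmetic (the scenario and numbers below are hypothetical):

    def tto_utility(time_in_full_health, life_expectancy_in_state):
        """Time-tradeoff utility: the fraction of remaining life expectancy,
        lived in full health, that the respondent judges equivalent to the
        full life expectancy lived in the worse health state."""
        return time_in_full_health / life_expectancy_in_state

    # Example: a respondent is indifferent between 16 years in full health
    # and 20 years living with a chronic condition.
    print(tto_utility(16, 20))  # 0.8 = utility of the chronic condition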
Standard gambles
Standard gambles are popular with the hard-core utility crowd because they obey the axioms of expected utility set forth by von Neumann and Morgenstern. The utilities implicitly include risk attitude, which can be good or bad, and they are comparable across people. For many people, however, the elicitation is both time-consuming and difficult to understand and perform.
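The sketch below shows one common way such an elicitation can be run, as a bisection ("ping-pong") search for the indifference probability with a simulated respondent; the function names and tolerance are illustrative, not a standard protocol:

    def elicit_standard_gamble(prefers_gamble, lo=0.0, hi=1.0, tol=0.01):
        """Bisection-style elicitation of a standard-gamble utility.

        `prefers_gamble(p)` should return True if the respondent prefers a
        gamble (full health with probability p, death with probability 1 - p)
        to living in the health state for certain. The indifference
        probability, which is the vN-M utility of the state (death = 0,
        full health = 1), is narrowed down to within `tol`.
        """
        while hi - lo > tol:
            p = (lo + hi) / 2
            if prefers_gamble(p):
                hi = p   # gamble preferred: indifference point lies below p
            else:
                lo = p   # sure thing preferred: indifference point lies above p
        return (lo + hi) / 2

    # Hypothetical respondent whose true utility for the state is 0.85.
    print(round(elicit_standard_gamble(lambda p: p > 0.85), 2))  # ~0.85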
The others
Magnitude estimation is an oldie-but-goodie from the psychological scaling literature, but it's rarely used in practice. Willingness-to-pay is the backbone of "contingent valuation" methods, which are commonly used for questions like putting a value on saving 10,000 birds from an oil slick so that society can decide whether to pay for the cleanup. Unfortunately, it's plagued with methodological problems. Equivalence (how many people in state A would be equivalent to how many in state B) is also fairly rare in practice.
The Prediction Problem
A major problem in all preference or utility assessment is that, in most cases, people are asked to assess their preferences for health states that they have not yet experienced. That is, they must predict how they will feel about a future health state. Do people do a good job of this? (Would physicians do a better job?) Can we improve people's predictions?
Kahneman, et al. article
A recent article by Kahneman, et al. illustrates a way in which people can be seriously wrong about their predictions because the way we experience events and the way we remember experiencing them may differ. Would you rather have 60 seconds of moderate pain or 60 seconds of moderate pain followed by 30 seconds of diminishing (but still uncomfortable) pain? Everyone prefers the former a priori, but the article shows that people may remember the second experience as less painful!
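Here is a small sketch of how such a reversal can arise, assuming (as the article suggests) that remembered pain tracks something like the average of the worst and final moments rather than the total; the pain profiles are invented for illustration:

    # Hypothetical pain ratings, one per 10 seconds, on a 0-10 scale.
    short_trial = [6, 6, 6, 6, 6, 6]            # 60 s of moderate pain
    long_trial  = [6, 6, 6, 6, 6, 6, 4, 3, 2]   # same 60 s + 30 s diminishing pain

    def total_pain(profile):
        """'Experienced' total: sum of the moment-by-moment ratings."""
        return sum(profile)

    def remembered_pain(profile):
        """A peak-end style summary (average of the worst and final moments),
        the kind of rule Kahneman, et al. use to explain the reversal."""
        return (max(profile) + profile[-1]) / 2

    print(total_pain(short_trial), total_pain(long_trial))        # 36 vs 45
    print(remembered_pain(short_trial), remembered_pain(long_trial))  # 6.0 vs 4.0

The longer trial contains strictly more pain, yet under the peak-end summary it is remembered as the milder experience.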
One approach: SDP
The Shared Decision Program is a group that tries to offer a solution to the prediction problem by acquainting patients with the experiences of people who are in the health states that the patient may face. They present interactive videodisc interviews of, for example, women with breast cancer who've undergone different treatment regimens, talking about their decisions and how those decisions were driven by their values. The interviewees also discuss what the experience was like.
Feelings: Disappointment, regret, and omission bias
The experiences people have may not add up the way utilities are supposed to. There's a fair-sized literature on emotional aspects of decision making: postdecision disappointment and regret, and predecision anticipation of regret. When people anticipate regret, they may change their decisions to reduce the probability of regret. This could be good -- perhaps what we should try to achieve is not the highest "utility" or the highest life expectancy, but the best feelings about our outcomes. On the other hand, it can also be dangerous, particularly when people find bad outcomes due to action more regrettable than bad outcomes due to inaction. This can lead to an omission bias. In recent work, Asch, et al. (the optional article) have shown that some people refuse to vaccinate their children if the vaccination has a chance of causing death, even when that chance is lower than the chance of death for an unvaccinated child.
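A toy comparison of the risks, with hypothetical numbers that are not taken from the Asch, et al. study, shows why the refusal is puzzling from a pure expected-utility standpoint:

    # Hypothetical risks, chosen only to illustrate the structure of the
    # vaccination example; they are not the study's numbers.
    p_death_from_vaccine      = 5 / 100_000    # risk if you act (vaccinate)
    p_death_if_not_vaccinated = 10 / 100_000   # risk if you do nothing

    # On expected mortality alone, vaccinating is the better choice...
    print(p_death_from_vaccine < p_death_if_not_vaccinated)  # True

    # ...yet some parents still decline, because a death caused by their own
    # action feels more regrettable than a death from inaction -- the
    # omission bias described above.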