MHPE 494: Medical Decision Making
Markov Models
Markov models compactly represent situations in which there is an ongoing risk of a patient moving from one state of health to another. We assume that there is a set of possible health states, and we specify the probability per unit of time that a patient in a given state will "transition" to each possible state. These transition probabilities can depend on the current time (for example, the chance of death increases with time due to aging, independent of health). We also need to know the utilities of the states. The utility of a state may also depend on the time at which it's entered (if, for example, utilities for health states are discounted, bad health sooner is worse than bad health later). What we do assume, however, is that the process has no memory -- how we came to the current state doesn't matter, only which state it is and when we entered it.
Markov models are often represented using two figures: a state-transition diagram and a transition-probability matrix.
(Actually, you can show all of the information using either a diagram or a matrix, but both are often employed because they each make certain kinds of questions and operations easier. This should sound familiar.)
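As a concrete illustration, here's one way the matrix view might be written down in code. The three states, transition probabilities, and utilities below are entirely invented -- a minimal sketch, not numbers from any real model -- and the simulation sketches that follow reuse them.

```python
# A hypothetical three-state model (Well, Sick, Dead) with invented numbers.
# TRANSITION[s][t] is the probability of moving from state s to state t in
# one cycle (here, one year); each row sums to 1. Dead is an absorbing state.
TRANSITION = {
    "Well": {"Well": 0.85, "Sick": 0.10, "Dead": 0.05},
    "Sick": {"Well": 0.05, "Sick": 0.70, "Dead": 0.25},
    "Dead": {"Dead": 1.0},
}

# Utility gained per year spent in each state (here, quality-adjusted life years).
UTILITY = {"Well": 1.0, "Sick": 0.6, "Dead": 0.0}
```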
To evaluate a Markov model, we can imagine a hypothetical patient who begins in some state, and then follow that patient until death. Each year of life, the patient gains the utility associated with the state s/he's in (and possibly with the time, if utility is discounted). Each year of life, the patient also has a given probability of transitioning to a new state. When the patient dies, we examine his/her accumulated utility. If we repeat this simulation for a few thousand patients, we get a pretty good idea of the total expected utility associated with a life beginning in the initial state. (We can also measure variance, confidence intervals, etc.) This approach to evaluating Markov models is called "Monte Carlo simulation".
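Here's a minimal Monte Carlo sketch in Python, reusing the hypothetical TRANSITION and UTILITY tables above (one-year cycles, no discounting):

```python
import random

def simulate_patient(start="Well"):
    """Follow one patient from the start state until death, crediting one
    year's worth of utility for each cycle spent alive."""
    state, total = start, 0.0
    while state != "Dead":
        total += UTILITY[state]
        # Draw the next state according to the current state's transition row.
        next_states, probs = zip(*TRANSITION[state].items())
        state = random.choices(next_states, weights=probs)[0]
    return total

# Repeat for a few thousand patients and average the accumulated utilities.
lifetimes = [simulate_patient() for _ in range(10_000)]
print(sum(lifetimes) / len(lifetimes))  # estimated expected utility per patient
```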
Another way to think about this is to imagine a few thousand patients at once, and use the transition probabilities to apportion them, each cycle, into groups that transition into the different states, adding up the utility gained by each group along the way. This is called "cohort simulation". You don't get variance measures this way, though, since the calculation is deterministic.
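A cohort version of the same hypothetical model might look like this; instead of tracking individuals, we push fractions of the cohort through the transition probabilities each cycle:

```python
cohort = {"Well": 10_000.0, "Sick": 0.0, "Dead": 0.0}  # everyone starts Well
total_utility = 0.0
while cohort["Well"] + cohort["Sick"] > 0.5:  # until (nearly) everyone is dead
    # Credit each living fraction of the cohort with this year's utility.
    total_utility += sum(n * UTILITY[s] for s, n in cohort.items())
    # Apportion the cohort into next year's states.
    new_cohort = {s: 0.0 for s in cohort}
    for s, n in cohort.items():
        for t, p in TRANSITION[s].items():
            new_cohort[t] += n * p
    cohort = new_cohort
print(total_utility / 10_000)  # expected utility per patient
```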
If you have two different cohorts of patients with different utilities, you run the simulation separately for each group. If utilities depend on past history, you can also create separate states associated with each past history.
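For instance, here's a sketch of how the hypothetical model above could be expanded so that utility depends on whether a patient has ever had a stroke; the split states and numbers are invented for illustration:

```python
# Split "Sick" by stroke history. A past stroke is permanent, so the
# post-stroke state never transitions back to a no-stroke state.
TRANSITION_HISTORY = {
    "Well":              {"Well": 0.85, "Sick, no stroke": 0.10, "Dead": 0.05},
    "Sick, no stroke":   {"Well": 0.05, "Sick, no stroke": 0.55,
                          "Sick, post-stroke": 0.15, "Dead": 0.25},
    "Sick, post-stroke": {"Sick, post-stroke": 0.75, "Dead": 0.25},
    "Dead":              {"Dead": 1.0},
}

# The same nominal health state carries a lower utility after a stroke.
UTILITY_HISTORY = {"Well": 1.0, "Sick, no stroke": 0.6,
                   "Sick, post-stroke": 0.4, "Dead": 0.0}
```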
If the transition probabilities don’t change with time, you can get an exact solution, without simulation, using matrix algebra.
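Here's a sketch of that exact solution for the hypothetical model above, assuming the utilities are also constant and undiscounted. It uses the standard fundamental-matrix result for absorbing Markov chains: if Q is the transition matrix restricted to the living states and u holds their per-cycle utilities, the expected total utilities v satisfy v = u + Qv, so v = (I - Q)^(-1) u.

```python
import numpy as np

# Transition probabilities among the living states only (Well, Sick),
# copied from the hypothetical TRANSITION table above.
Q = np.array([[0.85, 0.10],   # from Well: to Well, to Sick
              [0.05, 0.70]])  # from Sick: to Well, to Sick
u = np.array([1.0, 0.6])      # per-year utilities of Well and Sick

# Solve (I - Q) v = u rather than forming the matrix inverse explicitly.
v = np.linalg.solve(np.eye(2) - Q, u)
print(v)  # [9.0, 3.5]: expected lifetime utility starting in Well or Sick
```

The simulation sketches above should converge to these same numbers as the number of simulated patients grows.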
The article also discusses Markov-cycle trees, a way to represent the information that's more typically available clinically (e.g., the chance of death following surgery due to infection, rather than the overall chance of death following surgery). These are like recursive decision trees with only chance nodes.