Medicaid has been getting a lot of attention from politicians over the past few years. Democrats made expanding it a key part of Obamacare (if every state opts in, 14 million Americans will join), while Republicans have repeatedly voted to make huge cuts to the program. Supporters say Medicaid coverage improves the health and finances of poor Americans, while opponents claim it’s worse than being uninsured. And as states decide whether to expand their Medicaid programs under the new law, the debate has only gotten more intense.
We hoped that a recent study looking at what happened when Oregon expanded its Medicaid program would help settle the question of whether Medicaid works. Unfortunately the results were… complicated. If you read the Washington Post, you would have seen that “Depression rates drop for the uninsured with Medicaid coverage” while Bloomberg News announced “Medicaid coverage may not improve the health of poor in the U.S.,” and the New York Times reported “Medicaid Access Increases Use of Care, Study Finds.” All these headlines were based on the same study, and they’re all (sort of) true.
The question then is this: Can we make any conclusions about the effectiveness of Medicaid based on this study? Or is it simply, “a Rorschach test for how partisans and health care wonks view the new law”?
In the past, opponents of Medicaid have pointed to studies showing worse health outcomes for Medicaid patients after certain illnesses and medical procedures. However, these studies only show correlation, not causation– in other words, there’s something that Medicaid patients have in common that’s leading to worse outcomes. Sure, it could be Medicaid itself, but the more likely explanation is what’s known as “selection bias”:
“Medicaid enrollees tend to be sicker than uninsured patients and to have lower socioeconomic status, poorer nutrition, and fewer community and family resources. Medical and social service providers may also help the sickest or neediest patients to enroll in Medicaid — a more direct cause of selection bias. Few of these potential confounders can be completely addressed using commonly available clinical or population data.”
Studies that account for selection bias support this theory. When you compare similar populations in states with different Medicaid rules, you find that people are in better health and live longer when more people are able to enroll. Still, even these studies aren’t perfect– there could be other differences between states that the researchers haven’t accounted for.
To tell for sure whether Medicaid causes people to be in better or worse health, you’d want to do a “randomized controlled trial” (RCT). You’d take a big group of people and randomly assign them to two groups: one group would be given Medicaid coverage, the other would not, and then over time you’d look for health differences between the two. The random assignment is key– that way you’d know that any differences were caused by Medicaid, and not by one group being sicker or poorer than the other.
Which brings us to the Oregon study. Back in 2008, Oregon found itself with enough money to provide Medicaid coverage to an extra 10,000 low-income adults. There were many times that number who wanted in, so the state held a lottery. Thousands of adults were randomly assigned to a group that could apply for Medicaid, thousands more couldn’t get coverage– and a natural RCT was born.
After the first year, the results were encouraging for Medicaid:
Compared with the uninsured group, those in the Medicaid sample got 30 percent more hospital care, 35 percent more outpatient care and 15 percent more prescription-drug care. There were similar gains for preventive care; mammograms were up 60 percent and cholesterol monitoring rose 20 percent. The Medicaid recipients also had fewer unpaid bills sent to collection, were 25 percent more likely to report themselves in “good” or “excellent” health, and 10 percent less likely to screen positive for depression.
The latest study, published last week in the prestigious New England Journal of Medicine, looks at the health of over 12,000 adults two years after they entered the Oregon lottery. Most people assumed it would show Medicaid improved people’s health, but (at least at first glance) the actual results were mixed. The authors found Medicaid coverage:
- Had “no significant effect of Medicaid coverage on the prevalence or diagnosis of hypertension or high cholesterol levels or on the use of medication for these conditions”;
- “Significantly increased the probability of a diagnosis of diabetes and the use of diabetes medication,” but had “no significant effect on average glycated hemoglobin levels” (aka blood sugar levels);
- Decreased the probability of a positive screening for depression;
- Increased the use of many preventive services; and
- Nearly eliminated catastrophic out-of-pocket medical expenditures.
Conservatives have seized on the first part as supposed proof that Medicaid is bad for health. Liberals, meanwhile, have focused on the finding that people with Medicaid were less likely to have crippling medical bills.
So is that it? Does the answer to whether Medicaid is effective simply depend on which part of the study you choose to focus on? Not at all.
A quick stats lesson
The confusion comes from a misunderstanding of statistics. The authors of the study say that they found no significant effect of Medicaid on high blood pressure and cholesterol rates, or on blood sugar levels. That’s not the same as finding no effect.
“Significant” here refers to “statistical significance.” You might remember it if you took a high school or college stats course, but if not, here’s a quick refresher. Results are considered statistically significant when the chance of seeing a difference that big purely by luck– that is, if the treatment actually did nothing– is less than 5%. In other words, your results probably weren’t just a fluke. There’s a formula researchers use to calculate that probability– we won’t get into the math here, but the thing to know is that it depends on sample size. With a very large sample, even small differences can be significant, while with a small sample you’d need to see a big difference in the treatment group. And if the sample is too small, there’s no way to get a statistically significant result.
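To see how sample size drives significance, here’s a minimal sketch using a standard two-proportion z-test. The rates and group sizes below are illustrative assumptions, not numbers from the Oregon study’s actual analysis:

```python
from math import sqrt, erfc

def two_prop_p_value(p1, p2, n):
    """Two-sided p-value for a two-proportion z-test, n people per group."""
    p_pool = (p1 + p2) / 2                    # pooled rate (equal group sizes)
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)  # standard error of the difference
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))                  # two-sided tail probability

# Exact same effect (a condition's rate drops from 5.1% to 4.1%),
# measured at two different sample sizes:
small = two_prop_p_value(0.051, 0.041, 1_500)
large = two_prop_p_value(0.051, 0.041, 20_000)

print(f"p-value with 1,500 per group:  {small:.3f}")   # not significant
print(f"p-value with 20,000 per group: {large:.6f}")   # highly significant
```

The identical drop in rates is a statistical fluke at one sample size and an open-and-shut result at the other– that’s the whole story of the confusing headlines.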
In medical studies there’s also something called “clinical significance”– that’s how big a change researchers would have to see to say, “Hey, this treatment worked.” It’s something they estimate before the study, based on past experience. They do this to make sure that if they get a successful result, the study will have been big enough for it to be statistically significant.
What the results really show
So let’s say that before the study, the authors decided that if 20% fewer people in the Medicaid group have high blood sugar it would show Medicaid works pretty well. Is the study large enough for that to be a statistically significant result? Just over 12,000 adults participated– that seems like a huge sample size, until you look closer. Mother Jones’ Kevin Drum did the math:
In the Oregon study, 5.1 percent of the people in the control group had elevated hemoglobin [aka blood sugar] levels. Now let’s take a look at the treatment group. It started out with about 6,000 people who were offered Medicaid. Of that, 1,500 actually signed up. If you figure that 5.1 percent of them started out with elevated hemoglobin levels, that’s about 80 people. A 20 percent reduction would be 16 people.
The problem is that even though the study showed the Medicaid group had fewer people with dangerously high blood sugar, the sample size is so small it would be almost impossible for those results to be statistically significant. Same with cholesterol and hypertension.
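You can check this with Drum’s own back-of-the-envelope numbers– the ~1,500 people per arm and the 5.1% baseline rate are his rough assumptions, not the study’s exact design– run through a standard two-proportion z-test:

```python
from math import sqrt, erfc

# Drum's rough setup: ~1,500 enrollees, 5.1% baseline elevated blood sugar.
n = 1_500
baseline = round(n * 0.051)        # roughly 76 people with elevated levels
improved = round(baseline * 0.8)   # a 20% reduction leaves roughly 61

# Would that difference be statistically significant?
p1, p2 = baseline / n, improved / n
p_pool = (p1 + p2) / 2
se = sqrt(p_pool * (1 - p_pool) * 2 / n)
z = (p1 - p2) / se
p_value = erfc(z / sqrt(2))        # two-sided p-value; well above 0.05
print(f"{baseline} vs. {improved} people: p = {p_value:.2f}")
```

A difference of a dozen-odd people out of 1,500 simply can’t clear the significance bar, even though it’s exactly the kind of improvement you’d hope to see.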
The Incidental Economist’s Austin Frakt calculated how big the Oregon study would have to be to get a significant result for lowering blood sugar, and found that you would need a study with 7,500 people who enrolled in Medicaid– that’s five times as many as in this study.
By the way, only the results for hypertension, cholesterol, and blood sugar ran into the small-sample-size problem, since relatively few participants had those conditions. When the researchers looked at participants’ medical bills, many in the control group reported steep costs, while Medicaid “nearly eliminated catastrophic out-of-pocket expenditures” in the other group. Those results were easily statistically significant, meaning we can say with confidence that Medicaid works as health insurance.
As for judging Medicaid’s effect on health, Kevin Drum has it right:
Bottom line: It’s more likely that access to Medicaid did improve health outcomes than that it had zero or negative effects. It’s just that the study was too small to say that with certainty. For laymen, as opposed to stat geeks, the headline result of the Oregon study was “Possibly positive but inconclusive,” not “Had no effect.”