One night last July, I saw a woman sitting in San Francisco’s Civic Center BART station, wearing white socks with a vividly bright red pattern on them. Except that, as I got closer, I realized her legs were bare. That “red pattern” was actually a series of large open sores on her legs—which she’d attempted to bandage herself, using those free address labels you get at the post office.
Martha, whose name I’ve changed to protect her privacy, told me that she’d been in the hospital not long before (and she had the plastic bracelet to prove it), where she’d been treated for a staph infection. When they had discharged her, though, they hadn’t given her enough bandages to take with her.
I talked to her for a while and gave her what I could spare, and then I went to find someone to help. A transit cop just shook his head. But the woman in the fare booth (whom I’ll call Rachel) said, “Oh yeah, Martha is here a lot. Her father’s homeless too, and he comes to check on her. I’ll keep an eye on her.”
I could barely sleep that night. The next day, I went back to the station, with a big bag containing all the bandages, wound-care supplies and antibiotic ointments I had managed to buy at the local drug store. I couldn’t find Martha, but eventually tracked down Rachel the station agent, who told me that some paramedics had stopped by on their way to another call, and they had bandaged Martha’s legs. But Rachel was still happy to take the supplies I’d brought because they would definitely get used.
It’s pure sadism to make people like Martha suffer, just because they don’t have insurance and can’t afford full medical treatment. And even apart from the moral issues, I don’t believe it’s cost-effective, or in society’s interest, to have people with huge partially treated infections wandering around, especially given staph’s long history of developing antibiotic resistance. (A staph infection nearly killed me when I was in my early 20s, so this is especially terrifying to me.)
Now the Republicans are rushing through the Senate version of the cruel American Health Care Act, which is a huge, unprecedented rollback of health coverage for poor and disabled people. This means they’re finally admitting they have no alternative plan for covering those who depend on the current system. It also means we may finally be on the verge of settling a question that’s raged since before Ronald Reagan warned that Medicare would bring a “socialized dictatorship.” Is health care a “market,” or a public good, like clean air? Should I care if you don’t have health coverage—or is that just the consequence of a robust market economy, with winners and losers?
It’s a stark choice this time. If you believe that health care is just like any other free-market enterprise, then it’s fine for millions of poor people to lose coverage. But it’s encouraging to see most people in the United States coming together against this callous proposition.
Health care certainly looks like a business, at first glance. Doctors are entrepreneurs. There’s tons of advertising. Those of us who’ve been lucky enough to have employer-sponsored insurance may get to choose between different plans. There are sites where you can rate doctors, just like with restaurants on Yelp. And so on.
But the more you look at the health-care “industry,” the more bizarre and unlike any other economy it starts to look. It is a bewildering patchwork: You might not even know whether the health system where you are a patient is public or private, for-profit or nonprofit. Ditto health insurers.
One of my first gigs as a journalist, years ago, was covering health care for a local paper; later, I wrote for a variety of other health publications. And I was continually astonished at the quirks of our health-care system, which looked nothing like a tidy economic model of supply and demand, or consumer-driven “market forces.” These days, I’m mostly a science fiction writer, which means I think about how fictional worlds are constructed—and it would be hard for any author to invent anything as counter-intuitive as the system we’ve been constructing since World War II.
If patients were exercising consumer choice, or providers were responding to market forces, then you’d expect the same procedures, with the same frequency, in populations with the same income and health status.
Instead, one recent report from the Dartmouth Atlas of Health Care found that a “controversial” prostate cancer screening was more than twice as likely to be given in Miami as in Pensacola, Florida. Another Dartmouth dataset shows that Medicare patients in San Francisco receive about half as much hospital care in the last two years of life as their counterparts in Los Angeles. Some primary care doctors order twice as many CT scans as their colleagues in the same practice, perhaps because of individual doctors’ habits, relationships, or preferences.
In fact, health care is frequently the opposite of a standard economic model, in that increased supply leads to more demand, a phenomenon that the Dartmouth Institute calls “supply-sensitive care.” For example, the more hospital beds in a region, the more patients end up hospitalized, as though the beds themselves generated the need. Sometimes, there’s a crackerjack hand surgeon in one region, and she needs to be kept busy. Or, a doctor’s office might buy a CT scanner, or rent an MRI in a truck, and then order lots of tests to amortize the cost. And so on.
“Patient choice”—in this case, the choice not to consent to a procedure—is never going to be effective against “supply-sensitive care,” because if your doctor says you need a CT scan or a prostate cancer screening, you’ll probably want any test that has a chance, however remote, of saving your life. As Arthur Garson, Jr. and Carolyn L. Englehard observe in their eye-opening 2008 book Health Care Half-Truths, most patients are just like a car owner who visits a mechanic and says, “Fix it.”
Health Care Half-Truths also makes a key point: “Health care” encompasses a lot more than just “medical care.” In fact, “health care” includes a whole host of social welfare programs and institutions that contribute to keeping people healthy, many of which have nothing to do with doctors. The government contributes to health in various other ways, including childhood nutrition programs, food stamps, environmental protection, and education—and sometimes spending more money on this stuff means less money spent on hospital visits.
Meanwhile, up until about ten years ago, U.S. businesses as a whole were achieving startling productivity gains, but during that same period, medical care kept getting frustratingly more expensive. If the health-care industry really were an economy like any other, then it ought to be easy to achieve the same savings as your local widget manufacturer, through things like cost-cutting and supply-chain management. Indeed, in the 1990s, it looked for a while as though health maintenance organizations (HMOs) were going to rein in costs, through a very business-school-style blend of utilization management and lump-sum payments to providers. Bill Clinton, for example, sang the praises of HMOs’ bundling and cost-consciousness. But now, many commentators seem to believe that HMOs achieved a one-time gain in efficiency, followed by a resumption of long-term cost growth.
Plus, whatever savings insurance companies achieved kept getting diverted into wasteful overhead and profit—one of the weirdest terms in health insurance is the “medical loss ratio” (MLR), which refers to the percentage of premiums that go toward medical care. As a health-care reporter, I was told that profitable insurance companies ought to aim for an MLR of 60 percent or less, and some Iowa plans had MLRs as low as 48 percent. These days, however, the Affordable Care Act (ACA) requires health plans to spend at least 80 percent of premiums on care and “quality improvement.”
When it comes to skyrocketing costs, there’s some evidence that better information and more coordination are what can save money. The ACA included a number of experiments in “value-based reimbursement,” which means paying for medical care based on quality rather than quantity. One of these experiments saved half a billion dollars per year. Separately, the ACA made a dent in the growth of Medicare spending by cracking down on unnecessary procedures. (Here’s a detailed defense of the ACA, which I wrote when it became law.)
But even when we achieve some victories over runaway health inflation, it’s still nigh-impossible to reverse the trend. You can blame new technologies, patented drugs, or a ton of other factors—but a big part of the problem is that health care simply isn’t like the automobile industry, consumer electronics, or tennis shoes. As the Dartmouth data shows, health spending is driven by idiosyncratic local decisions, and by doctors and their relationships. And some of the biggest waste in health care comes from ill-fated attempts to constrain costs, including forcing the working poor to wait to see a doctor, and encouraging insurance-company stinginess.
Apart from the weird way that supply and demand function in health care, there’s a reason they call it “public health.” If your neighbors are healthy, then you’re likelier to be healthy too. With the Centers for Disease Control and Prevention set to lose funding for crucial programs that monitor the spread of infectious diseases, it’s all too easy to imagine that if enough people lose access to basic health care, you might fall prey to a new pandemic.
When you come down to it, medical care really isn’t like donuts—it’s a public good, like education, clean air, or clean water, and one that should be universally available. Access to all of those things is part of what enables everyone to become a full member of society. And when any of us lack that access, including to decent medical care, all of society suffers.