Is Healthcare Too Big to Fail? Or is Failure Exactly What We Need?


U.S. hospitals face a looming challenge: cut costs, with potentially devastating impacts on their local communities, or risk going broke.

Sam Glick


The following article is the first in an occasional series on BRINK looking at how countries are tackling rising health costs. Oliver Wyman's Sam Glick kicks off the series with his analysis, republished below, of the current financial pressures facing U.S. hospitals and the tough choices they entail:

There is a looming challenge facing U.S. hospitals, which are being forced either to reduce costs, at the expense of potentially devastating impacts on their local communities, or to take less aggressive cost-cutting measures and risk going broke.

The backdrop to this veritable Sophie’s Choice has developed through a series of public policy and market moves that shift financial risk onto local health systems with little experience managing it. When the hospital is the largest employer in many towns, with financing coming from insurance companies and mutual funds, we have the makings of 2008-financial-crisis-style systemic risk.

This year, nearly one of every five dollars spent in the U.S. will go to healthcare. As a percentage of GDP, that is nearly twice the global average, yet we receive no clear benefit from a significant portion of this spending. The U.S. ranks first in per capita healthcare spending but last in the Commonwealth Fund’s assessment of health system performance across 11 major developed countries. As a society, we have a healthcare return-on-investment problem.

The U.S. healthcare challenge isn’t news. Politicians, academics, physicians, insurance executives, and countless others have been working on this problem for decades. Fifty years ago last summer, Lyndon Johnson signed the legislation that created Medicare and Medicaid, bringing millions of people into the healthcare system and firmly establishing the government’s role in the provision of healthcare. In 1974, Gerald Ford signed the Employee Retirement Income Security Act (ERISA) into law, setting clear rules for employer-provided health insurance. And in 2010, the Patient Protection and Affordable Care Act (PPACA), or “Obamacare,” further expanded access to healthcare and provided incentives for improving outcomes and reducing costs; the law is intended to make affordable healthcare available to all U.S. citizens by letting them choose coverage in an open, competitive insurance market.

Despite these efforts and many others, how Americans spend their healthcare dollars hasn’t changed in more than half a century. According to the California Healthcare Foundation, in 1960, 39 percent of U.S. healthcare spending went to hospitals, 24 percent went to physician and clinical services, 12 percent went to drugs, and the rest went to everything else. In 2014, 38 percent went to hospitals, 24 percent to physicians, and 12 percent to drugs. All we’ve done is make the pie bigger; we still carve it up the same way we did in 1960.

Why is this distribution of spending a problem? Study after study tells us that the best way to keep people healthy while simultaneously reducing cost is to shift sites of care—that is, invest in preventive measures, catch issues early, and care for people in the least-intensive way possible. If we can turn hospital stays into same-day discharges, emergency room ordeals into urgent care visits, and doctor’s appointments into telemedicine calls, we can make a big dent in the unsustainable healthcare cost trend while producing better outcomes.

For most of the history of U.S. healthcare, it has been the role of public and private health insurers to keep healthcare costs under control. They did so by creating treatment guidelines, requiring prior authorizations, shaping co-pays and deductibles to steer people toward lower-cost options, and employing a variety of other tricks of the trade. Importantly, they also built up the capabilities to pool, price, and control risk through sophisticated actuarial, underwriting, and balance sheet management techniques. Because of this, their cost control efforts only had to work in aggregate: even if a particular group of members or providers generated extraordinary costs, the insurer was unlikely to face bankruptcy.

Healthcare has unique consumption dynamics that mean these traditional insurance techniques can only go so far in controlling costs. Decision-making about the consumption of healthcare is deeply personal, and most decisions are made by patients and their doctors. The most expensive piece of healthcare equipment, so the saying goes, is a ballpoint pen: through the orders and prescriptions that they do (or don’t) write, physicians have broad control over the amount and effectiveness of healthcare spending.


If physicians have control over healthcare spending and they’re the ones most qualified to make healthcare decisions, a natural solution to this problem would be to give physicians financial incentives to control healthcare costs. If physicians can consider real cost-benefit tradeoffs in making medical decisions, everyone should be better off—insurers, employers, patients, the government, and society overall.

In the 1970s, we moved in exactly this direction through the creation of health maintenance organizations (HMOs). In an HMO, physicians (or the health system of which they are a part) are given a fixed amount of money to provide care for a patient. Keeping costs low is now the delivery system’s responsibility, rather than just the insurer’s.

Despite the initial enthusiasm for HMOs (more than 80 million people were enrolled in HMOs in 2000, up from fewer than 10 million in 1980), they created new problems. There were well-publicized cases of newly incentivized hospitals and physicians keeping people from receiving the care they needed (or at least thought they needed). HMOs did work in many areas (and still do in places such as California), but membership began to wane after 2000 as consumers turned against the model and macroeconomic factors temporarily reduced the growth in healthcare costs.

Obamacare has placed a new emphasis on shifting the incentives for controlling healthcare costs to physicians and hospital systems, moving beyond the basic HMO model and requiring specific performance on a number of healthcare quality measures. By 2018, the Department of Health and Human Services (HHS) aims to have 50 percent of Medicare payments tied to quality measures; private insurers are quickly following suit. Simultaneously, Oliver Wyman projects that by 2018 a full 16 percent of healthcare payments will be contingent on health systems controlling costs, with that percentage continuing to rise into the 2020s.

This brings us back to sites of care and the 38 percent of healthcare dollars that currently go to hospitals. If these new incentives work as intended, we should see the healthcare cost trend come down as spending is reallocated to physicians’ offices, new virtual care modalities, and more effective drugs.

Moving Spending Away From Hospitals No Easy Feat

Yet moving spending away from hospitals is harder than it looks. The biggest cost in operating a hospital is labor. According to the Kaiser Family Foundation, more than 12 million people work in healthcare (twice as many as in financial services), many of them in hospital-related jobs. To reduce hospital spending, we need to reduce labor spending, and that means eliminating jobs. When the hospital is one of the largest employers in town (as it is in cities from New York to San Diego), such labor reductions can have significant economic and political impacts.

The situation gets even more difficult. According to the American Hospital Association, 83 percent of U.S. hospitals are either not-for-profit or government-owned. This means that most hospitals in the U.S. were financed with tax-exempt bonds, and bondholders are counting on hospital revenues for repayment. And who owns these bonds? Retirees and property and casualty insurers looking for stable, low-risk income.

Now hospital systems face a conundrum: reduce the cost of care in a material way by moving services out of hospitals, potentially delivering a significant economic blow to their communities, or take more incremental cost-control measures that spare the community but risk not getting paid enough by insurers and the government to cover their costs. This choice, of course, is being made as insurers, which are skilled at managing financial risk, shift that risk onto delivery systems, most of which don’t have a sophisticated risk management infrastructure in place.


All this has the makings of an economic crisis: risk being shifted from organizations that can manage it well to those that can’t; systemically important, undiversified community hospital systems facing significant community and political pressure not to make tough cost-reduction decisions; and those hospital systems being financed largely by the nation’s insurance companies and mutual funds.

Not all hope is lost, however. There are examples of healthcare delivery systems—from Kaiser Permanente in California to Intermountain Healthcare in Utah to Inova in Virginia—that have made real investments in both population health and enterprise risk management, and those investments are paying off. Now we need other health systems to learn from them.

We must also accept the reality that taking material cost out of healthcare will mean closing hospitals and laying off employees, and that this kind of creative destruction is fundamental to improving health outcomes. We also need to reconsider how we finance capital investments in healthcare, including whether tax-exempt bonds issued for the construction of buildings will continue to serve us well in the new healthcare environment.

If the 2008 financial crisis taught us anything, it’s that changing rules and poorly understood interdependencies can spell disaster for the U.S. economy. Let’s make it so that we don’t have to learn that lesson again in healthcare.
