National hospital rankings contradict one another

A new study shows that four organizations that publish national hospital rankings disagree with one another so thoroughly that any consumer who consults more than one is likely to come away confused.

The study, “National Hospital Ratings Systems Share Few Common Scores and May Generate Confusion Instead of Clarity,” was published this month by Health Affairs. It compares hospital ratings done by U.S. News & World Report, Consumer Reports, the Leapfrog Group and Healthgrades.

“There was no organization that was rated at the top across all four,” says Timothy Vogus, associate professor of management at Vanderbilt University’s Owen Graduate School of Management and an author of the study. “That’s pretty alarming, because one would like to believe that quality is quality.”

Only three hospitals were named high performers by three of the four services. Only 10 percent of the 844 hospitals rated a high performer by one of the publications were rated a high performer by any of the others. Consumer Reports and U.S. News & World Report did not agree on a single high-performing hospital.

The lack of agreement among the four systems is explainable, given that each uses radically different measures to compile its ratings. Leapfrog and Consumer Reports use transparent rating systems, while U.S. News & World Report and Healthgrades use proprietary methods.

Leapfrog and Consumer Reports focus on hospital safety, though they do not define it the same way. Healthgrades’ “Top 50” and “Top 100” rankings stress hospitals that consistently perform well on patient outcomes, as measured by mortality and complication rates. U.S. News & World Report focuses on identifying the “best medical centers for the most difficult patients,” with the goal of helping consumers determine which hospitals provide the best care for the most serious or complicated medical conditions and procedures.

“While the lack of agreement among these rating systems is largely explained by their different foci and measures, these differences are likely not clear to many physicians, patients and purchasers,” says the Health Affairs article. “The complexity and opacity of the ratings is likely to cause confusion and information overload rather than driving patients and purchasers to higher quality, safer care.”

Vogus says that under current conditions, using a term such as “best hospitals” is “too vague.”

“What I think is important is that we gain some real clarity and transparency about what each of these services is actually measuring, so when people consult them they are making informed choices.”

Vogus serves on a safety panel that advises Leapfrog on which factors should make up its safety score. Other authors of the paper are:

  • J. Matthew Austin, assistant professor at the Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine.
  • Ashish K. Jha, professor of health policy and management at the Harvard T.H. Chan School of Public Health.
  • Peter J. Pronovost, professor of anesthesiology and critical care medicine, surgery, and health policy and management at Johns Hopkins University. He is also the director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins Medicine.
  • Patrick S. Romano, professor of medicine and pediatrics at the University of California, Davis, School of Medicine.
  • Sara J. Singer, associate professor of health care management and policy in the Department of Health Policy and Management at the Harvard T.H. Chan School of Public Health.
  • Robert M. Wachter, professor and associate chair in the Department of Medicine at the University of California, San Francisco, where he holds the Benioff Endowed Chair in Hospital Medicine.