Back in the 1990s, the Niagara Health Quality Coalition was looking for measures for public reporting of hospital quality. Even then we recognized that mortality rates, except for a few specific conditions, were poor indicators of quality of care. Yet despite critical reviews by well-respected physicians in the patient safety field (Iezzoni 1997; Shojania 2008; Lilford 2010), standardized hospital-wide mortality rates continue to be widely used as a measure of quality of care.
Now a new study has attempted to determine whether hospital-wide standardized mortality rates correlate with the more important measure: avoidable deaths. Hogan and colleagues (Hogan 2015) did case record reviews of deaths in 34 English hospital trusts and determined rates of avoidable deaths, then compared these to two commonly used measures: the hospital standardized mortality ratio (HSMR) and the Summary Hospital-level Mortality Indicator (SHMI).
The proportion of avoidable deaths was actually quite low (3.6%). More importantly, there was only a weak, non-statistically-significant correlation between the proportion of avoidable deaths and either the HSMR or the SHMI.
The editorial accompanying the Hogan paper (Doran 2015) concludes that evidence is mounting that there is no future for summary mortality rates. The authors note that, no matter how carefully such rates are risk adjusted, there is variation both in risk across hospitals and in performance within hospitals, as well as variation in the availability of alternative places to die. And now that summary mortality rates have also been shown not to correlate with avoidable deaths, we should see the “death of death rates”.
So should we instead switch to the methodology used by Hogan and colleagues and focus on avoidable death rates? Ideally we would. But medical chart review is labor intensive (and the electronic medical record actually does little to expedite such review of records for quality and avoidability of death). Moreover, despite multiple measures to reduce variability in the reviewers’ conclusions, the inter-rater reliability in the Hogan study was modest at best. And the very low rates of avoidable deaths found would require very large numbers of reviewed deaths to permit statistically significant comparisons.
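To see why the low avoidable-death rate is such a barrier, consider a rough sample-size sketch. The 3.6% baseline rate comes from the Hogan study; the choice of a 50% relative difference between hospitals, and the conventional 5% significance level and 80% power, are our own illustrative assumptions, not figures from the paper:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate number of reviewed deaths needed per hospital to
    distinguish two avoidable-death proportions p1 and p2
    (standard two-proportion formula; two-sided alpha = 0.05,
    power = 0.80 with the default z values)."""
    p_bar = (p1 + p2) / 2
    # variability under the null (common proportion) and the alternative
    term_null = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
    term_alt = z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(((term_null + term_alt) / (p1 - p2)) ** 2)

# Detecting a hospital whose avoidable-death rate is 50% higher than
# the 3.6% average (i.e. 5.4%) takes roughly 2,000 reviewed deaths
# in each hospital:
print(n_per_group(0.036, 0.054))
```

With rare events, even a large relative difference between hospitals translates into a tiny absolute difference, so the required chart-review workload quickly becomes impractical.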
We’ve often discussed use of trigger tools to help identify cases of suboptimal care (see our columns on trigger tools listed below). The concept is that the trigger tool flags cases that have a high likelihood of errors or suboptimal care, and more intensive chart review then takes place. These are wonderful tools to use within a hospital to identify vulnerabilities and areas in need of improvement. But it would be difficult to compare hospitals using these tools because the frequency with which the triggers occur varies considerably across hospitals.
Mortality rates are still useful measures in randomized controlled trials of drugs, devices, or surgical procedures. And they are useful for comparing outcomes for conditions in which very rich datasets are available (e.g., coronary artery bypass). But using hospital-wide mortality rates based on administrative data is not helpful. It may inappropriately flag some hospitals as “quality outliers” while, as Doran et al. put it, “hazardous hospitals lurking inside the funnel are assumed to be safe”.
Some of our prior columns on trigger tool methodology:
References:
Iezzoni LI. Assessing quality using administrative data. Ann Intern Med 1997
Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ 2008; 179: 153-157
Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010; 340: c2016
Hogan H, Zipfel R, Neuburger J, et al. Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis. BMJ 2015; 351: h3239
Doran T, Bloor K, Maynard A. The death of death rates? Using mortality as a quality indicator for hospitals. BMJ 2015; 351: h3466