The landmark 1999 Institute of Medicine report “To Err Is Human” is considered by many to have launched the “new era” of patient safety. That report set a challenge goal of a 50% reduction in harm due to medical care over the following 5 years. No formal measurements have been available to determine the exact progress made to date, but most in the patient safety field think any nationwide gains have been modest, despite our ever-increasing knowledge of individual and bundled interventions with proven efficacy.
A recent report (Levinson 2010) showed that one in every seven hospitalized Medicare patients experienced an adverse event during their hospital stay, with up to 44% of those events potentially preventable.
Now a new study (Landrigan 2010) confirms that there has been little overall improvement over a long time frame. That study, done on data from 10 North Carolina hospitals, used IHI’s global trigger tool (see our October 30, 2007 Patient Safety Tip of the Week “Using IHI's Global Trigger Tool”) to estimate rates of patient harm and preventable harm over a 6-year period. The authors found essentially no reduction in harm over that period. They had chosen North Carolina because a much higher percentage of hospitals there had participated in patient safety collaboratives and in IHI’s patient safety programs.
While the IHI global trigger tool measures events somewhat differently than the studies that formed the basis of the IOM report, the global trigger tool methodology is “doable” with limited resources and provides a more reliable comparison over time.
Editorials by Bob Wachter and Michael Millenson offer some insight into why we have not made more progress. Wachter had previously felt that, despite the lack of adequate measurement in patient safety, significant progress was being made. However, he now acknowledges that the methodology used by Landrigan et al. really shows that our progress in patient safety has been disappointing. Millenson spares no words and attributes the lack of progress to the “three I’s”: invisibility, inertia, and income. He argues that the widespread occurrence of adverse events and medical errors still largely flies under everyone’s radar and that inertia continues to keep healthcare workers from doing things for which there is good evidence. He highlights the national average of 40% compliance with hand hygiene guidelines as evidence of a lack of commitment on the part of healthcare workers.
Landrigan et al. note that patient safety interventions such as CPOE (computerized physician order entry) and reductions in resident work hours should have resulted in reductions in harm due to medical treatment. But both of those interventions also have downsides. With the exception of a few hospitals that have developed their own CPOE systems in-house and integrated them with patient safety programs, most CPOE systems are still very rudimentary, lack sophisticated clinical decision support, and have been associated with unintended consequences that have largely cancelled out any improvements (see our multiple columns dealing with unintended consequences). Another new study on the impact of healthcare IT systems on quality (Jones 2010) has also shown mixed and modest results.
And the reduction in housestaff hours has the downside of increasing the number of handoffs that occur. We’ve often asked ourselves, “Would I rather be cared for by a sleepy resident who knows me or by a wide-awake floating resident who knows nothing about me?” Fatigue is a major concern, not just among housestaff but among all healthcare workers. Yet too many of our interventions reduce the critical communication that must take place between healthcare workers of all types.
Technological advances have also been double-edged swords. While promising significant improvements in various aspects of patient care, they also have their downsides. Our column above “ECRI’s Top 10 Health Technology Hazards for 2011” includes among ECRI’s list of technology hazards many of the technologies we have introduced to improve patient safety (e.g., healthcare IT, various alarms, PCA pumps).
We are also to blame in that many of the patient safety interventions we have implemented have turned out to adversely affect patient outcomes. In retrospect, the evidence for some of these interventions was “soft”. We hyped perioperative beta blockers for just about anyone undergoing surgery; now we’ve learned that they may actually increase mortality. We overdid it on prophylactic use of agents for gastric acid suppression; now we realize they may have played a role in the development of C. difficile infections and may even have increased the risk for ventilator-associated pneumonia (VAP), the very condition for which they were commonly being used. We made antibiotics within 4 hours for community-acquired pneumonia a quality and pay-for-performance standard, only to see many patients who turned out not to have pneumonia treated unnecessarily with antibiotics. Even SCIP (the Surgical Care Improvement Project) showed little impact of adherence to individual practices on patient outcomes (see our August 2010 What’s New in the Patient Safety World column “SCIP: Disappointing Outcomes on SSI’s. What’s Next?”).
But we still go back to the basic premise of John Nance’s “Why Hospitals Should Fly” (see our June 2, 2009 Patient Safety Tip of the Week “Why Hospitals Should Fly…John Nance Nails It!”) that healthcare has failed to produce significant patient safety improvement because we have failed to change the culture. Perhaps the biggest reason we have failed to truly develop a culture of safety is that many of the very interventions we espouse actually take away from face-to-face time with patients. That clearly is the case with many of the healthcare IT initiatives. The Jones paper notes that IT-related initiatives often compete with other quality improvement initiatives for resources. And many of the other administrative burdens we place on healthcare workers compete for their most valuable asset: time.
Wachter, in his editorial above, also stresses we need to focus more on culture change. Lori Paine and Peter Pronovost and colleagues at Johns Hopkins (Paine 2010) have demonstrated how a number of patient safety programs may lead to rapid improvements in the culture of safety. But the next critical step remains demonstrating that such improvements in culture actually translate to improvement in patient outcomes.
We’ve still made only limited efforts at introducing patient safety and a culture of safety into our professional school training. How often do you see nursing, medical, and pharmacy students actually working together as teams in any significant fashion before the later stages of their training?
While the IOM report is often referred to as “seminal” or “eye opening”, the Landrigan report is really a wake-up call. We clearly are not making the impact we should be.
Landrigan CP, Parry GJ, Bones CB, et al. Temporal Trends in Rates of Patient Harm Resulting from Medical Care. N Engl J Med 2010; 363: 2124-2134
Levinson DR. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries. Washington, DC: US Department of Health and Human Services, Office of the Inspector General; November 2010. Report No. OEI-06-09-00090
Wachter R. Could It Be That Patients Aren’t Any Safer? Wachter’s World (blog). November 25, 2010
Millenson M. Why We Still Kill Patients: Invisibility, Inertia, And Income. Health Affairs Blog. December 6, 2010
Jones SS, Adams JL, Schneider EC, Ringel JS, McGlynn EA. Electronic Health Record Adoption and Quality Improvement in US Hospitals. American Journal of Managed Care 2010; 16(12 Spec No.): SP64-SP71. Published online December 22, 2010
Nance, John J. Why Hospitals Should Fly: The Ultimate Flight Plan to Patient Safety and Quality Care. Bozeman MT: Second River Healthcare Press. 2008
Paine LA, Rosenstein BJ, Sexton JB, et al. Assessing and improving safety culture throughout an academic medical centre: a prospective cohort study. Qual Saf Health Care 2010; 19: 547-554