What’s New in the Patient Safety World

April 2014

Checklists Don’t Always Lead to Improvement



Among our numerous columns dealing with the use of checklists in healthcare and other industries, the WHO Surgical Safety Checklist has probably attracted the most attention. Our July 1, 2008 Patient Safety Tip of the Week “WHO’s New Surgical Safety Checklist” described the tool and provided the link to download the checklist and instructions on how to use it. We also discussed checklist design and use in our September 23, 2008 Patient Safety Tip of the Week “Checklists and Wrong Site Surgery”.


In our January 20, 2009 Patient Safety Tip of the Week “The WHO Surgical Safety Checklist Delivers the Outcomes” we discussed the striking improvements in patient outcomes following implementation of the WHO Surgical Safety Checklist at hospitals in eight different countries. Haynes and colleagues (Haynes 2009) demonstrated that 30-day post-operative mortality decreased from 1.5% before introduction of the checklist to 0.8% after. The rate of any complication decreased from 11% to 7%. Both these outcomes were highly statistically significant. That’s a relative risk reduction of approximately 47% for mortality and 36% for complications!
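As a quick check of the arithmetic, the relative risk reduction is simply (baseline rate − new rate) / baseline rate. The small sketch below applies that formula to the before/after figures reported in the Haynes study; the helper function name is our own, not anything from the paper:

```python
def relative_risk_reduction(before_pct, after_pct):
    """Relative risk reduction, expressed as a percentage of the baseline rate."""
    return (before_pct - after_pct) / before_pct * 100

# Before/after rates reported by Haynes 2009
mortality_rrr = relative_risk_reduction(1.5, 0.8)       # 30-day mortality: 1.5% -> 0.8%
complication_rrr = relative_risk_reduction(11.0, 7.0)   # any complication: 11% -> 7%

print(f"Mortality RRR: {mortality_rrr:.0f}%")        # ~47%
print(f"Complication RRR: {complication_rrr:.0f}%")  # ~36%
```

The same formula applied to the Ontario figures (0.71% → 0.65% mortality) yields a far smaller relative reduction of roughly 8%, which helps frame the contrast discussed below.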


That striking improvement in outcomes occurred even without complete adherence to all items on the checklist. We discussed the debate as to whether the striking improvement was attributable to use of the checklist per se or to the change in “culture” that accompanied use of the checklist.


But recent widespread adoption of a surgical checklist in over 100 hospitals in Ontario, Canada failed to demonstrate significant reductions in adjusted rates of mortality or complications (Urbach 2014). In the Canadian study the rate of any complication decreased from 3.86% to 3.82% and 30-day post-operative mortality decreased from 0.71% to 0.65%, neither change being statistically significant. There were also no significant changes in rates of hospital readmission or emergency department visits within 30 days after discharge. This result was surprising, especially since self-reported compliance with the checklist was over 90% at almost all participating hospitals.


In an accompanying editorial Lucian Leape discusses potential reasons for the lack of evidence of improvement in Ontario hospitals after implementation of the checklist (Leape 2014). He suspects the most likely reason was that the checklist was not actually used as intended, despite the reported 98% compliance in the study. He also notes that since the checklist was adopted unmodified in 90% of hospitals, the local buy-in and team building needed for true adoption were likely lacking. And he feels that the 3-month measurement period was probably far too short for significant improvement to have occurred.


We have several of our own thoughts on why the Ontario study failed to show improvement after implementation of the checklist and thus differed from the prior studies. One is that the results of the original study (Haynes 2009) were almost too good to be true. The school of thought popularized by John Ioannidis, which holds that “if it sounds too good to be true, it probably is not true” (Ioannidis 2005, Pereira 2012), might apply here. That school concludes that most very large treatment effects emerge from small studies and that, when additional trials are performed, the effect sizes typically become much smaller. That is a distinct possibility here, given that the Ontario study was so much bigger than any of the other studies.


But the Haynes study was not the only one to demonstrate striking improvement in outcomes after implementation of a surgical checklist. Improvements of a similar magnitude were seen after implementation of the SURPASS checklist (de Vries 2010), a comprehensive checklist that covers the entire surgical pathway, including pre- and post-operative care as well as events within the OR (see our November 30, 2010 Patient Safety Tip of the Week “SURPASS: The Mother of All Checklists”). After implementation of SURPASS the number of complications per 100 patients dropped from 27.3 to 16.7 and in-hospital mortality dropped from 1.5% to 0.8%. Notably, outcomes at several comparable “control” hospitals did not change, and no other significant programs were introduced at the time, further suggesting that the improvements were due to implementation of the checklist. Moreover, complication rates were significantly lower in those patients for whom 80% or more of the checklist items were completed. The authors are quick to note, however, that the benefits are not due to the checklist alone but also to the “culture of safety” that develops from such an implementation. Also of note, these improvements occurred at hospitals that already had relatively high levels of quality of care.


Another study (van Klei 2012) demonstrated a 15% reduction in adjusted mortality rates after implementation of WHO’s Surgical Safety Checklist and showed outcomes were better in those with full checklist completion compared to those with partial completion or noncompliance.


Another group implemented both team training and a comprehensive surgical checklist and demonstrated a significant reduction in 30-day morbidity (Bliss 2012). Overall adverse event rates decreased from 23.6% in historical control cases to 15.9% in cases with team training alone and 8.2% in cases with checklist use.


A recent systematic review of the impact of surgical checklists (Treadwell 2014) noted that 10 of 21 studies on implementation of surgical checklists included data on outcomes. The outcomes reported were generally favorable, showing decreases in both in-hospital mortality and complication rates.


So did all these prior studies overestimate the impact of checklist implementation on patient outcomes? Only time will tell.


Another possible reason for the difference between the Ontario study and the other studies has to do with differences in the baseline (historical) levels of mortality and complications. While it’s not clear that exactly the same things were being measured across studies, the rate of any complication decreased from 11% to 7% in the Haynes study compared to 3.86% to 3.82% in the Ontario study. Thirty-day post-operative mortality decreased from 1.5% to 0.8% in the Haynes study compared to 0.71% to 0.65% in the Ontario study. So was it that the Ontario hospitals were already performing at a relatively high level and had less room to improve? The study of SURPASS checklist outcomes (de Vries 2010) showed improvements comparable to those in the Haynes study and was done at hospitals said to be already performing at high levels (note, however, that the mortality rates in the Ontario study were lower than those in the SURPASS study even after the latter improved).


Another consideration is whether the 3-month measurement timeframe (both before and after implementation) in the Ontario study was simply too short to demonstrate improved outcomes. Well, that was the same timeframe used in the SURPASS study and they demonstrated striking improvement.


Our last consideration has to do with Lucian Leape’s comments about the fact that the checklist was seldom modified at the local level in the Ontario hospitals. That suggests that the “culture” aspect of using checklists never developed. In fact, the Ontario hospitals were basically mandated to use a checklist. We all know that our medical staffs don’t like to have anything “foisted upon them” from the outside. So there may well have been a difference compared to the hospitals that voluntarily (and usually enthusiastically) adopted the checklists in some of the other studies.


The systematic review by Treadwell et al. (Treadwell 2014) cautions that the association between checklists and improved outcomes does not necessarily imply causation. First, they note that checklists are often implemented as part of a multifaceted strategy to improve care. They also note there may be reporting bias (i.e. perhaps only those with positive outcomes reported outcome data). And, third, it’s possible that not all surgical checklists are beneficial.


It’s important to keep in mind that none of these studies was a randomized controlled trial (RCT), and there are several practical barriers to doing such an RCT. They all had before/after observational designs, and it is conceivable that factors other than the checklist itself were important. Indeed, we strongly suspect that the change in culture is probably more important than the checklist per se. Developing checklists is not enough. You need to involve your staff in development of those checklists and educate all staff in their importance and implementation. You also need to audit the use of and adherence to the checklists you develop, and that audit should cover anything you develop a checklist for, not just a safe surgery checklist.


Don’t let the Ontario study dissuade you from using checklists. Checklists are some of the most valuable tools we have available in quality improvement and patient safety. They are simple and save time in the long run. Most take only minutes to complete. They are also the least expensive of all tools. All the items in the WHO Surgical Safety Checklist have negligible financial costs. None of the favorable studies above published the likely financial savings resulting from the improvements, but they would obviously be substantial. The ROI on checklists is incredibly high, in both human and financial terms.




Some of our prior columns on checklists:







Haynes AB, Weiser TG, Berry WR, et al. for the Safe Surgery Saves Lives Study Group. A Surgical Safety Checklist to Reduce Morbidity and Mortality in a Global Population. N Engl J Med 2009; 360: 491-499




WHO Surgical Safety Checklist




Urbach DR, Govindarajan A, Saskin R, et al. Introduction of Surgical Safety Checklists in Ontario, Canada. N Engl J Med 2014; 370: 1029-1038




Leape LL. The Checklist Conundrum (editorial). N Engl J Med 2014; 370:1063-1064




Ioannidis JP. Why most published research findings are false. PLoS Med 2005; 2(8): e124




Pereira TV, Horwitz RI, Ioannidis JPA. Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA 2012; 308(16): 1676-1684




de Vries EN, Prins HA, Crolla RMPH, et al. for the SURPASS Collaborative Group. Effect of a Comprehensive Surgical Safety System on Patient Outcomes. N Engl J Med 2010; 363: 1928-1937




van Klei WA, Hoff RG, van Aarnhem EEH, et al. Effects of the Introduction of the WHO “Surgical Safety Checklist” on In-Hospital Mortality: A Cohort Study. Ann Surg 2012; 255: 44-49




Bliss LA, Ross-Richardson CB, Sanzar LJ, et al. Thirty-Day Outcomes Support Implementation of a Surgical Safety Checklist. J Am Coll Surg 2012; 215: 766-776





Treadwell JR, Lucas S, Tsou AY. Surgical checklists: a systematic review of impacts and implementation. BMJ Qual Saf 2014; 23: 299-318















