We’ve often discussed the issue of test results pending at discharge (see our March 1, 2011 Patient Safety Tip of the Week “Tests Pending at Discharge” and the list of other columns below). In our August 21, 2012 Patient Safety Tip of the Week “More on Missed Followup of Tests in Hospital” we discussed barriers to using computer technology to alert physicians to test results pending at discharge. One barrier was that the physician had to remember to log into a site to see if any test results had come back. A second barrier was that the list of pending tests was often long and included many results that were not deemed important. That, of course, led to alert fatigue when we attempted to notify physicians of those results at CPOE logon.
In that column we noted a potential solution developed by Dalal and colleagues (Dalal 2012) that utilized automated email alerts as pending test results came in after discharge. Their preliminary results were encouraging. That group now reports on longer term results (Dalal 2013). Their methodology was a cluster randomized controlled trial. Intervention attending physicians and PCPs were significantly more aware of the results of tests that were pending at discharge (76% vs. 38% for attending physicians and 57% vs. 33% for PCPs). Attending physicians tended to be more aware when the TPAD (tests pending at discharge) results were actionable. Satisfaction with the system was high for both attendings and PCPs.
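The core logic of such a system is straightforward: match newly finalized results against the list of tests that were still pending when the patient left the hospital, and push a message to both the inpatient attending and the PCP. A minimal sketch follows; all the names, fields, and the dual-recipient structure here are hypothetical illustrations, not the actual Dalal implementation.

```python
from dataclasses import dataclass

@dataclass
class PendingTest:
    """One test still pending at the time of discharge (hypothetical schema)."""
    patient_id: str
    test_name: str
    attending_email: str
    pcp_email: str
    actionable: bool = False  # flag results the discharging team deemed actionable

def build_alerts(pending, finalized_results):
    """Return (recipient, message) pairs for results finalized after discharge.

    `finalized_results` maps (patient_id, test_name) -> result text.
    Both the attending and the PCP are notified, mirroring the dual-notification
    design described in the Dalal studies.
    """
    alerts = []
    for test in pending:
        key = (test.patient_id, test.test_name)
        if key in finalized_results:
            flag = "ACTIONABLE: " if test.actionable else ""
            msg = (f"{flag}Post-discharge result for patient {test.patient_id}: "
                   f"{test.test_name} = {finalized_results[key]}")
            for recipient in (test.attending_email, test.pcp_email):
                alerts.append((recipient, msg))
    return alerts
```

In a real deployment this loop would run as results interface messages arrive, and the pairs would feed an email gateway rather than a list; the point of the design is that the physician no longer has to remember to log in and look.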
This is an excellent and practical intervention for preventing significant test results from falling through the cracks at a key transition of care.
See also our other columns on communicating significant results:
Dalal AK, Schnipper JL, Poon EG, et al. Design and implementation of an automated email notification system for results of tests pending at discharge. J Am Med Inform Assoc 2012; 19(4): 523-528
Dalal AK, Roy CL, Poon EG, et al. Impact of an automated email notification system for results of tests pending at discharge: a cluster-randomized controlled trial. J Am Med Inform Assoc 2013; Published Online First: 23 October 2013
We’ve done several columns on early warning systems or scores using physiologic parameters to help identify patients with early clinical deterioration (see list at the end of today’s column). Most have been used in adult medical/surgical populations utilizing the MEWS (modified early warning score) or in pediatric populations utilizing the PEWS (pediatric early warning score).
In the UK a modified early obstetric warning system (MEOWS) has been used in obstetric inpatients to track maternal physiological parameters and to aid early recognition and treatment of clinical deterioration, following a recommendation in the 2003–2005 Confidential Enquiry into Maternal and Child Health report (Lewis 2007).
MEOWS tracks the following parameters:
· blood pressure
· heart rate
· respiratory rate
· oxygen saturation
· conscious level
· pain scores
Trigger thresholds are set for “red” (most serious) and “yellow” (somewhat less serious) triggers, and algorithms describe what responses are indicated for each type of trigger. The MEOWS was subsequently validated in 676 consecutive obstetric admissions at one UK hospital (Singh 2012). 30% of the patients triggered and 13% had morbidity, including haemorrhage, hypertensive disease of pregnancy, and suspected infection. The MEOWS was 89% sensitive and 79% specific, with a positive predictive value of 39% and a negative predictive value of 98%. The authors suggest that MEOWS is a useful bedside tool for predicting morbidity. They attributed the high sensitivity to the fact they used morbidity rather than mortality or ICU admissions as their primary end point because the latter are relatively rare in obstetric populations (in fact, there were no admissions to the intensive care unit, cardiorespiratory arrests or deaths during their study period). They suggest that adjustment of the trigger parameters may improve positive predictive value.
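It's worth noting that the reported predictive values follow arithmetically from the sensitivity, specificity, and the 13% morbidity prevalence in the study cohort. The short sketch below reproduces the Singh figures and makes clear why PPV is so sensitive to prevalence and trigger thresholds:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Derive positive and negative predictive values from test
    characteristics and disease prevalence (standard 2x2 arithmetic)."""
    tp = sensitivity * prevalence            # true positive rate in the cohort
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Singh 2012: sensitivity 89%, specificity 79%, morbidity prevalence 13%
ppv, npv = ppv_npv(0.89, 0.79, 0.13)  # yields roughly 0.39 and 0.98
```

Plugging in the Singh numbers recovers the reported 39% PPV and 98% NPV, and the same function shows that tightening the trigger thresholds (raising specificity at the cost of sensitivity) is indeed the lever for improving PPV, as the authors suggest.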
So MEOWS has been used in UK obstetrical settings for about a decade now. A recent study, however, has called into question its practical usefulness, impact on workload, and overall adherence (Mackintosh 2014). The Mackintosh study actually provides excellent insight into the contextual, cultural, hierarchical, organizational and practical barriers to implementing EWS programs. The authors did interviews and chart reviews on maternity wards at 2 UK hospitals, both of which presumably had guidelines that included use of the MEOWS. At one hospital the MEOWS chart was used in only 22% of postnatal cases.
The authors point out that a big barrier was the lack of a sound evidence base for MEOWS and the fact that it was often perceived as being foisted upon staff by outside forces for “political” and economic reasons despite lack of demonstration that it actually improved outcomes. Many staff apparently felt that MEOWS might have some value in high-risk cases but that it was of little value in the much more prevalent “healthy” pregnancies and deliveries. It was felt that the potential gains from MEOWS in this “healthy” population would likely be small. Many of the midwives felt that it resulted in a number of unnecessary interventions and took away some of their autonomy and clinical judgment.
A major problem at one of the hospitals was that even when the algorithm called for the midwives to “call a doctor”, those doctors often either did not respond or did not listen to the midwives. Lack of timely responsiveness by such physicians obviously is a significant barrier to implementation of a system like MEOWS that requires escalation of care based upon physiological triggers.
Part of the problem also had to do with workload and efficiency. Midwives might be required to enter vital signs in up to 4 separate places.
Interestingly, they found that obstetrical units were often treated differently by the organization than med/surg units, a distinction often embraced by the obstetrical units themselves.
As an aside, both the Mackintosh and Singh papers note that respiratory rate is a vital sign that is often incompletely monitored. Given our numerous columns on the hazards of opioid-induced respiratory depression, one might argue that respiratory rate could be extremely valuable in detecting patients at risk of clinical deterioration in a population of patients who may be receiving opioids.
There were, however, some positive impressions of the MEOWS program. Many staff commented on how the MEOWS charts made things like vital signs visually apparent, compared to previously being “hidden” in the medical record, making it easier to see trends. And they were able to cite individual cases where early recognition of deterioration likely led to interventions that prevented adverse outcomes.
The Mackintosh paper could really be a sociology paper about change management in general, since so many of the barriers noted are encountered almost any time we look to change the way things are done. But the insights point out that despite the MEOWS concept being “simple” and “intuitively attractive” one needs to look hard at the evidence base and demonstrate that it actually improves outcomes rather than just creating more work. (Note that studies like Singh’s demonstrated that MEOWS can identify patients at risk of deterioration but were not designed to show that implementation of MEOWS improved actual patient outcomes.) Programs where we can clearly demonstrate a positive impact after implementation are much easier to maintain. Some very good lessons here!
Some of our other columns on MEWS or recognition of clinical deterioration:
· February 26, 2008 “Nightmares: The Hospital at Night”
· April 2009 “Early Emergency Team Calls Reduce Serious Adverse Events”
· December 15, 2009 “The Weekend Effect”
· December 29, 2009 “Recognizing Deteriorating Patients”
· February 22, 2011 “Rethinking Alarms”
· March 15, 2011 “Early Warnings for Sepsis”
· October 18, 2011 “High Risk Surgical Patients”
· March 2012 “Value of an Expanded Early Warning System Score”
· September 11, 2012 “In Search of the Ideal Early Warning Score”
· May 2013 “Ireland First to Adopt National Early Warning Score”
· September 17, 2013 “First MEWS, Now PEWS”
Lewis G (ed.) Saving Mothers’ Lives: Reviewing maternal Deaths to make Motherhood Safer 2003–2005. The Seventh Confidential Enquiry into Maternal Deaths in the United Kingdom. London: CEMACH, 2007
Singh S, McGlennan A, England A, Simons R. A validation study of the CEMACH recommended modified early obstetric warning system (MEOWS). Anaesthesia 2012; 67(1): 12–18 Published Online First: 9 November 2011
Mackintosh N, Watson K, Rance S, Sandall J. Value of a modified early obstetric warning system (MEOWS) in managing maternal complications in the peripartum period: an ethnographic study. BMJ Qual Saf 2014; 23(1): 26-34 Published Online First: 18 July 2013 doi:10.1136/bmjqs-2012-001781
In our March 2013 What’s New in the Patient Safety World column “Diagnostic Error in Primary Care” we highlighted a study by Singh and colleagues (Singh 2013) on diagnostic errors in primary care that used a trigger tool methodology. Now that group (Murphy 2014) has gone a step further and used trigger tool methodology to identify cases in ambulatory care where certain red flags for possible cancers have not been acted upon.
The beauty of trigger tool methodology (see our Patient Safety Tips of the Week for October 30, 2007 “Using IHI's Global Trigger Tool” and April 15, 2008 “Computerizing Trigger Tools”) is that it allows you to screen a large number of records for those that potentially contain the item you are really looking for. Then, in the subset of records that contain the “trigger”, you can do manual chart review to verify details. It basically streamlines what would otherwise be a labor- and time-intensive process that would not be practical.
One of the diagnostic errors we’ve talked about most frequently is the missed diagnosis due to failure to follow up on test results (see the list of our prior columns on this topic at the end of today’s column). The Murphy study identified 4 markers (or “red flags”) of potential cancer (elevated PSA level, positive fecal occult blood test, hematochezia, and iron deficiency anemia) to be used as the “triggers” and then did data mining of almost 300,000 electronic patient medical records at two facilities looking for these triggers. They also applied an algorithm to exclude instances where the marker had already been followed up, or where the patient already had a known cancer, or where the patient had a known terminal illness. They then did a random chart audit of records identified by the above process to see how often those markers had not, in fact, been appropriately followed up. The positive predictive value (PPV) for each of these triggers was in the 60-70% range. They estimate that this system could pick up 1048 instances of delayed or missed followup at their facilities.
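The screening step described above (red-flag trigger plus exclusion algorithm) can be sketched in a few lines. The field names below are hypothetical stand-ins; the actual study mined coded EHR data at scale.

```python
# The four red-flag markers used as triggers in the Murphy study
RED_FLAGS = {"elevated_psa", "positive_fobt", "hematochezia",
             "iron_deficiency_anemia"}

def trigger_positive(record):
    """Return True if a record should go on to manual chart review.

    A record triggers when a red flag is present AND none of the study's
    exclusions apply (follow-up already documented, known cancer, or
    known terminal illness). `record` is a dict with hypothetical keys.
    """
    has_flag = bool(RED_FLAGS & set(record.get("findings", [])))
    excluded = (record.get("followup_documented", False)
                or record.get("known_cancer", False)
                or record.get("terminal_illness", False))
    return has_flag and not excluded

def screen(records):
    """Filter a large record set down to the subset needing chart review."""
    return [r for r in records if trigger_positive(r)]
```

The exclusion logic is what keeps the PPV in the 60-70% range the authors report: without it, every already-handled red flag would surface as a false positive for the chart reviewers.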
Note that the time interval set in their trigger tool was important. If the interval is too short many cases where followup had already been planned would be falsely identified as outliers. If the interval is too long, the delay in followup might lead to progression of a cancer.
Their findings came from retrospective application of the trigger tool to a patient population. But the real potential value would be to apply the tool prospectively and identify cases in which action could still be taken to identify a cancer sooner.
The authors expect that the tool may yet be further refined. In many cases where the trigger was positive, chart review revealed information in the free text areas of charts that indicated appropriate action had already been taken or scheduled. They speculate that use of tools such as natural language processing might be able to identify those cases, further narrowing the number of charts identified by the trigger tool to an even more manageable number of cases needing manual chart review.
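A crude proxy for that free-text step can be illustrated with simple pattern matching; the phrase list below is entirely hypothetical, and a production system would need genuine natural language processing (negation handling, temporal context) rather than regular expressions.

```python
import re

# Hypothetical phrases suggesting follow-up was already arranged.
# Real NLP would be needed to handle negation ("colonoscopy NOT scheduled"),
# hedging, and temporal context that regexes miss.
FOLLOWUP_PATTERNS = [
    r"colonoscopy (scheduled|ordered|planned)",
    r"referred to (gi|urology|hematology)",
    r"repeat \w+ ordered",
]

def followup_documented(note_text):
    """Crude free-text screen: True if the note suggests follow-up is in motion."""
    text = note_text.lower()
    return any(re.search(pattern, text) for pattern in FOLLOWUP_PATTERNS)
```

Even a rough filter like this illustrates the authors' point: pruning the trigger-positive set of cases where the chart already shows appropriate action would leave reviewers a far more manageable pile.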
The accompanying editorial (Schiff 2014) notes how such electronic screens can be used to identify other potential diagnostic errors. By linking laboratory and pharmacy databases, Schiff and colleagues uncovered patients who did not undergo follow-up for abnormal TSH results, helping to avoid failure or delay in diagnosing hypothyroidism (Schiff 2005).
In our April 15, 2008 Patient Safety Tip of the Week “Computerizing Trigger Tools” we speculated how computerized trigger tools might be used to identify opportunities to intervene before harm came to patients. The Murphy study has pushed that one step closer to reality.
Some of our prior columns on trigger tool methodology:
· October 30, 2007 “Using IHI's Global Trigger Tool”
· April 15, 2008 “Computerizing Trigger Tools”
· January 2011 “No Improvement in Patient Safety: Why Not?”
· May 2011 “Just How Frequent Are Hospital Medical Errors?”
· March 2013 “Diagnostic Error in Primary Care”
· January 2014 “Trigger Tools to Prevent Diagnostic Delays”
Some of our prior columns on diagnostic error:
· September 28, 2010 “Diagnostic Error”
· November 29, 2011 “More on Diagnostic Error”
· May 15, 2012 “Diagnostic Error Chapter 3”
· August 12, 2008 “Jerome Groopman’s ‘How Doctors Think’”
· January 24, 2012 “Patient Safety in Ambulatory Care”
· October 9, 2012 “Call for Focus on Diagnostic Errors”
· March 2013 “Diagnostic Error in Primary Care”
· May 2013 “Scope and Consequences of Diagnostic Errors”
· August 2013 “Clinical Intuition”
· January 2014 “Trigger Tools to Prevent Diagnostic Delays”
· And our review of Malcolm Gladwell’s “Blink” in our Patient Safety Library
Singh H, Giardina TD, Meyer AND, et al. Types and Origins of Diagnostic Errors in Primary Care Settings. JAMA Intern Med 2013; published online February 25, 2013
Murphy DR, Laxmisan A, Reis BA, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014; 23: 8-16 Published Online First: 19 July 2013
Schiff GD. Diagnosis and diagnostic errors: time for a new paradigm. BMJ Qual Saf 2014; 23: 1-3 Published Online First: 19 September 2013
Schiff GD, Kim S, Krosnjar N, et al. Missed hypothyroidism diagnosis uncovered by linking laboratory and pharmacy data. Arch Intern Med 2005; 165: 574
We’ve long been strong advocates of using pre-op “huddles” (also known as preoperative briefings) and post-op debriefings as patient safety tools. And don’t forget: huddles are not just for the OR! Our December 9, 2008 Patient Safety Tip of the Week “Huddles in Healthcare” also discussed how huddles and briefings can be useful in a variety of healthcare situations, not just the preoperative one. Such huddles have benefits far beyond just remembering things that need to be done. The mere performance of the briefings and debriefings fosters a sense of belonging to teams, empowerment for all members, and better communication. These not only foster a culture of safety but also significantly improve job satisfaction for all involved.
Preoperative briefings and postoperative debriefings are tools we have strongly recommended since we first began talking about the TeamSTEPPS™ training program back in 2007 (see our May 22, 2007 Patient Safety Tip of the Week “More on TeamSTEPPS™” and our March 2009 What’s New in the Patient Safety World column “Surgical Team Training”). Briefings and debriefings are also core components of many of the crew resource management programs such as the VA’s Medical Team Training Program (see our January 11, 2011 Patient Safety Tip of the Week “NPSA (UK) ‘How to Guide’: Five Steps to Safer Surgery”).
Checklists have been utilized more often for the preoperative briefings or huddles than for postop debriefings. We previously noted a study by Lingard et al (Lingard 2008) that used a checklist to structure short team briefings and documented reduction in the number of communication failures. Another group (Paull 2010) demonstrated that implementation of preoperative checklist-driven briefings was associated with increased compliance with antibiotic prophylaxis and DVT prophylaxis. Our April 2012 What’s New in the Patient Safety World column “Operating Room Briefings and Debriefings” highlighted a study (Bandari 2012) that demonstrated how structured tools for OR briefings and debriefings can identify a whole host of patient safety issues. The online version of the article provides copies of the tools used. Our December 9, 2008 Patient Safety Tip of the Week “Huddles in Healthcare” discussed an article by Nundy and colleagues at Johns Hopkins (Nundy 2008) that found a very simple format for pre-operative briefings led to a 31% reduction in unexpected delays in the OR and a 19% reduction in communication breakdowns that led to delays. Other examples of checklists for the preoperative briefings may be found on either the NHS Patient Safety First website or the VA website. Video examples of preoperative briefings may also be found at the NHS website or the VA website.
Now the patient safety group at Johns Hopkins (Fabian 2013) has developed a tool to audit preoperative briefings and finds something we have also encountered: any checklist for the pre-op huddle should be customized for the particular service in which it is being utilized.
Their tool was developed with 4 domains: briefing logistics, briefing basics, specific briefing content, and briefing participation. The audits with the tool showed wide variation across surgical services in both content of the briefings and participation in the briefings. They conclude that the variation across service lines suggests the need for service-specific customization of the briefing tool in surgery.
We often implement interventions that make sense and have been demonstrated to improve outcomes but then we neglect to ensure that the intervention is being used and actually working. The pre-op briefing is one such intervention. In our July 31, 2012 Patient Safety Tip of the Week “Surgical Case Duration and Miscommunications” we noted an article (Gillespie 2012) on factors involved in prolonging surgery. They noted that preoperative briefings occurred in only 12.5% of cases, despite that practice having been “mandated” at the study hospital.
The tool in the current study from Hopkins is a practical way to audit your briefing implementation.
In our April 2012 What’s New in the Patient Safety World column “Operating Room Briefings and Debriefings” we listed many of the issues that might be discussed in a pre-op briefing/huddle. The pre-op briefing should be kept as simple as possible. Anticipate problems and try to discuss the most serious events that might occur, but don’t make the process so complex and long that team members lose their attention. A typical pre-op huddle or briefing ordinarily takes no more than 3-4 minutes. We’ve found that simple checklists help the team complete those briefings. But we also noted that checklists that are too complicated are counterproductive. We do have a tendency to add too many items to checklists. Generally you should keep checklists to fewer than 10 items. Checklists should also be reviewed and revised as needed. Items that are not providing useful information can be deleted.
So don’t just “mandate” pre-op briefings. Make sure they are useful to the services you design them for, that they are used, and that they improve care.
See our prior columns on huddles, briefings, and debriefings:
· April 9, 2007 “Make Your Surgical Timeouts More Useful”
· May 22, 2007 “More on TeamSTEPPS™”
· December 9, 2008 “Huddles in Healthcare”
· March 10, 2009 “Prolonged Surgical Duration and Time Awareness”
· January 11, 2011 “NPSA (UK) ‘How to Guide’: Five Steps to Safer Surgery”
· March 2009 “Surgical Team Training”
· April 2012 “Operating Room Briefings and Debriefings”
· July 31, 2012 “Surgical Case Duration and Miscommunications”
Lingard L, Regehr G, Orser B, et al. Evaluation of a Preoperative Checklist and Team Briefing Among Surgeons, Nurses, and Anesthesiologists to Reduce Failures in Communication. Arch Surg, Jan 2008; 143: 12-17
Paull DE, Mazzia LM, Wood SD, et al. Briefing guide study: preoperative briefing and postoperative debriefing checklists in the Veterans Health Administration medical team training program. Am J Surg 2010; 200(5): 620-623
Bandari J, Schumacher K, Simon M, et al. Surfacing Safety Hazards Using Standardized Operating Room Briefings and Debriefings at a Large Regional Medical Center. The Joint Commission Journal on Quality and Patient Safety 2012; 38(4): 154-160
Nundy S, Mukherjee A, Sexton JB, et al. Impact of Preoperative Briefings on Operating Room Delays: A Preliminary Report. Arch Surg 2008; 143(11): 1068-1072
NHS Patient Safety First. video demonstrating sample pre-op briefings
NHS Patient Safety First. Quick guide to briefing and debriefing.
Veterans Health Administration. Preoperative Briefing Guide for Use in the Operating Room.
Veterans Health Administration. Postoperative Briefing Guide for Use in the Operating Room.
Veterans Health Administration. Preoperative Briefing Video.
Johnston FM, Tergas AI, Bennett JL, et al. Measuring Briefing and Checklist Compliance in Surgery: A Tool for Quality Improvement. American Journal of Medical Quality 2013; first published on November 22, 2013 as doi:10.1177/1062860613509402
Gillespie BM, Chaboyer W, Fairweather N. Factors that influence the expected length of operation: results of a prospective study. BMJ Qual Saf 2012; 21(1): 3-12 Published Online First: 14 October 2011 doi:10.1136/bmjqs-2011-000169