May 15, 2012 Diagnostic Error Chapter 3
As we have tried to cover more patient safety topics related to ambulatory care, we keep coming back to diagnostic error. Note that we try to avoid the term “cognitive error” that often appears in the literature as synonymous with diagnostic error. Cognitive error implies human error, whereas diagnostic error is much more encompassing of the interaction between system-related factors and human-related factors. And we’ll see that system-related factors are just as important in leading to errors that result in adverse patient outcomes. Care in the ambulatory setting differs greatly from that in inpatient and long-term care settings because ambulatory care is typically dispersed in both time and place. That is, the interaction with the typical patient takes place over weeks or months in the outpatient setting, whereas it takes place over several days in the hospital. And, whereas in the hospital most of the players are co-located within the walls of the hospital, on the outpatient side the various caregivers are seldom in one location. Add to that the fact that medical record systems on the outpatient side are often not interconnected with each other or with the hospital IT systems. And further add to that patient-level factors. Remember, the patient is “captive” while an inpatient and can’t fail to keep an appointment like they can on the ambulatory side. Also, research has shown that patients are more likely to be compliant with statin therapy (and other medications) when it is begun in a hospital after an acute event (the “teachable moment”) than when it is started in ambulatory care for primary prevention. So ambulatory care presents its own set of barriers and challenges that can enable diagnostic errors.
We’ve already done several columns on diagnostic error, including our Patient Safety Tips of the Week for September 28, 2010 “Diagnostic Error” and November 29, 2011 “More on Diagnostic Error” (hence today’s title “Diagnostic Error Chapter 3”). But several outstanding resources have become available since our last column in November 2011.
The ECRI Institute put on a great webinar in December 2011 on best practices for preventing missed, delayed, or incorrect diagnoses and makes it available for free on their website (ECRI 2011a). They confirm that diagnostic errors rank second in malpractice claims (after obstetrical events) and stress the importance of breakdowns in communication in most such events. They use case examples to illustrate many of the major factors contributing to diagnostic errors and provide solid recommendations to help minimize the risk of such events.
For missed diagnoses, cancer or cardiovascular diseases predominate. They present a case of missed breast cancer in a patient who had presented with breast pain and discharge and another about a missed colon cancer. Lessons learned include the need to take all symptoms seriously, follow them up to resolution, and consider rare presentations. They highlight the problem of missed appointments and stress documentation issues, including the importance of documenting family history and documenting all the systems reviewed and their findings. They note that staff should be trained to stress the importance of cancer screening. They also stress the importance of revisiting the diagnosis if a patient’s symptoms have failed to resolve.
Continuity of care issues are often problematic, particularly in teaching hospital settings. They discuss a case where a patient with a superficial arterial occlusion was seen by 7 different providers over a 5-week period before the correct diagnosis was made. In this case a resident originally focused on the patient’s prior history of lumbar disc disease. Subsequent residents seeing the patient probably relied on that initial evaluation rather than taking their own histories and revisiting the diagnosis.
They also provide the classic case of the missed aortic dissection, misdiagnosed as acute coronary syndrome. Avoiding premature closure, where one diagnosis is settled on before others have been adequately excluded, is of paramount importance.
When it comes to test tracking, one of our most frequent topics, they refer back to a webinar they had done in April 2011 (ECRI 2011b). In test tracking it is important to determine who is accountable for following up on test results, making sure the patient is compliant with getting the testing, and documenting all interventions or attempts to intervene.
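The elements of test tracking just described (an accountable provider, confirmation that the patient completed the test, documentation of every contact attempt, and closing the loop with the patient) can be sketched as a minimal tracking record. The field names and structure below are purely illustrative assumptions, not from the ECRI webinar or any actual EHR.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical minimal tracking record; field names are illustrative only.
@dataclass
class TestOrder:
    patient_id: str
    test_name: str
    ordered_on: date
    accountable_provider: str             # who is responsible for closing the loop
    specimen_collected: bool = False      # did the patient actually get the test?
    result_received: Optional[date] = None
    patient_notified: Optional[date] = None
    contact_attempts: list = field(default_factory=list)  # document every attempt

def open_loops(orders):
    """Return orders where follow-up is not yet complete at any stage."""
    return [o for o in orders
            if not o.specimen_collected
            or o.result_received is None
            or o.patient_notified is None]

orders = [
    TestOrder("p1", "CBC", date(2012, 5, 1), "Dr. A",
              specimen_collected=True, result_received=date(2012, 5, 3),
              patient_notified=date(2012, 5, 4)),
    TestOrder("p2", "FOBT", date(2012, 5, 2), "Dr. B"),  # patient never completed test
]
print([o.patient_id for o in open_loops(orders)])  # → ['p2']
```

The point of the sketch is simply that an open loop at any stage (test not done, result not back, patient not notified) should surface on someone’s worklist rather than disappear.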
Especially important is not leaving test results on voicemail. Though that advice was given in the context of critical abnormal test results, it applies equally to all test results. HIPAA issues aside, you may not be able to verify that the patient ever actually heard the voicemail message or understood it. You also should have policies in place dealing with what to do if a patient cannot be reached (such as knowing when to contact police or other parties to help locate the patient).
During follow-up visits it is important to go back and inquire about symptoms discussed on prior visits and ensure they have resolved. Making sure patients keep follow-up visits or attend any referrals is very important. You should always document any communications that take place, even when there is no face-to-face visit.
And what about the rare diagnosis? They note failure to consider a diagnosis is one of the main causes of missed diagnosis. Using clinical decision support tools may be helpful in at least considering unusual or rare diagnoses. A second review of test results may be useful. And always considering the worst case scenario is a good practice.
Engaging the patient is important. Making sure they know why you are ordering a test or making a referral and emphasizing the need for follow-up are important considerations. Documenting such discussions and informed consent discussions is also important. Phone calls, particularly those after hours or on weekends, often go undocumented, so having systems in place to capture those conversations is critical. They also provide examples of things you might say to help facilitate patient compliance with testing and referrals and follow-ups.
Another example case involves an on-call physician receiving a call from a patient who had been started on a new antihypertensive medication and was now complaining of “weakness”. Three days later the patient had an embolic stroke from atrial fibrillation. Even if the complaints on the phone call were unrelated to the subsequent stroke, the immediate attribution of symptoms to the new medication without further questioning put this physician at risk for the subsequent events.
The last case was one of a missed cervical cancer in which several issues, including failure to follow up an inadequate Pap smear, contributed.
A recent paper on diagnostic errors in primary care (Ely 2012) also noted that diagnostic errors were often preceded by common symptoms and common, relatively benign initial diagnoses. The three most common lessons learned in their review were (1) consider diagnosis X in patients presenting with symptom Y, (2) look beyond the initial, most obvious diagnosis, and (3) be alert to atypical presentations of disease. The authors note how mental shortcuts and cognitive biases such as anchoring, premature closure, and diagnostic momentum frequently lead to diagnostic errors. Broadening the differential diagnosis and always considering the “don’t miss” diagnoses were important themes. They recommend de-biasing strategies (see Croskerry discussion below) such as diagnostic timeouts and use of checklists, noting that these strategies have still not been well developed with an evidence base.
Pat Croskerry, whose work we highlighted in our November 29, 2011 Patient Safety Tip of the Week “More on Diagnostic Error”, points out a host of reasons that diagnostic error has not been a prime focus of the patient safety movement (Croskerry 2012). He again emphasizes that we spend most of our time in the “intuitive” rather than the “rational” decision-making mode. In the intuitive mode failed heuristics and cognitive and affective biases are widespread. He points to numerous studies that have demonstrated most diagnostic errors occur in relation to common, well-known illnesses, so lack of knowledge is not the most likely cause. He provides a clinical example of a case of missed pulmonary embolism in which both emotional and cognitive biases prevented the clinician from recognizing the correct diagnosis. The gist of his arguments to reduce diagnostic error is therefore to focus on ways to debias our thinking.
Situational awareness (see our May 8, 2012 Patient Safety Tip of the Week “Importance of Nontechnical Skills in Healthcare”) is also important in avoiding diagnostic errors or delays in diagnosis. While we most often discuss situational awareness in rapidly evolving situations, it is also important in more chronic circumstances and settings. Hardeep Singh and colleagues (Singh 2012a) reviewed a population of patients with colorectal and lung cancers and found errors in about a third of cases of both types of cancer. They applied a framework of situational awareness and noted that one of four levels of situational awareness was often lacking: information perception, information comprehension, forecasting future events, and choosing the appropriate action based on the above three. An example under information perception might be that a positive fecal occult blood test was missed so a colonoscopy was not scheduled. At the information comprehension level, a positive fecal occult blood test may have been ascribed to other reasons (e.g., hemorrhoids). At the forecasting level, a provider failed to foresee that a patient might fail to keep an appointment before the provider left for an extended leave. And under choosing appropriate action, an example was lack of a sense of urgency in responding to a clue like microcytic anemia. The authors discuss some potential interventions that might be helpful, both at the individual provider level and at the system level. One is fostering a sense of dynamic skepticism, a term borrowed from aviation, that is, continuous questioning of the validity of previous assumptions based on constantly evaluating incoming data. At the system level, putting data in a format that might better highlight clues may help (e.g., putting weights in graphic form so that the EHR might help a provider notice weight loss).
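The system-level idea of surfacing weight data so a clinician notices a downtrend could be as simple as a trend check behind an EHR display. The sketch below is a hypothetical illustration only: the 5% threshold and function names are our assumptions, not clinical guidance and not from the Singh paper.

```python
def flag_weight_loss(weights, pct_threshold=5.0):
    """Flag if the latest weight (kg) has dropped by more than pct_threshold
    percent from the highest earlier reading - the kind of clue an EHR
    display could surface. The threshold is illustrative, not clinical guidance."""
    if len(weights) < 2:
        return False
    peak = max(weights[:-1])                       # highest prior reading
    drop_pct = (peak - weights[-1]) / peak * 100   # percent decline from peak
    return drop_pct > pct_threshold

# Serial weights over successive visits
print(flag_weight_loss([82.0, 81.5, 76.0]))  # → True (about a 7% drop from 82 kg)
print(flag_weight_loss([82.0, 81.5, 80.9]))  # → False
```

However it is implemented, the design point is the same one the authors make: the computer does the noticing across visits that a busy provider, seeing one visit at a time, may miss.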
Another recent paper (Zwaan 2012) looked at diagnostic reasoning in five Dutch hospitals. They used the term “suboptimal cognitive acts” (SCA’s), a deliberately low-threshold alternative to “cognitive errors”, to identify faults in diagnostic reasoning, and correlated these with diagnostic error and patient harm. They found an average of 2.6 SCA’s per patient record reviewed and found that SCA’s were more frequent in cases with diagnostic error and those with patient harm. However, they did note that in almost 20% of cases with diagnostic error or patient harm, there were no SCA’s. They classified SCA’s in Reason’s taxonomy of unsafe acts and found 62% of SCA’s were intended acts, 58% mistakes, and 49% violations. Unintended actions accounted for 26% of SCA’s, with 14% being “slips” and 12% “lapses”. They found that most SCA’s occurred during data gathering stages, and in cases where patient harm occurred the SCA’s were often related to laboratory testing (including unnecessary testing). Harm, generally, was not severe. The article provides lots of good examples of the SCA’s, with examples in each of the Reason taxonomy categories. They also have a discussion of why no harm occurred in some cases despite SCA’s.
Another study from the Netherlands (Mamede 2012) looked at diagnostic reasoning in medical students and internal medicine residents. They had previously noted that salient distracting features are a major contributor to diagnostic errors, particularly when in the non-analytic reasoning mode. They showed that reflective reasoning led to significantly more correct diagnoses. Interestingly, students did not benefit from reflective reasoning. The implication is that certain salient features may attract a physician’s attention and misdirect the diagnostic reasoning process. Reflective reasoning may help overcome the influence of these distracting features.
Patient-level factors frequently contribute to diagnostic errors as well. Assessment of a patient’s health literacy is important since we need to make sure our patients (or their caregivers) understand the importance of the test, the referral, or the treatment.
Identification of diagnostic errors remains problematic for a number of reasons. Singh and colleagues (Singh 2012b) used the trigger tool concept in an attempt to identify diagnostic errors in primary care practices. Trigger #1 was a primary care visit followed by an unplanned hospitalization within 14 days. Trigger #2 was a primary care visit followed by one or more unplanned visits within 14 days. In charts identified by Trigger #1 the positive predictive value for diagnostic error was 20.9% and for Trigger #2 5.4%. Both were higher than the PPV for control charts (2.1%). Though these are modest values, they identify charts with diagnostic errors far better than random chart review would, thus constituting a promising methodology for future study of diagnostic error.
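The two triggers are simple date-window checks over encounter data, which is what makes them attractive for automated EHR-based surveillance. The sketch below is a hypothetical rendering of that logic; the function names and data structure are our assumptions, not the actual query used in the Singh 2012b study.

```python
from datetime import date

def trigger_1(index_visit: date, hospitalizations: list) -> bool:
    """Trigger #1: an unplanned hospitalization within 14 days of a primary care visit."""
    return any(0 < (h - index_visit).days <= 14 for h in hospitalizations)

def trigger_2(index_visit: date, return_visits: list) -> bool:
    """Trigger #2: one or more unplanned return visits within 14 days of the index visit."""
    return any(0 < (v - index_visit).days <= 14 for v in return_visits)

visit = date(2012, 3, 1)
print(trigger_1(visit, [date(2012, 3, 10)]))  # → True (9 days later)
print(trigger_2(visit, [date(2012, 4, 2)]))   # → False (outside the 14-day window)
```

Flagged charts would then go to manual review, which is where the reported PPVs (20.9% and 5.4%) come from: the trigger only enriches the sample, it does not itself confirm a diagnostic error.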
As before, there has been little in the way of rigorous evaluation of suggested interventions to minimize diagnostic error. A recent review of over 140 articles in the literature on cognitive interventions (Graber 2012) noted that most had either not been formally tested or had only been tested in “artificial” settings. A companion article by the same group (Singh 2012c) looked at system-related interventions to reduce cognitive errors and noted a lack of scientific rigor in most studies. In our November 29, 2011 Patient Safety Tip of the Week “More on Diagnostic Error” we noted an article by Ely and colleagues (Ely 2011) suggesting use of checklists to help avoid diagnostic errors and an article by Schiff and Bates (Schiff 2010) proposing a number of ways that electronic health records might be used to improve diagnostic accuracy and prevent diagnostic error.
Given the rapid increase in the number of publications on diagnostic error in recent years, you can expect some of the potential interventions noted above will begin to be tested in more rigorous studies in the near future.
Some of our prior Patient Safety Tips of the Week on diagnostic error:
· September 28, 2010 “Diagnostic Error”
· May 29, 2008 “If You Do RCA’s or Design Healthcare Processes…Read Gary Klein’s Work”
· August 12, 2008 “Jerome Groopman’s ‘How Doctors Think’”
· August 10, 2010 “It’s Not Always About The Evidence”
· November 29, 2011 “More on Diagnostic Error”
· And our review of Malcolm Gladwell’s “Blink” in our Patient Safety Library
References:
ECRI Institute. Best Practices For Preventing Missed, Delayed, or Incorrect Diagnoses (webinar). December 2011.
http://bphc.hrsa.gov/ftca/riskmanagement/webinars/webinarbestpractices.html
ECRI Institute Webinar: Getting on the Right Track: Tracking Test Results, No-Show Appointments, and Hospital Visits.
April 13 & 14, 2011
http://bphc.hrsa.gov/ftca/riskmanagement/webinars/getting_on_the_right_track.html
Ely JW, Kaldjian LC, D'Alessandro DM. Diagnostic Errors in Primary Care: Lessons Learned. J Am Board Fam Med 2012; 25: 87-97
http://www.jabfm.org/content/25/1/87.full.pdf+html?sid=4db5c429-f3a7-4d3e-9488-8d9a15ed64eb
Croskerry P. Perspectives on Diagnostic Failure and Patient Safety. Healthcare Quarterly 2012; 15(Special Issue): 50-56
http://www.longwoods.com/content/22841
Singh H, Giardina TD, Petersen LA, et al. Exploring situational awareness in diagnostic errors in primary care. BMJ Qual Saf 2012; 21: 30-38 Published Online First: 2 September 2011 doi:10.1136/bmjqs-2011-000310
http://qualitysafety.bmj.com/content/21/1/30.full.pdf+html?sid=5d081594-73af-4b7b-9bee-653649a42a84
Zwaan L, Thijs A, Wagner C, et al. Relating Faults in Diagnostic Reasoning With Diagnostic Errors and Patient Harm. Academic Medicine 2012; 87(2): 149-156
Mamede S, Splinter TAW, van Gog T, et al. Exploring the role of salient distracting clinical features in the emergence of diagnostic errors and the mechanisms through which reflection counteracts mistakes. BMJ Qual Saf 2012; 21:295-300 doi:10.1136/bmjqs-2011-000518
http://qualitysafety.bmj.com/content/21/4/295.abstract
Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf 2012; 21: 93-100 Published Online First: 13 October 2011 doi:10.1136/bmjqs-2011-000304
http://qualitysafety.bmj.com/content/21/2/93.full.pdf+html
Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012; published online ahead of print 27 April 2012 doi:10.1136/bmjqs-2011-000149
http://qualitysafety.bmj.com/content/early/2012/04/26/bmjqs-2011-000149.short?g=w_qshc_ahead_tab
Singh H, Graber ML, Kissam SM, et al. System-related interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012; 21: 160-170 Published Online First: 30 November 2011 doi:10.1136/bmjqs-2011-000150
http://qualitysafety.bmj.com/content/21/2/160.abstract?sid=5d081594-73af-4b7b-9bee-653649a42a84
Ely JW, Graber M, Croskerry P. Checklists to reduce diagnostic errors. Academic Medicine 2011; 86(3): 307-313
Schiff GD, Bates DW. Can Electronic Clinical Documentation Help Prevent Diagnostic Errors? NEJM 2010; 362(12): 1066-1069
http://www.nejm.org/doi/pdf/10.1056/NEJMp0911734