This month we saw a host of articles and resources dealing with diagnostic error. Diagnostic error is the category most frequently involved in malpractice claims and settlements. In fact, claims for diagnostic error surpass all other claims combined. Yet there is very little focus on diagnostic error in patient safety programs today.
Bob Wachter (Wachter 2008) and Peter Pronovost (Newman-Toker 2009), two of the most widely recognized leaders in the patient safety field, have both identified diagnostic error as a neglected area for investigation and intervention that could become the next big target in patient safety. And that appears to be happening. Wachter, in a new Health Affairs article (Wachter 2010), presents a number of logical arguments as to why diagnostic error has been neglected and discusses both barriers and potential solutions. And a review article by the Pennsylvania Patient Safety Authority (PPSA 2010) covers both the diagnostic errors reported to the Authority, which remain scant, and the literature on diagnostic errors.
Wachter notes that autopsy series over the years have consistently demonstrated missed findings that could have affected the care of the patient in about 10% of all autopsies. Unfortunately, in the era of high-tech imaging too many physicians think (obviously erroneously) that there is little to be learned from autopsies. Actually, we think the real reason for fewer autopsies is the fear, in our litigious society, that something potentially treatable will be found! But the point is that diagnostic errors are very common. Unfortunately, we currently have no good way to measure diagnostic errors.
Wachter points out that diagnostic errors are more common than medication errors, yet even the landmark IOM report “To Err is Human” mentions diagnostic errors only twice while mentioning medication errors 70 times. He notes that we have tended to focus on the sorts of medical errors for which easy system fixes are likely and that it is much more difficult to address diagnostic errors, where cognitive processes are primarily involved.
We have previously discussed the cognitive and decision-making processes that healthcare workers use. We have discussed the work of people like Gary Klein (see our May 29, 2008 Patient Safety Tip of the Week “If You Do RCA’s or Design Healthcare Processes…Read Gary Klein’s Work”) on pattern recognition and recognition-primed decision making, which typically takes place in more acute scenarios, and the work of Jerry Groopman (see our August 12, 2008 Patient Safety Tip of the Week “Jerome Groopman’s ‘How Doctors Think’”) on the day-to-day thinking that takes place in interacting with patients. Both types of cognitive approaches have their upsides and downsides, but both also tend to fall into similar cognitive error traps.
The new PPSA review discusses many of the same cognitive biases we discussed in our review of Jerry Groopman’s book, including the availability bias, confirmation bias (and its corollary – dismissing contrary evidence), anchoring and others, such as premature closure, context errors, and satisficing (accepting any satisfactory solution rather than an optimal one). And it talks about communication issues across the continuum of care. But, importantly, it emphasizes that system-related factors (remember: the system is usually much easier to change than the human factors) do commonly contribute to diagnostic errors and that strategies to minimize those may reduce diagnostic errors. Such system-related factors include things like specimen labeling, communication of abnormal results to physicians, communication of revised reports to physicians, physician followup with patients, and managing the patients across transitions of care.
One of our most common cognitive errors leading to diagnostic errors is “anchoring”, where we latch onto a single possibility and fail to look for alternatives. We then look for information that confirms our first diagnosis (confirmation bias) and tend to ignore evidence that might controvert that diagnosis. Closely related is “premature closure”, where we limit the differential diagnosis to too narrow a list and fail to consider alternatives. We’ve mentioned anchoring previously, and it becomes a more significant problem once a diagnosis or other decision has been declared publicly. Many of you have done an exercise in executive training where a scenario is presented in which you must state a position publicly. You are then given a bit of disconfirming evidence and a chance to change your decision. Almost no one changes their decision! (The scenario is actually a poorly disguised parallel of the Challenger disaster.) Another example is when we point out that a geriatric patient is on a drug on Beers’ list. The physician almost never takes that patient off the drug but may in the future be less likely to prescribe that drug in other geriatric patients.
Another phenomenon coloring our thinking is the “availability” bias, where the most recent or most memorable cases from the past narrow our thinking about a current patient. We all know how a previous bad experience with a medication may influence us not to use it again, even when the medical evidence tells us we should (one of the reasons so many patients with atrial fibrillation are never placed on Coumadin). The same obviously applies to diagnosis. We tend to think of a patient who presented with a similar set of symptoms and may focus on the diagnosis that earlier patient or patients had.
Both Bob Wachter and Jerry Groopman recommend stepping back and saying “What am I missing?”. Alternatively, ask yourself “What is the worst thing this could be?”. Both questions may help the clinician refocus and avoid anchoring, premature closure, and other cognitive biases.
Many of the biases described above do not occur in isolation. Rather, they often work in conjunction with one another to produce erroneous clinical diagnoses. Mamede and colleagues (Mamede 2010) point out that we may use the “availability” heuristic (the tendency to consider things most easily recalled), then use confirmation bias and fail to look for disconfirming evidence, and become “anchored” in our first diagnosis, failing to consider alternatives. They went on to design a study in which internal medicine residents reviewed several case histories and then reviewed similar ones first in a non-analytic reasoning (pattern recognition-type) setting and again in a reflective reasoning setting. They demonstrated that availability bias did indeed influence diagnoses in the non-analytic reasoning mode and that more senior residents were more likely to demonstrate that bias. Importantly, the fact that the correct diagnoses were often arrived at during the subsequent “reflective reasoning” setting offers hope that anything that shifts thinking into that mode may help reduce diagnostic errors.
It has been difficult in the past to get clinicians to admit to diagnostic errors they have made. The PPSA article actually speaks about the overconfidence clinicians have in their diagnostic capabilities and attributes some of that overconfidence to the fact that they often get no feedback when their diagnoses are wrong. But a couple of recent studies, based on anonymous surveys, show that clinicians are now more cognizant of the diagnostic errors they make. Schiff and colleagues (Schiff 2009) asked their colleagues in internal medicine to report 3 cases of diagnostic error and found that they readily did so (they were involved in about a third of those cases and observed the others) and were quite willing to share insights into both the seriousness and likely causes of such errors. And Singh and colleagues (Singh 2010) found that over half of pediatric respondents to a similar questionnaire admitted to making diagnostic errors at least once or twice a month. Many of the errors in both studies were considered by the respondents to be serious or to have actually caused harm to patients. In the Schiff study, some of the more common missed or delayed diagnoses involved conditions such as stroke, MI, pulmonary embolism, and various cancers. In the Singh study, the most common error was diagnosing viral illnesses as bacterial, but they also found misdiagnoses of appendicitis, medication side effects, and psychiatric disorders. The Schiff study found that failure to order, report, or follow up laboratory or radiology studies was the most frequent factor contributing to diagnostic error. Discounting or overweighing alternative diagnoses was another frequent contributing factor. The Singh study also points out the relative lack of training on diagnostic errors in typical medical schools and residency programs.
Respondents in the Singh study noted two strategies likely to help prevent diagnostic errors: use of electronic medical records and closer followup of patients. Schiff and Bates (Schiff & Bates 2010), while admitting the deficiencies of current electronic medical records, also point out the potential of electronic documentation to significantly reduce diagnostic errors. They describe a number of functionalities that must be incorporated into the electronic medical record for it to succeed in that goal. EMR’s not only can collect all the necessary patient information in one accessible place but also can filter it and display it in ways that make it relevant to the diagnostic process. EMR’s can also be structured to allow better tracking over time so as to improve recognition of changes. They are especially likely to be helpful in tracking test results and preventing tests from slipping through the cracks, a problem we have highlighted in several columns. They should be helpful in creating and maintaining problem lists and, through well-designed clinical support tools, be capable of providing prompts about key questions that need to be asked. The authors point out that it will be critical for clinicians to play a key role in the redesign of these EMR systems so that these functionalities are incorporated in a way that is not intrusive and does not distract the clinician from the important interaction with the patient. They also point out that the EMR can be used to provide feedback to clinicians so they can get a better perception of how often diagnostic errors occur and the factors contributing to them.
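To make the test-result tracking idea concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of logic an EMR could run to flag abnormal results that no clinician has acknowledged. The names used here (TestResult, overdue_abnormal_results) are our own hypothetical constructs, not part of any real EMR product.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TestResult:
    # Hypothetical record, not a real EMR schema.
    patient_id: str
    test_name: str
    resulted_on: date
    abnormal: bool
    acknowledged: bool = False  # has any clinician reviewed this result?

def overdue_abnormal_results(results, as_of, grace_days=3):
    """Return abnormal results that no clinician has acknowledged within
    the grace period -- the 'tests slipping through the cracks' problem."""
    cutoff = timedelta(days=grace_days)
    return [r for r in results
            if r.abnormal and not r.acknowledged
            and (as_of - r.resulted_on) > cutoff]

# Example: the unacknowledged abnormal potassium below gets flagged.
results = [
    TestResult("pt-001", "serum potassium", date(2010, 9, 1), abnormal=True),
    TestResult("pt-002", "chest x-ray", date(2010, 9, 8), abnormal=False),
]
for r in overdue_abnormal_results(results, as_of=date(2010, 9, 10)):
    print(f"FOLLOWUP NEEDED: {r.patient_id} {r.test_name} ({r.resulted_on})")

In a real system this sort of query would run against the results database and route its output back to the ordering clinician, which is exactly the “communication of abnormal results to physicians” system factor the PPSA review highlights.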
Equally thoughtful is a recent blog by Gordon Schiff (Schiff 2010), an investigator well known in the nascent field of diagnostic error. He expands upon Bob Wachter’s premises and notes that the “systems vs. cognitive” polarity may be artificial. He notes that many of the potential IT solutions are currently poorly developed and he, like Wachter, suggests that as IT interventions to prevent or mitigate diagnostic errors become evidence-based they should be incorporated into “meaningful use” requirements. He also makes a case for empowering patients and getting them involved in preventing diagnostic errors. He notes that the third AHRQ-sponsored International Conference on Diagnostic Errors in Medicine will take place in Toronto in October.
The PPSA review also provides a couple of nice tools to help clinicians identify and avoid diagnostic errors. One is a chart audit tool to help identify errors, adapted from the article by Schiff et al (Schiff 2009). The other is a simple checklist the clinician can use to help focus on the things he/she needs to do in each case to avoid diagnostic errors.
More focus on diagnostic decision making in our medical schools and residency programs is needed. Many of our medical schools already utilize simulations involving trained actors to improve interviewing and diagnostic skills. Our August 10, 2010 Patient Safety Tip of the Week “It’s Not Always About The Evidence” discussed “contextual errors” and provided examples of how simulation exercises can be used to point out how contextual “red flags” may be missed, resulting in erroneous care.
Involving patients to help avoid diagnostic errors is potentially very valuable. You can remind them of things like “if you have not heard your test results from me by next week, make sure you call me” or “if this medication has not produced the desired effect within 2 weeks, call me so we can consider alternatives”. But one of the problems we have in involving patients is our fear of provoking undue anxiety. For example, a neurologist may begin a search for a peripheral nerve lesion to explain a patient’s sensory or motor symptoms, even though his/her differential diagnoses may include central nervous system possibilities like multiple sclerosis. If we tell a patient that multiple sclerosis is in the differential diagnosis, it may conjure up terribly negative images for that patient. We often also try to look for diagnoses that are most positive for our patients (“affective bias”). Jerry Groopman suggests one way a patient can combat the anchoring and availability phenomena is to simply ask the physician “What’s the worst thing this could be?” or “What else could this be?” or “Could there be two things going on?” or “Is there anything in my history or exam or lab tests that is at odds with the working diagnosis?”. These simple, harmless questions presented in a nonconfrontational manner can influence a physician to reassess.
Keep in mind that we can make the same sorts of cognitive errors when doing our root cause analyses (RCA’s). Anchoring, availability bias, confirmation bias, and satisficing are common mistakes we make that may prevent us from coming up with the best solutions in RCA’s.
And don’t forget that the same cognitive biases that affect our healthcare lives may also impact our decision-making processes in our day-to-day lives!
References:
Wachter RM. Why Diagnostic Errors Don’t Get Any Respect… And What Can Be Done About It. Wachter’s World (blog). June 2, 2008
Newman-Toker DE, Pronovost PJ. Diagnostic Errors—The Next Frontier for Patient Safety. JAMA. 2009; 301(10): 1060-1062
http://jama.ama-assn.org/cgi/content/short/301/10/1060
Wachter RM. Why Diagnostic Errors Don’t Get Any Respect… And What Can Be Done About Them. Health Affairs 2010; 29(9): 1605-1610
http://content.healthaffairs.org/cgi/content/abstract/29/9/1605
Pennsylvania Patient Safety Authority (PPSA). Diagnostic Error in Acute Care. Pa Patient Saf Advis 2010 Sep;7(3):76-86
http://patientsafetyauthority.org/ADVISORIES/AdvisoryLibrary/2010/Sep7%283%29/Pages/76.aspx
Groopman J. How Doctors Think. Boston: Houghton Mifflin, 2007 (Mariner Books 2008)
Mamede S, van Gog T, van den Berge K, et al. Effect of Availability Bias and Reflective Reasoning on Diagnostic Accuracy Among Internal Medicine Residents. JAMA. 2010;304(11):1198-1203
http://jama.ama-assn.org/cgi/content/abstract/304/11/1198
Schiff GD, Hasan O, Kim S, et al. Diagnostic Error in Medicine: Analysis of 583 Physician-Reported Errors. Arch Intern Med 2009; 169: 1881-1887
Singh H, Thomas EJ, Wilson L, et al. Errors of Diagnosis in Pediatric Practice: A Multisite Survey. Pediatrics 2010; 126: 70-79
Schiff GD, Bates DW. Can Electronic Clinical Documentation Help Prevent Diagnostic Errors? N Engl J Med 2010; 362: 1066-1069
http://www.nejm.org/doi/pdf/10.1056/NEJMp0911734
Schiff G. Respecting And Reflecting On Diagnostic Errors. Health Affairs Blog 2010. September 16, 2010
http://healthaffairs.org/blog/2010/09/16/respecting-and-reflecting-on-diagnostic-errors/
Diagnostic Error in Medicine: 3rd International Conference. October 25-27, 2010. Sheraton Centre Toronto Hotel, Toronto, Ontario, Canada
http://www.smdm.org/diagnostic_errors/2010DEM.shtml
DEER Taxonomy Chart Audit Tool
http://patientsafetyauthority.org/EducationalTools/PatientSafetyTools/diagnosis/Documents/audit.pdf
Pennsylvania Patient Safety Authority. A Physician Checklist for Diagnosis.