In our March 2013 What’s New in the Patient Safety World column “Diagnostic Error in Primary Care” we highlighted a study by Singh and colleagues (Singh 2013) on diagnostic errors in primary care that used a trigger tool methodology. Now Singh and colleagues (Murphy 2014) have gone a step further and used trigger tool methodology to identify cases in ambulatory care where certain red flags for possible cancers have not been acted upon.
The beauty of trigger tool methodology (see our Patient Safety Tips of the Week for October 30, 2007 “Using IHI's Global Trigger Tool” and April 15, 2008 “Computerizing Trigger Tools”) is that it allows you to screen a large number of records that potentially contain the item you are really looking for. In the subset of records that contain the “trigger” you can then do manual chart review to verify details. It essentially streamlines what would otherwise be a very labor- and time-intensive process that would not be practical.
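As a rough illustration of that two-stage idea, a screening pass might be sketched as below (a minimal Python sketch; the record structure, field names, and PSA threshold are our assumptions for illustration, not any study’s actual implementation):

    # Minimal sketch of a two-stage trigger-tool screen (illustrative only).
    # Stage 1: an automated pass flags records containing a "trigger";
    # Stage 2: only the flagged subset goes on to manual chart review.

    def screen_records(records, trigger):
        """Return the subset of records that fire the trigger."""
        return [r for r in records if trigger(r)]

    # Hypothetical trigger: PSA above a threshold (field name is assumed).
    def elevated_psa(record, threshold=4.0):
        psa = record.get("psa_ng_ml")
        return psa is not None and psa > threshold

    # flagged = screen_records(all_charts, elevated_psa)
    # Manual chart review is then limited to the much smaller flagged set.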
One of the diagnostic errors we’ve talked about most frequently is the missed diagnosis due to failure to follow up on test results (see the list of our prior columns on this topic at the end of today’s column). The Murphy study identified 4 markers (or “red flags”) of potential cancer (elevated PSA level, positive fecal occult blood test, hematochezia, and iron deficiency anemia) to be used as the “triggers” and then did data mining of almost 300,000 electronic patient medical records at two facilities looking for these triggers. They also applied an algorithm to exclude instances where the marker had already been followed up, or where the patient already had a known cancer, or where the patient had a known terminal illness. They then did a random chart audit of records identified by the above process to see how often those markers had not, in fact, been appropriately followed up. The positive predictive value (PPV) for each of these triggers was in the 60-70% range. They estimate that this system could pick up 1048 instances of delayed or missed followup at their facilities.
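In very rough terms, the trigger-plus-exclusion logic might look something like the sketch below (the field names, follow-up window, and exclusion details are assumptions for illustration; the published algorithm is more detailed):

    # Illustrative sketch of the red-flag triggers plus exclusion logic
    # described in the Murphy study (field names and window are assumed).

    RED_FLAGS = ("elevated_psa", "positive_fobt",
                 "hematochezia", "iron_deficiency_anemia")

    def needs_review(record, followup_window_days=60):
        # Fire only if at least one red flag is present.
        if not any(record.get(flag) for flag in RED_FLAGS):
            return False
        # Exclude charts where the marker was already followed up in time,
        # or the patient has a known cancer or known terminal illness.
        days_to_followup = record.get("days_to_followup")
        if days_to_followup is not None and days_to_followup <= followup_window_days:
            return False
        if record.get("known_cancer") or record.get("terminal_illness"):
            return False
        return True

    # flagged = [r for r in all_charts if needs_review(r)]
    # A random sample of the flagged charts then goes to manual audit.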
Note that the time interval set in their trigger tool was important. If the interval is too short, many cases where followup had already been planned would be falsely identified as outliers. If the interval is too long, the delay in followup might lead to progression of a cancer.
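In a sketch like the one above, that trade-off comes down to a single tunable parameter (the hypothetical followup_window_days). One way to explore it, purely for illustration and building on the earlier sketch:

    def window_sweep(all_charts, windows=(30, 60, 90, 180)):
        # Purely illustrative: shorter windows over-flag charts where
        # follow-up was already planned; longer windows flag fewer charts
        # but allow more delay before a truly missed red flag surfaces.
        return {w: sum(needs_review(r, followup_window_days=w) for r in all_charts)
                for w in windows}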
Their findings came from retrospective application of the trigger tool to a patient population. But the real potential value would be to apply the tool prospectively, identifying cases in which action could still be taken to identify cancer sooner.
The authors expect that the tool may yet be further refined. In many cases where the trigger was positive, chart review revealed information in the free text areas of charts that indicated appropriate action had already been taken or scheduled. They speculate that use of tools such as natural language processing might be able to identify those cases, further narrowing the number of charts identified by the trigger tool to an even more manageable number of cases needing manual chart review.
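As a very crude stand-in for what such natural language processing might do, even a simple free-text phrase screen (the phrases and note field below are assumptions, not the authors’ method) shows where that extra filter would sit in the pipeline:

    import re

    # Phrases in a progress note suggesting follow-up was already arranged
    # (purely illustrative; real NLP would go well beyond keyword matching).
    FOLLOWUP_PHRASES = (
        r"colonoscopy scheduled",
        r"referred to (gi|urology|hematology)",
        r"repeat (psa|cbc|fobt) ordered",
        r"patient declined",
    )

    def followup_documented(note_text):
        """True if the free text suggests appropriate action was already taken."""
        text = note_text.lower()
        return any(re.search(p, text) for p in FOLLOWUP_PHRASES)

    # flagged = [r for r in flagged
    #            if not followup_documented(r.get("progress_note", ""))]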
The accompanying editorial (Schiff 2014) notes how such electronic screens can be used to identify other potential diagnostic errors. By linking laboratory and pharmacy databases, Schiff and colleagues uncovered patients who did not undergo follow-up for abnormal TSH results, helping to avoid failure or delay in diagnosing hypothyroidism (Schiff 2005).
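Conceptually, that kind of screen is a join between a laboratory results file and a pharmacy dispensing file. A minimal sketch under assumed table and field names (not the actual Schiff implementation) might look like:

    # Minimal sketch: patients with a markedly elevated TSH who never
    # received thyroid replacement (all names and thresholds are assumed).

    def missed_hypothyroidism(tsh_results, dispensings, tsh_cutoff=10.0):
        """Return patient IDs with high TSH but no levothyroxine dispensed."""
        treated = {d["patient_id"] for d in dispensings
                   if d["drug"].lower() == "levothyroxine"}
        return {r["patient_id"] for r in tsh_results
                if r["tsh"] > tsh_cutoff and r["patient_id"] not in treated}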
In our April 15, 2008 Patient Safety Tip of the Week “Computerizing Trigger Tools” we speculated about how computerized trigger tools might be used to identify opportunities to intervene before harm came to patients. The Murphy study has pushed that one step closer to reality.
Some of our prior columns on trigger tool methodology:
· October 30, 2007 “Using IHI's Global Trigger Tool”
· April 15, 2008 “Computerizing Trigger Tools”
· January 2011 “No Improvement in Patient Safety: Why Not?”
· May 2011 “Just How Frequent Are Hospital Medical Errors?”
· March 2013 “Diagnostic Error in Primary Care”
· January 2014 “Trigger Tools to Prevent Diagnostic Delays”
Some of our prior columns on diagnostic error:
· September 28, 2010 “Diagnostic Error”
· November 29, 2011 “More on Diagnostic Error”
· May 15, 2012 “Diagnostic Error Chapter 3”
· May 29, 2008 “If You Do RCA’s or Design Healthcare Processes…Read Gary Klein’s Work”
· August 12, 2008 “Jerome Groopman’s “How Doctors Think””
· August 10, 2010 “It’s Not Always About The Evidence”
· January 24, 2012 “Patient Safety in Ambulatory Care”
· October 9, 2012 “Call for Focus on Diagnostic Errors”
· March 2013 “Diagnostic Error in Primary Care”
· May 2013 “Scope and Consequences of Diagnostic Errors”
· August 2013 “Clinical Intuition”
· January 2014 “Trigger Tools to Prevent Diagnostic Delays”
· And our review of Malcolm Gladwell’s “Blink” in our Patient Safety Library
References:
Singh H, Giardina TD, Meyer AND, et al. Types and Origins of Diagnostic Errors in Primary Care Settings. JAMA Intern Med 2013; published online February 25, 2013
http://archinte.jamanetwork.com/article.aspx?articleid=1656540
Murphy DR, Laxmisan A, Reis BA, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014; 23: 8-16 Published Online First: 19 July 2013
http://qualitysafety.bmj.com/content/23/1/8.full.pdf+html
Schiff GD. Diagnosis and diagnostic errors: time for a new paradigm. BMJ Qual Saf 2014; 23: 1-3 Published Online First: 19 September 2013
http://qualitysafety.bmj.com/content/23/1/1.full
Schiff GD, Kim S, Krosnjar N, et al. Missed hypothyroidism diagnosis uncovered by linking laboratory and pharmacy data. Arch Intern Med 2005; 165: 574
http://archinte.jamanetwork.com/article.aspx?articleid=486445