Consider the following radiology report:
Doe, John DOB: 03/18/48
MR# 555555 Date of report: July 7, 2009
Date of study: July 7, 2009
Study: Plain radiographs Chest, PA and lateral views
Clinical information: R/O pneumonia
Findings: PA and lateral views of the chest were of good quality with adequate inspiratory effort. There is no pulmonary infiltrate or evidence of pneumonia. There is no pleural effusion or thickening. There is a slight right perihilar fullness but no discrete mediastinal abnormality is seen. There is no deviation of the trachea. In the right upper lobe there is a poorly defined 0.5 centimeter density with rounded margins without calcification. The heart is of normal size and shape. No abnormalities of the great vessels are noted. No bony abnormalities are noted.
Impression: No evidence of pneumonia. Possible solitary nodule right upper lobe. Suggest further evaluation with CT scan of chest.
Report Dictated by: Joseph B. Smith, M.D.
Report signed electronically 7/7/09 JBS
Reports like this go out from radiology departments every day. The request could have come from a med/surg inpatient unit, the emergency department, a physician’s office, or a clinic. The limited clinical information is not uncommon. Physicians (and, probably more importantly, the systems we use for communicating) are notorious for failing to provide radiologists with adequate clinical information. Often the physician simply says “get a chest X-ray” and a secretary fills out the requisition. But that’s not the primary theme of this column. The issue is: what happens with this report?
We’ve talked in prior columns about results with significant clinical findings slipping through the cracks, most extensively in our May 1, 2007 Patient Safety Tip of the Week “The Missed Cancer” and our February 12, 2008 Patient Safety Tip of the Week “More on Tracking Test Results”. We discussed solutions that various organizations and physicians have put in place to ensure reports like this don’t get lost. Remember, there should be two systems in place: one in the radiology department to ensure the message reaches the person who needs to know, and one with the ordering physician to ensure the physician always sees the results of tests ordered. We’ve talked about two types of systems, paper and electronic, and some findings require both. Actually, there should be a third system in place as well: one with the patients themselves. The educated patient should always ask the provider “When should I expect the result to be available?” and then contact the provider if they have not received those results within a reasonable period of time.
This month an excellent paper (Singh et al 2009) studied the outcomes of an electronic alert system that conveyed such abnormal imaging results. It quantified how often such alerts go unacknowledged or without follow-up, and it produced some very interesting findings. The authors looked at imaging studies done in a VA system, where about 1% of all radiographs, CT scans, MRI scans, ultrasounds, and mammograms generated electronic alerts because of abnormalities seen on the studies. They queried the electronic medical record to see whether the alerts had been “acknowledged” by the ordering provider and examined patient records to see whether any follow-up actions had been taken within 4 weeks after the alert was issued. Phone calls were made to the ordering physician if no action had taken place by 4 weeks.
Eighteen percent of the alerts went unacknowledged and, overall, 7.7% of cases with alerts lacked timely follow-up at 4 weeks (after the initial phone call from the authors, half of those patients had an appropriate action within a week). Further analysis showed that housestaff were less likely than other physicians to acknowledge the alerts, while physician assistants were more likely to acknowledge them. Dual alerts (i.e. alerts going out to more than one physician) were twice as likely to go unacknowledged.
There was no significant difference in lack of timely follow-up between acknowledged and unacknowledged alerts. CT and MRI scans were more likely to receive timely follow-up. Timely follow-up was also more likely when a radiologist had communicated verbally or when a hospitalization followed the alert and, again, was twice as likely to be missed when a dual alert had been sent.
When they looked at types of “near misses”, they found that chest imaging showing a nonspecific density (such as our example above) was more likely to be associated with lack of timely follow-up. Half of the test results lacking timely follow-up were abnormal chest X-rays, and a possible new malignant neoplasm was one of the common near misses. Of all the cases lacking timely follow-up, a quarter ultimately had testing leading to new diagnoses, often a new cancer.
Perhaps the scariest finding is that lack of timely follow-up was twice as likely in cases where dual alerts were sent out. We typically send dual alerts thinking we are adding a layer of safety, i.e. that if one person misses the alert another is likely to see it and respond. Obviously that is not the case. Does that surprise you? It really shouldn’t. We know statistically that, in many industries, there is a 10% chance that someone who is “supervising” or “double checking” someone else’s work will make an error. We’ve also often seen clinically that “co-managed” patients are likely to have gaps in their care, because each clinician assumes the other will follow up and then neither does.
Also surprising in the Singh study was that some abnormal imaging results flagged as “critical” did not receive timely follow-up even when the alerts were acknowledged. Some of the issues may relate to the academic system in which the study was done (e.g. housestaff may have come to their “continuity” clinic only once a week) but most of the issues raised apply to any healthcare setting.
In particular, imagine settings where the result could be even worse. Think about the ER. It is not uncommon in many ERs for the ER physician to do their own interpretation of the X-ray or to receive only a “wet read” from the radiologist on call (often nowadays a “nighthawk” service), with the official report coming back the next day. In the above example, the wet read might simply say “no pneumonia” and the report with the suspicious nodule not appear until the following day, by which time both the ER physician and the patient are long gone. Who looks at those reports? In a busy ER, it would be nearly impossible for one person to review the 100 or so radiology reports from the prior day to see which ones have significant abnormal findings requiring follow-up. Some systems now include a form of electronic alert, either in the EMR or the PACS system, so that only studies with significant or unexpected findings are flagged.
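The ER scenario above can be sketched in code. This is a minimal, hypothetical illustration (the class, field names, and matching rule are all assumptions, not any vendor’s actual EMR/PACS logic): only reports the radiologist flagged, or whose final impression differs from the wet read given to the ER, reach the follow-up queue.

```python
# Hypothetical sketch: filter the prior day's finalized reports so that only
# those with significant or unexpected findings reach the ER follow-up queue.
from dataclasses import dataclass

@dataclass
class RadReport:
    patient_id: str
    wet_read: str          # preliminary interpretation given to the ER
    final_impression: str  # radiologist's final report the next day
    flagged: bool          # set by the radiologist for unexpected findings

def follow_up_queue(reports):
    """Keep reports the radiologist flagged, or whose final impression
    differs from the wet read (a discrepant read needs human review)."""
    return [r for r in reports
            if r.flagged
            or r.wet_read.strip().lower() != r.final_impression.strip().lower()]

reports = [
    RadReport("A1", "no pneumonia", "no pneumonia", False),
    RadReport("A2", "no pneumonia",
              "no pneumonia; possible solitary nodule right upper lobe", True),
]
print([r.patient_id for r in follow_up_queue(reports)])  # ['A2']
```

In practice the comparison would be done by the radiologist coding the discrepancy, not by string matching, but the principle is the same: a human reviews only the flagged subset, not all 100 reports.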
The Singh group identified five important lessons for electronic alert systems:
We have previously talked about radiology departments keeping their own log of significant abnormal findings. Some of these require a direct phone call from the radiologist to the ordering provider. For others a phone call may not be necessary, but there must be some other system in place to verify that an appropriate follow-up action was taken for that patient. Choksi et al noted that the standard of care for radiologists is to notify the referring physician, and to document that communication in the radiology report, whenever an unexpected finding such as a possible malignancy is noted. But they also found that such notification did not guarantee the patient would receive appropriate follow-up. So they devised a system in which every imaging report was assigned a code. Significant unexpected findings, such as possible malignancy, were assigned a code 8. A list of all code 8s was given weekly to a designated individual (in their system it was the tumor registrar), who then contacted the appropriate individuals to ensure that appropriate follow-up was done (or that there was documentation of why follow-up was not being done). In one year, they identified 395 code 8 cases. In 35 of the cases, no workup was documented at 2 weeks. In many of those, the finding had been acknowledged and workup initiated even though not documented. But eight cases would likely have been lost to follow-up had this safety net not been in place. Five of those patients were ultimately diagnosed with malignancy.
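The Choksi-style safety net is essentially a weekly filter over coded reports. Here is a minimal sketch under stated assumptions (the function name, the dict fields, and the data are illustrative; the paper describes the workflow, not an implementation): pull every code-8 report that still lacks documented workup and hand that list to the designated individual.

```python
# Hypothetical sketch of the Choksi et al. safety net: every imaging report
# carries a numeric code; a weekly job lists all code-8 reports (significant
# unexpected findings) with no documented workup, for the tumor registrar
# (or other designated individual) to chase down.
def weekly_code8_worklist(reports):
    """reports: list of dicts with 'mrn', 'code', 'workup_documented'.
    Returns the MRNs needing outreach this week."""
    return [r["mrn"] for r in reports
            if r["code"] == 8 and not r["workup_documented"]]

reports = [
    {"mrn": "555555", "code": 8, "workup_documented": False},  # needs outreach
    {"mrn": "555556", "code": 1, "workup_documented": False},  # routine study
    {"mrn": "555557", "code": 8, "workup_documented": True},   # already acted on
]
print(weekly_code8_worklist(reports))  # ['555555']
```

The design point is that the worklist is generated from the coding itself, so a patient cannot fall off it until someone documents either the workup or the reason for not doing one.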
We’d again like to stress that any provider who orders tests must have some sort of system, paper or electronic, to remind them to check the results of all tests they have ordered. Remember, the system in the Singh paper would not alert the provider to cases where the test was never done! Failure to have the test done may be just as significant an issue. So your “tickler” system must say something like “if I haven’t heard Mrs. Smith’s CT scan result by Friday, I need to find out why not”.
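Such a “tickler” can be paper or electronic; as an electronic sketch (purely illustrative, all names and structures are assumptions), each order gets an expected-result date at order entry, and anything with no result by that date surfaces on the provider’s list, crucially including tests that were never performed at all.

```python
# Hypothetical "tickler" file for an ordering provider: flag every order
# with no result by its expected date -- this catches both unreported
# results and tests that were never done.
from datetime import date, timedelta

orders = {}  # order_id -> order record

def place_order(order_id, patient, test, days_until_expected):
    orders[order_id] = {"patient": patient, "test": test,
                        "due": date.today() + timedelta(days=days_until_expected),
                        "result": None}

def record_result(order_id, result):
    orders[order_id]["result"] = result

def overdue(as_of=None):
    """Orders with no result on or before their expected date."""
    as_of = as_of or date.today()
    return [o for o in orders.values()
            if o["result"] is None and o["due"] <= as_of]

place_order("O1", "Smith", "CT chest", 5)
place_order("O2", "Jones", "CBC", 2)
record_result("O2", "normal")
# A week later, O1 still has no result -- the CT may never have been done.
late = overdue(date.today() + timedelta(days=7))
print([o["patient"] for o in late])  # ['Smith']
```

The key design choice is that the flag is driven by the absence of a result rather than by the arrival of an abnormal one, which is exactly the gap the Singh alert system could not cover.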
And the patient, as above, should always ask the provider “When should I expect the result to be available?” and then contact the provider if they have not received those results within a reasonable period of time. The patient should never assume that test results were normal just because they have not heard from the physician or other provider.
In our July 2009 What’s New in the Patient Safety World column “Failure to Inform Patients of Clinically Significant Outpatient Test Results” we noted a new study showing how often such failures to inform patients occur. Casalino et al reviewed charts from both community and academic primary care practices for documentation of follow-up of abnormal results of 11 common blood tests and 3 common preventive tests. They found apparent failure to inform patients of abnormal test results 7.1% of the time. Perhaps the most interesting finding was that practices using a combination of paper and electronic records (so-called “partial EMR”) had higher failure rates than those with either a full EMR or a fully paper-based system. Very few practices had explicit rules or systems for managing test results, usually relying on each physician to devise his or her own system. Unfortunately, some were still telling patients to rely on the old “no news is good news” concept, which is obviously flawed and unsafe.
A somewhat related topic appears in another new article this month (Leekha 2009) that looked at patient preferences for notification of test results and noted disparities between those preferences and how patients were actually notified. A majority wanted notification via a phone call from the physician or nurse practitioner, but in reality most received notification either via a phone call from a nurse or at a return visit to the office. More hi-tech methods (e-mail, automated answering mechanisms, etc.) were not highly regarded, though the average age of the population studied (70 years) may somewhat limit the generalizability of these conclusions. The authors discuss how misalignment of incentives can be a root cause of dissatisfaction (e.g. patients dislike having to spend time and money on a follow-up office visit, whereas providers get reimbursed only for such visits and not for phone calls).
How does your health system ensure that these cases don’t fall through the cracks?
Singh H, Thomas EJ, Mani S, et al. Timely Follow-up of Abnormal Diagnostic Imaging Test Results in an Outpatient Setting. Arch Intern Med. 2009; 169(17): 1578-1586.
Choksi VR, Marn CS, Bell Y, Carlos R. Efficiency of a Semiautomated Coding and Review Process for Notification of Critical Findings in Diagnostic Imaging. AJR 2006; 186: 933-936.
Casalino LP, Dunham D, Chin MH, et al. Frequency of Failure to Inform Patients of Clinically Significant Outpatient Test Results. Arch Intern Med. 2009; 169(12): 1123-1129.
Leekha S, Thomas KG, Chaudhry R, Thomas MR. Patient Preferences for and Satisfaction with Methods of Communicating Test Results in a Primary Care Practice. The Joint Commission Journal on Quality and Patient Safety 2009; 35(10): 497-501.