
Patient Safety Tip of the Week

September 24, 2019

EHR-related Malpractice Claims

 

 

Malpractice claims related to electronic health records (EHRs) are appearing more and more often. We’ve done multiple columns on the unintended consequences of, and the errors related to, healthcare IT, so it should be no surprise that some of these may result in malpractice claims.

 

Two recent papers illustrate the problems. Graber and colleagues (Graber 2019) analyzed 248 cases involving health IT submitted to the CRICO claims database during 2012 and 2013. Ambulatory care accounted for most of the cases (146 cases). Medications (31%), diagnosis (28%), and complications of treatment (31%) were the most frequently involved categories. More than 80% of cases involved moderate or severe harm, although lethal cases were less likely among those from ambulatory settings. Etiologic factors spanned all of the sociotechnical dimensions, and many recurring patterns of error were identified.

 

User-related issues were involved in 63% of cases and technology-related issues in 58%. In many cases, more than one contributing factor was identified.

 

Ranum (Ranum 2019) analyzed The Doctors Company’s claims in which EHRs contributed to injury, a total of 216 claims closed from 2010 to 2018. EHRs were typically contributing factors rather than the primary cause of claims, and the frequency of claims with an EHR factor remains low, accounting for 1.1 percent of all claims closed since 2010 (claims in the Graber study above also accounted for less than 1% of all claims).

 

As you’d expect, errors related to medications or test results tend to be most frequent, reflecting how often EHRs are used to handle them. Those are also two categories likely to be related to the more serious types of error that would lead to a malpractice claim, such as a missed or delayed diagnosis of cancer.

 

Both papers break down the EHR contributions as either system-related or user-related.

 

System-related issues include things like EHRs being down or crashing, inability to access certain parts of the medical record, data appearing in fields other than where the physician expected it, routing issues, and others. In one case the EHR automatically “signed” a test result when in fact it had not been read.

 

 

Poor system design

Both studies provide multiple examples of errors leading to or contributing to claims. As you’d expect, the “cursor” error (also known by other names, like the juxtaposition error), which we’ve discussed so often, contributed in many cases. This is where one clicks on an item in a drop-down list that is adjacent to the item one actually intended to click. That may result in choosing the wrong medication, a wrong dose, or even a wrong route of administration.

 

One example in the Graber paper was an otolaryngologist who intended to order Flonase but instead chose Flomax from a drop-down menu. Interestingly, the Ranum paper also had a case where a physician entered “FLO” and a list stopped at FLOMAX, which was not the intended drug. Another case in the Ranum paper was a physician ordering morphine but clicking on the wrong dose in a drop-down menu, resulting in an opioid overdose.

 

While some would ascribe these errors to “human” error or user-related error, we consider the root cause to be poor system design. It should be recognized that the typical user of almost any computer or smartphone probably makes cursor errors every day, clicking on an item other than the one intended. While the user recognizes this in most cases, a few slip by.

 

It’s surprising to us that there were not more instances of truncation errors. In many of our columns on wrong-patient errors, we’ve noted that some EHR lists may truncate patients’ full names or stop before reaching other patients having the same name. The Ranum paper did have the case noted above, where a physician entered “FLO” and the list stopped at FLOMAX, which was not the intended drug.

 

We are also surprised they did not find cases of patient misidentification due to having multiple individual patient records open at the same time.

 

Yet other cases had to do with failure to see important information. Examples included instances where a test result was “signed” as read even though no one had actually seen the result.

 

 

Cases where good clinical decision support tools could help

There were cases where good clinical decision support tools could help. For example, in some cases, programming in a “usual” dosage range might have prevented an overdose of a medication. That might be especially useful in cases of “missed” decimal points.

Graber et al. also note that good CDSS could detect that an order for potassium in a patient who is already hyperkalemic is probably inappropriate.
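
As a rough illustration of the kinds of checks involved, here is a minimal sketch of a dose-range rule and a lab-aware potassium rule. The drug names, dose limits, and potassium threshold are illustrative assumptions, not values taken from either paper.

# Minimal sketch of two clinical decision support checks, assuming a simple
# order dict and a dict of recent lab values. Drug names, dose limits, and
# the lab threshold below are illustrative only.

USUAL_DOSE_RANGE_MG = {
    "morphine": (2, 30),   # hypothetical per-dose range in mg
}

HYPERKALEMIA_THRESHOLD = 5.5  # serum potassium, mmol/L (illustrative)

def check_order(order, recent_labs):
    """Return a list of warnings for an order; an empty list means no alert."""
    warnings = []

    # Dose-range check: catches a misplaced decimal point (e.g. 100 mg vs 10 mg).
    drug = order["drug"].lower()
    if drug in USUAL_DOSE_RANGE_MG:
        low, high = USUAL_DOSE_RANGE_MG[drug]
        if not (low <= order["dose_mg"] <= high):
            warnings.append(
                f"{order['drug']} dose {order['dose_mg']} mg is outside the "
                f"usual range of {low}-{high} mg"
            )

    # Lab-aware check: potassium ordered for a patient who is already hyperkalemic.
    if drug == "potassium chloride":
        k = recent_labs.get("potassium")
        if k is not None and k >= HYPERKALEMIA_THRESHOLD:
            warnings.append(
                f"Potassium ordered but most recent serum potassium is {k} mmol/L"
            )

    return warnings

# Example: a misplaced decimal point turns an intended 10 mg of morphine into 100 mg.
print(check_order({"drug": "Morphine", "dose_mg": 100}, {"potassium": 4.1}))
print(check_order({"drug": "Potassium Chloride", "dose_mg": 600}, {"potassium": 5.9}))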

 

 

Lack of access to the EHR

Others had to do with the IT systems being down or crashing, resulting in lack of access to important clinical information.

 

 

Routing issues

Graber et al. “encountered repeated examples of laboratory results going to the wrong provider, documentation not being available to the providers who needed it, and assorted other problems of getting the right data to the right provider.” We’ve found that this is especially a problem when there are changes of attending physician, housestaff changes, or changes of service (e.g., from surgery to medicine). We’ve also often seen EHR fields used for things other than their intended use. For example, typical hospital EHRs have a field for the patient’s primary care physician (PCP). But changes in PCP are often not updated in the hospital EHR. Moreover, we’ve often seen hospitals use that field for other purposes (such as for the emergency physician, the inpatient attending, or a consultant).

 

 

Interoperability issues

Interoperability issues also played a role in some cases. For example, results of a study available on one system may not have been available on another system. Some involved lack of integration between an office system and a hospital system.

 

 

The “hybrid” medical record

Medical record systems that include both paper and electronic records (hybrid systems) are particularly vulnerable. Some errors occurred during conversion from paper records to the EHR. For example, a pediatric patient received an antibiotic to which he/she was allergic; the allergy was documented in the paper record but had not been uploaded into the EHR. The Graber study found these instances were more common in ambulatory care and occurred when both paper and electronic systems were in use at the same time, or during a transition from paper to electronic or from one EHR to another.

 

 

Field mapping issues

In some cases, information was entered into fields or sections of the EHR other than where the clinician expected to find it. For example, a positive test result for cervical cancer was entered into the problem list rather than the results section. In some cases, that may be due to inconsistencies when “mapping” fields for transmission of data items from one system to another.
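
As a rough illustration of how a mapping inconsistency can misplace data, here is a minimal sketch in which an unmapped source section silently falls back to a default destination. The section codes and the fallback behavior are hypothetical, not taken from either paper.

# Minimal sketch of a field-mapping inconsistency between two systems.
# Section codes and the fallback behavior are illustrative assumptions.

# Receiving system's map from the sender's section codes to its own sections.
SECTION_MAP = {
    "LAB": "results",
    "RAD": "results",
    # "PATH" was never mapped when the interface was built.
}

def route_item(item):
    """Decide which EHR section an incoming data item lands in."""
    # An unmapped source section silently falls back to the problem list,
    # so a pathology result ends up where the clinician does not expect it.
    return SECTION_MAP.get(item["source_section"], "problem_list")

print(route_item({"source_section": "LAB", "text": "Hemoglobin 13.2 g/dL"}))            # results
print(route_item({"source_section": "PATH", "text": "Cervical cytology: positive"}))    # problem_list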

 

 

Tests pending at discharge

One of the cases noted by Graber et al. involved a pathology report of adenocarcinoma that did not reach the patient’s chart until after inpatient discharge; no alert was sent to the patient’s physician, resulting in a delayed diagnosis of cancer. See our multiple columns, listed below, on both tests pending at discharge and communication of significant findings.

 

Every year we see cases where the diagnosis of cancer is missed or delayed because test results showing the cancer were never seen by a responsible physician. Often, a patient is cared for in a hospital by someone (for example, a hospitalist or an academic service team) who will never see that patient again after discharge. Hence, it is critical that the message that there are pending test results be conveyed to the appropriate physician who will be providing follow up care for that patient.

 

We’ve made a strong case for always including in every discharge summary a section for “test results pending”. We also noted in our October 13, 2009 Patient Safety Tip of the Week “Slipping Through the Cracks” that studies have shown sending reports to two physicians, rather than increasing the likelihood someone will follow up, actually doubles the risk that no one will follow up (Singh 2009)!

 

But even identifying pending test results can be difficult, particularly in patients with prolonged hospital lengths of stay. In such cases, hospitalists or academic teams may change during the course of the hospital stay, and the oncoming physician(s) may not realize which tests have been done without results reported.

 

That’s why hospital labs and radiology departments need to have policies and protocols in place for ensuring that significant test results (such as those suggesting cancer) are communicated to the appropriate physician.

 

 

User-related issues include, among other things, how clinicians enter data into the EHR and how they interact with alerts.

 

Copy & paste

No surprise here. In one case, a note copied and pasted from a previous note did not include a critical medication that had been added later. In another case, a crucial progress note was identical to the previous note from three months earlier, including old vital signs and spelling errors.

 

Copy & paste errors often led to medication errors, sometimes copying over a medication that had been discontinued since the prior note, and sometimes failing to include a medication that had been started since the prior note. For example, Graber et al. noted a case where a history copied from a previous note, which did not document patient's amiodarone medication; delayed recognition of amiodarone toxicity.

 

Pre-population of fields

Many EHRs auto-populate fields in the patient’s history and physical exam and in procedure notes, sometimes entering erroneous or outdated clinical information.

 

And, while it is not mentioned in either of the current papers, a new study raises a serious question of an issue closely related to the copy & paste and pre-population issues. Berdahl and colleagues (Berdahl 2019) compared documentation in the EHR to what was actually observed being obtained by emergency department residents. The disparity between electronic clinical documentation and physicians’ observed behavior was quite striking. For ROS (review of systems), physicians documented a median of 14 systems, while audio recordings confirmed a median of 5 systems. Overall, only 38.5% of documented ROS systems were confirmed by audio recording data. For PE (physical examination), resident physicians documented a median of 8 verifiable systems, while observers confirmed a median of 5.5 systems. Overall, only 53.2% of verifiable documented PE systems were confirmed by concurrent observation. The authors speculate whether pressures to maximize billing might be driving this phenomenon and make a case for payers to consider removing financial incentives to generate lengthy documentation.

 

Obviously, from a patient safety perspective, the presence of incorrect information in the EHR can be a serious safety vulnerability, especially since that information is likely to be propagated forward. And from a malpractice perspective, lawyers could have a field day when they discover totally erroneous information in the EHR that calls into question the validity of everything done for that patient. In the accompanying editorial, Jetté and Kwon (Jetté 2019) note that unsubstantiated documentation was more common for elements that seemed to be less clinically relevant. But imagine what lawyers might do in a claim about a diagnostic error when they find a field in the EHR that states “dorsalis pedis pulse 2+ bilaterally” in a double amputee! Even though that might have zero relevance to the actual claim, the lawyers would question the validity of everything else in that EHR and impeach everything used by the defending physician.

 

 

Overriding alerts

In one example, a physician overrode an alert that the patient was allergic to the drug ordered. We’d expect this may be the category that would be most difficult to defend. We know that very large percentages of alerts are ignored or overridden and that alert fatigue is a huge issue. That is why design of alerts is such a critical function. Sometimes there are good reasons for overriding an alert but, unless there is a “hard stop” associated with that alert, most systems do not allow a physician to enter a reason for overriding the alert. This vulnerability is another good reason to keep the use of alerts to a minimum and offer the clinician the opportunity to provide a reason for the alert override.
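
As a rough illustration of that last point, here is a minimal sketch of an alert workflow that permits overrides but requires a documented reason. The alert structure and logging approach are assumptions for illustration, not any vendor’s actual design.

# Minimal sketch of an alert-override workflow that records the prescriber's
# reason rather than allowing a silent one-click dismissal. The alert fields
# and audit log format are illustrative assumptions.

def handle_alert(alert, get_override_reason, audit_log):
    """Return True if the order may proceed, False if it is blocked."""
    if alert["severity"] == "hard_stop":
        audit_log.append({"alert": alert["text"], "action": "blocked"})
        return False

    # Soft alert: allow the override, but require a documented reason.
    reason = get_override_reason(alert["text"])
    if not reason.strip():
        audit_log.append({"alert": alert["text"], "action": "cancelled"})
        return False

    audit_log.append({"alert": alert["text"], "action": "overridden", "reason": reason})
    return True

log = []
allergy_alert = {"severity": "soft", "text": "Patient has documented penicillin allergy"}
proceed = handle_alert(allergy_alert, lambda txt: "Allergy entry refers to mild GI upset only", log)
print(proceed, log)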

 

 

Training/education

Training and education were also important. There was one case where a covering physician had not been adequately trained to use the EHR and did not have a password. In another, the physician expected to see a paper copy of a CT scan report, but the report went only to the EHR.

 

While we all invest in education and training at initial rollouts or updates of EHR’s, we need to be particularly wary about the need to orient, educate, and train those clinicians who enter the organization at other times. That may be a new resident rotating through a hospital, a new hospital staff attending, a locum tenens physician, an “agency” nurse, or a hospitalist or emergency physician filling in a gap for another physician.

 

 

The Ranum paper also includes several useful videos. One deals with the copy & paste issue and another with the pre-populated data issue. The third talks about the need for “audit, backup, and cross check”.

 

One way we found useful to avoid missing test results was to create a field in the EHR in which we put all tests ordered. Once a result of that test was received and reviewed, we would remove that test from the “test pending” field. You can create a search to perform weekly (or at another appropriate interval) that is something like {find all my patients in whom the “test pending” field is not empty}. Including the date a test was ordered can also help make your searches more specific. For example, your search might be {find all cases in which a Pap smear ordered more than 2 weeks ago has not yet been resulted and reviewed}. This can also help you identify those instances where you recommended a test and the patient did not follow up and get the test. Note: you have to make sure your EHR allows such a “temporary” field where you can add and remove text without illegally “altering” the medical record. Another method would be to have a field that lists all the tests with 2 checkboxes, one that gets checked when a test/study is ordered and one that gets checked when the test has been done and the report has been reviewed. Then your search could be {find any tests ordered but not yet reviewed}.
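
To make that weekly search concrete, here is a rough sketch of the checkbox-style approach just described, assuming a simple list of order records. The field names and record structure are hypothetical, not an actual EHR schema.

# Minimal sketch of the "tests ordered but not yet reviewed" search described
# above. Field names and the record structure are hypothetical.
from datetime import date, timedelta

orders = [
    {"patient": "A", "test": "Pap smear", "ordered": date(2019, 8, 20), "resulted": False, "reviewed": False},
    {"patient": "B", "test": "Chest CT",  "ordered": date(2019, 9, 10), "resulted": True,  "reviewed": True},
    {"patient": "C", "test": "PSA",       "ordered": date(2019, 9, 18), "resulted": True,  "reviewed": False},
]

def pending_review(orders, older_than_days=14, today=None):
    """Tests ordered more than older_than_days ago that are not yet resulted and reviewed."""
    today = today or date.today()
    cutoff = today - timedelta(days=older_than_days)
    return [
        o for o in orders
        if o["ordered"] <= cutoff and not (o["resulted"] and o["reviewed"])
    ]

# Weekly run: anything ordered more than two weeks ago and still awaiting review.
for o in pending_review(orders, older_than_days=14, today=date(2019, 9, 24)):
    print(o["patient"], o["test"], "ordered", o["ordered"])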

 

The Ranum paper also reminds us that a patient injury may result from a failure to access or make use of available patient information. It also reminds us that EHR metadata documents what was reviewed. So, you might have a tough time in court claiming you saw certain data in the EHR, when the metadata shows you didn’t.

 

 

While some errors are due to poor system design and others to “human” factors, many are related to the way humans interact with technology. It should not be surprising that we will continue to see both adverse patient events and malpractice claims related to those interactions. But that doesn’t mean we can’t fix some of our systems to minimize the chances of such errors. You’ll find it worthwhile for your clinical, IT, and risk management staffs to look at the two papers reviewed in today’s column. Reviewing any malpractice claims in your organization, and any RCAs you’ve done after adverse events, may also identify EHR-related issues you need to address. You’ll also find many issues discussed in our columns, listed below, on the unintended consequences of technology.

 

 

See also our other columns on communicating significant results:

 

 

See some of our other Patient Safety Tip of the Week columns dealing with unintended consequences of technology and other healthcare IT issues:

 

 

 

References:

 

 

Graber ML, Siegal D, Riah H, et al. Electronic Health Record–Related Events in Medical Malpractice Claims. Journal of Patient Safety 2019; 15(2): 77-85

https://journals.lww.com/journalpatientsafety/Fulltext/2019/06000/Electronic_Health_Record_Related_Events_in_Medical.1.aspx

 

 

Ranum D. Electronic Health Records Continue to Lead to Medical Malpractice Suits. The Doctors Company 2019

https://www.thedoctors.com/articles/electronic-health-records-continue-to-lead-to-medical-malpractice-suits/

 

 

Singh H, Thomas EJ, Mani S, et al. Timely Follow-up of Abnormal Diagnostic Imaging Test Results in an Outpatient Setting. Arch Intern Med. 2009; 169(17): 1578-1586

https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/224747

 

 

Berdahl CT, Moran GJ, McBride O, Santini AM, Verzhbinsky IA, Schriger DL. Concordance Between Electronic Clinical Documentation and Physicians’ Observed Behavior. JAMA Netw Open 2019; 2(9): e1911390 Published online September 18, 2019

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2751388

 

 

Jetté N, Kwon C. Electronic Health Records—A System Only as Beneficial as Its Data. JAMA Netw Open 2019; 2(9): e1911679 Published online September 18, 2019

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2751386

 

 

 

 
