Patient Safety Tip of the Week

April 20, 2010        HIT’s Limited Impact on Quality To Date



For months now, we have heard complaints from hospitals and physician practices that it will be difficult to achieve “meaningful use” and thereby become eligible for the financial incentives for adoption of healthcare information technology (HIT) that are part of the American Recovery and Reinvestment Act (ARRA) of 2009. But it is very clear to us that simply adopting HIT is not enough. There is ample evidence that, unless you specifically use systems with clinical decision support tools, you are unlikely to improve quality and patient safety and unlikely to achieve overall reductions in healthcare costs. In fact, the unintended consequences of HIT may even result in adverse events. See our October 2009 What’s New in the Patient Safety World column “A Cautious View on CPOE” and our November 24, 2009 Patient Safety Tip of the Week “Another Rough Month for Healthcare IT”.


The theme of this month’s issue of Health Affairs is healthcare information technology. The issue addresses multiple aspects of HIT, including the impact of HIT on quality and cost, the “meaningful use” criteria, the use of HIT in the medical home concept, and the trials and tribulations of adopting HIT in both hospitals and various practice settings. The section on the relationship between HIT and quality has three articles that conclude the impact of HIT on quality has been lukewarm at best. DesRoches et al used data from the Hospital Quality Alliance on measures of care for acute MI, CHF, pneumonia, and a number of surgical care measures and correlated these with levels of HIT adoption drawn from various hospital surveys and other sources. They found, in general, little improvement in most quality measures related to use of electronic health records. Nor was there significant improvement in length of stay, readmission rates, or inpatient costs. They did see marginal improvement in several measures in those hospitals using computerized physician order entry for medications and specific clinical decision support tools (alerts and reminders).


A second study (McCullough et al) found some improvements in hospital quality, but many of the outcomes were neither statistically nor clinically significant. Most of the improvements were seen at academic facilities rather than nonacademic ones.


A third study (Byrne et al) reported on some significant quality and cost improvements seen in the VA healthcare system related to its longstanding and significant investment in HIT. They estimated cumulative savings due to HIT to be on the order of $3 billion. Most of the cost savings came from avoided hospital admissions and unnecessary tests, plus efficiencies in workloads and workflows.


A fourth study (Metzger et al) was a multicenter simulation study that looked at the impact of clinical decision support tools for computerized physician order entry of medications. In the simulation, the CPOE systems detected only 53% of the medication orders that could have led to fatalities and 10-82% of the orders that could have led to serious adverse drug events. Note that this is similar to what we have seen with CPOE implementations nationally to date, i.e. CPOE may reduce overall medication errors but has had little impact on serious adverse drug events.


Key to all the studies that do show quality or safety improvements with HIT is the use of timely clinical decision support (CDS) tools. These are evidence-based tools that come in a variety of formats: real-time alerts and reminders, forms, templates, standardized order sets, etc. They all are designed to provide the clinician with knowledge that is specific to that patient at an appropriate time and point of care. But this has been easier said than done. Those organizations that have been successful with CDS have largely used home-grown systems that they have tweaked over many years. Perhaps the most important lesson learned to date has been that one must carefully balance the frequency, utility and importance of such interventions against the detrimental effect they may have on workflow. “Alert fatigue” is a well-known phenomenon in which clinicians begin to ignore the majority of alerts simply because they are continuously bombarded with them. A better strategy appears to be reserving intrusive alerts for those critically important issues that may lead to patient harm.
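The idea of reserving interruptive pop-ups for critical issues can be sketched in a few lines of code. This is a hypothetical illustration only, not any vendor’s actual CDS API; the severity tiers and example alerts are our own assumptions.

```python
# Hypothetical sketch: triage CDS alerts so that only critical,
# harm-preventing alerts interrupt the clinician, while the rest go
# to a passive advisory list. Severity tiers are illustrative assumptions.
from dataclasses import dataclass

INTERRUPTIVE_SEVERITIES = {"critical"}  # e.g., potentially fatal interactions


@dataclass
class Alert:
    message: str
    severity: str  # "critical", "moderate", or "low"


def triage(alerts):
    """Split alerts into interruptive pop-ups and a passive advisory list."""
    interruptive = [a for a in alerts if a.severity in INTERRUPTIVE_SEVERITIES]
    passive = [a for a in alerts if a.severity not in INTERRUPTIVE_SEVERITIES]
    return interruptive, passive


alerts = [
    Alert("Warfarin + TMP/SMX: major bleeding risk", "critical"),
    Alert("Drug not on preferred formulary tier", "low"),
]
popups, advisories = triage(alerts)
```

In this sketch only the bleeding-risk alert would interrupt the clinician; the formulary note would appear passively, which is exactly the sort of tradeoff that helps keep alert fatigue in check.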


AHRQ (the Agency for Healthcare Research and Quality) has been funding many HIT projects over the years. They have taken a keen interest in CDS research and have recently funded two significant CDS research projects. This month they released a report describing some of the significant challenges and barriers that those two projects have been encountering (Eichner and Das 2010).


One of those projects is developing clinical decision support tools for aiding the management of several chronic conditions. It is being led by an experienced team from Brigham and Women’s/Partners Healthcare in Boston but has multiple collaborating organizations across the country. The concept is to develop open-architecture tools such that information during order entry would be sent to a web-based resource where the rules would be applied and then sent back to the source regardless of the computer system or HIT vendor involved. Some of the challenges they have encountered have been: (1) the organizational structure is large and complex; (2) the funding, time, and personnel needed were underestimated; (3) managing a large project across multiple organizations is difficult; and (4) vendor buy-in has been “hesitant”. They also encountered something we frequently see at all organizations implementing CPOE with CDS: difficulty keeping track of versions and authors (or responsible parties) for various “rules”.
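The vendor-neutral round trip described above can be sketched as follows. This is a hypothetical illustration under our own assumptions, not the Partners project’s actual interfaces: the rules “service” runs in-process as a stand-in for the remote web resource, and the field names and example rule are invented.

```python
# Hypothetical sketch of a vendor-neutral CDS round trip: the local EHR
# packages order-entry data, a shared rules service evaluates it, and a
# result travels back regardless of the source system. The in-process
# "service", field names, and example rule are illustrative assumptions.
import json


def rules_service(request_json: str) -> str:
    """Stand-in for the remote web-based rules engine."""
    req = json.loads(request_json)
    advisories = []
    # Example rule: diabetic patients ordered a thiazide get an A1c reminder.
    if "diabetes" in req["problem_list"] and req["order"] == "hydrochlorothiazide":
        advisories.append("Check most recent hemoglobin A1c")
    return json.dumps({"advisories": advisories})


def on_order_entry(problem_list, order):
    """EHR-side hook: serialize the order, call the shared service, parse the reply."""
    request = json.dumps({"problem_list": problem_list, "order": order})
    reply = json.loads(rules_service(request))
    return reply["advisories"]
```

Note that the example rule fires only if “diabetes” actually appears on the problem list, which is precisely why incomplete problem lists and medication lists (discussed below) can silently defeat this architecture.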


Another significant challenge has been establishing consensus about the clinical decision support. Their best quote: “If you came up with a guideline everyone agreed with, it would be mushy.” Some clinicians inevitably disagree with a rule or the evidence behind it. So they’ve had to establish a Content Governance Committee to intervene in such cases. And that’s just internally. They have concerns about the willingness to adopt rules at the collaborating organizations. Technical issues, such as translating the guidelines into executable code, have also surfaced, and many staff needed to be trained in other programming languages and techniques. Additional barriers have included standards for data format and exchange, mapping, terminologies, interfaces, proprietary or locally developed codes, and information in free text fields. A big problem is that data entry into an EMR is not always complete, and key items like problem lists and medication lists often go un-updated. Some rules cannot be triggered if a diagnosis or a medication is missing from those lists. So training clinicians in the proper way to enter data into the EMR can be a big project.


The second AHRQ-sponsored project is at Yale. Their GLIDES (GuideLines Into DEcision Support) project is focusing on a pediatric ambulatory setting to aid with the management of asthma and prevention of obesity. Their project had a short design period followed by a planned 18-month implementation period. They encountered numerous workflow issues and differences between primary care physicians and specialists. They felt many of these issues might have been avoided had the preimplementation planning been longer. The CDS project goals did not align with the priorities of all parties. They did offer small financial incentives for participation in the project, probably too small to directly influence CDS use but felt to be symbolically important. Getting agreement at each implementation site as to how to integrate the CDS into workflows and how to present the interventions was also a challenge. Deciding whether to continue current workflows or redesign them was a big issue. As the Partners group had found, the Yale group also found issues with the written guidelines and problems applying guidelines written for individual conditions to patients with multiple comorbidities. And they also found issues with code, standardization, and differences in vendor applications at multiple sites.


During the implementation phase, they were disappointed that some clinicians were not using the CDS tools. Specialists often felt they were already delivering optimal care and found the tools to be intrusive and time-consuming. Perhaps the most surprising revelation is that the very clinicians who were involved in the development of the CDS tools were unlikely to use them! Interestingly, the specialists often thought that their patients were too complicated for the CDS tools and that the tools were meant for those cared for by the primary care physicians. Their CDS also relied on how the clinicians used the EMR system, and they found that some were only using the EMR after the patient encounter, losing the opportunity to see the CDS intervention in real time. And here’s one we could have predicted: training residents and fellows in use of the CDS was important, but attending physicians were not the best role models! And though they tried to do group training sessions for clinicians to help generate buy-in, scheduling those was difficult. They did find that, when they had to do one-on-one training, having a clinician leader lead the training helped generate buy-in.


The AHRQ publication goes on to summarize the challenges and barriers that were common to both the Partners and the Yale CDS projects and makes a set of recommendations about planning and implementing CDS projects. We find the lessons learned in these projects to be incredibly valuable. Our general rule when we discuss a CPOE implementation with an organization is to allow 6-12 months beyond whatever timeline they had planned. Now we are going to tell them that if they are incorporating CDS into that implementation (which they must do!) they should add at least 12-18 months beyond whatever they had planned for.


We also like to point out that there are different types of CDS alerts: (1) those that need to be delivered at the time of order entry by the physician and (2) those that are not as time-critical and could be delivered to another healthcare worker. An example of the first would be an alert telling the physician writing a prescription that the drug is not on the formulary of his patient’s payor. That real-time alert is ultimately going to save the physician, patient and pharmacist a lot of time and angst. On the other hand, when a physician orders an antidiarrheal medication for an inpatient who is on antibiotics, an alert to check for C. diff could go to the infection control nurse rather than the physician. That could help reduce alert fatigue.
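The two-tier routing above can be sketched in code. This is a hypothetical illustration using the two examples from the text; the alert names, recipients, and default queue are our own assumptions, not any system’s actual configuration.

```python
# Hypothetical sketch: route time-critical CDS alerts to the prescriber
# at order entry, and less urgent ones to another team member. Alert
# names, recipients, and the default queue are illustrative assumptions.

def route_alert(alert_type: str) -> str:
    """Return who should receive a given CDS alert."""
    time_critical = {
        # Best fixed before the order is signed, saving everyone rework.
        "non_formulary_drug": "prescriber",
    }
    deferred = {
        # Antidiarrheal ordered for an inpatient on antibiotics:
        # a C. diff check can go to infection control instead.
        "c_diff_check": "infection_control_nurse",
    }
    if alert_type in time_critical:
        return time_critical[alert_type]
    if alert_type in deferred:
        return deferred[alert_type]
    return "pharmacist_review_queue"  # assumed default for unclassified alerts
```

The design choice is simply that every alert diverted away from the prescriber is one fewer interruption contributing to alert fatigue, without the advisory being lost.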


We have always assumed that physician order entry was the only way to go. After all, how would you get the messages to them otherwise? Well, an interesting study just came out that challenges that concept. Kazemi et al (2010) studied a neonatal unit at an Iranian teaching hospital (where there apparently was some physician resistance to CPOE to start with), using physician order entry (POE) for 4 months, then nurse order entry (NOE) for 4 months. They found that the rate of non-intercepted medication errors was 40% lower during the NOE period! Though there may be some methodological issues here (a better study would have used two groups and flip-flopped the order of POE and NOE to minimize the chance of a “learning” bias), this finding is one that cannot be ignored. This study needs to be replicated in more diverse settings. It certainly would turn some of our assumptions upside down! But then we’d need to make very certain that we are not overburdening our nursing staffs, who are already being deluged with too many activities. Those of you who have not yet implemented CPOE will be surprised at how much tension often develops between nursing, pharmacy and physician staffs during such implementations. But the bottom line is that the whole purpose of CPOE and CDS is “meaningful use”, i.e. we are doing this to improve patient outcomes and patient safety. There is a ton of learning we have to do to find out what the best practices are in implementing these systems. There is a danger in the timelines put forth in the ARRA incentives for HIT. Since we are moving forward blindly in some cases, the danger is we may try to implement systems and procedures that are not the best possible ones to achieve these lofty goals.



Some of our other columns on healthcare IT issues:





DesRoches CM, Campbell EG, Vogeli C, et al. Electronic Health Records' Limited Successes Suggest More Targeted Uses. Health Affairs 2010; 29(4): 639-646



McCullough JS, Casey M, Moscovice I, Prasad S. The Effect Of Health Information Technology On Quality In U.S. Hospitals. Health Affairs 2010; 29(4): 647-654



Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed Results In The Safety Performance Of Computerized Physician Order Entry. Health Affairs 2010; 29(4): 655-663



Byrne CM, Mercincavage LM, Pan EC, et al. The Value From Investments In Health Information Technology At The U.S. Department Of Veterans Affairs. Health Affairs 2010; 29(4): 629-638



Eichner J, Das M. Challenges and Barriers to Clinical Decision Support (CDS) Design and Implementation Experienced in the Agency for Healthcare Research and Quality CDS Demonstrations. AHRQ Publication No. 10-0064-EF. March 2010



Kazemi A, Fors UGH, Tofighi S, et al. Physician Order Entry Or Nurse Order Entry? Comparison of Two Implementation Strategies for a Computerized Order Entry System Aimed at Reducing Dosing Medication Errors. J Med Internet Res 2010; 12(1): e5. Published online February 26, 2010. doi: 10.2196/jmir.1284












