Healthcare Consulting Services
Healthcare Consulting Services with a Focus on Patient Safety Solutions and Quality Improvement Across the Health Care Continuum. Your Patient Safety Resource Solution.
May 26, 2015
How Safe is the Lab You Use?
The Milwaukee Journal Sentinel has done an excellent series on patient safety issues related to laboratories (Gabler 2015). Gabler does a great job using the stories of real people to illustrate the devastating impact a lab error can have on the lives of patients and their families. But she goes much further, delving into the lack of transparency about laboratory errors and the inability of the general public, or even you, the healthcare professional, to know how well the laboratory to which you refer patients is actually performing.
Our regular readers know we are fond of “stories, not statistics” to foster patient safety. Gabler shows how inaccurate HIV diagnoses and false paternity tests destroyed families, and how incorrect pregnancy testing and blood compatibility testing put neonates in jeopardy. She also shows how failure to heed a lab technician’s warnings about a faulty analyzer resulted in that lab tech being exposed to HIV and HCV, and in more than 400 patients possibly receiving inaccurate HIV and HCV results.
But, while the stories in this series are indeed compelling, the statistics are even more bothersome. Gabler uncovered not only numerous specific errors and deviations occurring in various labs but also found significant flaws in the regulatory oversight of the labs. Labs appear to have the ability to choose some quality parameters as they see fit, and in some cases they can choose their oversight organization. One example given was a lab whose staff failed proficiency testing in pregnancy testing one quarter. Yet rather than repeat the proficiency testing for pregnancy testing, that lab “simply didn’t participate in an outside check of its pregnancy testing the following quarter”. Gabler also discusses labs cutting corners to save money (e.g., using expired reagents, putting off fixing faulty equipment, etc.).
There are 35,000+ labs in the US. While they must meet the federal standards of the Centers for Medicare and Medicaid Services (CMS), the surveys are usually outsourced to other organizations. State inspectors handle about 50% of labs, with accrediting organizations like The Joint Commission (TJC) or the College of American Pathologists (CAP) reviewing 46%, including most hospitals and large clinics. New York and Washington run their own programs, accounting for the remaining 4%. CMS sends teams to survey a small percentage of labs (about 2%) and to audit the surveys being done by those other organizations.
When CMS does those audits they allow up to a very generous 20% disparity rate (between the CMS findings and the other organization’s findings) before CMS reviews the work being done by those organizations. In 2013 The Joint Commission exceeded that 20% threshold, with 9 of 43 audited inspections not meeting the CMS expectations.
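The disparity arithmetic can be checked directly. A minimal sketch, using only the figures reported in the article (the 20% threshold, and 9 of 43 audited inspections):

```python
# Check whether TJC's 2013 audit results exceeded the CMS 20% disparity threshold.
# All figures come from the Gabler article; this is just the arithmetic.
THRESHOLD = 0.20   # CMS allows up to a 20% disparity rate before it reviews an organization

discrepant = 9     # audited inspections not meeting CMS expectations
audited = 43       # total CMS audits of TJC-surveyed labs

disparity_rate = discrepant / audited
print(f"Disparity rate: {disparity_rate:.1%}")           # → 20.9%
print("Exceeds threshold:", disparity_rate > THRESHOLD)  # → True
```

So 9 of 43 works out to 20.9%, just over the 20% threshold.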
But the public might become aware of faulty laboratory practices only if a regulatory body sanctions that lab. And such sanctions are relatively rare. Findings of surveys are not made public if the lab submits a plan of corrective action that is accepted by the oversight agency.
We all appreciate it when organizations like The Joint Commission take on an “educational” role and an approach to facilitate improvement. But at the same time we don’t want to be sending our patients to a lab that has substandard processes.
So how do you know if your favorite lab is safe or not? You won’t find the answer on any website. Your best bet is to be tied into your hospital’s quality improvement system and make sure that your medical staff representation on all the quality committees is actively participating. But that will only help you get a feel for how safe the hospital lab is. For those using proprietary and/or commercial labs, good luck!
So why aren’t there publicly available measures of lab quality and safety? There is certainly considerable public reporting of hospital quality and safety measures (albeit some good and some bad measures). But a patient is actually much more likely to use a lab than a hospital. The vast majority of quality indicators monitored in laboratories are either too technical or of little interest to the general public. But some potential candidates for publicly reported measures might be:
Of course, you’d have to have standardized ways of determining these measures so the system cannot be “gamed”.
We’ve done multiple columns on errors related to laboratory studies (see list below), many or most of which actually occur before the specimen arrives at the lab (as in our Patient Safety Tips of the Week for October 9, 2007 “Errors in the Laboratory” and March 6, 2012 ““Lab” Error”). Lost lab specimens can leave patients without a diagnosis (see our November 16, 2010 Patient Safety Tip of the Week “Lost Lab Specimens”). One particularly serious error highlighted in several of our columns is mislabeling of specimens. A 2011 CAP study showed a specimen labeling error rate of 1.1 per 1000 cases (Nakhleh 2011). Because mislabeling has such potentially devastating consequences for patients, the College of American Pathologists (CAP) has just issued guidelines for uniform labeling of blocks and slides in surgical pathology (Brown 2015). CAP put together a panel of experts to develop that guideline. A systematic literature review found the overall evidence inadequate to inform the guideline, so the panel had to rely on expert consensus opinion for 10 of their 12 recommendations.
In several of our columns on specimen mislabeling errors we’ve mentioned that you, as a clinician, should be suspicious when a “surprise” diagnosis (or lack of an expected diagnosis) comes back from the lab. There are DNA-based tools that labs can use to look for switched or cross-contaminated lab specimens.
There are other checks that can be done for some highly sensitive tests. For example, in the Gabler article one lab has two samples of each person's DNA tested by two different lab technicians. Another safeguard requires that whenever a man is excluded as father of a child, the company double-checks to make sure the child's swab wasn't accidentally switched with the mother's swab, since the two often have their cheeks swabbed at the same time.
So, how safe is the lab you use? What quality indicators would you like to see for those labs?
Some of our other columns on errors related to laboratory studies:
Gabler E. Hidden Errors. A Watchdog Report. Weak oversight allows lab failures to put patients at risk. Milwaukee Journal Sentinel 2015; May 16, 2015
Nakhleh RE, Idowu MO, Souers RJ, et al. Mislabeling of cases, specimens, blocks, and slides: a College of American Pathologists study of 136 institutions. Arch Pathol Lab Med 2011; 135(8): 969-974
Brown RW, Speranza VD, Alvarez JO, et al. Uniform Labeling of Blocks and Slides in Surgical Pathology. Guideline From the College of American Pathologists Pathology and Laboratory Quality Center and the National Society for Histotechnology. Arch Pathol Lab Med 2015; Early Online Release April 21, 2015
We’ve already done numerous columns showing that adverse patient events and mortality are higher for patients admitted on weekends, commonly referred to as “the weekend effect”. We have also noted many studies demonstrating similar adverse occurrences in patients admitted at night so we sometimes lump weekend and night admission problems together as “the after-hours effect”.
A new study looked at data from a large administrative database covering 2002 to 2010 to determine the association between hospital-acquired conditions (HACs, or “never events”) and admission on weekends vs. weekdays (Attenello 2015). They found that the incidence of HACs was 5.7% among patients admitted on weekends vs. 3.7% for those admitted on weekdays. Even after adjustment for a variety of patient, hospital, and severity cofactors, they determined that weekend admission was associated with a 25% higher likelihood of developing at least one hospital-acquired condition.
Not surprisingly, the occurrence of a hospital-acquired condition was associated with a 76% higher hospital charge and an increased hospital length of stay (from a mean LOS of 4.53 days to 6.26 days). The authors recognize, however, that this LOS association does not necessarily imply causality and that it may be that patients with longer LOS have more opportunity to develop a HAC.
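A quick back-of-envelope pass over the Attenello figures shows how much the adjustment matters. The incidence rates, the adjusted 25% figure, and the LOS means are all from the study; the unadjusted ratio is our own arithmetic:

```python
# Back-of-envelope arithmetic on the Attenello 2015 figures.
weekend_hac = 0.057   # HAC incidence, weekend admissions
weekday_hac = 0.037   # HAC incidence, weekday admissions

unadjusted_rr = weekend_hac / weekday_hac
print(f"Unadjusted relative risk: {unadjusted_rr:.2f}")  # → 1.54, i.e. ~54% higher raw risk

# After adjustment for patient, hospital, and severity cofactors, the study
# reports only a 25% higher likelihood -- adjustment shrinks the raw gap considerably.
adjusted_increase = 0.25

los_with_hac, los_without_hac = 6.26, 4.53  # mean length of stay, days
print(f"Added LOS associated with a HAC: {los_with_hac - los_without_hac:.2f} days")  # → 1.73 days
```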
Interestingly, patients with comorbid neurological conditions had a 35% increased likelihood of developing a hospital-acquired condition. This may reflect the finding that patients with moderate to extreme loss of function were 34% to 157% more likely to incur a HAC (since loss of function is considerably more likely with many neurological conditions). It would be interesting to see how HAC rates compare between hospitals with and without stroke center designation. We’d expect that hospitals with coordinated stroke teams might have lower HAC rates. However, the current widespread shortage of neurologists (and especially of neurologists available for night and weekend hospital call) may be a contributory factor. On the other hand, delays in ancillary services (e.g., CT, MRI, ultrasound) may impact patients with neurological conditions to a greater degree than those with other conditions.
The accompanying editorial (Dharmarajan 2015) discusses the difficulties of using data from large administrative databases to determine quality and safety outcomes, noting that for many of the patient safety indicators, estimates obtained from administrative data have never been convincingly validated against medical record data. The editorialists argue that, instead of focusing on strategies to improve weekend care, we must focus on improving care every day of the week and on overall strategies to prevent such adverse events.
In our many previous columns on the weekend effect or after-hours effect we have pointed out how hospitals differ during these more vulnerable times. Our healthcare systems clearly do not deliver uniform care 24x7. Staffing patterns (both in terms of volume and experience) are the most obvious difference, but there are many others as well. Many diagnostic tests are not as readily available during these times. Physician and consultant availability may be different, and cross-coverage by physicians who lack detailed knowledge about individual patients is common. You also see more verbal orders, which of course are error-prone, at night and on weekends. And a difference in non-clinical staffing may be a root cause. Our December 15, 2009 Patient Safety Tip of the Week “The Weekend Effect” discussed how adding non-clinical administrative tasks to already overburdened nursing staff on weekends may be detrimental to patient care.

Just do rounds on one of your med/surg floors or ICUs on a weekend. You’ll see nurses answering phones all day long, causing interruptions in attention-critical nursing activities. Calls from radiology and the lab that might go directly to physicians now go first to the nurse on the floor, who then has to try to track down the physician. Nurses end up filing lab and radiology reports or faxing medication orders down to pharmacy, activities often done by clerical staff during daytime hours. Even in facilities that have CPOE, nurses often end up entering orders into the computer after hours because the physicians are off-site and are phoning in verbal orders. You’ll also see nurses giving directions to the increased numbers of visitors typically seen on weekends. They even end up doing some housekeeping chores.
All of these interruptions and distractions obviously interfere with nurses’ ability to attend to their clinically important tasks (see our Patient Safety Tips of the Week for August 25, 2009 “Interruptions, Distractions, Inattention…Oops!” and May 4, 2010 “More on the Impact of Interruptions”).
Perhaps the most significant contribution of the current study by Attenello and colleagues is the quantification of the financial impact of HACs related to weekend admission. Since hospitals now (theoretically) bear the brunt of the cost of HACs, perhaps they will see that better upfront investment of resources may save money in the long run, not to mention result in better patient outcomes.
The weekend effect is complex and involves both patient-related factors and quality of care factors. While we may not be able to do much about the patient-related factors, there remains much we can do about the quality of care factors.
Some of our previous columns on the “weekend effect”:
Attenello FJ, Wen T, Cen SY, et al. Incidence of “never events” among weekend admissions versus weekday admissions to US hospitals: national analysis. BMJ 2015; 350: h1460 (Published 15 April 2015)
Dharmarajan K, Kim N, Krumholz HM. Patients need safer hospitals, every day of the week. BMJ 2015; 350: h1826 (Published 15 April 2015)
A new study that demonstrated a significant positive impact of the WHO Surgical Safety Checklist on patient morbidity and mortality (Haugen 2015) seems to have touched off a debate on whether we are suffering from “checklist fatigue”. Haugen and colleagues, using a stepped wedge cluster randomized controlled trial at 2 Norwegian hospitals (one academic and one community), demonstrated that implementation of the Surgical Safety Checklist reduced complication rates from 19.9% to 11.5% (absolute risk reduction 8.4%). Moreover, mean hospital length of stay (LOS) was reduced by 0.8 days after the implementation. Mortality reduction from 1.6% to 1.0% overall did not reach statistical significance (though at the community hospital a mortality reduction from 1.9% to 0.2% was statistically significant).
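The Haugen complication figures translate readily into the standard effect measures. The rates and the 8.4% absolute risk reduction are from the study; the relative risk reduction and the number-needed-to-treat are derived here by us:

```python
# Translate the Haugen 2015 complication rates into the usual effect measures.
before, after = 0.199, 0.115  # complication rates before/after checklist implementation

arr = before - after   # absolute risk reduction
rrr = arr / before     # relative risk reduction
nnt = 1 / arr          # number needed to treat to prevent one complication

print(f"Absolute risk reduction: {arr:.1%}")  # → 8.4%
print(f"Relative risk reduction: {rrr:.0%}")  # → 42%
print(f"Number needed to treat: {nnt:.0f}")   # → 12
```

In other words, roughly one complication was averted for every 12 patients operated on under the checklist.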
The original introduction of the WHO Surgical Safety Checklist (Haynes 2009) was associated with striking reductions in both mortality and complication rates. However, that study and several others have come under some criticism because of their before-after study designs. And some studies, such as one done in Ontario, Canada (Urbach 2014) showed that implementation of surgical safety checklists was not associated with significant reductions in operative mortality or complications.
So the new study by Haugen and colleagues, using the stepped wedge design (which is somewhat similar to the crossover designs you may be more familiar with from device or medication studies), should have been a welcome endorsement of the Surgical Safety Checklist. Indeed, in a commentary accompanying the study, several of the coauthors of the original WHO study were delighted that the new study showed support for use of the checklist (Haynes 2015). They pointed out that the Haugen study even showed a “dose effect”, in that larger reductions in complications were seen when all portions of the checklist were followed.
But in a second commentary Stock and Sundt were less enthusiastic and raised the concern of “checklist fatigue” (Stock 2015). They note that checklists should be used judiciously and are particularly useful for preventing memory lapses when a specific sequence of actions must be carried out the same way each time. But they point out that such memory lapses are actually involved in only a small percentage of significant surgical incidents. They suggest we take a “timeout” before implementing any new checklist and see if it meets 3 criteria:
These are actually good criteria. We’ve done multiple columns on checklists (listed below) and described the ideal qualities of checklists in several of them.
The Haynes commentary also points out that the Norwegian study did several important things during its implementation. First, they modified the Surgical Safety Checklist to meet local needs. Second, they piloted it before widespread implementation, allowing for adjustments and for the development of “champions” and “super users” who would be key players in the further rollout. And, third, they provided appropriate education for all disciplines affected when they did their widespread rollout.
We continue to be enthusiastic proponents of checklists. They need to be short, and they shouldn’t include a whole bunch of items that are seldom forgotten. And checklists are really good communication tools. So the wisdom of the criteria proposed by Stock and Sundt is well-grounded.
But perhaps the real lesson is that it is not enough simply to implement a checklist blindly based upon its successes in other venues. You actually need to measure after implementation to ensure the checklist led to its intended effect and did not produce unintended consequences.
Some of our prior columns on checklists:
Haugen AS, Søfteland E, Almeland SK, et al. Effect of the World Health Organization Checklist on Patient Outcomes: A Stepped Wedge Cluster Randomized Controlled Trial. Annals of Surgery 2015; 261(5): 821-828
Haynes A, Weiser T, Berry W, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009; 360(5): 491-499
Urbach DR, Govindarajan A, Saskin R, et al. Introduction of surgical safety checklists in Ontario, Canada. N Engl J Med 2014; 370(11): 1029-1038
Haynes AB, Berry WR, Gawande AA. What Do We Know About the Safe Surgery Checklist Now? Annals of Surgery 2015; 261(5): 829-830
Stock CT, Sundt T. Timeout for Checklists? Annals of Surgery 2015; 261(5): 841-842
Among our numerous columns on potentially inappropriate medication use in the elderly, we’ve done a few specifically on deprescribing (see our Patient Safety Tips of the Week for March 4, 2014 “Evidence-Based Prescribing and Deprescribing in the Elderly” and September 30, 2014 “More on Deprescribing”).
We always recommend that you do a “brown bag” medication reconciliation at least annually with all your geriatric patients, in which you determine all the medications a patient is taking, including OTC drugs and supplements. The same can be done in a Medication Therapy Management (MTM) session with a pharmacist or nurse in other settings. You will always be surprised at how many drugs are found to be duplicative, no longer necessary, or potentially inappropriate, and the opportunity to “deprescribe” presents itself.
But a new study from Australia points out that we often miss another ideal opportunity for deprescribing: the inpatient hospitalization (Hubbard 2015). The authors looked at patients aged 70 years or older admitted to general medical units of 11 acute care hospitals and, not unexpectedly, found that polypharmacy and hyperpolypharmacy were prevalent. Significantly, however, they found that despite identification of multiple medications that might be considered potentially inappropriate, almost no changes were made in the number or classification of medications.
Hubbard and colleagues note that the optimal setting for deprescribing is not clear. The inpatient setting typically has time constraints and the inpatient physicians may be much less familiar with the whole clinical picture than the outpatient physicians. Nevertheless, an inpatient hospitalization should be considered an opportunity to consider deprescribing.
In a related commentary several Australian healthcare professionals discuss the importance of better communication channels between all parts of the healthcare system (Mitchell 2015).
While it may be time-intensive, we believe that failure to do a thorough medication review with intent to deprescribe while the patient is an inpatient is, indeed, a missed opportunity. The inpatient physicians can arrange a time to discuss the medications with the primary care physician. And the inpatient hospitalization provides another unique opening. We’ve mentioned on numerous occasions that physicians almost never discontinue a medication they have prescribed, even if it appears on Beers’ list, the STOPP list, or an equivalent list of potentially inappropriate medications. But here it is possible to say “things are different now, so we are going to take you off this medication.”
Some of our past columns on Beers’ List and Inappropriate Prescribing in the Elderly:
Hubbard RE, Peel NM, Scott IA, et al. Polypharmacy among inpatients aged 70 years or older in Australia. Med J Aust 2015; 202(7): 373-377
Mitchell C. Polypharmacy a shared duty. MJA InSight 2015; Monday, 20 April, 2015
ECRI Institute has published its Top 10 Patient Safety Concerns for 2015. This year's list includes:
1. Alarm hazards: inadequate alarm configuration policies and practices
2. Data integrity: incorrect or missing data in EHRs and other health IT systems
3. Managing patient violence
4. Mix-up of IV lines leading to misadministration of drugs and solutions
5. Care coordination events related to medication reconciliation
6. Failure to conduct independent double checks independently
7. Opioid-related events
8. Inadequate reprocessing of endoscopes and surgical instruments
9. Inadequate patient handoffs related to patient transport
10. Medication errors related to pounds and kilograms
We can’t object to any of these being on the list and have done numerous columns on each topic. We’ll let you go to the ECRI Institute website where you can download their informative and useful document.
ECRI Institute. Top 10 Patient Safety Concerns for 2015