The major issue regarding work hours in healthcare, for both physicians and nurses, has always been whether the benefit of reducing the detrimental effects of fatigue might be offset by reduced continuity of care and the increased number of handoffs that occur after changes in housestaff or nursing hours. No one argues that healthcare worker fatigue is a serious problem (see our many previous columns listed below). But we’ve also discussed in many columns the problems related to handoffs, cross-coverage, and reduced familiarity with patients.
In the late 1980s New York State adopted recommendations of the Bell Commission to limit the number of hours housestaff could work in a week. Subsequently other states and the ACGME adopted significant restrictions in housestaff hours. The ACGME 80-hour work week restriction was implemented in 2003, and in 2011 the ACGME mandated 16-hour duty maximums for PGY-1 residents. The 2011 changes also mandated that residents have at least 8 hours free between shifts and that residents in-house for 24 hours may spend up to 4 additional hours on transfer-of-care activities but must then have at least 14 hours off before the next shift.
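For illustration only, the numeric limits above can be encoded as a simple schedule check. This is a sketch under stated assumptions: the function name and parameters are hypothetical, and the actual ACGME rules contain averaging provisions (e.g., the 80-hour week is averaged over 4 weeks) and exceptions this ignores.

```python
from datetime import datetime

# 2011 ACGME numeric limits (hours), as described above
MAX_WEEKLY_HOURS = 80          # averaged over 4 weeks in the actual rules
MAX_SHIFT_PGY1 = 16            # PGY-1 duty period maximum
MIN_REST_BETWEEN_SHIFTS = 8    # minimum free time between duty periods
MIN_REST_AFTER_24H = 14        # required off-time after a 24-hour in-house period

def check_shift_pair(prev_end, next_start, prev_hours, pgy1=False):
    """Flag violations between two consecutive duty periods (a sketch)."""
    violations = []
    rest = (next_start - prev_end).total_seconds() / 3600
    if pgy1 and prev_hours > MAX_SHIFT_PGY1:
        violations.append("PGY-1 shift exceeds 16 hours")
    required = MIN_REST_AFTER_24H if prev_hours >= 24 else MIN_REST_BETWEEN_SHIFTS
    if rest < required:
        violations.append(f"only {rest:.1f} h rest; {required} h required")
    return violations
```

For example, a resident coming off a 24-hour call at 7 AM who is scheduled back at 1 PM the same day would be flagged, since only 6 of the required 14 hours off have elapsed.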
Significantly, most of the restrictions on housestaff work hours were implemented without any formal or systematic measurement of their impact on patient outcomes or any mechanism for recognizing unintended consequences. So we have always been playing catch-up in assessing the impact of those changes. The evidence on the impact of restricted housestaff hours on patient outcomes and patient safety has been mixed and contradictory (see the list of our prior columns below).
In our January 2015 What’s New in the Patient Safety World column “More Data on Effect of Resident Workhour Restrictions” we cited a study by Rajaram and colleagues (Rajaram 2014) which found that implementation of the 2011 ACGME duty hour reform was not associated with a change in general surgery patient outcomes or differences in resident examination performance.
Now Rajaram and colleagues have looked at the impact of the 2011 ACGME duty hour reform on patient outcomes in several surgical subspecialties (Rajaram 2015). They looked at data from the American College of Surgeons NSQIP database for 5 surgical specialties (neurosurgery, obstetrics/gynecology, orthopedic surgery, urology, and vascular surgery) and used a composite measure of death or serious morbidity within 30 days of surgery for each specialty. They then compared that measure for teaching and non-teaching hospitals for one year prior to and two years after the reform. They found no significant associations between duty hour reform and the composite outcome of death or serious morbidity in the two years post-reform for any of the 5 surgical specialties.
The good news is obviously that there appears to have been no detrimental effect on patient outcomes. The disappointing news is that there was no positive effect on patient outcomes. And there remain numerous questions about the impact on trainee education.
Virtually all the studies to date have been observational studies, usually with a before-after format. In our January 2015 What’s New in the Patient Safety World column “More Data on Effect of Resident Workhour Restrictions” we noted that prospective trials of duty hour requirements are being conducted for both surgical (FIRST Trial) and medical (iCOMPARE Trial) training programs.
As before, we hope these two trials can help answer some of the outstanding questions regarding the multiple aspects of the impact of resident work hour restrictions.
Some of our other columns on housestaff workhour restrictions:
December 2008 “IOM Report on Resident Work Hours”
February 26, 2008 “Nightmares: The Hospital at Night”
January 2011 “No Improvement in Patient Safety: Why Not?”
November 2011 “Restricted Housestaff Work Hours and Patient Handoffs”
January 3, 2012 “Unintended Consequences of Restricted Housestaff Hours”
June 2012 “Surgeon Fatigue”
November 2012 “The Mid-Day Nap”
December 10, 2013 “Better Handoffs, Better Results”
April 22, 2014 “Impact of Resident Workhour Restrictions”
January 2015 “More Data on Effect of Resident Workhour Restrictions”
Some of our other columns on the role of fatigue in Patient Safety:
November 9, 2010 “ ”
April 26, 2011 “Sleeping Air Traffic Controllers: What About Healthcare?”
February 2011 “Update on 12-hour Nursing Shifts”
September 2011 “Shiftwork and Patient Safety”
November 2011 “Restricted Housestaff Work Hours and Patient Handoffs”
January 3, 2012 “Unintended Consequences of Restricted Housestaff Hours”
June 2012 “Surgeon Fatigue”
November 2012 “The Mid-Day Nap”
November 13, 2012 “The 12-Hour Nursing Shift: More Downsides”
July 29, 2014 “The 12-Hour Nursing Shift: Debate Continues”
October 2014 “Another Rap on the 12-Hour Nursing Shift”
December 2, 2014 “ANA Position Statement on Nurse Fatigue”
Rajaram R, Chung JW, Jones AT, et al. Association of the 2011 ACGME Resident Duty Hour Reform With General Surgery Patient Outcomes and With Resident Examination Performance. JAMA 2014; 312(22): 2374-2384
Rajaram R, Chung JW, Cohen ME, et al. Association of the 2011 ACGME Resident Duty Hour Reform with Postoperative Patient Outcomes in Surgical Specialties. J Am Coll Surg 2015; published online July 7, 2015
The FIRST Trial. Flexibility In duty hour Requirements for Surgical Trainees Trial.
iCOMPARE Trial (Comparative Effectiveness of Models Optimizing Patient Safety and Resident Education)
In our Patient Safety Tips of the Week November 17, 2009 “Switched Babies” and December 11, 2012 “Breastfeeding Mixup Again” we noted that one of the risk factors for these mixups is similar-sounding names. Similar names are always an issue when it comes to wrong-patient events, but neonates may be even more at risk. In our May 20, 2008 Patient Safety Tip of the Week “CPOE Unintended Consequences – Are Wrong Patient Errors More Common?” we noted that you would be surprised to see how often patients with the same or very similar names are hospitalized at the same time. Shojania (2003) described a near-miss related to patients having the same last name and noted that a survey on his medical service over a 3-month period showed patients with the same last names on 28% of the days. The problem is even more significant on neonatal units, where multiple births often lead to many patients with the same last name being hospitalized at the same time and to medical record numbers that differ by only one digit. Gray et al (2006) found multiple patients with the same last names on 34% of all NICU days during a full calendar year, and similar-sounding names on 9.7% of days. When similar-appearing medical record numbers were also included, not a single day occurred without risk for patient misidentification. Both these studies were on relatively small services, so one can anticipate that the risk from similar names is much higher when the entire hospitalized patient population is in the database.
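The scale of the similar-name problem can be appreciated by screening a day’s census for phonetically matching last names. A minimal sketch, assuming last names are available as plain strings; the classic Soundex code used here is only a rough proxy for “similar sounding,” and real systems use more sophisticated matching:

```python
def soundex(name):
    """Classic Soundex code -- a rough proxy for similar-sounding names."""
    name = name.upper()
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    out, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "HW":        # H and W do not reset the previous code
            prev = code
    return (out + "000")[:4]

def flag_similar_names(census):
    """Return pairs of patients on the census whose last names sound alike."""
    pairs = []
    for i in range(len(census)):
        for j in range(i + 1, len(census)):
            if soundex(census[i]) == soundex(census[j]):
                pairs.append((census[i], census[j]))
    return pairs
```

A census containing “Smith” and “Smyth,” for instance, would be flagged as a same-day similar-name pair of the kind Gray et al counted.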
Our June 26, 2012 Patient Safety Tip of the Week “Using Patient Photos to Reduce CPOE Errors” highlighted an intervention developed by Children’s Hospital of Colorado (Hyman 2012) in which a patient verification prompt accompanied by photos of the patient reduced the frequency of wrong-patient order entry errors. That may be helpful for older children and adults but, frankly, is not of much benefit in neonates.
In our July 17, 2012 Patient Safety Tip of the Week “More on Wrong-Patient CPOE” we discussed an elegant tool, the retract-and-reorder (RAR) tool, that provides a quantitative estimate of how frequently wrong-patient CPOE errors may occur (Adelman 2013). Those authors developed a computer tool that identified instances where orders were entered on a patient, promptly retracted, and then entered on a different patient. In a hospital validation study they found the RAR tool had a 76.2% positive predictive value for identifying wrong-patient errors (though obviously these errors were captured and corrected before reaching the patient).
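The retract-and-reorder logic can be sketched against an order-event log. This is a sketch only: the event schema is hypothetical, and the 10-minute windows are illustrative placeholders rather than the exact parameters of the published tool.

```python
from datetime import datetime, timedelta

# Hypothetical event schema: {"clinician", "patient", "order", "action", "time"}
WINDOW = timedelta(minutes=10)  # illustrative; see Adelman 2013 for actual criteria

def find_rar_events(events):
    """Flag retract-and-reorder sequences: an order placed, promptly
    retracted, then re-entered by the same clinician on a different patient."""
    placed = [e for e in events if e["action"] == "placed"]
    retracted = [e for e in events if e["action"] == "retracted"]
    rar = []
    for p in placed:
        for r in retracted:
            # was this order retracted soon after placement?
            if (r["clinician"] == p["clinician"] and r["patient"] == p["patient"]
                    and r["order"] == p["order"]
                    and timedelta(0) <= r["time"] - p["time"] <= WINDOW):
                for p2 in placed:
                    # was the same order promptly re-entered on a different patient?
                    if (p2["clinician"] == p["clinician"]
                            and p2["patient"] != p["patient"]
                            and p2["order"] == p["order"]
                            and timedelta(0) < p2["time"] - r["time"] <= WINDOW):
                        rar.append((p["patient"], p2["patient"], p["order"]))
    return rar
```

Because the flagged sequences were caught and corrected by the clinician, the tool measures near-misses rather than errors that reached the patient.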
Now the researchers have applied the RAR tool to assess the impact of a change in naming conventions for newborns (Adelman 2015). Hospitals need to create a name for each newborn promptly at delivery because families often have not yet decided on a name for their baby. Most hospitals have used the nonspecific convention “Baby Boy” Jones or “Baby Girl” Jones. A suggested alternative uses a more specific convention incorporating the first name of the mother, for example “Wendysgirl Jones”. Montefiore Medical Center switched to this new naming convention in its 2 NICUs in July 2013, and the RAR tool was used to measure the impact on wrong-patient errors. Wrong-patient error rates in the year after implementation of the new, more specific naming protocol were 36% lower than in the year prior to implementation.
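The convention is simple enough to sketch as a function. This is a sketch only: the function name is hypothetical, and a real implementation would need to handle multiple births, hyphenated surnames, and field-length limits.

```python
def temporary_newborn_name(mother_first, last_name, sex):
    """Build a temporary newborn name from the mother's first name, per the
    convention described above, e.g. ('Wendy', 'Jones', 'F') -> 'Wendysgirl Jones'.
    Sketch only: multiples and edge cases are not handled."""
    suffix = "girl" if sex.upper() == "F" else "boy"
    return f"{mother_first}s{suffix} {last_name}"
```

The appeal of the convention is that two newborns sharing a surname (including siblings of different mothers) still end up with distinct, meaningful first names rather than the interchangeable “Baby Boy”/“Baby Girl”.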
For reasons not immediately clear, error rates were reduced even more for orders placed by housestaff (52% reduction) and orders placed on male patients (61% reduction).
The switch to the more specific neonatal naming convention was simple and effective, entailed no significant financial or labor cost, and used technology already present in most NICUs. Though the Montefiore study was not blinded and was potentially subject to the Hawthorne effect, the more specific naming convention is very promising. Validation at other NICUs would be the next logical step before adopting this convention more widely.
The authors note that they studied only the impact on order entry. They point out that mixing up names is also a potentially serious problem in the reading of imaging studies and pathology specimens and in the administration of blood products, and may be a factor in breastmilk mixups as well. So the potential for this new naming convention to avert wrong-patient errors is substantial.
Shojania KG. AHRQ Web M&M Case and Commentary. Patient Mix-Up. February 2003
Gray JE, Suresh G, Ursprung R, et al. Patient Misidentification in the Neonatal Intensive Care Unit: Quantification of Risk. Pediatrics 2006; 117: e43-e47
Hyman D, Laire M, Redmond D, Kaplan DW. The Use of Patient Pictures and Verification Screens to Reduce Computerized Provider Order Entry Errors. Pediatrics 2012; 130: 1-9 Published online June 4, 2012 (10.1542/peds.2011-2984)
Adelman JS, Kalkut GE, Schechter CB, et al. Understanding and preventing wrong-patient electronic orders: a randomized controlled trial. J Am Med Inform Assoc 2013; 20(2): 305-310
Adelman J, Aschner J, Schechter C, et al. Use of Temporary Names for Newborns and Associated Risks. Pediatrics 2015; Published online July 13, 2015
Back in the 1990s the Niagara Health Quality Coalition was looking for measures for public reporting of hospital quality. At that time we recognized that mortality rates, with the exception of mortality rates for a few specific conditions, were poor indicators of quality of care. Despite critiques by physicians well-respected in the patient safety field (Iezzoni 1997, Shojania 2008, Lilford 2010), standardized hospital-wide mortality rates continue to be used frequently as a measure of quality of care.
Now a new study has attempted to determine whether there is a correlation between hospital-wide standardized mortality rates and the more important measure - avoidable deaths. Hogan and colleagues (Hogan 2015) did case record reviews of deaths in 34 English hospital trusts and determined rates of avoidable deaths, then compared these to two commonly used measures - the hospital standardized mortality ratio (HSMR) and the summary hospital level mortality indicator (SHMI).
The proportion of avoidable deaths was actually quite low (3.6%). More importantly, there was only a weak, statistically non-significant correlation between the proportion of avoidable deaths and either the HSMR or the SHMI.
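To make “weak correlation” concrete, the comparison amounts to rank-correlating two hospital-level measures. A minimal sketch of a Spearman rank correlation; the numbers below are invented for illustration and are not from the Hogan study:

```python
def rank(xs):
    """Rank values 1..n (tie-averaging omitted for this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

hsmr = [85, 92, 101, 97, 110, 118, 89, 105]               # invented values
avoidable_pct = [3.1, 4.2, 3.0, 3.9, 3.5, 4.0, 3.8, 3.2]  # invented values
```

A rho near +1 or −1 would mean hospitals’ rankings on the two measures agree; the Hogan finding was a rho close to zero that did not reach statistical significance.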
The editorial accompanying the Hogan paper (Doran 2015) concludes that evidence is mounting that there is no future for summary mortality rates. The editorialists note that, no matter how carefully such rates are risk-adjusted, there is variation both in risk across hospitals and in performance within hospitals, as well as variation in the availability of alternative places to die. And now that summary mortality rates have also been shown not to correlate with avoidable deaths, we should see the “death of death rates”.
So should we instead switch to the methodology used by Hogan and colleagues and focus on avoidable death rates? Ideally we would. But medical chart review is labor intensive (and the electronic medical record actually does little to expedite such review of records for quality and avoidability of death). Moreover, despite multiple measures to reduce variability in reviewers’ conclusions, the inter-rater reliability in the Hogan study was modest at best. And the very low rate of avoidable deaths found would require very large numbers to yield statistically significant comparisons.
We’ve often discussed use of trigger tools to help identify cases of suboptimal care (see our columns on trigger tools listed below). The concept is that the trigger tool flags cases that have a high likelihood of errors or suboptimal care and then more intensive chart review takes place. These are wonderful tools to use in a hospital to identify vulnerabilities and areas in need of improvement. But it would be difficult to compare hospitals using these because the frequency with which the triggers occur varies considerably across hospitals.
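The two-stage trigger concept can be sketched as a simple screen over chart data. A sketch only: the record fields are hypothetical, though the example triggers (naloxone administration, markedly elevated INR, unplanned return to the OR) are of the kind used in published trigger tools.

```python
# Illustrative triggers; record field names are hypothetical.
TRIGGERS = {
    "naloxone_given": lambda r: r.get("naloxone_given", False),
    "inr_over_6": lambda r: r.get("max_inr", 0) > 6,
    "return_to_or": lambda r: r.get("return_to_or", False),
}

def flag_for_review(records):
    """Stage 1 of a trigger tool: return (record id, triggers hit) for charts
    that should go on to intensive manual review (stage 2)."""
    flagged = []
    for r in records:
        hits = [name for name, test in TRIGGERS.items() if test(r)]
        if hits:
            flagged.append((r["id"], hits))
    return flagged
```

The trigger only selects charts for review; whether an adverse event or suboptimal care actually occurred is still determined by the human reviewer in stage 2, which is why trigger frequencies alone do not support hospital-to-hospital comparison.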
So mortality rates are still useful measures in randomized controlled trials of drugs, devices, or surgical procedures. And they are useful for comparing outcomes for conditions in which very rich datasets are available (e.g., coronary artery bypass). But using hospital-wide mortality rates based on administrative data is not helpful. It may inappropriately flag some hospitals as “quality outliers” while, as Doran et al put it, “hazardous hospitals lurking inside the funnel are assumed to be safe”.
Some of our prior columns on trigger tool methodology:
Iezzoni LI. Assessing quality using administrative data. Ann Intern Med 1997;
Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ 2008; 179: 153-157
Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010; 340: c2016
Hogan H, Zipfel R, Neuburger J, et al. Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis. BMJ 2015; 351: h3239
Doran T, Bloor K, Maynard A. The death of death rates? Using mortality as a quality indicator for hospitals. BMJ 2015; 351: h3466
We’ve done numerous columns showing that adverse patient events and mortality are higher for patients admitted on weekends, commonly referred to as “the weekend effect”. Now a new study quantifies the problem across multiple countries.
Researchers (Ruiz 2015) analyzed records of emergency and elective admissions over a 4-year period (2009–2012) from metropolitan teaching hospitals in four countries participating in the Global Comparators (GC) project (England, Australia, the USA, and the Netherlands). Their main finding was that mortality outcomes varied within each country and by day of the week, in agreement with previous analyses showing a “weekend effect” for emergency and elective admissions.
The adjusted odds of 30-day death following elective surgery remained significantly elevated when surgery took place on a Friday, Saturday, or Sunday compared with a Monday procedure.
In the US the adjusted odds ratio of 30-day mortality was roughly 2.5 times higher on Saturdays and Sundays for elective procedures and 11-13% higher for emergency procedures compared to Mondays.
Dutch hospitals were also found to have a “Friday” effect (higher mortality rates for procedures done on Friday compared to Monday). Interestingly, English and Dutch hospitals had lower mortality rates on Tuesdays compared to the US. Some difficulties comparing results between countries were due to differences in coding practices or to difficulty in distinguishing between elective and emergency admissions in some countries. The proportion of “riskier” procedures also differed by day of week from country to country.
The study did not address the factors contributing to the weekend effect. In our many previous columns on the weekend effect or after-hours effect we have pointed out how hospitals differ during these more vulnerable times. Staffing patterns (both in terms of volume and experience) are the most obvious difference, but there are many others as well. Many diagnostic tests are not as readily available during these times. Physician and consultant availability may be different, and cross-coverage by physicians who lack detailed knowledge about individual patients is common. You also see more verbal orders, which of course are error-prone, at night and on weekends. And a difference in non-clinical staffing may be a root cause. Our December 15, 2009 Patient Safety Tip of the Week “The Weekend Effect” discussed how adding non-clinical administrative tasks to already overburdened nursing staff on weekends may be detrimental to patient care.

Just do rounds on one of your med/surg floors or ICUs on a weekend. You’ll see nurses answering phones all day long, causing interruptions in attention-critical nursing activities. Calls from radiology and the lab that might go directly to physicians now go first to the nurse on the floor, who then has to try to track down the physician. Nurses end up filing lab and radiology reports or faxing medication orders down to pharmacy, activities often done by clerical staff during daytime hours. Even in those facilities that have CPOE, nurses off-hours often end up entering orders into the computer because the physicians are off-site and are phoning in verbal orders. You’ll also see nurses giving directions to the increased numbers of visitors typically seen on weekends. They even end up doing some housekeeping chores.
All of these interruptions and distractions obviously interfere with nurses’ ability to attend to their clinically important tasks (see our Patient Safety Tips of the Week for August 25, 2009 “ ” and May 4, 2010 “More on the Impact of Interruptions”).
As noted in the accompanying editorial, the Ruiz study really just reconfirms that the weekend effect exists in multiple countries (Lilford 2015). It does not address the reasons. Lilford and Chen discuss several ways we might learn more about the causes of the weekend effect, most of which are not likely to be of much use. However, they do note that the English National Health Service will be measuring the impact of increasing consultant coverage over weekends and also looking at differences in the routes via which patients are admitted.
Previous work shows that the weekend effect is complex and involves both patient-related factors and quality of care factors. While we may not be able to do much about the patient-related factors, there remains much we can do about the quality of care factors.
Some of our previous columns on the “weekend effect”:
· February 26, 2008 “Nightmares….The Hospital at Night”
· December 15, 2009 “The Weekend Effect”
· July 20, 2010 “More on the Weekend Effect/After-Hours Effect”
· October 2008 “”
· September 2009 “After-Hours Surgery – Is There a Downside?”
· December 21, 2010 “More Bad News About Off-Hours Care”
· June 2011 “Another Study on Dangers of Weekend Admissions”
· September 2011 “Add COPD to Perilous Weekends”
· August 2012 “More on the Weekend Effect”
· June 2013 “Oh No! Not Fridays Too!”
· November 2013 “The Weekend Effect: Not One Simple Answer”
· August 2014 “The Weekend Effect in Pediatric Surgery”
· October 2014 “What Time of Day Do You Want Your Surgery?”
· December 2014 “Another Procedure to Avoid Late in the Day or on Weekends”
· January 2015 “Emergency Surgery Also Very Costly”
· May 2015 “HAC’s and the Weekend Effect”
Ruiz M, Bottle A, Aylin PP. The Global Comparators project: international comparison of 30-day in-hospital mortality by day of the week. BMJ Qual Saf 2015; 24: 492-504 Published Online First 6 July 2015
Lilford RJ, Chen Y-F. The ubiquitous weekend effect: moving past proving it exists to clarifying what causes it. BMJ Qual Saf 2015; 24: 480-482