Last week’s Tip of the Week dealt with lessons learned from the tragic mid-air collision of two planes in Brazil and showed numerous analogies to healthcare. This week we discuss the investigation of the tragic crash of Continental Connection Flight 3407 (operated by Colgan Air) near Buffalo, New York in early 2009. Flight 3407 was a Bombardier Dash 8-Q400 twin-engine turboprop that took off from Newark, New Jersey. Only a few miles short of the Buffalo airport the plane encountered an aerodynamic stall and plummeted to the ground, crashing into a house. All 49 passengers and crew on the plane died, as did one individual in the house. The NTSB public hearing on this accident took place this month, and there is a wealth of information about the accident both on the NTSB website and in the lay press.
Details about the NTSB public hearing on this crash can be found on the NTSB website, including the agenda, presentations, an animated video of the crash, and other links. The NTSB docket documents related to the investigation can also be found there, and The Buffalo News has an extensive collection of articles related to the accident as well (see the references below).
Synopsis
On descent and approach to the Buffalo-Niagara International Airport, the Dash 8-Q400 experienced a rapid loss of airspeed. Shortly after the landing gear was lowered, the plane neared stall speed. When the “stick shaker” alert went off, signifying imminent aerodynamic stall, the pilot pulled back on the yoke in an attempt to raise the nose of the plane. The correct maneuver to recover from such a stall is the opposite: the pilot should push the yoke forward to lower the nose and accelerate the plane. The copilot had also raised the flaps, which was likewise incorrect in this situation. The plane rolled and pitched, then plummeted almost straight downward. The animated video on the NTSB website provides a good reproduction of the likely events.
Weather/icing
Icing on the wings is the first thing you think about in Buffalo in February. And while it is hard to say icing was not a contributory factor, the NTSB investigators apparently felt the moderate ice buildup was not significant enough to account for the deceleration to stall speed. The type of plane involved in this accident apparently spends more time in icing zones than do bigger jets. Indeed, crashes in the past have reportedly led some carriers to stop flying this type of plane in the northeast and use it only in their Caribbean fleets.
But the testimony about icing brings up several relevant points. First, the transcript of the cockpit voice recorder clearly shows a discussion about the icing that was occurring on the plane; the copilot remarked that she had never really flown in icing conditions before and commented on her fear of those conditions. That conversation may have been one of several things that distracted the pilots during the last minute or so of the fatal flight. Second, in the interviews with the air traffic controllers, the issue of how icing is monitored came up. Ordinarily, the air traffic controllers check with pilots of incoming flights to see what sort of icing conditions are prevalent. When the handoff occurred between air traffic controllers that evening, the offgoing controller told the oncoming controller there was “negative icing”. However, that was basically an assumption because no pilots had specifically complained about icing (that controller had been briefed about one report of icing when he first came on his shift, but none of the 20 or so planes that had landed during his shift had complained about icing). In healthcare, assumptions are always dangerous. Almost every time we do a root cause analysis (RCA) we find one or more instances where someone assumed something that proved incorrect and was one of multiple factors leading to the adverse outcome. In fact, when we do a patient safety presentation for our students and housestaff, one of our slides says “Never Assume – it will make an ‘ass’ out of ‘u’ and ‘me’”. Try the following sometime. Sit in on (or record) any handoff at your facility (nurse-to-nurse, housestaff-to-housestaff, attending-to-attending, etc.). Then, when it is over, go over the “facts” with the participants and find out what is factual and what is assumed. You (and they) will be amazed at how many of the handoff “facts” are really assumptions.
Maintenance and Physical Factors
The numerous reports did not seem to uncover any significant contributory factors related to airplane maintenance or the physical condition of the aircraft. We were somewhat surprised by the lack of discussion about fuel here. Though the aircraft was below its maximum weight allowance, it does appear that it had considerably more fuel aboard than needed for the relatively short trip from Newark to Buffalo (is fuel more expensive in Buffalo than Newark?). One wonders whether the events would have played out the same way had the weight of that excess fuel not been on board.
Sometimes doing more than you need to can give rise to problems. We’ve previously talked about incidents where carrying dosage calculations to the second decimal place may be clinically irrelevant for large doses yet gives rise to errors when the decimal point is missed.
We’ve discussed the role of maintenance in both aviation and healthcare in the past (see our August 7, 2007 Patient Safety Tip of the Week “Role of Maintenance in Incidents”).
Automation surprises: Autopilot
A discussion of automation surprises logically follows the discussion on icing. Several of our previous columns have discussed automation surprises, whether a computer changing things in the background or a system with several “modes” controlled via a single switch. When we first heard about this crash, the first thing we suspected was that the plane was on autopilot, which was correcting for icing, and that when the autopilot was turned off the crew encountered an “automation surprise”. That is, they had not been aware the autopilot was compensating for the icing and suddenly had to do so manually. It is not clear what role the use of autopilot actually played in this crash. The NTSB public hearing did not emphasize it (perhaps because they apparently did not feel that the icing was a major contributor). However, the plane was flying on autopilot, and the autopilot disengaged shortly before the stall and fatal descent (it is not clear whether that disengagement was automatic or manual). Use of autopilot in icing conditions is well recognized as hazardous. However, there seems to be no single industry standard about its use in those conditions. Some say autopilot should not be used at all during icing conditions; others say it may be used but should be disengaged every 10 minutes or so to assess conditions. And that “assumes” everyone concurs on what “icing” conditions are. While there are graded criteria for icing, our bet is that there is poor agreement amongst pilots about the definition of “significant icing”.
Most of you have already personally experienced automation surprises. Have you ever been driving on a highway and started slowing down, only to find that the cruise control on your car (you forgot you had set it!) speeds you up?
In healthcare, much of our equipment in ICUs (and other areas) is high tech, and computers are often doing things in the background that we are not immediately aware of. Similarly, we often have equipment that runs several different “modes” off a single switch. We gave an example of such a surprise back in 2007 when we described a ventilator operating on battery power when everyone thought it was operating on AC current from a wall socket.
Preparation for rare/unexpected circumstances: Simulation and Rehearsal
There are certain potentially fatal circumstances that may be encountered in many industries. Most are rare and may never be encountered in a career. Yet you need to be prepared to deal with them if they do occur. The aerodynamic “stall” is one of those: the speed of the plane becomes too slow for the air flowing over the wings to provide enough lift to keep the plane airborne. Pilots do receive training in stalls, mostly during simulation exercises. But it is not clear how much training is done on stalls and, perhaps more important, how often that training is updated. Something you learned 3 or more years ago and have never encountered since is probably not something you are likely to remember tomorrow.
In healthcare there are some rare events, akin to the aerodynamic stall, for which only preparation with simulation or rehearsal could reasonably prepare one to deal with the situation. Surgical fires are a great example. They occur instantaneously, and most healthcare workers are ill-prepared to deal with them once they occur (that is why prevention is so critical). Yet when they occur there are responses expected of each individual in the OR, both to minimize injury to the patient and to prevent injury to other staff. Even if you don’t have fancy tools to do an actual simulation, you can and should do surgical fire drills in which the surgeon, nurse, and anesthesiologist (and others typically present in the OR) learn and rehearse what they should do in the event of a surgical fire. And those drills should be done regularly (once or twice a year).
There is also an evolving body of literature supportive of simulation to improve other aspects of healthcare. A recent prospective randomized controlled trial reported in the British Medical Journal (Larsen et al 2009) showed that a group of young surgeons who received virtual reality training in laparoscopic surgery were able to perform at the level of intermediately experienced laparoscopists (20-25 prior cases), whereas the control group performed at the novice level (five or fewer prior cases); the trained group also took half the time, on average, to complete the cases. The accompanying editorial (Kneebone 2009) urges caution in generalizing these findings to other circumstances and notes that the study dealt with a fairly straightforward laparoscopic procedure not likely to be associated with many complications. Simulation exercises have also been very helpful in developing teamwork and improving communication among team members and have been used to simulate many emergencies and unexpected circumstances, though not in rigorously controlled trials.
One of the experts at the NTSB hearing, Robert Dismukes of NASA, noted that there are problems with the current type of simulation provided. The simulated stalls are anticipated. One’s reactions when a stall occurs as a surprise are likely to be quite different.
We have our own analogy for that. Fellow kayakers learned how to do a “wet exit” the first time they got in a kayak. That is, someone tipped their kayak over and they had to get out of it while upside down under water, packed tightly into the kayak with a rubber “skirt” covering their lower half and the opening of the kayak. Of course, they were instructed what to do first, and their instructor was there to ensure they did not drown trying. So you anticipate what is going to happen and plan for it. However, the first time you unexpectedly flip your kayak is a different story. You have no time to plan what you are going to do and no one to make sure you won’t drown. You are upside down, under water, and you know you will drown if you cannot exit that kayak. If you panic, you lose valuable time. “What if I can’t get the rubber skirt to pry loose?” “What if I can’t slide out of that tight kayak?!” A wet exit might be disastrous without the prior training under more controlled conditions. But the surprise wet exit is a confidence builder. Once it has happened to you, you are no longer fearful that you won’t be able to do it.
So if you are fortunate enough to be able to avail your staff of formal simulation training, make sure you program in emergencies and unexpected circumstances that truly come as surprises.
Lastly on the issue of simulation, the training for the pilots in the current crash included watching a NASA video on stalls that also included a section on “tail stalls”. In a tail stall, one must do just the opposite of what one does for a wing stall. In fact, the pilot in the current crash did what one would do in a tail stall. We will never know what was going through this pilot’s mind at the time. However, since tail stalls apparently do not apply to this particular type of plane, it was questioned during the public hearings whether inclusion of the “tail stall” segment in the training video was wise. Could this be an unintended consequence of “information overload”?
Counterintuitive maneuvers
When the plane begins to nosedive, a natural tendency would be to pull back on the yoke in an attempt to aim the nose upward. Unfortunately, in the aerodynamic stall described here the proper response is to push the yoke forward to lower the nose while accelerating. That is the only way to restore enough airflow over the wings to provide lift. Again, you might remember the correct maneuver if you knew the stall was coming, but it is hard to fight your natural tendencies when taken by surprise.
Again, kayakers know about counterintuitive maneuvers. When you do an “Eskimo roll” in a kayak (righting yourself after being flipped upside down without exiting the kayak), you have a natural tendency to try to lift your head first as you near the surface. The correct maneuver is to actually lay your head down on the surface of the water while you let your other movements upright your body. Your head should come up last. This is a maneuver that requires practice, practice, practice. It’s almost impossible to do without proper instruction and multiple rehearsals.
Are there counterintuitive maneuvers in medicine? Think about diabetic ketoacidosis, where the serum potassium is typically elevated. Yet the serum potassium will fall precipitously with insulin therapy. You have to anticipate this drop and actually begin potassium replacement early, sometimes when the serum potassium is still relatively high. Again, such counterintuitive maneuvers must be learned.
Sterile cockpit
We’ve discussed the “sterile cockpit” concept several times in the past. In the current public hearings on the crash of Flight 3407, violation of the sterile cockpit concept received considerable attention. Sterile cockpit procedures mandate that conversations not relevant to the safe operation of the plane not take place during certain critical phases of flight (such as taxi, takeoff, landing, and all operations below 10,000 feet). The entire conversation between pilot and copilot appears in the transcript of the cockpit voice recorder. It demonstrates considerable conversation during the descent that was not directly related to the operation of the plane. Did that distract them from their duties at hand?
How do airlines monitor adherence to the sterile cockpit mandate? Some do line operations safety audits (LOSA), in which an independent observer sits in the cockpit, monitors and assesses multiple operations and procedures, then critiques the crew. This airline apparently did do LOSA audits but said they “never showed very much”. Keep in mind, too, that the cockpit is much more likely to be “sterile” when the crew knows their behavior is being assessed. We wonder how many airlines routinely review cockpit voice recordings at random to assess cockpit “sterility”.
In our October 2, 2007 Patient Safety Tip of the Week “Taking Off From the Wrong Runway” we discussed the sterile cockpit analogies in healthcare facilities. The “sterile cockpit” concept applies to the surgical timeout/final verification process. It also applies in those central pharmacies where the pharmacist is expected to do certain work without interruptions. And one could make a case that it should apply to any healthcare worker tasked with doing a double check or second “independent” verification (e.g., for a blood product or a chemotherapy infusion rate). There are probably many other circumstances where the “sterile cockpit” concept applies.
How many healthcare organizations actually audit or monitor those processes to see how often the “sterile cockpit” process is indeed “sterile”? We recommend that periodic audits of at least the surgical timeout be done via a sampling methodology. We actually favor videotaping OR cases for use in performance improvement activities. Letting the OR team view and critique their own performance and the performance of the team as a whole is a great way to improve coordination and teamwork and identify issues that would have otherwise been overlooked. The biggest issue is getting your physicians and legal counsel to be comfortable with such videotaping. Very few facilities currently do this.
Of interest, the NTSB report cited in our October 2, 2007 column mentioned LOSA Collaborative data showing that flight crewmembers who intentionally deviated from standard operating procedures were three times more likely to commit other types of errors, mismanage more errors, and find themselves in more undesired aircraft situations than flight crewmembers who did not intentionally deviate from procedures. We suspect the numbers in healthcare would be similar. So auditing as above might identify risk for other problems as well.
The Learning Curve
Pilot experience is obviously an important safety consideration. Generally, one sees a correlation between safe, efficient operation of an aircraft and the number of hours spent flying, particularly the number of hours on that particular type of aircraft. (But keep in mind that seniority and experience do not prevent errors. In fact, in our November 25, 2008 Patient Safety Tip of the Week “Wrong-Site Neurosurgery” we noted that certain types of error, such as wrong-site surgery, may be more likely with experienced surgeons.)
In this case, both the pilot and copilot were relatively new to this particular type of airplane (they also had relatively few total flight hours). Sound familiar? Last week we discussed the crash involving pilots flying a brand new airplane home from Brazil. They obviously had the same unfamiliarity with some aspects of that plane.
When one reads through the assessments of the performance of the pilot in the current crash, there is somewhat of a surprise. While the assessments were usually good, it was mentioned several times that he had some difficulties programming the FMS (Flight Management System) “but all the pilots transitioning to this airplane have trouble with that”. That’s reassuring!!!
Switching to a new plane also means that some functions you used on your previous plane(s) are different on the new one. Several examples are given in which switch locations or configurations on the Dash 8-Q400 were the opposite of those on this pilot’s previous plane.
So think about healthcare. Yes, we all know there is a learning curve for surgeons for certain types of procedures. But there is a learning curve every time a new piece of equipment is introduced almost anywhere in our healthcare facilities. Staff are often confronted with an unfamiliar new piece of equipment without proper training in how to use it. We’ve all seen the case where a nurse “floats” from one ICU to another and encounters a different type of ventilator, with dials and switches and settings totally different from those on the ventilators in her usual ICU. That is the biggest reason standardization is so important. When a team gets together to consider purchase of new items such as ventilators, it clearly needs to include frontline staff (those who use the equipment) and actually needs to let them “play” with the equipment before deciding to purchase it. Where possible, the same “look and feel” ought to apply to the equipment in all locations. And one must be especially careful when temporary staff (e.g., float nurses, agency nurses) are brought on board. They need to be educated and oriented to all the types of equipment and the policies and procedures at your facility or unit.
Fatigue
Fatigue causes a deterioration of performance in almost all work situations. Aviation was one of the first industries to institute strict work hour limitations. The NTSB investigations go into great detail not only about work hours but also about what was going on in the lives of the involved crew over a longer period, looking for activities that might have led to fatigue. They look at things like where the pilots slept the night before. Fatigue and other factors, like distractions and an unsterile cockpit, interfere with the situational awareness of the pilots.
We, of course, in healthcare have now instituted strict work hour limitations for housestaff. However, we have no limits to avoid fatigue in other members of the healthcare team (nurses, attending physicians, physician extenders, technicians, etc.). And we know of no tools or systems in widespread use at this time to identify fatigue so that we could remove fatigued healthcare workers from harm’s way. We need to do a better job at that.
One interesting aspect of the Flight 3407 crash was that the pilots’ first two flights that day were cancelled due to bad weather in the Newark region. So they had considerable “idle time” prior to the fatal flight. We are unaware of any studies on the effect of “idle time” on performance. However, if it does impact performance, that could be relevant in healthcare. Think of all the times you might have an OR team waiting because of a delay in a case already in the OR. Sounds like a good research project for a human factors investigation!
Alarms
Alarm problems are one of our “big three” findings in most root cause analyses of medical incidents (along with communication issues and problems with hierarchy/authority gradients). In this crash, there does not appear to have been any tampering with alarm settings or misuse of alarms. But there may be an alarm design issue here.
The “stick shaker” alarm was the first clue to these pilots of the impending stall. But the investigation panel observed that the precipitous drop in airspeed had taken place over 20 seconds. One of the NTSB board members, Deborah Hersman (who always seems to make astute observations in the investigations we’ve read), questioned whether it made more sense to have some other sort of alarm that would alert the crew to the dropping airspeed well before the catastrophic “stick shaker” alarm. That certainly makes sense. We have had multiple columns in the past focusing not only on abuse or misuse of alarms but also on faulty alarm design. Alarms must be designed in labs that actually observe how humans interact with the equipment and respond to the alarms.
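To make that design point concrete, here is a minimal sketch (in Python, purely for illustration) of the difference between a single hard-threshold alarm and an additional trend-based warning that fires while there is still time to act. The names, numbers, and thresholds are our own assumptions for the example, not anything taken from the NTSB record or from any real avionics or monitoring specification.

```python
# Illustrative sketch only: warning on a worrisome trend before a hard
# threshold is crossed. All names and numbers are assumptions for this
# example, not values from the NTSB record or any real device spec.

def check_readings(readings, hard_limit=100.0, trend_window=5, trend_limit=-3.0):
    """Return (index, message) alerts for a series of readings.

    readings     -- successive values (e.g., airspeed, or a vital sign)
    hard_limit   -- the catastrophic "stick shaker"-style threshold
    trend_window -- how many recent samples to use for the trend check
    trend_limit  -- average change per sample at or below which we warn early
    """
    alerts = []
    for i, value in enumerate(readings):
        # Early, advisory warning: the value is falling fast, even if still "safe".
        if i >= trend_window:
            avg_change = (value - readings[i - trend_window]) / trend_window
            if avg_change <= trend_limit:
                alerts.append((i, f"WARNING: rapid decline ({avg_change:.1f}/sample)"))
        # Late, critical alarm: the hard limit has already been crossed.
        if value <= hard_limit:
            alerts.append((i, "CRITICAL: below hard limit"))
    return alerts


if __name__ == "__main__":
    # A gradual decline followed by a precipitous drop, loosely analogous to
    # airspeed decaying over roughly 20 seconds before the stall warning.
    airspeed = [160, 158, 155, 150, 143, 135, 126, 116, 105, 98]
    for index, message in check_readings(airspeed):
        print(index, message)
```

The point of the sketch is simply that the advisory warning fires several samples before the critical alarm, which is exactly the extra reaction time Hersman’s question was getting at; the same idea applies to physiologic monitor alarms in the ICU.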
The Vermont Incident
One piece of evidence in the hearings dealt with an incident in Vermont in which a similar plane developed the “stick shaker” alert when airspeed dropped below the set limit. The crew in that event was able to recover. However, reading that transcript provides great insight into the complexity of landing an airplane and the structured processes the crew goes through. They describe at least 3 separate checklists: a “descent” checklist, an “approach” checklist, and a “before landing” checklist. Sound familiar? Recall that the WHO Safe Surgery Checklist is also actually 3 separate checklists.
That transcript also brings out another important point about checklists: what happens when you get distracted? It is distractions that often cause checklist items to be overlooked. After the stick shaker alarm went off, one of the pilots had to “backtrack” on the checklist because he forgot where he had left off. That is good advice: any time you are using a checklist and get interrupted, backtrack so you can be sure you have not omitted any steps.
The Vermont transcript also provides insight into the use of alerts under varying circumstances. The airspeed alert is set at different levels depending on whether or not there are icing conditions (the speed at which you approach and land is higher in icing conditions than in non-icing ones). So the crew typically sets the alert level based on the conditions and then adds a few more miles per hour so that the alert goes off before the critical speed is actually reached. Is there an equivalent in healthcare safety? We can think of a few examples where it might be important. Consider the OR. If a case is running smoothly and on time, there may be no need for certain types of alert. But suppose there are complications or other delays so that your surgical case is now running longer than normal. Is there a point in time where you need to consider an extra dose of prophylactic antibiotic? Is there a point where you need to consider repositioning the patient to avoid a nerve pressure injury or a decubitus? Is there a point where the risk of DVT has risen high enough that you’d consider beginning intraoperative DVT prophylaxis? Consider setting some sort of time-based alert to prompt these questions in your surgical cases (a simple sketch of what that might look like follows).
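For those who like to tinker, here is a minimal sketch (again in Python, purely for illustration) of such a time-based intraoperative prompt, assuming some case clock is available. The trigger times and messages below are placeholders we made up for the example, not clinical recommendations; each facility would define its own triggers, just as each crew sets its own airspeed alert margin.

```python
# Minimal sketch of a time-based intraoperative prompt. The trigger times and
# messages are placeholders for illustration only, NOT clinical guidance.

from dataclasses import dataclass, field

@dataclass
class TimeBasedPrompts:
    triggers: dict = field(default_factory=lambda: {
        180: "Consider redosing the prophylactic antibiotic.",
        240: "Consider repositioning to protect pressure points and nerves.",
        300: "Reassess DVT risk; consider intraoperative prophylaxis.",
    })
    fired: set = field(default_factory=set)

    def check(self, elapsed_minutes: int) -> list:
        """Return any prompts due at the current elapsed case time (each fires once)."""
        due = []
        for minute, message in sorted(self.triggers.items()):
            if elapsed_minutes >= minute and minute not in self.fired:
                self.fired.add(minute)
                due.append(message)
        return due


if __name__ == "__main__":
    prompts = TimeBasedPrompts()
    for elapsed in (60, 185, 250, 310):   # simulated checks as a case runs long
        for msg in prompts.check(elapsed):
            print(f"{elapsed} min: {msg}")
```

Like the aviation alert, the value is not in the technology but in deciding ahead of time which questions should be asked when a case runs longer than expected.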
Lingo and Abbreviations
We thought we had a dizzying array of special terms and abbreviations that have led to many errors and adverse outcomes in healthcare. But when you read these NTSB reports, it is mind-boggling how many terms must be recognized by all sorts of people in the aviation field. And the abbreviations used, both verbally and on paper or display screens, are incredible. Aviation clearly needs a “do not use” abbreviation list similar to what we use in healthcare.
Root Causes
We’ve identified above many of the conditions and events at the “sharp end” and some root causes of this unfortunate event. But the deeper root causes will be discussed for years to come. The low pay scales at regional airlines lead to recruiting less experienced pilots, who often have to commute long distances because they can’t afford housing near their airline’s bases. The low profit margins may also impact training and retraining.
Scapegoats
It is all too easy here to blame this accident on “pilot error”. However, when you do a root cause analysis you must insert yourself into the situation as it played out. You need to ask “Could this have happened to two other pilots?” or “Could this have happened to me?” The answer here is probably “yes”. There were clearly multiple system issues that need to be addressed so that an accident of this sort is not repeated. The same applies in almost every RCA we do in healthcare. The primary goal of an RCA is not to affix blame but rather to learn how to avert similar disasters in the future. Let’s hope we can proactively use some of the many lessons learned from this tragic event. Not to do so would make the loss of lives even more tragic.
References:
NTSB Public Hearing. Colgan Air, Inc. Flight 3407, Bombardier DHC8-400, N200WQ, Clarence Center, New York, February 12, 2009. Public Hearing May 12-14, 2009.
http://www.ntsb.gov/events/2009/buffalo-ny/default.html
NTSB Docket Management System. Documents related to the NTSB investigation of the Flight 3407 crash.
http://www.ntsb.gov/Dockets/Aviation/DCA09MA027/default.htm
The Buffalo News. The Tragedy of Flight 3407 (contains multiple articles and links on the accident and NTSB investigation).
http://www.buffalonews.com/517
NTSB Reports on the crash of Flight 3407. Air Traffic Control Group Chairman's Factual Report.
http://www.ntsb.gov/Dockets/Aviation/DCA09MA027/417765.pdf
Larsen CR, Soerensen JL, Grantcharov TP, et al. Effect of virtual reality training on laparoscopic surgery: randomised controlled trial. BMJ 2009; 338: b1802
http://www.bmj.com/cgi/content/abstract/338/may14_2/b1802
Kneebone R, Aggarwal R. Surgical training using simulation. Early evidence is promising, but integration with existing systems is key (editorial). BMJ 2009; 338: b1001
http://www.bmj.com/cgi/content/full/338/may14_2/b1001
NTSB. Sterile Cockpit Procedures.
http://www.ntsb.gov/Dockets/Aviation/DCA09MA027/417492.pdf
NTSB Reports on the crash of Flight 3407. Cockpit Voice Recorder Group Chairman's Factual Report.
http://www.ntsb.gov/Dockets/Aviation/DCA09MA027/418693.pdf
NTSB Reports on the crash of Flight 3407. Operations Group Chairman. Interview Summaries during Field Investigation – Buffalo.
http://www.ntsb.gov/Dockets/Aviation/DCA09MA027/417449.pdf
Beebe M, Zremski J. 'From complacency to catastrophe in 20 seconds'. The Buffalo News, May 14, 2009.
http://www.buffalonews.com/520/story/671328.html
NTSB Reports on the crash of Flight 3407. Operations Group Chairman. Interview Summaries of Vermont Incident Crew.
http://www.ntsb.gov/Dockets/Aviation/DCA09MA027/417450.pdf
Zremski J, Beebe M. Special Report: Investigating Flight 3407. Low pay, fatigue 'recipe' for crash; Flight 3407 families outraged. The Buffalo News, May 14, 2009.
http://www.buffalonews.com/520/story/670974.html