Patient Safety Tip of the Week

November 6, 2007


Don Norman Does It Again!



Don Norman, author of the classic “The Design of Everyday Things”, has done it again with his newest book, “The Design of Future Things”.



His insights add considerably to our understanding of how people interact with machines and technology, especially when it comes to alarms. Several of our prior Tips of the Week have talked about how often alarms are intentionally disabled, ignored, or responded to inappropriately. Norman notes the striking increase in the number of alerts and alarms we deal with in the healthcare industry: “So, although each single signal may be informative and useful, the cacophony of the many is distracting, irritating, and, as a result, potentially dangerous.” He makes a case, instead, for more natural interactions that are more effective and less annoying.



A key concept he discusses is the need for natural, deliberate signals: signals that are easily understood without instruction. An example is a driver maneuvering into a tight space while a helper uses hand signals to indicate how much space is left. He describes how that concept is applied in newer automobiles that assist in parking, which use beeps whose rate increases as the car gets closer to an obstacle.



And feedback is crucial. Signals must offer just enough information to remain at the “periphery” of our attention, so that they can rapidly move to the “center” of our attention when needed. An example he provides is the background noise of a car engine. We don’t pay close attention to it, but when it makes a funny noise, we do shift our attention to it. How many of you have driven one of the new hybrid cars and had trouble telling whether it was running while stopped at a traffic light?



We’ve talked about “automation surprises”, in which a task is being handled largely by a machine in an automated mode while we are unaware that trouble is brewing. People become “out of the loop”, uninformed about what the machine or technology is doing and exactly where it is in its processes. That may work fine when all is going as planned, but when something goes wrong, people cannot jump in and immediately take the correct action. That has certainly led to plane crashes and shipwrecks in the past. Most of us have even experienced our automobile, while on cruise control, suddenly accelerating when we had anticipated slowing down.



Therefore, feedback from machines and technology is crucial: a system is needed that allows people to understand what strategy the machine is following and how far along it is in its actions.



Overautomation remains a significant issue. Sometimes machines or technological solutions are so reliable that we lose vigilance and come to put too much trust in the technology. Norman tells of a research team flying over the ocean for several hours who went to tell the pilots they had finished the research, only to find the pilots asleep. Amazingly, last week’s headlines had a story about two pilots on an overnight flight who fell asleep at the controls of an airliner carrying 100 passengers, only to be woken 20 minutes from landing by frantic calls from an air traffic controller who noticed they were travelling too fast and too high. This sort of over-reliance on technology abounds in healthcare today as well.



Our perception of safety is also a factor that can be misleading. Norman asks which airport has fewer accidents: an easy one (flat, good visibility, good weather) or a “dangerous” one (hills, wind, difficult approach, etc.)? The answer is the more dangerous one, because the pilots become more attentive, focused, and careful. He goes on to discuss the concept of “risk compensation”, in which people who perceive an activity to be safer go on to take more risks, the net result being that the accident rate remains unchanged.



So maybe we should make things look more dangerous! He describes the interesting “Shared Space” project, in which designers actually removed safety features (traffic lights, stop signs, pedestrian crossings, etc.) to get drivers to behave more safely, and saw a 40% reduction in accidents!



He summarizes design rules for human designers of “smart” machines:

  1. Provide rich, complex, and natural signals
  2. Be predictable
  3. Provide good conceptual models
  4. Make the output understandable
  5. Provide continual awareness without annoyance
  6. Exploit natural mappings


And design rules developed by machines to improve their interactions with people:

  1. Keep things simple
  2. Give people a conceptual model
  3. Give reasons
  4. Make people think they are in control
  5. Continually reassure
  6. Never label human behavior as “error”



Wow! These are rules for the design of machines and technology, yet I’d apply them all to everything else we do, from the OR team to the boardroom!



Norman, of course, goes on to describe many great things that will undoubtedly make our lives easier in the future (each, of course, also having a downside!). His writing is, as always, both informative and entertaining. This is a great addition to your library.




Norman DA. The Design of Future Things. New York: Basic Books; 2007.



Norman DA. The Design of Everyday Things. New York: Basic Books; 2002.


