Bad User Interface Design Can Be Deadly
In his latest Alertbox issue, entitled "Medical Usability: How to Kill Patients Through Bad Design", Jakob Nielsen, one of the world's leading authorities on corporate Web site usability and interface design, points to a paper, entitled "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors," published in March in the Journal of the American Medical Association.
The paper describes a field study that identified twenty-two ways in which an automated hospital system, in this case the order-entry system physicians use to prescribe patient medications, caused patients to get the wrong medicine.
According to Nielsen, "most of these flaws are classic usability problems that have been understood for decades." Although, as Nielsen admits, medical systems have provided many well-documented killer designs, what's less well-known is that usability problems in the medical sector's office automation systems can harm patients just as seriously as machines used for treatment.
Of the twenty-two ways, Nielsen highlights six of general interest.
1. Misleading Default Values.
The system screens listed dosages based on the medication units available through the hospital's pharmacy. When hospital staff members prescribed infrequently used medications, they often relied on the listed unit as being a typical dose, even though that's not the true meaning of the numbers. If a medication is usually prescribed in 20 or 30 mg doses, for example, the pharmacy might stock 10 mg pills so it can cover both dosage needs and avoid overstocking a rare medication. In this case, users might prescribe 10 mg, even though 20 or 30 would be more appropriate.
According to Nielsen, the solution here is simple: Each screen should list the typical prescription as guidance. Years of usability studies in many domains have shown that users tend to assume that the given default or example values are applicable to their own situations.
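The fix can be sketched in a few lines: keep the pharmacy's stocking unit and the typical prescribed dose as separate fields, so the screen never presents a stocking convenience as if it were a recommended dose. This is a minimal illustration, not the hospital system's actual data model; the medication name and field names are invented for the example.

```python
# Hypothetical sketch: separate what the pharmacy stocks from what is
# typically prescribed, and lead the prompt with the clinical range.
from dataclasses import dataclass

@dataclass
class Medication:
    name: str
    stocked_unit_mg: int        # what the pharmacy keeps on the shelf
    typical_dose_mg: tuple      # (low, high) typical prescription range

def dosage_prompt(med: Medication) -> str:
    low, high = med.typical_dose_mg
    return (f"{med.name}: typical dose {low}-{high} mg "
            f"(dispensed as {med.stocked_unit_mg} mg units)")

# The flawed screen showed only the 10 mg stocking unit; this prompt
# leads with the clinically meaningful 20-30 mg range instead.
print(dosage_prompt(Medication("examplol", 10, (20, 30))))
```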
2. New Commands Not Checked Against Previous Ones.
When doctors changed the dosage of a patient's medication, they often entered the new dose without canceling the old one. As a result, the patients received the sum of the old and new doses.
Nielsen likens this common type of user error to a banking interface error, where you specify payment of the same amount to the same recipient twice in one day. Many bank Web sites will catch these errors and ask you to double-check so you don't pay the same bill twice. In general, if users are doing something they've already done, the system should ask whether both operations should remain in effect or whether the new command should overrule the old one.
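The check Nielsen suggests can be sketched as a guard on order entry: before accepting a new dose, look for an active order for the same drug and make the physician decide explicitly whether the new command replaces the old one. The drug name and function are invented for illustration; a real system would present this as a confirmation dialog rather than a return string.

```python
# Hypothetical sketch: refuse to silently stack a new dose on top of an
# existing order for the same drug.
def place_order(orders: dict, drug: str, dose_mg: int,
                confirm_replace: bool = False) -> str:
    if drug in orders and not confirm_replace:
        # Surface the conflict instead of summing the doses.
        return (f"CONFIRM: {drug} is already ordered at {orders[drug]} mg. "
                f"Replace with {dose_mg} mg, or keep both?")
    orders[drug] = dose_mg                    # new command overrules the old
    return f"{drug}: {dose_mg} mg ordered"

chart = {}
print(place_order(chart, "examplol", 20))             # first order goes through
print(place_order(chart, "examplol", 30))             # conflict: ask the user
print(place_order(chart, "examplol", 30, confirm_replace=True))  # explicit replace
```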
3. Poor Readability.
Because patient names appeared in a small font that was difficult to read, it was easy for users to select the wrong patient. The problem was compounded by the fact that names were listed alphabetically rather than grouped by hospital areas, which meant that users looking for a specific patient saw many similar names.
Also, in individual patient records, the patient's name didn't appear on all screens, reducing the probability that users would discover the error before reaching a critical point in the interaction.
4. Memory Overload.
At times, users had to review up to twenty screens to see all of a patient's medications. The well-known limits on human short-term memory make it impossible to remember everything across that many screens. In a survey, 72% of staff reported that they were often uncertain about medications and dosages because of the difficulties in reviewing a patient's total medications.
Nielsen points out that humans are notoriously poor at remembering exact information, and minimizing users' memory load has long been one of computing's top-ten usability heuristics. Facts should be restated when and where they're needed rather than requiring users to remember things from one screen to the next (let alone twenty screens down the road).
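The heuristic above can be illustrated with a toy example: rather than making users page through twenty screens and carry the contents in their heads, the system should collect everything into one restated summary. The paging structure and medication names here are invented; the point is only the consolidation.

```python
# Hypothetical sketch: flatten paged medication lists into one summary
# so nothing has to be remembered from screen to screen.
def medication_summary(screens: list) -> str:
    meds = [m for page in screens for m in page]
    return (f"{len(meds)} active medications:\n"
            + "\n".join(f"- {m}" for m in meds))

pages = [["examplol 20 mg daily"], ["othercillin 500 mg twice daily"]]
print(medication_summary(pages))
```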
5. Date Description Errors.
The interface let users specify medications for "tomorrow." When a surgery ran late and users entered such orders after midnight, the system counted "tomorrow" from the moment of entry rather than from the day of the surgery, so patients would miss a day's medication.
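The failure mode is easy to demonstrate with date arithmetic. This sketch (with invented dates and function names) contrasts the buggy rule, "tomorrow" computed from the entry timestamp, with a design that anchors the start date to the clinical context, such as the surgery day, or at least echoes an explicit calendar date for confirmation.

```python
# Hypothetical sketch: an order typed at 00:30 after a long surgery.
from datetime import date, datetime, timedelta

def naive_tomorrow(entered_at: datetime) -> date:
    # The buggy rule: "tomorrow" relative to the entry timestamp.
    return entered_at.date() + timedelta(days=1)

def resolve_start(surgery_day: date) -> date:
    # Safer: anchor "tomorrow" to the day of the surgery.
    return surgery_day + timedelta(days=1)

late_entry = datetime(2005, 3, 10, 0, 30)   # just after midnight
surgery_day = date(2005, 3, 9)

print(naive_tomorrow(late_entry))   # 2005-03-11: patient misses March 10
print(resolve_start(surgery_day))   # 2005-03-10: the day the user meant
```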
6. Overly Complicated Workflow.
Many aspects of the system required users to go through numerous screens that conflicted with hospital workflow. As a result, the system wasn't always used as intended. Nurses, for example, kept a separate set of paper records that they entered into the system at the end of the shift. This both increased the risk of errors and prevented the system from reflecting real-time information about the medications each patient had received.
Nielsen makes the point that, in general, whenever you see users resorting to sticky notes or other paper-based workarounds, you know you have a failed UI.
Although the paper makes some shocking revelations and all due credit should be ascribed to the researchers in making their findings public, Nielsen goes on to highlight some serious weaknesses in the research's methodology.
In particular, he criticizes its over-reliance on self-reported data: rather than observing actual user behavior, a supplementary survey asked hospital staff how often various errors had occurred during the previous three months.
According to Nielsen, it's well known that people have a hard time remembering what they do with computers. "Valid data comes from what people do, not what they say."
Furthermore, if a system's interface fails to provide adequate feedback, users might not even realize that they've committed an error. "With medication errors in particular, it's also quite possible that hospital staff might tend to minimize the extent to which patients get the wrong medication -- even when a survey guarantees anonymity."
Nielsen concludes by making some recommendations on how to employ a more robust research process.
"I would have much preferred error-frequency estimates based on actual observations, rather than fallible human memory and possibly biased survey answers.
Still, the survey indicated that many of the errors reportedly occurred at least weekly. If anything, the true error rate is probably higher than the self-reported estimates in the survey.
It's great to see usability branching out beyond its origins and being researched in a clinical epidemiology department. It's less great to observe methodological weaknesses that stem from studying usability issues without the benefit of the last twenty-five years' experience with usability research.
Of the paper's sixty references, 92% are from medical journals and the like. Only five of the sixty references are from the human factors literature. And, despite the fact that the study related to software design, none of the five references are from leading journals, conferences, books, or thinkers in human-computer interaction."
If any case study is needed to highlight the importance of good, simple design and usability, this is it.
Reference: Jakob Nielsen, Alertbox.