The healthcare industry claims it is doing everything possible to care for and protect patients. While this may be true in some areas, many feel it falls short in others, especially when it comes to Electronic Health Records (EHRs). Recently, the Pew Charitable Trusts reported that more robust usability testing of electronic health records is needed to protect patient safety.
New Study Reveals Inefficiencies with Electronic Health Records
Recently, writer Mike Miliard of Healthcare IT News published an article on patient safety and electronic health records. The AMA, MedStar Health, and the Pew Charitable Trusts have come together to offer model test cases to help providers and vendors identify potentially dangerous usability risks, a response to the absence of more rigorous federal regulations.
The Pew Charitable Trusts has stated that EHR usability receives too little attention from a safety standpoint. Because federal certification requirements do not address two important safety factors, it has taken it upon itself to offer EHR provider organizations and developers a tool set to help strengthen patient protections.
Pew’s new study notes that for every benefit EHRs may bring, variations in their use, design, and customization can lead to inefficiencies or workflow challenges and can fail to prevent – or even contribute to – patient harm. On the flip side, the study recognizes that properly optimized electronic health records can be a positive step toward patient safety; yet rigorously testing and assessing how these systems are actually used in the wild can be difficult to achieve.
While testing rules require vendors to conduct usability testing and to bring clinical end users into their development and design process, it is not so cut and dried. According to Ben Moscovitch, Pew’s Project Director for Health Information Technology, the rules for testing fall short in two ways when it comes to assessing whether the use of products contributes to patient harm.
Working to Provide Safety Assessments for Patients’ EHRs
Moscovitch stated that the testing rules have fallen short in two ways. First, the federal testing criteria focus on vendors’ design and development but do not address circumstances in which customized changes are made to an EHR as part of the implementation process or after the system goes live. Second, there are no requirements or guidance on how to test clinician interaction with the EHR for safety issues.
Clinical test cases that offer scenarios replicating real-world patient conditions and clinical workflows are hugely valuable; their value lies in identifying customization or design quirks that could adversely affect safety. Pew, in partnership with the American Medical Association and MedStar Health’s National Center for Human Factors in Healthcare, has released a list of best practices and a set of model test cases to help providers assess the safety of their own post-install electronic health records and spot usability-related risks to patients throughout the life cycle of these products.
Earlier in the summer, Healthcare IT News highlighted human-factors research from the AMA, Pew, and MedStar, in which researchers looked at how electronic health records were used by four health systems: two that use Epic and two that use Cerner. Emergency physicians at each location were tracked as they worked through six specific scenarios, with video, keystroke, and mouse-click data collected.
According to the report, there was wide variability in task completion time, clicks, and error rates. For certain tasks, there was an average nine-fold difference in time and an eight-fold difference in clicks. Pew’s new report outlines advice and scenarios to help providers get a handle on how their clinical staff use IT.
Moscovitch said that the sample scenarios in the report provide thorough tests covering seven types of usability issues that clinicians may face, including unclear settings, data entry obstacles, and confusing system alerts. Each scenario was designed so that both health IT vendors and purchasers of EHR products can conduct realistic, safety-focused usability tests. The scenarios can be used directly by vendors or providers, or serve as a foundation for building additional tests.
For scenarios to be successful, they should include expected users of the system with varying levels of computer expertise; represent realistic clinical care processes; be shaped around a clinically oriented goal; contain clear, quantitative measures of success and failure; and include known risk areas and challenging processes, such as ordering that a patient’s drug dose be tapered.
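As an illustration only – none of this code comes from the Pew report, and all names are hypothetical – the scenario criteria above might be captured in a structure like the following sketch, where each test case pairs a clinically oriented goal and realistic workflow with quantitative pass/fail thresholds:

```python
from dataclasses import dataclass

@dataclass
class UsabilityTestCase:
    """Hypothetical structure for one EHR usability test scenario."""
    clinical_goal: str                   # clinically oriented goal of the scenario
    workflow_steps: list[str]            # realistic clinical care process
    user_profiles: list[str]             # expected users, varying computer expertise
    success_criteria: dict[str, float]   # quantitative measures: metric -> max allowed
    known_risk_areas: list[str]          # challenging processes to probe

    def passes(self, measured: dict[str, float]) -> bool:
        # A run succeeds only if every quantitative measure meets its threshold;
        # a missing measurement counts as a failure.
        return all(
            measured.get(metric, float("inf")) <= limit
            for metric, limit in self.success_criteria.items()
        )

# Example usage with invented thresholds (not taken from the report):
taper_case = UsabilityTestCase(
    clinical_goal="Taper a patient's drug dose",
    workflow_steps=["open chart", "locate medication order", "modify dose schedule"],
    user_profiles=["novice clinician", "experienced clinician"],
    success_criteria={"task_time_sec": 120.0, "clicks": 15.0},
    known_risk_areas=["dose-tapering orders"],
)
print(taper_case.passes({"task_time_sec": 90.0, "clicks": 12.0}))   # within limits
print(taper_case.passes({"task_time_sec": 200.0, "clicks": 12.0}))  # too slow
```

The point of the sketch is simply that clear, quantitative success criteria make a scenario's outcome unambiguous, which is what lets the same test be rerun after customization or a system update.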
While the report covers more ground, Moscovitch hopes that more EHR vendors will take these test cases and use them not just in design and development, but also in subsequent iterations as the technology continues to mature. He also hopes that the test-scenario criteria included in the Pew report can serve as a potential standard for EHR accrediting bodies and a resource for developers, ideally to be incorporated eventually into ONC’s future updates to its certification requirements.
Meanwhile, Moscovitch said that health systems can use the test criteria and sample cases to evaluate the usability and safety of their products during implementation, after changes are made, and to inform customization decisions. Organizations can leverage the example test cases right away to evaluate system safety, identify challenges, and prevent harm.