6

A Shared Responsibility for Improving Health IT Safety

As discussed in Chapter 2, the use of health IT in some areas has significantly improved the quality of health care and reduced medical errors. Continuing to use paper medical records can place patients at unnecessary risk for harm and substantially constrain the country’s ability to reform the health care system. However, there are clearly cases in which harm has occurred associated with new health IT. The committee believes safer health care is possible in complex, dynamic environments—which are the rule in health care—only when achieving and maintaining safety is given a high priority.

Achieving the desired reduction in harm will depend on a number of factors, including how the technology is designed, how it is implemented, how well it fits into clinical workflow, how it supports informed decision making by both patients and providers, and whether it is safe and reliable. An environment of safer health IT can be created if both the public and private sectors acknowledge that safety is a shared responsibility. Actions are needed to correct the market and commit to ensuring the safety of health IT. A better understanding and acknowledgement of the risks associated with health IT and its use, as well as how to maximize the benefits, are needed. An example of a new kind of error that can occur with IT and that did not occur previously is the “adjacency error,” in which a provider selects an item next to the one intended from a pull-down menu, for example picking “penicillamine” instead of “penicillin.” Such errors occur in many products, but effective solutions have not yet generally been fielded. This chapter details the actions to be taken by both the public and private sectors that the committee believes will be necessary for the creation of an environment in which IT improves safety overall and the new problems created by health IT are minimized.
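To make the adjacency error concrete, the following minimal sketch in Python shows how an order-entry screen might ask for confirmation when a selected item closely resembles a neighboring entry in the pick list. It is purely illustrative and not drawn from any particular product; the menu contents, threshold, and function names are hypothetical.

    # Illustrative sketch only: flag pick-list selections that closely resemble
    # a neighboring entry (e.g., "penicillamine" chosen next to "penicillin").
    # Menu contents, threshold, and function names are hypothetical.
    from difflib import SequenceMatcher

    def neighbors(menu, index, window=1):
        """Return the entries displayed immediately above and below the chosen item."""
        lo, hi = max(0, index - window), min(len(menu), index + window + 1)
        return [item for i, item in enumerate(menu[lo:hi], start=lo) if i != index]

    def adjacency_warning(menu, chosen_index, similarity_threshold=0.8):
        """Warn when a neighboring entry is confusably similar to the selection."""
        chosen = menu[chosen_index]
        for other in neighbors(menu, chosen_index):
            similarity = SequenceMatcher(None, chosen.lower(), other.lower()).ratio()
            if similarity >= similarity_threshold:
                return (f"Selected '{chosen}', which sits next to the similar entry "
                        f"'{other}'. Please confirm the intended order.")
        return None

    if __name__ == "__main__":
        drug_menu = ["penicillamine", "penicillin", "penicillin G benzathine"]
        print(adjacency_warning(drug_menu, chosen_index=0))  # prompts for confirmation

A similarity check of this kind would not eliminate adjacency errors, but it illustrates the sort of product-level mitigation that, as noted above, has not yet been widely fielded.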





THE ROLE OF THE PRIVATE SECTOR: PROMOTING SHARED LEARNING ENVIRONMENTS

This chapter broadly defines the private sector to include health IT vendors, insurers, and the organizations that support each of these groups (e.g., professional societies). Health care organizations, health professionals, and patients and their families are also considered part of the private sector. The public sector generally refers to the government. Operationally, the line between the private and public sectors is not completely clear, because some organizations operate in both sectors.

The current environment in which health IT is designed and used does not adequately protect patient safety. However, the private sector has the ability to drive innovation and creativity, generating the tools to deliver the best possible health care and directly improve safety. In this regard, the private sector has the most direct responsibility to realign the market, but it will need support from the public sector.

The complexity and dynamism of health IT require that private-sector entities work together through shared learning to improve patient safety. Manufacturers and health professionals have to communicate their capabilities and needs to each other to facilitate the design of health IT in ways that achieve maximum usability and safety. Patients and their families need to be able to interact seamlessly with health professionals through patient engagement tools. Health care organizations ought to share lessons learned with each other to avoid common patient safety risks as they adopt highly complex health IT products.

However, today's reality is that the private sector consists of a broad variety of stakeholders lacking a uniform approach and holding potentially misaligned goals. The track record of the private sector in responding to new safety issues created by IT is mixed. Although nearly all stakeholders would endorse the broad goals of improving the quality and safety of patient care, many stakeholders (particularly vendors) are faced with competing priorities, including maximizing profits and maintaining a competitive edge, which can limit shared learning and have adverse consequences for patient safety. Shared learning about safety risks and mitigation strategies for safer health IT among users, vendors, researchers, and other stakeholders can optimize patient safety and minimize harm.

As discussed in Chapters 4 and 5, there are many opportunities for the private sector to improve safety as it relates to health IT, but to date, little action has been taken. Insufficient action by the private sector to improve patient safety can endanger lives. The private sector must play a major role
in addressing this urgent need to better understand risks and benefits associated with health IT, as well as strategies for improvement and remediation. As it stands now, there is a lack of accountability on the part of vendors, who are generally perceived to shift responsibility and accountability to users through specific contract language (Goodman et al., 2011). As a result, the committee believes a number of critical gaps in knowledge need to be addressed immediately, including the lack of comprehensive mechanisms for identifying patient safety risks, measuring health IT safety, ensuring safe implementation, and educating and training users.

Developing a System for Identifying Patient Safety Risks

The committee believes that transparency, characterized by developing, identifying, and sharing evidence on risks to patient safety, is essential to a properly functioning market where users would have the ability to choose a product that best suits their needs and the needs of their patients. However, the committee found sparse evidence pertaining to the volume and types of patient safety risks related to health IT. Indeed, the number of errors reported both anecdotally and in the published literature was lower than the committee anticipated. This led primarily to the sense that potentially harmful situations and adverse events caused by IT were often not recognized and, even when they were recognized, usually not reported. This lack of reported instances of harm is consistent with other areas of patient safety, including paper-based patient records and other manually based care systems, where there is ample evidence that most adverse events are never reported, even when there are robust programs encouraging health professionals to do so (Classen et al., 2011; Cullen et al., 1995).

Information technology can assist organizations in identifying, troubleshooting, and handling health IT–related adverse events. Digital forensic tools (e.g., centralized logging, regular system backups) can be used to record data during system use. After an adverse event occurs, recorded data—such as log-in information, keystrokes, and how information is transported throughout the network—can be used to identify, reconstruct, and understand in detail how an adverse event occurred (NIST, 2006).

Because of the diversity of health IT products and their differing effects on various clinical environments, it is essential that users share detailed information with other users, researchers, and the vendor once information regarding adverse events is identified. Examples of such information include screenshots or descriptions of potentially unsafe processes that could help illustrate how a health IT product threatened patient safety. However, as discussed in Chapter 2, users may fear that sharing this information may violate nondisclosure clauses and vendors' intellectual property, exposing them to liability and litigation. Although there is little evidence on the
impact of such clauses, the committee believes users may be less likely to share information necessary to improve patient safety, given these clauses. If it is clearly understood that transparently sharing health IT issues with the public is for the purpose of patient safety, vendors ought to agree to remove such restrictions from contracts and work with users to explicitly define what can be shared, with whom it can be shared, and for what purposes. However, to maintain a competitive advantage, many vendors may not be motivated to allow users to disclose patient safety–related risks associated with their health IT products.

Many vendors place these clauses within the boilerplate language, and in the absence of comprehensive legal review, users may not even realize these restrictions exist when signing their contracts.1 If more users carefully search for such clauses and negotiate terms that allow them to share information related to patient safety risks, vendors may be more likely to exclude such clauses. Furthermore, if it were easier to know which vendors had standard contracts that allowed for sharing, users might be more likely to select those vendors. However, users—particularly smaller organizations—are not part of a "cohesive community" with the legal expertise or knowledge to negotiate such changes. Therefore, the committee believes the Secretary of the Department of Health and Human Services (HHS) should provide tools to motivate vendors and empower users to negotiate contracts that allow for sharing of patient safety–related details and improved transparency. The Secretary ought to investigate what other tools and authorities would be required to ensure the free exchange of patient safety–related information.

1 Personal communication, E. Belmont, MaineHealth, September 21, 2011.

Recommendation 2: The Secretary of HHS should ensure insofar as possible that health IT vendors support the free exchange of information about health IT experiences and issues and not prohibit sharing of such information, including details (e.g., screenshots) relating to patient safety.

The committee recognizes that, short of Congressional and regulatory action, the Secretary cannot guarantee how contracts are developed between two private parties. However, the committee views prohibition of the free exchange of information to be the most critical barrier to patient safety and transparency. The committee urges the Secretary to take vigorous steps to restrict contractual language that impedes public sharing of patient safety–related details. Contracts should be developed to allow explicitly for sharing of health IT issues related to patient safety. One method the Secretary could use is to ask the Office of the National Coordinator for Health Information Technology (ONC) to create a list of vendors that satisfy this
requirement and/or those that do not. If such a list were available, users could more easily choose vendors that allow patient safety–related details of health IT products to be shared. Having such a list could also motivate vendors to include contractual terms that allow for sharing of patient safety–related details and, as a result, be more competitive to users. The ONC could also consider creating minimum criteria for determining when a contract adequately allows for sharing of patient safety–related details. These criteria need to define the following:

• What situations allow for sharing patient safety–related details of health IT products;
• What content of health IT should be shareable;
• Which stakeholders the information can be shared with, including other users, consumer groups, researchers, and the government; and
• What the limitations of liability are when such information is shared.

Private certification bodies such as the ONC-authorized testing and certification bodies (ONC-ATCBs)2 could also promote the free exchange of patient safety–related information. This could be implemented, for example, through the creation of a new type of certification that requires this information to be shared. The Secretary could ask the ONC to develop model contract language that would affirmatively establish the ability of users to provide content and contextual information when reporting an adverse event or unsafe condition.

2 ONC-ATCBs as of April 2011 include the Certification Commission for Health Information Technology; Drummond Group, Inc.; ICSA Labs; InfoGard Laboratories, Inc.; SLI Global Solutions; and Surescripts LLC.

Additionally, HHS could educate users about these contracts and develop guidance for users about what to look for before signing a contract. This education could potentially be done through the ONC's regional extension centers or the Centers for Medicare & Medicaid Services' (CMS's) quality improvement organizations. This effort could also be supported by various professional societies.

To identify how pervasive these clauses are, the Secretary may need to conduct a review of existing contracts. Although the Secretary may not be privy to vendor–purchaser contracts, HHS could conduct a survey or ask vendors to voluntarily share examples of their contract language. Understanding the magnitude of these clauses would be a critical first step. Once this information is available, comparative user experiences can be made public.

There is currently no effective way for users to communicate their experiences with a health IT product. In many other industries, user
reviews appear on online forums and other similar guides, while independent tests are conducted by Consumer Reports and others. These reviews allow users to better understand the products they might be purchasing. Perhaps the more powerful aspect of users being able to rate and compare their experiences with products is the ability to share and report lessons learned. Comparative user experiences for health IT safety need to be created to enhance communication of safety concerns and ways to mitigate potential risks.

To gather objective information about health IT products, researchers should have access to both test versions of software provided by vendors and software already integrated in user organizations. Documentation for health IT products, such as user manuals, also could be made available to researchers. Resources should be available to share user experiences and other measures of safety specifying data from health IT products. The private sector needs to be a catalyzing force in this area, but governance from the public sector may be required for such tools to be developed.

Recommendation 3: The ONC should work with the private and public sectors to make comparative user experiences across vendors publicly available.

Another way to increase transparency in the private sector is to require reporting of health IT–related adverse events through health care provider accrediting organizations such as The Joint Commission or the National Committee for Quality Assurance (NCQA). Professional associations of providers could also play this role. One of the tools the ONC could provide to facilitate the implementation of Recommendation 3 is the development of a uniform format for making these reports, which could be coordinated through the Common Formats.3 (A minimal sketch of what such a structured report might contain appears at the end of this subsection.) However, it is important to note that a public-sector entity could also lead change in this regard.

3 The Common Formats are coordinated by the Agency for Healthcare Research and Quality (AHRQ) in an effort to facilitate standardized reporting of adverse events by creating general definitions and reporting formats for widespread use (AHRQ, 2011a).

Finally, a more robust and comprehensive infrastructure is needed for providing technical assistance to users who may need advice or training to safely implement health IT products. Shared learning between users and vendors in the form of feedback about how well health IT products are working can help improve the focus on safety and usability in the design of health IT products and identification of performance requirements. Tools to foster this feedback in an organized way are needed to promote safety and quality. The learning curve for safely using health IT–assisted care varies widely, and technical assistance needs to be provided to users at all levels.
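As a purely illustrative sketch of the kind of uniform, analyzable report such a format might support, the following Python structure captures the identifiable details this chapter calls for (specific vendor, product and version, a narrative of the event, and supporting evidence such as screenshots or log excerpts). The field names are hypothetical assumptions for illustration and are not taken from the AHRQ Common Formats specification.

    # Hypothetical structure for a health IT adverse-event or unsafe-condition
    # report; field names are illustrative, not the AHRQ Common Formats schema.
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class HealthITSafetyReport:
        reporting_organization: str          # identifiable so investigators can follow up
        vendor: str                          # specific vendor named in the report
        product: str                         # product involved
        product_version: str                 # version or model number
        event_type: str                      # "adverse_event" or "unsafe_condition"
        harm_occurred: bool                  # death/serious injury triggers immediate review
        description: str                     # narrative: what happened, in which workflow
        evidence: list = field(default_factory=list)   # e.g., screenshot names, log excerpts
        reported_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

        def to_json(self) -> str:
            """Serialize for submission to a collecting body (e.g., a PSO)."""
            return json.dumps(asdict(self), indent=2)

    # Example report describing an unsafe condition (all values hypothetical).
    report = HealthITSafetyReport(
        reporting_organization="Example Community Hospital",
        vendor="ExampleVendor",
        product="Outpatient Pharmacy Interface",
        product_version="4.2",
        event_type="unsafe_condition",
        harm_occurred=False,
        description="Pharmacy system truncates EHR note fields longer than 100 characters.",
        evidence=["order_screen.png"],
    )
    print(report.to_json())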

Measuring Health IT Safety

Another area the committee identified as necessary for making health IT safer is the development of measures. As is often said in quality improvement, you can only improve what you measure. Currently, few measures address patient safety as it relates to health IT; without these measures, it will be very difficult to develop and test strategies to ensure safe patient care. Although there has been progress in developing general measures of patient safety, the committee concluded that safety measures focusing on the impact of health IT must go beyond traditional safety measures (as discussed in Chapter 3) and are urgently needed.

Measures of health care safety and quality are generally developed by groups such as professional societies and academic researchers and undergo a voluntary consensus process before being adopted for widespread use. During the measure development process, decisions need to be made such as identifying what metrics can be used as indicators of health IT safety, specifications of the metrics, and the criteria against which measures can be evaluated. Policies for measure ownership and processes for evaluating and maintaining measures will also need to be created. One example of the type of data that are likely to be important would be override rates for important types of safety warnings on alerts and warnings built into electronic health records (EHRs); a simple calculation of such a rate is sketched after Recommendation 4 below.

The committee believes a consensus-based collaborative effort that would oversee development, application, and evaluation of criteria for measures and best practices of safety of health IT—a Health IT Safety Council—is of vital need. For example, the council could be responsible for identifying key performance aspects of health IT and creating a prioritized agenda for measure development. Given that the process for developing health IT safety metrics would be similar to developing measures of health care safety and quality, a voluntary consensus standards organization would effectively be able to house the recommended council. Because of the ubiquity and complexity of health IT, all health IT stakeholders ought to be involved in the development of such criteria. HHS ought to consider providing the initial funding for the council because the need for measures of safety of health IT is central to all stakeholders. The more costly process of maintaining measures ought to be funded by private-sector entities.

Recommendation 4: The Secretary of HHS should fund a new Health IT Safety Council to evaluate criteria for assessing and monitoring the safe use of health IT and the use of health IT to enhance safety. This council should operate within an existing voluntary consensus standards organization.
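As a minimal illustration of the override-rate data mentioned above, the following sketch computes such a rate directly from alert-log records. The record layout, severity categories, and function name are assumptions for illustration only, not a proposed council metric.

    # Illustrative only: compute an alert override rate from hypothetical
    # alert-log records. Field names and severity categories are assumptions.
    def override_rate(alert_log, severity="high"):
        """Fraction of fired alerts of a given severity that clinicians overrode."""
        relevant = [a for a in alert_log if a["severity"] == severity]
        if not relevant:
            return 0.0
        overridden = sum(1 for a in relevant if a["action"] == "override")
        return overridden / len(relevant)

    alert_log = [
        {"alert": "drug-drug interaction", "severity": "high", "action": "override"},
        {"alert": "drug-drug interaction", "severity": "high", "action": "cancel order"},
        {"alert": "duplicate therapy", "severity": "low", "action": "override"},
    ]
    print(f"High-severity override rate: {override_rate(alert_log):.0%}")  # prints 50%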

One existing organization with a strong history of convening groups and experience with endorsing health care quality and performance measures that could guide this process is the National Quality Forum (NQF). The NQF is a nonprofit organization whose mission is to develop consensus on national priorities and endorse standards for measuring and publicly reporting on health care performance based on measures submitted from measure developers. Of particular note, the NQF hosts eight member councils, whose purposes are to build consensus among council members toward advancing quality measurement and reporting (NQF, 2011). These councils provide a voice to stakeholder groups—including consumers, health professionals, health care organizations, professional organizations (e.g., the American Medical Informatics Association [AMIA]), vendors (e.g., individually and through societies such as the Healthcare Information and Management Systems Society [HIMSS]), and insurers—to identify what types of metrics are needed and the criteria for doing so. Measures should be NQF-endorsed, a process that applies nationally accepted standards and criteria. The NQF could provide guidance in identifying criteria against which to develop health IT safety measures to help gain consensus on the right set of policies.

The ensuing task of developing measures of health IT safety needs to be undertaken by a variety of entities. To accomplish this, some research will be needed for measure development because good measures currently do not exist; these efforts should be supported by the Agency for Healthcare Research and Quality (AHRQ), the National Library of Medicine, and the ONC, as discussed in Recommendation 1. Health care organizations and even vendors could partner with more traditional measurement development organizations, for example the NCQA, to create measures that would by default be subjected to the NQF consensus approval process.

Ensuring Safer Implementation

Efforts to safely implement health IT must address three phases: preimplementation, implementation, and postimplementation of health IT.

Preimplementation

Vendors, with input from users, play the most significant role in the preimplementation phase of health IT. Vendors ought to be able to assert that their products are designed and developed in a way that promotes patient safety. Currently, health IT products are held to few standards with respect to both design and development. Although it is typically the role of standards development organizations such as the American National Standards Institute, the Association for the Advancement of Medical Instrumentation, Health Level 7 (HL7), and the Institute of Electrical and Electronics
Engineers to develop such standards, criteria, and tests, a broader group of stakeholders including patients, users, and vendors should participate in creating safety standards and criteria against which health IT ought to be tested. Vendors are currently being required by the ONC to meet a specific set of criteria in order for their products to be certified as eligible for use in HHS's meaningful use program. These criteria relate to clinical functionality, security, and interoperability and may be helpful for, but not sufficient to, ensure health IT–related safety. The American National Standards Institute, as the body that will accredit organizations that certify health IT, and certification bodies such as the Certification Commission for Health Information Technology serve a vital function in this regard because they have the ability to require that patient safety be an explicit criterion for certification of EHRs. Doing so would be an important first step.

Another step that can be taken by the private sector prior to implementation of health IT is for vendors and manufacturers to declare they have addressed safety issues in the design and development of their products, both self-identified issues and those detected during product testing. Such a declaration ought to include the safety issues considered and the steps taken to address those issues. A similar declaration for usability has been supported by the National Institute of Standards and Technology (NIST), which has developed a common industry format for health IT manufacturers to declare that they have tested their products for usability and asks manufacturers to show evidence of usability (NIST, 2010). Declarations also ought to be made with respect to vendor tests of a health IT product's reliability and response time, both in vitro and in situ (Sittig and Classen, 2010). Such declarations could provide users and purchasers of health IT with information as they determine which products to acquire. Additionally, vendors can help mitigate safety risks by employing high-quality software engineering principles, as discussed in Chapter 4.

Usability represents an exceptionally important issue overall and undoubtedly affects safety. However, it would be challenging to mandate usability. Although some efforts to develop usability standards and tools are beginning, as discussed in Chapter 4, more publicly available data about and testing of usability would be helpful in this area. EHRs should increasingly use standards and conformance testing to ensure that data from EHRs meet certain standards and are readable by other systems, so that interoperability is practical.

Besides providing feedback to vendors about their products, users also have important responsibilities for safety during the preimplementation phase. Users need to make the often difficult and nuanced decision of choosing a product to purchase, as discussed in Chapter 4. If a product does not meet the needs of the organization and does not appropriately
interface with other IT products of the organization, safety problems can arise. Similarly, organizations need to be ready to adopt a new product in order for the transition to be successful.

Implementation

Industry-developed recommended (or "best") practices and lessons learned ought to be shared. There are instances where generic lessons are learned and recommended practices can be shared between health professionals through mediums such as forums, chat rooms, and conferences. Lessons can also be shared through training opportunities such as continuing professional development activities. Questions arise such as whether to roll out a health IT product throughout an entire health care organization at once or in parts. Health care providers are continually attempting to determine the most effective configuration of health IT products for their own specific situations (e.g., drug interactions should be displayed as warnings in such a way that clinicians do not suffer from alert fatigue, leading clinicians to turn off all alerts). A user's guide to acquisition and implementation ought to be developed by both the private and public sectors. Some efforts are currently under way, including programs at HIMSS and the ONC, but more work is needed, and the committee believes that such user's guides should receive public support, though they might be developed by private entities.

Opportunities also exist for users to learn more about safer implementation and customization of health IT products. For example, what lessons have been learned regarding customization of specific health IT products? What experiences have others had integrating a specific pharmacy system with a particular EHR? Lessons from such experiences, once they are widely shared, can greatly impact implementation. It will be critical for users and vendors to communicate as health IT products are being implemented to ensure they are functioning correctly and are fitting into clinical workflow.

Postimplementation

Similar to the preimplementation and implementation phases, standards and criteria will be necessary to ensure that users have appropriately implemented health IT products and integrated them into the entire sociotechnical system. In the postimplementation phase, the largest share of the work involves health professionals and organizations working with vendors to ensure patient safety.

Postimplementation tests, as discussed in Chapter 4, will be essential to monitoring the successful implementation of health IT products. Few tests currently exist, and more will need to be developed.
For example, self-assessments could monitor the product's down time, review the ability to perform common actions (e.g., review recent lab results), and record patient safety events. Ongoing tests of how the product is operating with respect to the full sociotechnical system could identify areas for improvement to ensure the product fits into the clinical workflow safely and effectively (Classen and Bates, 2011; Sittig and Classen, 2010; Sittig and Singh, 2009). These tests are also a way for users to work with vendors to ensure that products have been installed correctly. Developers of these postimplementation tests should gather input from health care organizations, clinicians, vendors, and the general public. Similar to how organizations such as the Leapfrog Group validate tests for effective implementation of computerized provider order entry systems, an independent group ought to validate test results for implementation of all health IT. Conducting these tests is so important for ensuring safety that they ought to become a required standard and be linked to a health care organization's accreditation through The Joint Commission or others. Periodic inspections could also be conducted onsite by these external accreditation organizations (Sittig and Classen, 2010). Other ways to require that these tests be performed include actions from the public sector, including regulation, such as including postimplementation testing in meaningful use criteria, but the committee feels postimplementation testing is too important to be tied only to the initiatives of a particular government program. These issues of safer implementation and safer use will continue to be an ongoing challenge with each new iteration of software and will remain an important area of focus long after the meaningful use program is completed.

Training Professionals to Use Health IT Safely

Education and training of the workforce is critical to the successful adoption of change. If the workforce is not educated and trained correctly, workers will be less likely to use health IT as effectively as their properly trained counterparts. Educating health professionals about health IT and safety can help them understand the complexities of health IT from the perspective of the sociotechnical system. This allows health professionals to transfer context- and product-specific skills, and therefore to be safer and more effective. For example, a team of clinicians using a new electronic pharmacy system needs to be trained on the functionalities of the specific technology. Otherwise, the team is susceptible either to being naïve to the abilities of the technology or to unnecessarily developing workarounds that may undermine the larger sociotechnical system.

As discussed in earlier chapters, basic levels of competence, knowledge, and skill are needed to navigate the highly complex implementation of health IT. Because health IT exists at the intersection of multiple disciplines, a variety of professionals will need training in this relatively new

A Reporting System for Learning and Improving Patient Safety

On their own, vendors and users generally do not have the broad expertise needed to conduct investigations as envisioned by the committee. Vendors and users also may not be impartial arbiters of why an adverse event occurred. As a result, external methods are needed to conduct rigorous investigations of health IT–related adverse events. Reports could be aggregated and analyzed by multiple entities, such as the PSOs, but trends in data may not be as easily identified if spread out among the more than 80 PSOs because of the smaller number of reports each organization would receive. Additionally, standards of analysis may not be applied in the same way across multiple entities, calling into question the reliability of the analyses conducted. Ideally, as depicted in Figure 6-3, reports of health IT–related adverse events or unsafe conditions from both users and vendors would be aggregated and analyzed by a single entity that would identify reports for immediate investigation.

Reports to this entity would have to include identifiable data to allow investigators to follow up in the event the reported incident requires investigation, but, as discussed previously, full confidentiality protections must be applied to the reports. Reports would need to be received in an identifiable manner from the PSOs or another collecting agency and with enough information to investigate (e.g., specific vendor, model number). The entity would have the discretion to investigate two categories of reports: (1) novel reports that result in death or serious injury and (2) trends of reports of unsafe conditions (e.g., multiple health care organizations find that a specific pharmacy system accepts only 100 characters from a particular EHR's notes section that allows 125 characters, resulting in the incorrect filling of orders; a minimal check for this kind of interface mismatch is sketched below). Prioritization among the reports would be determined by a risk-based hazard analysis. Cases resulting in death or serious injury should be investigated immediately. Reports should be kept confidential and nonpunitive for the purposes of learning. Reports of unsafe conditions should be analyzed and monitored continually and investigated using a risk-based hazard analysis. Which reports to investigate ought to be determined by the explicit risk-based prioritization system that the investigatory entity employs. Reports by vendors should already contain identifiable data.

In keeping with the principle of transparency, reports and results of investigations should be made public. Public release of results of investigations could build off the NTSB process, which separates facts discovered by the investigators from opinions and conclusions drawn by the investigators. A feedback loop from the investigatory body back to both the vendors and users is essential to allow groups to rectify any systemic issues that were found to introduce risk into their systems.
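To make the interface-mismatch example concrete, the following is a minimal, hypothetical sketch of an automated check that an organization or investigator might run against the declared field limits of two interfaced systems. The system names, field names, and limits are illustrative only and are not drawn from any specific product.

    # Illustrative sketch: detect fields whose maximum length differs between two
    # interfaced systems (e.g., an EHR notes field vs. a pharmacy system's limit).
    # System names, fields, and limits are hypothetical.
    EHR_FIELD_LIMITS = {"order_notes": 125, "sig_instructions": 250}
    PHARMACY_FIELD_LIMITS = {"order_notes": 100, "sig_instructions": 250}

    def truncation_risks(source_limits, target_limits):
        """Return fields where the receiving system may silently truncate data."""
        risks = []
        for field_name, source_max in source_limits.items():
            target_max = target_limits.get(field_name)
            if target_max is not None and target_max < source_max:
                risks.append(
                    f"{field_name}: source allows {source_max} characters but the "
                    f"receiving system accepts only {target_max}; content may be cut off.")
        return risks

    for warning in truncation_risks(EHR_FIELD_LIMITS, PHARMACY_FIELD_LIMITS):
        print("UNSAFE CONDITION:", warning)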

FIGURE 6-3 Reporting system for learning and improving patient safety. Users report voluntarily and vendors report mandatorily to a collecting body (e.g., a PSO); reports are aggregated, analyzed, and investigated for adverse events and unsafe conditions, with feedback provided to vendors and users and results disclosed to the public, including researchers and consumers.

Potential Actors

It is the intent of the committee to build on the current patient safety environment and to maximize the potential for all stakeholders to have a part in ensuring patient safety. However, for the reasons mentioned above, the current system does not adequately address the significant needs of a comprehensive process to learn from health IT–related patient safety risks. A unique knowledge base is needed to understand thoroughly and diagnose ways to improve the interface between health care delivery and health IT, which, as discussed in Chapter 3, is extraordinarily complex and requires the understanding of a large number of sociotechnical domains.

The committee considered a variety of alternatives to objectively analyze reports of unsafe events as well as conduct investigations into health IT–related adverse events in the way the committee envisions. The alternatives considered include FDA, AHRQ, the CMS, the ONC, and entities in the private sector.

• FDA is largely an oversight entity. Adding investigative responsibilities of the nature envisioned by the committee would be at odds with its oversight functions. Additionally, FDA lacks the resources in terms of capacity and expertise, limiting its ability to act in this area.
• AHRQ primarily supports research and technical assistance activities regarding quality and safety. It is not an oversight or investigative agency. AHRQ is also not an active implementer and is not operationally oriented. For AHRQ to take on the functions envisioned by the committee would require a complete change in the agency's charge and internal expertise.
• The CMS is an administrative agency that has demonstrated its leverage with users of health IT through the development of the EHR incentive program and conditions of participation. These programs, while important, contain punitive elements and are inappropriate for completing the aggregative, analytic, and investigative functions described above. The CMS also has little internal expertise relating to clinical workflow and use of technology to support cognition and therefore does not have the infrastructure needed to support these tasks. Additionally, while the CMS could potentially serve in this role, the CMS's authority focuses on providers associated with Medicare and Medicaid and is not as expansive as the committee believes is needed.
• The ONC coordinates health IT efforts and influences the development of policy related to health IT but has no formal authority in this area; it is not an operating division of HHS. It is not clear to
the committee that the ONC has the clinical and operations expertise to conduct investigations of health IT–related adverse events.
• Private-sector organizations such as The Joint Commission, the NCQA, and the NQF play an important role in ensuring safety. However, these organizations are mostly dependent on short-term funding streams, often seeking "soft money" to sustain specific projects and programs. As a result, programmatic content may be shaped in part by funders. Moreover, these groups—even working in collaboration with one another—may not have the far-reaching influences of a federal entity and do not have the mandate or breadth to be able to conduct such investigations at a national level. These groups also do not possess the expertise and ability to properly investigate these issues.

Investigating patient safety incidents related to health IT does not match the internal expertise of any existing entity. Given the status quo, the committee concludes that no existing entity has the necessary attributes to perform the crucial function of investigating health IT–related patient safety incidents as envisioned by the committee. The needed functions are under the jurisdiction of multiple federal agencies, and efforts are uncoordinated and are not as comprehensive as a safety-oriented system ought to be. A multiagency structure could be envisioned, but as discussed previously, oversight and investigative functions should not be housed in the same entity. The committee concludes the envisioned necessary functions cannot be realized solely through current structures, and a new entity is needed that can pull together the following desired goals in an integrated fashion in a way current alternatives cannot achieve:

• A comprehensive system for identifying and investigating deaths, injuries, and unsafe conditions has to be separated explicitly from oversight functions.
• Broad categories of expertise are required to investigate an adverse event fully.
• Vendors and users need to be held accountable publicly for patient deaths, injuries, and potentially unsafe conditions.
• A streamlined approach is needed to reduce wasteful duplication and inconsistencies.
• Lessons from these investigations need to be shared broadly from a respected source so that future adverse events can be averted.

To truly improve patient safety, a new approach is needed. The committee believed that the experiences of other industries such as transportation and nuclear energy in creating the NTSB and the NRC were instructive, and
concluded that the development of an independent federal entity was best suited to performing the needed above-described analytic and investigative functions for health IT–related adverse events in a transparent, nonpunitive manner.

The committee envisions an entity that would be similar in structure to the NTSB or the NRC, which are both independent federal agencies created by and reporting directly to Congress. Among other responsibilities, these entities conduct investigations for the purpose of ensuring safety. The NTSB is a nonregulatory agency that does not establish fault or liability in the legal sense but investigates incidents. The NRC is a regulatory body that has the ability to issue fines and fees. The committee considered both agencies and concluded the NTSB to be most similar to the needs of health IT–assisted care.

An independent federal entity analogous in form and function to the NTSB is needed. This entity would not have enforcement power and would be nonpunitive. Instead, it would have the authority to conduct investigations and, upon their completion, make recommendations. The NTSB makes nonbinding recommendations to the Secretary of the Department of Transportation, who then must state within 90 days whether the department intends to perform the recommended procedures in total, carry the recommendations out in part, or refuse to adopt the recommendation. In this case, an entity would make similar recommendations to the Secretary of HHS. Although delivering nonbinding recommendations can be described by some as a flaw, the committee believes that the flexibility it provides is a strength, allowing for health care organizations, vendors, and external experts to determine the best course forward collectively. If requested by the Secretary, the entity could also perform other functions, such as coordinating with existing bodies, both public and private, as appropriate. Investigations could involve representatives from all impacted parties, including vendors and users involved in the incident, as well as experts in the various sociotechnical dimensions of health IT safety.

The committee believes that an independent federal entity is the best option to provide a platform to support shared learning at a national level. The entity would have the following functions:

• Aggregate reports of health IT–related adverse events from at least vendors and users;
• Analyze the aggregated reports to identify patterns;
• Investigate reports of health IT–related patient deaths or serious injury;
• Investigate trends of reports of unsafe conditions;
• Recommend corrective actions to the Secretary of Health and Human Services;
• Provide feedback to vendors and users following investigations; and
• Disclose results of the investigations to the public, including researchers and consumers.

Recommendation 8: The Secretary of HHS should recommend that Congress establish an independent federal entity for investigating patient safety deaths, serious injuries, or potentially unsafe conditions associated with health IT. This entity should also monitor and analyze data and publicly report results of these activities.

It is also important to recognize that the line between health IT–related adverse events and other adverse events will likely become increasingly blurry. Multiple factors contribute to unsafe conditions and adverse events, making it potentially difficult to differentiate between health IT–related and other factors until an investigation has been conducted. If a broader system for all adverse events is created, the spirit of the committee's recommendations should be recognized and considered.

NEXT STEPS

Patients must be kept safe in the midst of the current large-scale rollout of health IT. While it is clear to the committee that the market has failed to keep patients safe with respect to health IT, the committee believes transparency is the key to improving safety. To truly address health IT safety, many actions will be needed to correct the market; ways to improve the flow of communication and correct the market have to be created carefully. When combined, removing contractual restrictions, establishing public reporting, and having a system in place for independent investigations can be a powerful force for improving patient safety.

Achieving transparency, safer health IT products, and safer use of health IT will require the cooperation of all stakeholders. Without more information about the magnitude and types of harm, the committee concluded that other mechanisms were necessary to understand how to best approach health IT safety. The committee believes the current state of safety and health IT is not acceptable; specific actions are required to improve the safety of health IT.

The first eight recommendations are intended to create conditions and incentives to encourage substantial industry-driven change without formal regulation. However, because the private sector to date has not taken substantive action on its own, the committee believes a follow-up recommendation is needed to regulate health IT formally if the actions recommended
to the private and public sectors are not effective.11 If the first eight recommendations are determined by the Secretary of HHS to be not effective, the Secretary should direct FDA to exercise all authorities to regulate health IT.

11 One member disagrees with the committee and would immediately regulate health IT as a Class III medical device, as outlined in Appendix E.

The committee was of mixed opinion on how FDA regulation would impact the pace of innovation but identified several areas of concern regarding immediate FDA regulation. The current FDA framework is oriented toward conventional, out-of-the-box, turnkey devices. However, health IT has multiple different characteristics, suggesting that a more flexible regulatory framework will be needed in this area to achieve the goals of product quality and safety without unduly constraining market innovation. For example, as a software-based product, health IT has a very different product life cycle than conventional technologies. These products exhibit great diversity in features, functions, and scope of intended and actual use, which tend to evolve over the life of the product. Taking a phased, risk-based approach can help address this concern. FDA has chosen not to exercise regulatory authority, as discussed previously, and controversy exists over whether some health IT products such as EHRs should be considered medical devices. If the Secretary deems it necessary to regulate EHRs and other health IT products not currently regulated by FDA, clear determinations will need to be made about whether all health IT products classify as medical devices for the purposes of regulation. The committee also believes that if FDA regulation is deemed necessary, FDA will need to commit sufficient resources and add capacity and expertise to be effective.

The ONC and the Secretary should examine progress critically toward achieving safety and, if needed, determine when to move to the next stage. HHS should report annually to Congress and the public on the progress of efforts to improve the safety of health IT, beginning 12 months from the release of this report. In these reports, the Secretary should make clear why she does or does not believe further oversight actions are needed. In parallel, the Secretary should ask FDA to begin planning the framework needed for potential regulation consistent with Recommendations 1 through 8 so that, if she deems FDA regulation to be necessary, the agency will be ready to act, allowing for the protection of patient safety without further delay. FDA will need to coordinate these efforts with the actors identified in Recommendations 1 through 8, including AHRQ and the ONC, among others. In addition, the Secretary will also need to devise new strategies to stimulate the private sector to meet its responsibility of ensuring patient safety. The committee recognizes that not all of its recommendations can be acted on by the Secretary alone and that some will require congressional action.

Recommendation 9a: The Secretary of HHS should monitor and publicly report on the progress of health IT safety annually beginning in 2012. If progress toward safety and reliability is not sufficient as determined by the Secretary, the Secretary should direct FDA to exercise all available authorities to regulate EHRs, health information exchanges, and PHRs.

Recommendation 9b: The Secretary should immediately direct FDA to begin developing the necessary framework for regulation. Such a framework should be in place if and when the Secretary decides the state of health IT safety requires FDA regulation as stipulated in Recommendation 9a above.

CONCLUSION

Today the nation is just scaling up with EHRs, and, as a result, the health IT environment is both very dynamic and stressed. Patient safety is too important to ignore, but clear routes to solid policies that will improve performance are still wanting. Because of the lack of concrete evidence about how to best improve patient safety, the private and public sectors should work together to take the first steps toward identifying the data and building an evidence base for improving health IT–related patient safety. Lessons should be learned from other industries focusing on safety. Although it is important to recognize that none of those reporting systems is perfect, a critical lesson to be learned from these experiences is that safety demands systems of continual learning.

Health IT is a quickly changing field, particularly with respect to outpatient services, and products are being developed continually for the improvement of patient outcomes and more effective care delivery. The functions and types of health IT–related adverse events requiring analysis and investigation will change over time. To this end, the approaches identified in this report should be monitored continually and revised as needed. The identified actor or set of actors should be given the flexibility and latitude to amend its charge as appropriate.

Creating an infrastructure that supports shared learning about and improving the safety of health IT is needed to achieve better health care. Proactive measures have to be taken to ensure health IT products are developed and implemented with safety as a primary focus, through the development of industry-wide measures, standards, and criteria for safety. Surveillance mechanisms will need to be available to identify, capture, and investigate adverse events to improve the safety of health IT continually. Transparency and cooperation between the private and public sectors are the key to creating the necessary infrastructure to build safer systems that will lead to better care for all Americans.

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2011a. Common formats. http://www.pso.ahrq.gov/formats/commonfmt.htm (accessed May 8, 2011).
AHRQ. 2011b. Geographic directory of listed patient safety organizations. http://www.pso.ahrq.gov/listing/geolist.htm (accessed August 9, 2011).
American Society for Quality. 2011. The history of quality—overview. http://asq.org/learn-about-quality/history-of-quality/overview/overview.html (accessed June 17, 2011).
Bagian, J. P., C. Lee, J. Gosbee, J. DeRosier, E. Stalhandske, N. Eldridge, R. Williams, and M. Burkhardt. 2001. Developing and deploying a patient safety program in a large health care delivery system: You can't fix what you don't know about. Joint Commission Journal on Quality Improvement 27(10):522-532.
Chow-Chua, C., M. Goh, and T. B. Wan. 2003. Does ISO 9000 certification improve business performance. International Journal of Quality & Reliability Management 20(8):936-953.
Classen, D. C., and D. W. Bates. 2011. Finding the meaning in meaningful use. New England Journal of Medicine 365(9):855-858.
Classen, D. C., R. Resar, F. Griffin, F. Federico, T. Frankel, N. Kimmel, J. C. Whittington, A. Frankel, A. Seger, and B. C. James. 2011. "Global trigger tool" shows that adverse events in hospitals may be ten times greater than previously measured. Health Affairs 30(4):581-589.
Cohen, L. 1979. Innovation and atomic energy: Nuclear power regulation, 1966-present. Law and Contemporary Problems 43(1):67-97.
Cullen, D. J., D. W. Bates, S. D. Small, J. B. Cooper, A. R. Nemeskal, and L. L. Leape. 1995. The incident reporting system does not detect adverse drug events: A problem for quality improvement. Joint Commission Journal on Quality Improvement 21(10):541-548.
Dahiya, S., R. K. Khar, and A. Chhikara. 2009. Opportunities, challenges and benefits of using HACCP as a quality risk management tool in the pharmaceutical industry. Quality Assurance Journal 12(2):95-104.
DeRosier, J., E. Stalhandske, J. P. Bagian, and T. Nudell. 2002. Using health care failure mode and effect analysis: The VA National Center for Patient Safety's prospective risk analysis system. Joint Commission Journal on Quality Improvement 28(5):248-267, 209.
FDA (Food and Drug Administration). 2009. The quality system regulation. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/PostmarketRequirements/QualitySystemsRegulations/MedicalDeviceQualitySystemsManual/ucm122391.htm (accessed June 1, 2011).
Gardner, R. M., J. M. Overhage, E. B. Steen, B. S. Munger, J. H. Holmes, J. J. Williamson, D. E. Detmer, and the AMIA Board of Directors. 2009. Core content for the subspecialty of clinical informatics. Journal of the American Medical Informatics Association 16(2):153-157.
Gastineau, D. A. 2004. Will regulation be the death of cell therapy in the United States? Bone Marrow Transplant 33(8):777-780.
Glasgow, J., J. Scott-Caziewell, and P. Kaboli. 2010. Guiding inpatient quality improvement: A systematic review of lean and six sigma. Joint Commission Journal on Quality and Patient Safety 36(12):531-532.
Goodman, K. W., E. S. Berner, M. A. Dente, B. Kaplan, R. Koppel, D. Rucker, D. Z. Sands, P. Winkelstein, and the AMIA Board of Directors. 2011. Challenges in ethics, safety, best practices, and oversight regarding HIT vendors, their customers, and patients: A report of an AMIA special task force. Journal of the American Medical Informatics Association 18(1):77-81.
Grabowski, H. G., and J. M. Vernon. 1977. Consumer protection regulation in ethical drugs. American Economic Review 67(1):359-364.

Hauptman, O., and E. B. Roberts. 1987. FDA regulation of product risk and its impact upon young biomedical firms. Journal of Product Innovation Management 4(2):138-148.
HHS (Department of Health and Human Services). 2010. Health information technology: Initial set of standards, implementation specifications, and certification criteria for electronic health record technology; final rule. Federal Register 75(144):44590-44654.
IOM (Institute of Medicine). 2011 (unpublished). Vendor responses—summary. Washington, DC: IOM.
ISO (International Organization for Standardization). 2011. Quality management principles. http://www.iso.org/iso/iso_catalogue/management_and_leadership_standards/quality_management/qmp.htm (accessed May 26, 2011).
Johansen, I., M. Bruun-Rasmussen, K. Bourquard, E. Poiseau, M. Zoric, C. Parisot, and M. Onken. 2011. A quality management system for interoperability testing. Paper presented at the 23rd International Conference of the European Federation for Medical Informatics, Oslo, Norway, August 28-31, 2011.
The Joint Commission. 2011. Sentinel events. Chicago: The Joint Commission.
Kim, D. U. 2002. The quest for quality blood banking program in the new millennium the American way. International Journal of Hematology 76(Suppl 2):258-262.
Laflamme, F. M., W. E. Pietraszek, and N. V. Rajadhyax. 2010. Reforming hospitals with IT investment. McKinsey Quarterly 20:27-33.
Leape, L. L. 2002. Reporting of adverse events. New England Journal of Medicine 347(20):1633-1638.
Marcus, A. A. 1988. Implementing induced innovations: A comparison of rule-bound and autonomous approaches. Academy of Management Journal 31(2):235-256.
NASA (National Aeronautics and Space Administration). 2011a. Confidentiality and incentives to report. http://asrs.arc.nasa.gov/overview/confidentiality.html (accessed April 18, 2011).
NASA. 2011b. Immunity policy. http://asrs.arc.nasa.gov/overview/immunity.html (accessed April 18, 2011).
National Academy for State Health Policy. 2010. Patient safety toolbox. http://www.nashp.org/pst-welcome (accessed June 24, 2011).
NIST (National Institute of Standards and Technology). 2006. Guide to integrating forensic techniques into incident response. Gaithersburg, MD: NIST.
NIST. 2010. Computerized common industry format template for electronic health record usability testing. Gaithersburg, MD: NIST.
NQF (National Quality Forum). 2011. NQF: Governance and leadership. http://qualityforum.org/About_NQF/Governance_and_Leadership.aspx (accessed June 24, 2011).
ONC (Office of the National Coordinator for Health Information Technology). 2011. Health IT workforce development program facts at a glance. http://healthit.hhs.gov/portal/server.pt?open=512&objID=1432&mode=2 (accessed June 24, 2011).
Quality and Safety Education for Nurses. 2011. Quality and safety competencies. http://www.qsen.org/competencies.php (accessed July 19, 2011).
Schneider, P. 1996. FDA & clinical software vendors: A line in the sand? Healthcare Informatics 13(6):100-102, 104, 106.
Shuren, J. 2010. Statement to IOM Committee on Patient Safety and Health Information Technology. Statement read at the Workshop of the IOM Committee on Patient Safety and Health Information Technology, Washington, DC.
Sittig, D. F., and D. C. Classen. 2010. Safe electronic health record use requires a comprehensive monitoring and evaluation framework. Journal of the American Medical Association 303(5):450-451.
Sittig, D. F., and H. Singh. 2009. Eight rights of safe electronic health record use. Journal of the American Medical Association 302(10):1111-1113.

U.S. Congress, Senate Subcommittee on Labor, Health and Human Services, and Education. 2000. Testimony of Leape, L.
VanRooyen, M. J., J. G. Grabowski, A. J. Ghidorzi, C. Dey, and G. R. Strange. 1999. The perceived effectiveness of total quality management as a tool for quality improvement in emergency medicine. Academic Emergency Medicine 6(8):811-816.
Veterans Affairs National Center for Patient Safety. 2011. FAQ: How do you define an intentional unsafe act? http://www.patientsafety.gov/FAQ.html (accessed August 9, 2011).
Walshe, K., and S. M. Shortell. 2004. Social regulation of healthcare organizations in the United States: Developing a framework for evaluation. Health Services Management Research 17(2):79-99.
Weeda, D. F., and N. F. O'Flaherty. 1998. Food and Drug Administration regulation of blood bank software: The new regulatory landscape for blood establishments and their vendors. Transfusion 38(1):86-89.
WHO (World Health Organization). 2005. WHO draft guidelines for adverse event reporting and learning systems: From information to action. Geneva: World Health Organization Press.