1
Introduction

Toxicity testing is approaching a pivotal point where it is poised to take advantage of the revolution in biology and biotechnology. The current system is the product of an approach that has addressed advances in science by incrementally expanding test protocols or by adding new tests without evaluating the testing system in light of overall risk-assessment and risk-management needs. That approach has led to a system that is somewhat cumbersome with respect to the cost of testing, the use of laboratory animals, and the time needed to generate and review data. In combination with varied statutory requirements for testing, it has also resulted in a system in which there are substantial differences in chemical testing, with many chemicals not being tested at all despite potential human exposure to them. Furthermore, the data that are generated might not be ideal for answering questions regarding risk to human health. Accordingly, the U.S. Environmental Protection Agency (EPA) recognized that the time had come for an innovative approach to toxicity testing and asked the National Research Council (NRC) to develop a long-range vision and strategy for toxicity testing. In response to EPA’s request, the NRC convened the Committee on Toxicity Testing and Assessment of Environmental Agents, which prepared this report.

HISTORICAL PERSPECTIVE OF REGULATORY TOXICOLOGY

To gain an appreciation of current toxicity-testing strategies, it is helpful to examine how they evolved, why differences arose among and within federal agencies, and who contributed to the process. The current strategies have their foundation in the response to a tragedy that occurred in 1937 (Gad and Chengelis 2001). At that time, few laws prevented the sale of unsafe food or drugs. A labeling law prohibited the sale of “misbranded” food or drugs, but the law could be enforced only on the basis of criminal charges that arose after sale of a product. During fall 1937, the Massengill Company marketed a drug labeled “Elixir of Sulfanilamide,” which was a solution of sulfanilamide in diethylene glycol. From the recognition of the drug’s toxicity to its removal from the market by the Food and Drug Administration (FDA), it had caused at least 73 deaths. The tragedy revealed the inadequacy of the existing law. FDA was able to act only because the drug had been mislabeled; at that time, an elixir was defined as a product that contained alcohol. If the company had labeled the drug “Solution of Sulfanilamide,” FDA would not have been able to act.

As a result of the sulfanilamide tragedy, Congress passed the Food, Drug, and Cosmetic Act (FDCA) of 1938, which required evidence (that is, from toxicity studies in animals) of drug safety before marketing (Gad and Chengelis 2001). Major amendments to the FDCA in 1962, known as the Kefauver-Harris Amendments, strengthened the original law and required proof not only of drug safety but also of drug efficacy. More extensive clinical trials were required, and FDA had to indicate affirmative approval of a drug

before it could be marketed. The approval process thus changed from one based on premarket notification to one based on premarket approval.

The FDCA also dealt with food-safety issues and was amended in 1958 to require manufacturers to demonstrate the safety of food additives (Frankos and Rodricks 2001). FDA was given authority to develop toxicity studies for assessing food additives and to specify criteria to be used in assessing safety. As a result of the need for scientific safety assessments, toxicologists in FDA, academe, and industry developed the first modern protocols in toxicology during the 1950s and 1960s (see, for example, FDA 1959). Those protocols helped to shape the toxicity-testing programs that are in use today.

Differences in testing strategies between drugs and foods arose in FDA because of differences in characteristics and regulatory requirements (Frankos and Rodricks 2001). Drugs are chemicals with intended biologic effects in people, whereas food additives—such as antioxidants, emulsifiers, and stabilizers—have intended physical and chemical effects in food. Thus, a drug manufacturer must demonstrate the desired biologic effect, and a food-additive manufacturer must demonstrate the absence of measurable biologic effect. Regarding regulatory requirements, the FDCA requires clinical trials in humans for drug approval; there is no such requirement for food additives. FDA considers risks and benefits when approving a drug but considers only safety when approving a food additive. Thus, differences in approaches to food and drug testing have evolved.

The public has long been concerned about the safety of intentional food additives and drugs. By the late 1960s, concern about exposure to chemical contaminants in the environment was also growing.
In 1970, EPA was established “to protect human health and to safeguard the natural environment—air, water, and land—upon which life depends” (EPA 2005a). Over the years, EPA has developed toxicity-testing strategies to evaluate pesticides and

industrial chemicals that may eventually appear as food residues or as environmental contaminants. The 1947 Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) required the registration of pesticides before marketing in interstate or foreign commerce (Conner et al. 1987). The statute was first administered by the U.S. Department of Agriculture, but authority was transferred to EPA when it was created. FIFRA has been amended several times, but the 1972 amendments transformed FIFRA and gave EPA new powers, such as classification of pesticides and regulation of pesticide residues on raw agricultural commodities. Although registration remained the centerpiece of the act, one amendment required proof that the pesticide did not cause “unreasonable adverse effects” on humans or the environment (Conner et al. 1987). That amendment was largely responsible for the testing strategy that eventually emerged in EPA.

The other critical pieces of legislation that helped to shape the current toxicity-testing strategy for pesticides were amendments to the FDCA. In 1954, the Miller Amendment “required that a maximum acceptable level (tolerance) be established for pesticide residues in foods and animal feed” (Conner et al. 1987). The Food Quality Protection Act of 1996 amended the FDCA (and FIFRA) and “fundamentally changed the way EPA regulates pesticides” (EPA 2005b). Some of the most important changes were the establishment of a risk-based standard for pesticide residues on all foods; the requirement that EPA “consider all non-occupational sources of exposure…and exposure to other pesticides with a common mechanism of toxicity when setting tolerances”; the requirement that EPA set tolerances that would ensure safety for infants and children; and the requirement that EPA develop and implement an endocrine-disruptor screening program (EPA 2006).
FIFRA, the FDCA, and the amendments to them are responsible for the current toxicity-testing strategy for pesticides, which typically requires extensive testing before a pesticide can be marketed. The strategy for evaluating industrial chemicals is different. The Toxic Substances Control Act (TSCA) was passed in 1976 to address control of new and existing industrial chemicals not regulated by other statutes (Kraska 2001). Although manufacturers are required to submit premanufacturing notices—which include such information as chemical identity, intended use, manufacturing process, and expected exposure—no specific toxicity testing is required.[1] Instead, the strategy for evaluating industrial chemicals relies heavily on the use of structure-activity relationships.

FDA’s drug and food-additive testing programs and EPA’s pesticide testing program represent strategies designed to support safety evaluations of chemicals before specified uses. Other testing can occur in response to regulatory concerns regarding environmental agents. For example, EPA sponsors some toxicity testing, epidemiologic studies, and test development to support its regulatory mandates, such as those under the Safe Drinking Water Act. The Health Effects Institute, a joint EPA- and industry-sponsored organization, funds toxicity studies to inform regulatory decisions on air pollutants. As regulatory concerns arise, industry may initiate testing to evaluate further dose-response relationships of important environmental contaminants. The National Toxicology Program (NTP)—which was created in 1978 to “coordinate toxicology testing programs within the federal government[,]…strengthen the science base in toxicology[,]…develop and validate improved testing methods[,]…[and] provide information about potentially toxic chemicals to health, regulatory, and research agencies, scientific and medical communities, and the public” (NTP 2005)—performs toxicity tests on agents of public-health concern. For example, its chronic bioassay has become the gold standard for carcinogenicity testing.
[1] For more information on the extent of chemical testing under TSCA, see the committee’s interim report (NRC 2006).

The NTP has been instrumental in the acceptance and integration of new tests or approaches in toxicity-testing strategies. It has initiated development

of medium- and high-throughput tests to address the ever-growing number of newly introduced chemicals and the existing chemicals and breakdown products that have not been tested.[2] Tests proposed by NTP and others that are alternatives to standard protocols are formally reviewed by an interagency authority, the Interagency Coordinating Committee on the Validation of Alternative Methods, to ensure that they have value in regulatory decision-making.

[2] The NTP’s general approach as described in its Roadmap for the Future is reviewed in the committee’s first report (NRC 2006).

Another organization that has influenced toxicity-testing programs in the United States is the Organisation for Economic Co-operation and Development (OECD), which “provides a setting where governments can compare policy experiences, seek answers to common problems, identify good practice and co-ordinate domestic and international policies” (OECD 2006, p. 7). OECD’s broad interests include health and the environment. OECD has been instrumental in developing internationally accepted, or harmonized, toxicity-testing guidelines. The goal of the harmonization program is to reduce the repetition of similar tests conducted by member countries to assess the toxicity of a given chemical. Other OECD programs that have influenced toxicity-testing approaches or strategies include those to define the tests required for a minimal dataset for a chemical and to determine the approach to screening endocrine disruptors.

RISK ASSESSMENT

The toxicity data generated by the strategies and programs described above are most often used in a process called risk assessment to evaluate the risk associated with exposure to an agent. The 1983 NRC report, Risk Assessment in the Federal Government: Managing the Process, which presented a systematic and organized paradigm, set a standard for risk assessment. The report outlined a three-phase process in which scientific data are moved from the laboratory or the field into the risk-assessment process and then on to decision-makers to determine regulatory options.

The research phase is marked by data generation and method development, including basic research and routine testing. For any particular risk assessment, the data used may have many sources, including studies of laboratory animals, clinical tests, epidemiologic studies, and studies of animal and human cells in culture. The data may be reported in peer-reviewed publications, in the general scientific literature and government reports, and in unpublished reports of specific tests undertaken for an assessment.

In the risk-assessment phase, selected data are interpreted and used to evaluate a potential risk to human health and the environment. The 1983 NRC report described this phase in terms of four components: hazard identification (analysis of the available data to describe qualitatively the nature of the response to toxic chemicals, such as tumors, birth defects, and neurologic effects); dose-response analysis (quantification of the relationship between exposure and the response observed in studies used to identify hazard); exposure assessment (quantification of expected exposure to the agent among the general population and differently exposed groups); and risk characterization (synthesis and integration of the analyses in the three other components to estimate the likelihood and scope of risk among the general, sensitive, and differently exposed populations). Although risk assessment is based on scientific data, the process is characterized by gaps in data and fundamental scientific knowledge, and it relies on models, extrapolation, and other inference methods.
The process turns to science policies—choice of mathematical models, safety factors, and assumptions—to fill in data and knowledge gaps. Science policies used in risk assessment are distinct from the regulatory policies developed for risk-management decisions described below.

Risk management moves the original data—now synthesized and integrated in the form of a risk characterization—to those responsible for making regulatory decisions. The decision-makers consider the products of the risk assessment with data from other fields (for example, economics), societal and political issues, and interagency and international factors to decide whether regulation is needed and, if so, its nature and scope.

The 1983 NRC report and later reports (NRC 1993, 1996; EPA 1998) recognized a planning and scoping stage in which a host of scientific and societal issues are considered in advance of research and risk assessment. That activity includes examining the expected scope of the problem, available data and expected data needs, cost and time requirements, legal considerations, and community-related issues. The present report identifies some of those considerations and other public-health considerations as “risk contexts” and underlines their important role in decisions related to toxicity testing (see discussion under “The Committee’s Second Task and Approach” in this chapter).

Reviews and critiques of the 1983 NRC paradigm have for the most part focused on the risk-assessment module and its four components. A review of the literature shows considerably less attention to the research module and the risk-management module. The present report focuses on the research module, in which testing is conducted; however, it ventures into some risk-assessment considerations.

THE COMMITTEE’S FIRST TASK AND KEY POINTS FROM ITS INTERIM REPORT

Anticipating the impact of the many scientific advances and the changing needs of the assessment process, EPA recognized the need to review existing strategies and develop a long-range vision for toxicity testing and assessment. The committee that was

formed in response to EPA’s request and convened in March 2004 includes experts in developmental toxicology, reproductive toxicology, neurotoxicology, immunology, pediatrics and neonatology, epidemiology, biostatistics, in vitro methods and models, molecular biology, pharmacology, physiologically based pharmacokinetic and pharmacodynamic models, genetics, toxicogenomics, cancer hazard assessment, and risk assessment.

As a first task, the committee was asked to review several relevant reports by EPA and others and to comment on aspects pertaining to new developments in toxicity testing and proposals to modify current approaches. Accordingly, the committee reviewed the 2002 EPA evaluation of its reference-dose and reference-concentration process (EPA 2002), the International Life Sciences Institute Health and Environmental Sciences Institute draft reports on a tiered toxicity-testing approach for agricultural-chemical safety evaluations (ILSI-HESI 2004a,b,c), the 2004 European Union report on the REACH (Registration, Evaluation and Authorisation of Chemicals) program, and the 2004 report on the near-term and long-term goals of NTP (NTP 2004). The committee’s interim report, released in December 2005, fulfilled the first part of the study.

As discussed in its interim report (NRC 2006), the committee’s review of current toxicity-testing strategies revealed a system that had reached a turning point. Agencies typically have responded to scientific advances and emerging challenges by simply altering individual tests or adding tests to existing regimens.
That patchwork approach has not provided a fully satisfactory solution to the fundamental problem—the difficulty in meeting four objectives simultaneously:

- Depth, providing the most accurate, relevant information possible for hazard identification and dose-response assessment.
- Breadth, providing data on the broadest possible universe of chemicals, end points, and life stages.
- Animal welfare, causing the least animal suffering possible and using the fewest animals possible.
- Conservation, minimizing the expenditure of money and time on testing and regulatory review.

The committee identified several recurring themes and questions in the various reports that it was asked to review. The recurring themes included the following:

- The inherent tension between breadth, depth, animal welfare, and conservation and the challenge to address one of these issues without worsening another.
- The importance of distinguishing between testing protocols and testing strategies while considering modifications of current testing practices.
- The possible dangers in making tests so focused that they evaluate only one end point in one species and thus provide no overlap to verify results.
- The need for both chemical-specific tailored testing to enhance understanding of a particular chemical’s mode of action and uniform testing protocols and strategies to enhance comparability.
- The importance of recognizing that toxicity testing for regulatory purposes should be conducted primarily to serve the needs of risk management.

The recurring questions that arose during the committee’s review included the following:

- Which environmental agents should be tested?
- How should priorities for testing chemicals be set?
- What strategies for toxicity testing are the most useful and effective?
- How can toxicity testing generate data that are more useful for human health risk assessment?
- How can toxicity testing be applied to a broader universe of chemicals, life stages, and health effects?
- How can environmental agents be screened with minimal use of animals and efficient expenditure of time and other resources?
- How should tests and testing strategies be evaluated?

In considering those questions, the committee came to several important conclusions. First, the intensity and depth of testing should be based on practical needs, including the use of the chemical, the likelihood of human exposure, and the scientific questions that testing must answer to support a reasonable science-policy decision. Fundamentally, the design and scope of a toxicity-testing approach need to reflect risk-management needs. Thus, the goal is to focus resources on the evaluation of the more sensitive adverse effects of exposures of greatest concern rather than on full characterization of all adverse effects irrespective of relevance for risk-assessment and risk-management needs.

Second, priority-setting should be a component of any testing strategy that is designed to address a large number of chemicals. Chemicals to which people are more likely to be exposed or to which some segment of the population might receive relatively high exposures should undergo more in-depth testing, and this concept is embedded in several existing and proposed strategies.

Third, there are major gaps in current toxicity-testing approaches. The importance of the gaps is a matter of debate and depends on whether effects of public-health importance are being missed by current approaches. Testing every chemical for every possible health effect over all life stages is impractical; however, the emerging technologies hold great promise for screening chemicals more rapidly.

Fourth, testing strategies will need to be evaluated with respect to the value of information that they provide in light of the four objectives discussed above—depth, breadth, animal welfare, and conservation. In evaluating new tests, there remains the difficult question of what should serve as the gold standard for performance.
Simply comparing the outcomes of new tests with the outcomes of currently used tests might not be the best approach; determining whether it is will depend on the reliability and relevance of the current tests.

THE COMMITTEE’S SECOND TASK AND APPROACH

For the second part of the study, the committee’s statement of task was to build on the work presented in the first report and develop a long-range vision and strategic plan to advance the practices of toxicity testing and human health assessment of environmental contaminants. The committee was directed to consider the following specific issues:

- Improvements in the assessment of key exposures (for example, potential susceptibility of specific life stages and groups in the general population) and toxicity outcomes (for example, endocrine disruption and developmental neurotoxicity).
- Incorporation of state-of-the-science testing and assessment procedures, methods, and approaches, such as genomics, proteomics, transgenics, bioinformatics, and pharmacokinetics.
- Methods for increasing efficiency in experimental design and reducing the use of laboratory animals.
- Potential uses and limitations of new or alternative testing methods.
- Application of emerging computational and molecular techniques in risk assessment. Issues to be considered included the data necessary to validate the techniques, the limitations of the techniques, the use of such methods to identify plausible mechanisms or pathways of toxicity, and the use of mechanistic insights in risk assessments or testing decisions.

To prepare its final report, the committee held six meetings from April 2005 to June 2006. Three of the meetings included public sessions during which the committee heard presentations by staff of several EPA offices, including the Office of Prevention, Pesticides and Toxic Substances, the Office of Children’s Health Protection, the Office of Water, the Office of Solid Waste and Emergency Response, and the Office of Air and Radiation. The

committee also heard presentations by persons in other government agencies, industry, and academe.

To develop its long-range vision, the committee identified a variety of scenarios for which toxicity-testing information would be needed to make a decision. Some common scenarios, defined by the committee as “risk contexts” for which toxicity testing is used to generate information needed for decision-making, are outlined below.

Evaluation of new environmental agents. This category covers chemicals that have the potential to appear as environmental contaminants. It includes pesticides; industrial chemicals; chemicals that are destined for use in, for example, consumer products; and chemicals that might be emitted by the combustion of new fuels or new manufacturing processes. It would also include their breakdown products. Because of the large number of new agents that are introduced each year, a mechanism is needed to test the agents rapidly for potential toxicity. Questions have been raised about the safety of and risk posed by new categories of potential environmental agents, such as those introduced through nanotechnology and biotechnology. This category would also include those substances.

Evaluation of existing environmental agents. Many substances already in the environment have not been evaluated for toxicity. In some cases, a need to evaluate specific existing environmental agents may arise from the discovery of a new source or exposure pathway or from a better understanding of human exposure on the basis of, for example, biomonitoring data. In other cases, scrutiny may be necessary when toxicity is newly recognized, such as toxicity in a worker population. In addition, the backlog of untested chemicals in commerce requires assessment to ensure that the chemicals in use today do not pose unacceptable risks at current exposures.
Thus, toxicity testing for existing environmental agents requires a variety of testing approaches, from basic screening of a huge set of chemical agents to use of specific data generated by new exposure or health-effects information.

Evaluation of a site. In many areas, soil or water has been contaminated by, for example, former industrial, military, or power-generation activities. If a new use, such as the building of a school or office building, is proposed for such a site, a primary goal would be to protect the health of future users of the site. Other goals could include evaluating the risks to neighbors posed by such a site or determining the degree and type of cleanup needed. Sites that are in use also might need evaluation, such as sites of industrial workplaces, schools, or office buildings. Those evaluations almost always involve concerns about exposures to site-specific chemical mixtures.

Evaluation of potential environmental contributors to a specific disease. Many diseases are suspected of having an etiology that is, at least in part, environmental. A higher prevalence of a disease in one geographic area than in another might require decision-makers to consider the role of environmental agents in the disparity. Understanding the role of environmental agents in a prevalent disease can also help to target actions that need to be taken. For example, asthma, which has seen an increase in prevalence over the last 2 decades in Western societies, is now known to be induced or aggravated by air pollutants. That understanding has allowed decision-makers to take action against some pollutants, but other causes or triggers of asthma could yet be discovered.

Evaluation of the relative risks posed by environmental agents. A risk manager might need to choose between different manufacturing processes or different solvents. Consumers might wish to distinguish between products on the basis of their potential risks to children.
A proponent of a new chemical or process might wish to show that it has a lower risk in some ways than the current chemical or process. Such decisions might require less complex risk characterizations if they focus on the possible outcomes or exposures to be compared rather than requiring an in-depth understanding of the risks associated with each possible choice. This scenario emphasizes the need for toxicity-testing information to be directly comparable, standardized, and quantifiable so that such comparisons can be made.

Thus, a primary goal of the committee was to develop a flexible toxicity-testing strategy that would be responsive to the different toxicity-testing needs of the various risk contexts outlined above. Another goal of the committee was to consider the powerful new technologies that have become available and will continue to evolve. For example, bioinformatics, which applies computational approaches to describe and predict biologic function at the molecular level, and systems biology, which is a powerful approach to describing and understanding fundamental mechanisms by which biologic systems operate, have pushed biologic understanding into a new realm. Moreover, genomics, proteomics, and metabolomics offer great potential and are being used to study human disease and to evaluate the safety of pharmaceutical products. Those and other tools are considered to be important in any future toxicity-testing strategy.

ORGANIZATION OF THE REPORT

The committee’s report is organized into six chapters. In Chapter 2, the committee discusses the limitations of the current toxicity-testing system, the design goals for a new system, and the options considered by the committee. An overview of the new long-range vision for toxicity testing of environmental agents is also presented. Each component of the new vision is discussed in greater detail in Chapter 3. Tools and technologies that might be used in the future toxicity-testing paradigm are described in Chapter 4. Implementation of the new vision over the course of several decades is considered in Chapter 5. In Chapter 6, the

OCR for page 18
Toxicity Testing in the 21st Century: A Vision and a Strategy

committee considers the implications of the long-range vision given the current regulatory framework.

REFERENCES

Conner, J.D., L.S. Ebner, C.A. O’Connor, C. Volz, and K.W. Weinstein. 1987. Pesticide Regulation Handbook. New York: Executive Enterprises Publication.

EPA (U.S. Environmental Protection Agency). 1998. Guidelines for Ecological Risk Assessment. EPA/630/R-95/002F. Risk Assessment Forum, U.S. Environmental Protection Agency [online]. Available: oaspub.epa.gov/eims/eimscomm.getfile?p_download_id=36512 [accessed March 29, 2007].

EPA (U.S. Environmental Protection Agency). 2002. A Review of the Reference Dose and Reference Concentration Processes. Final Report. EPA/630/P-02/002F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/iris/RFD_FINAL%5B1%5D.pdf [accessed March 11, 2005].

EPA (U.S. Environmental Protection Agency). 2005a. EPA Commemorates its History and Celebrates its 35th Anniversary. U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/history/ [accessed February 22, 2006].

EPA (U.S. Environmental Protection Agency). 2005b. Food Quality Protection Act (FQPA) of 1996. Office of Pesticides, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/opppsps1/fqpa/ [accessed February 23, 2006].

EPA (U.S. Environmental Protection Agency). 2006. Highlights of the Food Quality Protection Act of 1996. Office of Pesticides, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/oppfead1/fqpa/fqpahigh.htm [accessed February 23, 2007].

FDA (Food and Drug Administration). 1959. Appraisal of the Safety of Chemicals in Foods, Drugs and Cosmetics. Staff of the Division of Pharmacology, Food and Drug Administration, Department of Health, Education and Welfare. Austin, TX: The Association of Food and Drug Officials of the United States.

Frankos, V.H., and J.V. Rodricks. 2001. Food additives and nutrition supplements. Pp. 133-166 in Regulatory Toxicology, 2nd Ed., S.C. Gad, ed. London: Taylor and Francis.

Gad, S.C., and C.P. Chengelis. 2001. Human pharmaceutical products. Pp. 9-69 in Regulatory Toxicology, 2nd Ed., S.C. Gad, ed. London: Taylor and Francis.

ILSI HESI (International Life Sciences Institute Health and Environmental Sciences Institute). 2004a. Systemic Toxicity White Paper. Systemic Toxicity Task Force, Technical Committee on Agricultural Chemical Safety Assessment, ILSI Health and Environmental Sciences Institute, Washington, DC. November 2, 2004.

ILSI HESI (International Life Sciences Institute Health and Environmental Sciences Institute). 2004b. Life Stages White Paper. Life Stages Task Force, Technical Committee on Agricultural Chemical Safety Assessment, ILSI Health and Environmental Sciences Institute, Washington, DC. November 2, 2004.

ILSI HESI (International Life Sciences Institute Health and Environmental Sciences Institute). 2004c. The Acquisition and Application of Absorption, Distribution, Metabolism, and Excretion (ADME) Data in Agricultural Chemical Safety Assessments. ADME Task Force, Technical Committee on Agricultural Chemical Safety Assessment, ILSI Health and Environmental Sciences Institute, Washington, DC. November 2, 2004.

Kraska, R.C. 2001. Industrial chemicals: Regulation of new and existing chemicals (The Toxic Substances Control Act and similar worldwide chemical control laws). Pp. 244-276 in Regulatory Toxicology, 2nd Ed., S.C. Gad, ed. London: Taylor and Francis.

NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.

NRC (National Research Council). 1993. Issues in Risk Assessment. Washington, DC: National Academy Press.

NRC (National Research Council). 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.

NRC (National Research Council). 2006. Toxicity Testing for Assessment of Environmental Agents: Interim Report. Washington, DC: The National Academies Press.

NTP (National Toxicology Program). 2004. The NTP Vision for the 21st Century. National Toxicology Program, National Institute of Environmental Health Sciences, Research Triangle Park, NC [online]. Available: http://ntp-server.niehs.nih.gov/ntp/main_pages/NTPVision.pdf [accessed March 11, 2005].
NTP (National Toxicology Program). 2005. History of NTP. National Toxicology Program [online]. Available: http://ntp-server.niehs.nih.gov/ [accessed February 24, 2007].

OECD (Organisation for Economic Co-operation and Development). 2006. The OECD: Organisation for Economic Co-operation and Development [online]. Available: http://www.oecd.org/dataoecd/15/33/34011915.pdf [accessed March 7, 2007].