The Development of DRIs 1994–2004: Lessons Learned and New Challenges - Workshop Summary

2 Conceptual Framework for DRI Development: Session 1

Prior to the workshop, Session 1 participants were asked to consider several general questions (shown in Box 2-1) in preparing their presentations. Session 1 addressed both the conceptual underpinnings of the DRIs and several overarching "road map" issues, as described in the Workshop Introduction (see Chapter 1). The session was moderated by Dr. Stephanie Atkinson of McMaster University.

Dr. Robert Russell of Tufts University discussed the pros and cons of the current framework for Dietary Reference Intake (DRI) development. Two case studies were then presented. Dr. Paula Trumbo, a former study director for DRI micronutrient, macronutrient, fiber, and water and electrolyte study committees who is now at the U.S. Food and Drug Administration (FDA), explored considerations in applying the DRI framework to chronic disease endpoints. Dr. Allison Yates, who served as director of the Food and Nutrition Board (FNB) from 1994 through 2003 and is now director of the Agricultural Research Service Human Nutrition Center at the U.S. Department of Agriculture (USDA), discussed applying the DRI framework to non-chronic disease endpoints. Perspectives on the DRIs were offered by Dr. George Beaton and Dr. Janet King. Dr. Beaton is professor emeritus at the University of Toronto and has served as a consultant to the Institute of Medicine (IOM). Dr. King is senior scientist at the Children's Hospital Oakland Research Institute and is a former chair of the FNB. Dr. Alice Lichtenstein of Tufts University examined the issues in applying systematic evidence-based review (SEBR) approaches to DRI development. Dr. Elizabeth Yetley, a Senior Nutrition Research Scientist with the Office of Dietary Supplements at the National Institutes of Health, discussed whether risk assessment is a relevant organizing structure for the DRI development process.

Designated discussants followed Drs. Russell, Trumbo, and Yates, and a designated discussant engaged Drs. Lichtenstein and Yetley. In each case, the discussions were followed by input from the workshop audience. The session concluded with a panel discussion, at which point the session was again opened to the audience for comment.

1 This chapter is an edited version of remarks presented by Drs. Russell, Trumbo, Yates, Beaton, King, Lichtenstein, and Yetley at the workshop. Discussions are composites of input from various panel members, discussants, presenters, moderators, and audience members.

BOX 2-1 General Questions for Session 1 Participants

Conceptual Underpinnings

How has the Dietary Reference Intake (DRI) framework "held up" over time?
What is the general purpose of the DRIs? Is it still for planning and assessing?
Do the Estimated Average Requirements (EARs), Recommended Dietary Allowances (RDAs), and Tolerable Upper Intake Levels (ULs) continue to be desirable values?
Is the Adequate Intake (AI) useful and needed?
Does the Acceptable Macronutrient Distribution Range (AMDR) pave the way to considering macronutrients using a different approach?
Should we continue to include chronic disease risk as an endpoint option?
Should we explore multiple endpoints for the same age/gender group?
Should the focus of the DRIs continue to expand beyond classic nutrients?
Is a modified DRI approach needed to address macronutrients and nonessential nutrient substances?

Overarching Road Map Issues

What is the role of systematic evidence-based reviews (SEBRs) in DRI development?
Can an organizing scheme for DRI development be specified?
CURRENT FRAMEWORK FOR DRI DEVELOPMENT: WHAT ARE THE PROS AND CONS?

Presenter: Robert M. Russell

In 1994, two major changes were made to the development of reference values. One was that the values could be based on an endpoint associated with the risk of chronic disease. The second was that reference values in addition to the Recommended Dietary Allowance (RDA) would be provided, to address the increasingly broad applications of reference values. These major changes to the DRI development process have both pros and cons.

Reference Values Expressed: EARs, RDAs, and AIs

The Estimated Average Requirement (EAR) is the level of intake at which the risk of inadequacy would be 50 percent. The RDA is two standard deviations (SDs) above the EAR, covering the needs of about 97 to 98 percent of the population. The Adequate Intake (AI) was not originally envisioned as a reference value; it arose when a lack of dose–response data precluded study committees from determining the level of intake at which the risk of inadequacy would be 50 percent, a problem often exacerbated by a lack of longitudinal studies. As a result, AIs were generally set when an EAR could not be established.2 These nutrients include calcium, vitamin D, chloride, chromium, fluoride, potassium, manganese, sodium, and vitamin K.

For calcium, an AI was issued because of uncertainty about the methods used in older balance studies, a lack of concordance between observational and experimental data (i.e., the mean intakes of the population are lower than the values needed to achieve calcium retention), and a lack of longitudinal dose–response data to verify an association between the amounts needed for calcium retention and bone fracture or bone loss.
For vitamin D, an AI was developed because the study committee did not know how much dietary vitamin D is needed to maintain normal calcium metabolism and bone health, primarily because vitamin D is a complicated hormone: Exposure to sunlight, skin pigmentation, the latitude at which one lives, and the amount of clothing one wears all affect the amount of vitamin D needed. Furthermore, there were uncertainties about the accuracy of the vitamin D food composition database and about levels of food fortification.

2 An exception is the reference value for young infants, for whom AIs were specifically determined, as opposed to developed when an EAR could not be developed. The AI for young infants has generally been the average intake by full-term infants born to healthy, well-nourished mothers and exclusively fed human milk. The only exception to this criterion is vitamin D, which occurs in low concentrations in human milk (IOM, 2006).

When a chronic disease endpoint was selected as the basis for a reference value—which occurred for five nutrients—all of the reference values were AIs rather than EARs. Calcium and vitamin D AIs were set primarily on the basis of experimental data on bone density and fracture, fluoride on dental caries, potassium on hypertension, and fiber on coronary artery disease.

The selection of an endpoint for EARs presented some difficulties, and a variety of endpoints were used. For example, maximal glutathione peroxidase activity was the endpoint used for selenium. A factorial approach3 was used for vitamin A, zinc, and iron. The neutrophil ascorbate concentration that would be near maximal with minimal urinary loss was used to determine the vitamin C EAR. Physiological function was used for vitamin E (the level that would inhibit peroxide-induced hemolysis) and vitamin B12 (maintaining a normal hematological status).

The study committees encountered numerous data gaps. The prime one was the lack of defined health-related endpoints associated with nutrient status and a lack of biomarkers to define chronic disease. Age-specific data were lacking, so extrapolation was used. There was also a lack of information on the variability of responses (needed to calculate RDAs). As already mentioned, another data gap was the lack of dose–response data (ending up with AIs), combined with a lack of long-term studies. Adding to this list are the lack of knowledge about which systems show dysfunction with excess intake (as seen with bone) and the lack of uniform rules on how to apply uncertainty factors.

Another problem has been extrapolation. In the case of vitamin A, the AI for 0- to 6-month-olds is 400 µg retinol activity equivalents (RAEs) per day.
The study committee extrapolated up for the 7- to 12-month-olds to get an AI of 500 µg RAE/day, which is very close to the tolerable upper intake level (UL) of 600 µg RAE/day (based on bulging fontanels). Using these numbers, more than half the infants (4–5 months old) in the USDA's Special Supplemental Nutrition Program for Women, Infants and Children (WIC) are eating above the UL, yet adverse effects on these infants have not been observed. Another odd observation is the lower requirement for 1- to 3-year-olds (300 µg RAE/day) than for 7- to 12-month-olds (500 µg RAE/day): the AI for 7- to 12-month-olds was extrapolated up from the 0- to 6-month value, while the EAR for 1- to 3-year-olds was extrapolated down from the adult value. The validity of these numbers is therefore questionable.

3 A factorial approach can take several forms but generally derives a total nutrient requirement by summing the individual physiological needs of various functional components (e.g., body maintenance, milk synthesis, skin sloughing).

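The factorial approach described in the footnote can be illustrated with a small numerical sketch. All component values and the absorption fraction below are hypothetical placeholders for illustration, not figures from any DRI report:

```python
# Factorial approach: sum the physiological needs of individual
# components, then divide by an absorption (bioavailability) fraction
# to convert the physiological requirement into a dietary requirement.
# Every number here is a hypothetical placeholder, not a DRI value.

def factorial_requirement(components_ug, absorption_fraction):
    """Dietary requirement = sum of physiological needs / absorption."""
    return sum(components_ug.values()) / absorption_fraction

components = {
    "body_maintenance": 300.0,  # e.g., obligatory daily losses
    "growth": 50.0,             # tissue accretion in a growing child
    "skin_sloughing": 25.0,     # surface losses
}

# 375 ug of physiological need at 75% absorption -> 500 ug/day dietary
print(round(factorial_requirement(components, 0.75)))  # 500
```

The division by an absorption fraction is why factorially derived requirements (e.g., for iron and zinc) are sensitive to assumptions about bioavailability.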
The major challenge in deriving an RDA from an EAR is variance. To establish an RDA, one determines the EAR, assesses the variability, and then calculates the RDA as the EAR plus two SDs. However, the variance is not known for most nutrients, and a coefficient of variation (CV) is assumed instead. A 10 percent CV was assumed for thiamin, riboflavin, niacin, vitamin B6, folate, vitamin B12, vitamin C, vitamin E, selenium, and zinc. The CV is known for some nutrients, such as vitamin A. Although the study committee was initially enthusiastic about using a physiological endpoint (abnormal dark adaptation) for determining an EAR for vitamin A, the pooled data from four studies gave a CV of 40 percent. The study committee therefore decided not to use dark adaptation as the endpoint, and no EAR or RDA was established on this basis. Instead, a higher EAR (625 µg for men and 500 µg for women, compared with 300 µg) was determined using a factorial approach.

Reference Values Expressed: ULs

The UL is the highest level of daily nutrient intake that poses no risk of an adverse effect to almost any individual in the general population. It is not a recommended or desirable level of intake. It is derived by dividing a no-observed-adverse-effect level (NOAEL) or a lowest-observed-adverse-effect level (LOAEL) by an uncertainty factor. A concern is that the choice of uncertainty factor is subjective. The sources of uncertainty that the study committees considered were interindividual variation, extrapolation from animals to humans, short-term versus chronic exposures, use of a LOAEL instead of a NOAEL, small numbers of people studied, and the severity of the effects (the more severe the effect, the higher the uncertainty factor). The example in Box 2-2 illustrates the subjectivity that study committee members face in trying to derive logical and scientifically valid numbers.
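Both derivations just described reduce to one-line formulas: RDA = EAR + 2 SD (so RDA = 1.2 × EAR under the assumed 10 percent CV), and UL = NOAEL (or LOAEL) / uncertainty factor. A minimal sketch, using the published adult-male vitamin C values on the RDA side and the vitamin A teratogenicity figures from Box 2-2 on the UL side (the function names are illustrative, not an official algorithm):

```python
# RDA from an EAR: RDA = EAR + 2 SD. When the SD of requirements is
# unknown, a coefficient of variation (CV = SD/EAR) of 10% is assumed,
# which makes RDA = 1.2 x EAR.
def rda_from_ear(ear, cv=0.10):
    sd = cv * ear
    return ear + 2 * sd

# UL from an observed-effect level: divide a NOAEL (or LOAEL) by an
# uncertainty factor (UF) chosen by the study committee.
def ul_from_effect_level(effect_level, uncertainty_factor):
    return effect_level / uncertainty_factor

# Vitamin C, adult men: EAR 75 mg/day -> RDA 90 mg/day
print(rda_from_ear(75))                 # 90.0
# Vitamin A teratogenicity (Box 2-2): NOAEL 4,500 ug/day, UF 1.5
print(ul_from_effect_level(4500, 1.5))  # 3000.0
```

The arithmetic is trivial; the contested inputs are the EAR's assumed CV and the UL's uncertainty factor, which is where the subjectivity discussed above enters.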
Applicability of the Framework to All Nutrient Substances

The framework did not "fit" well for establishing reference values for macronutrients such as fat. Apart from the essential fatty acids and amino acids, such substances are not essential and have no specific beneficial role. Instead, an Acceptable Macronutrient Distribution Range (AMDR) for fat was determined to be 20–35 percent of calories. Furthermore, a UL was not provided for the effects of saturated fat or trans fat intake on low-density lipoprotein (LDL) cholesterol, because coronary heart disease (CHD) risk increases progressively across the range of intakes. For fiber, an AI was set on the basis of heart disease prevention, as the effect on CHD occurs continuously across the range of intakes. No UL could be determined for fiber, because fiber intake is accompanied by phytate intake, a confounding factor.

BOX 2-2 Vitamin A and Uncertainty Factors

Four adverse effects were considered in setting a tolerable upper intake level (UL) for vitamin A: reduced bone mineral density, liver toxicity, teratogenicity (for women of reproductive age), and bulging fontanels (for infants). In a study of the daily dietary intake of retinol associated with risk for hip fracture in two populations in Sweden and the United States (Melhus et al., 1998), a rise in the risk for hip fracture was found above a vitamin A intake of 1,500 µg/day. However, two other papers were unable to show any effect of vitamin A intake on bone mineral density (Sowers and Wallace, 1990; Ballew et al., 2001). Therefore, the United States decided to use liver toxicity as the critical effect for the general adult population and derived a UL of 3,000 µg/day (twice as high as a UL based on hip fracture would have been). For women of reproductive age, a UL of 3,000 µg/day for teratogenicity was determined, based primarily on a study by Rothman et al. (1995). The United Kingdom (UK) panel decided that the Rothman et al. (1995) paper was biased and did not set any UL for teratogenicity, as it considered the evidence base inadequate. It suggested that intakes greater than 1,500 µg/day may be inappropriate and advised pregnant women not to take vitamin A supplements. The European Union (EU), looking at the same database used by the Institute of Medicine (IOM) study committee and the UK panel, established a UL of 3,000 µg/day, the lowest-observed-adverse-effect level (LOAEL) for teratogenicity based on the Rothman et al. (1995) paper. The EU did not use any uncertainty factor because it believed that data from other studies supported a true threshold of more than 3,000 µg/day and that this number also covered the risk of hepatotoxicity. Using the same paper, the IOM study committee determined the no-observed-adverse-effect level (NOAEL) for teratogenicity to be 4,500 µg/day and used an uncertainty factor of 1.5 to establish a UL of 3,000 µg/day. However, the IOM study committee had already decided to use 14,000 µg/day as the LOAEL for liver toxicity, with a high uncertainty factor of 5 because of the severity of the effect, resulting in a UL of 3,000 µg/day. Because the study committee believed it would be confusing for women of reproductive age to have one UL and all others another, it somewhat adjusted the numbers to arrive at the same UL.

For the Estimated Energy Requirement (EER), the goal was to maintain a healthy weight at an acceptable level of physical activity. That is, the EER was based on energy balance (no weight gain), not on reduction of disease risk—a different type of paradigm than originally envisioned.

Selection of Endpoints

In general, the selection of endpoints was based on data availability. For ULs, the endpoints were frequently concerned with public health
protection, often using a benign adverse effect (e.g., flushing rather than liver dysfunction) to be more protective. The selection of endpoints was not based, for the most part, on the strength or consistency of evidence or on the severity or clinical importance of the endpoints. When the (ideal) data were lacking, the study committees still had to provide numbers; it was emphasized that "no decision was not an option." This is because the numbers are needed for so many purposes, such as goals for individuals, dietary assessment and planning, food fortification, food assistance program evaluation, food labeling, agricultural policies, dietary guidance policies, and educational program planning.

In the future, it might be better to select endpoints more scientifically. The use of biomarkers that correlate with a disease or physiological state would be very helpful. The biomarkers should be attributable and responsive to the nutrient in question, and whether a biomarker meets these criteria is a key question that can be answered using SEBRs. Further, SEBRs allow ranking of the quality of the evidence according to the degree of confidence in the conclusion. If the biomarker is found to be valid, dietary intake can be correlated with the biomarker, and the overall quality of the data can be ranked.

Systematic Evidence-Based Reviews

SEBRs can answer only limited types of questions.4 Nevertheless, they are independent and unbiased reviews of a defined topic by a group with no stake in the outcome. They can account for confounders (e.g., dietary supplements) in ranking. They can determine the validity of extrapolations or interpolations. They can increase the transparency of decisions made about specific endpoints, which increases the replicability of the data by other groups. The importance of the SEBR is illustrated in Box 2-3.

4 SEBRs are discussed in further detail in a separate presentation later in this chapter.

BOX 2-3 β-Carotene Case Study and the Evidence-Based Review

The β-carotene trials were started on the basis of many epidemiological studies showing that the higher the β-carotene in the serum or diet, the lower the incidence of lung cancer in smokers. However, when an intervention trial was done with β-carotene at a fairly high dose, more lung cancers, not fewer, were found in the β-carotene group (Heinonen and Albanes, 1994). This was backed up by a second trial in the United States, the CARET trial, done in 1996 (Omenn et al., 1996). Three years before the first of these trials, in 1991, the Food and Drug Administration (FDA) had looked at the large number of available studies (mostly retrospective or prospective epidemiological studies) with either cancer or pre-malignancy as the endpoint. The first criterion used to evaluate the studies was: Did they allow attribution of the observed health effects to β-carotene per se, rather than simply to diets or dietary patterns that were rich sources of these nutrients or to serum/plasma levels that could be markers of such diets? The second criterion was: Did they provide a sufficient basis for relating intakes to the actual reduced risk of cancer (because there were no validated biomarkers at the time to serve as surrogates for cancer sites)? The bottom line was that the FDA's systematic evidence-based review (SEBR) led it to reject the health claim that antioxidants collectively, and β-carotene specifically, could protect against cancer. The government might have saved itself considerable expense if it had paid attention to the FDA's SEBR performed 3 years before the huge intervention trials began.

Other Challenges

One quandary for application is that the sodium, potassium, calcium, vitamin D, vitamin E, and linoleic acid DRIs are unrealistic values, given the North American food supply and dietary habits. Almost no one meets the numbers for these nutrients. While the science for setting DRI values takes precedence and should not be compromised because of real or perceived inconsistencies with what the population is eating, DRI reports may need to include more discussion of these problems when they occur.

While decisions about the use of DRIs for nutrition labeling are outside the purview of the DRI development process, related issues raise interesting questions, such as what to do if there is no DRI (e.g., trans fat), what to do if there is an AI (e.g., calcium), how to identify a single dietary value if there is a distribution range, and how to choose between an EAR and an RDA. It should be remembered that people use food labels to choose among food products, not to formulate their diets. Whether an approximate (e.g., interpolated) EAR that is scientifically based can be derived when the data are nonexistent or inadequate should be investigated. If it can be derived, the best way to express that value to make it more useful should be determined. Consistent guidelines should be developed for setting uncertainty factors and for rating the overall evidence for a DRI value, based on the strength of the data, its consistency, its public health relevance, and its applicability to the person or persons of interest.

Usefulness of the DRI Framework and Conclusions

The DRI framework has often been found not to be useful for planning for groups, such as WIC, primarily because too many assumptions have to
be made (e.g., that the distribution of intakes will not change with a particular intervention). For planning for individuals, it is questionable how the RDA is to be used; the RDA is probably most useful as a goal that is either met or not met. For assessing individual dietary adequacy, the probability equations have been found to be too cumbersome to use; as a result, only 5 percent of dietitians admit to using them. However, for assessing the intakes of groups, such as WIC populations, the framework has worked well.

In summary, the pros and cons of the past paradigm are listed below.

Pros

- A comprehensive review of the scientific literature at the time was performed.
- A risk assessment model was developed.
- The framework for assessing group dietary intakes worked well, using the EAR cutpoint method for prevalence of inadequacy.

Cons

- For the most part, the health endpoint data on which to base DRIs were lacking.
- Variance data were lacking.
- It was necessary to make many extrapolations, the scientific validity of which was unknown.
- Long-term data were limited.
- The uncertainty factors for deriving ULs were very subjective.

CASE STUDY: APPLYING THE DRI FRAMEWORK TO CHRONIC DISEASE ENDPOINTS

Presenter: Paula Trumbo

The conclusion that the "reduction in risk of chronic disease is a concept that should be included in the formulation of future RDAs where sufficient data for efficacy and safety exist" (IOM, 1994) had a notable impact on the DRI development process. It influenced the way in which nutrients were grouped for review, as noted in the following examples: Calcium and related nutrients were grouped together because of their role in bone health and general health. Antioxidants were reviewed together because of their potential role in reduction of risk of chronic diseases, such as cancer and CHD.
Electrolytes were grouped because of their role in blood pressure and hypertension.

Moreover, a guiding principle conveyed to the DRI study committees was the need to review the evidence on chronic disease first, to determine whether it was possible to use such data to set a DRI.

Setting EARs Based on Chronic Disease Endpoints

Of the nutrients that were assigned reference values related to nutritional adequacy, only five were based on chronic disease endpoints. While the DRI study committees were encouraged to set an EAR rather than an AI because of the limited utility of the AI for assessment purposes, the reference values related to nutritional adequacy that were developed on the basis of chronic disease endpoints were all AIs. The endpoints were osteoporosis and fractures for calcium and vitamin D (along with balance data and biomarkers for vitamin D); dental caries for fluoride; CHD for fiber; and a combination of endpoints for potassium, including salt sensitivity (a risk factor for hypertension), kidney stones, and blood pressure.

An important question to ask is "Could EARs have been set using chronic disease endpoints if sufficient data had been available?" The EAR is an average daily nutrient intake level that is estimated to meet the requirement (defined by the nutrient-specific indicator or criterion of adequacy) of half the healthy individuals in a subpopulation. In Figure 2-1, at a very low intake of 30 units of nutrient X, there is a risk of inadequacy in 100 percent of the subpopulation. At an intake level equivalent to the EAR of 100 units, the risk of inadequacy is 50 percent. At an intake level of approximately 140 units (i.e., the RDA), there is only a 2–3 percent risk of inadequacy for nutrient X.
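Assuming requirements for nutrient X are normally distributed, the Figure 2-1 curve is simply P(requirement > intake) = 1 - Φ((intake - EAR)/SD). A sketch reproducing the figure's approximate numbers; the SD of 20 units is an assumption chosen so that EAR + 2 SD lands at the 140-unit RDA:

```python
import math

# Risk of inadequacy at a given intake, assuming individual requirements
# are normally distributed: risk = P(requirement > intake).
# EAR = 100 units matches Figure 2-1; SD = 20 units is an assumed value
# chosen so that the RDA (EAR + 2 SD) lands at 140 units.

def risk_of_inadequacy(intake, ear=100.0, sd=20.0):
    z = (intake - ear) / sd
    # 1 minus the standard normal CDF, via the error function
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(risk_of_inadequacy(30), 4))   # 0.9998 (essentially everyone at risk)
print(round(risk_of_inadequacy(100), 2))  # 0.5    (the EAR)
print(round(risk_of_inadequacy(140), 3))  # 0.023  (the RDA: ~2-3 percent)
```

The model reproduces the figure's shape only because essentiality guarantees 100 percent risk at very low intakes, which is precisely the assumption that fails for chronic disease endpoints, as the potassium example below illustrates.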
This DRI paradigm worked well when the EAR was based on essentiality, because nutrient-specific indicators were being used: balance data for molybdenum, factorial data for iron and zinc, status biomarkers unique to copper and vitamin E, and turnover data for iodine and carbohydrate. Furthermore, endpoints of inadequacy could be used to set an EAR because all individuals are at risk of inadequacy for essential nutrients.

FIGURE 2-1 Estimated Average Requirement for hypothetical nutrient X. NOTE: EAR = Estimated Average Requirement.

The challenge in fitting a chronic disease endpoint into this DRI paradigm is illustrated by a clinical trial that evaluated potassium intake and the frequency of salt sensitivity. This trial provided multiple doses of potassium to individuals who were consuming very high levels of salt. The highest frequency of salt sensitivity occurred at a very low level of potassium intake (30 mmol/day) and was about 78 percent for African Americans and 37 percent for Caucasians (Figure 2-2). It became obvious that it was difficult to apply data from such a trial to the DRI paradigm, which assumed that the risk of inadequacy at very low intake is 100 percent for the population.

If the EAR is to be based on chronic disease risk reduction rather than reduction of the risk of nutrient inadequacy, then the definition of the EAR would be the nutrient intake level needed to reduce the risk of chronic disease in half the healthy individuals in a particular subpopulation, or to achieve an absolute risk reduction of 50 percent (where absolute risk is the probability of getting a disease over a certain time and is affected by the relative risk of a particular risk factor, such as the intake of an individual nutrient). Each component of absolute risk reduction has challenges. One is the assumption that the absolute risk of a chronic disease is 100 percent for a subpopulation, as is the case for risk of inadequacy based on essentiality. Perhaps this is the case for dental caries, but it is not the case for other disease endpoints, such as osteoporosis, CHD, and kidney stones. The absolute risk of osteoporosis is not 100 percent, even for Caucasian postmenopausal women, and the absolute risk for CHD is even lower. The prevalence of kidney stones is approximately
step, the decisions made for ULs are generally similar to the decisions made for adequacy. Thus, the risk assessment framework could easily be adapted to both sides of the equation.

The first step, hazard identification, is basically the literature review. In general, the nature of the questions for the indicators of hazard for the ULs is the same as that for the indicators of adequacy (e.g., intake/biomarker, biomarker/effect, and intake/effect relationships). Both evaluations focus on identifying dose–response effects and the factors that affect dose–response, and both need this information across a range of life stage groups.

The second step, the dose–response assessment, is where the reference values (e.g., EARs, ULs) are derived. In deriving the DRIs, a threshold model of dose–response was assumed for both adequate and excessive intakes. In both cases, the study committees frequently lacked good dose–response data. On the UL side, when dose–response data were lacking, a NOAEL or a LOAEL was used as the basis for deriving the UL; on the adequacy side, the study committees derived an AI. In both cases, study committees preferred full distributions of dose–response data: On the UL side, this is called the benchmark dose; on the adequacy side, the EAR/RDA distribution curves. For both, there have been questions about whether a threshold model always works.

In terms of adjustments to the dose–response relationship, bioavailability and bioequivalency issues relate to risks associated with both inadequate and excessive intakes, but the traditional adjustments used for adequate intakes may lack relevance to the UL.
For example, the EAR/RDA for iron adjusts for differences in bioavailability from food sources based on dietary intakes of heme and nonheme iron. However, with the increasing use of fortified foods and dietary supplements, a more appropriate bioavailability adjustment might be a bioequivalency type of adjustment, similar to that used for retinol equivalents. Study committees would be more likely to notice these potential incompatibilities if the evaluations for both adequate and toxic intakes were compared in a side-by-side risk assessment framework. Additionally, the same methodological biases likely occur in the studies used to evaluate risks associated with both inadequate and excessive intakes, so a consistent framework for analyzing both makes sense.

Uncertainty assessments are a critical component of the dose–response assessment step of a risk assessment framework. Derivations of reference values for both inadequate and excessive intakes must deal with uncertainties in the available evidence and describe the nature and seriousness of those uncertainties in their texts. In some cases, an uncertainty factor is used to lower the observed effect level to give a UL. The use of uncertainty factors was relatively rare in deriving reference values for adequacy.
However, at least in the case of vitamin D, the study committee multiplied the observed intake of vitamin D by 2 to raise the AI above the observed dose–response relationship, to account for uncertainties in background exposure to sunlight and for inadequacies in study design. Whether dealing with risks associated with inadequate intakes or with excessive intakes, uncertainties need to be documented, and they are generally dealt with in a manner that errs on the side of public health protection. A risk assessment framework identifies the need to deal with uncertainties in the available evidence but does not specify a methodology for doing so, which allows maximum flexibility in applying the organizing framework to different situations.

Establishing reference values for both inadequate and excessive intakes also often involves extrapolations from a studied group (e.g., adults) to an unstudied group (e.g., children), because data may be available for some, but not all, of the life stage groups for which DRIs are established. The default for extrapolations for ULs was reference body weight; the default for EARs/AIs was metabolic body weight. There is no acknowledgment of, or justification for, the use of different defaults for these two types of reference values. With a side-by-side common risk assessment framework, these types of differences would likely be noted and either justified or changed.

The third step, intake (or exposure) assessment, uses population-based intake data to estimate the prevalence of intakes above or below the reference values. Biomarkers of nutrient status, when available, can also be used to estimate the prevalence of inadequate or excessive exposures. The same analysis is often used for both types of reference values.

The fourth step is risk characterization, which is the most important step from a user perspective.
This is where the public health consequences of not meeting an EAR/RDA or AI, or of exceeding a UL, are discussed. Deviations from reference values for special groups are also described in this section.

Implications

An advantage of using a risk assessment framework is that the science of risk assessment continues to move forward, and the DRI development process can benefit from those advances. For example, risk assessors increasingly use probabilistic models to move from qualitative to quantitative risk assessments. They are working to establish better-defined criteria for dealing with different types and sources of uncertainty. They are starting to use statistical models to simulate dose–response curves from multiple studies that individually lack sufficient data to produce such a curve. They are also learning how to adjust coefficients of variability to account for altered dose–response curves associated with polymorphisms that alter nutrient requirements or toxicity among population groups.
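To make the arithmetic described above concrete, the sketch below illustrates, with entirely hypothetical numbers and function names, the kinds of calculations the text refers to: an uncertainty factor lowering an observed effect level to a UL, the vitamin D-style doubling of an observed intake, the default CV relating an EAR to an RDA (RDA = EAR + 2 SD), the two different extrapolation defaults noted in the text, and a simple cut-point estimate of the prevalence of inadequate intakes. None of the values are taken from any DRI report.

```python
from typing import Sequence

def ul_from_observed_effect(observed_effect_level: float, uf: float) -> float:
    """Dose-response step: lower an observed adverse-effect level by an
    uncertainty factor (UF) to obtain a UL (hypothetical values)."""
    return observed_effect_level / uf

def ai_with_multiplier(observed_intake: float, factor: float = 2.0) -> float:
    """Vitamin D-style adjustment: multiply an observed intake (here by 2)
    to cover uncertainty about sun exposure and study design."""
    return observed_intake * factor

def rda_from_ear(ear: float, cv: float = 0.10) -> float:
    """RDA = EAR + 2 SD = EAR * (1 + 2 * CV); the default CV is 10 percent."""
    return ear * (1.0 + 2.0 * cv)

def extrapolate_linear(adult_value: float, adult_wt: float, child_wt: float) -> float:
    """Default UL extrapolation: linear in reference body weight."""
    return adult_value * (child_wt / adult_wt)

def extrapolate_metabolic(adult_value: float, adult_wt: float, child_wt: float) -> float:
    """Default EAR/AI extrapolation: metabolic body weight (weight^0.75)."""
    return adult_value * (child_wt / adult_wt) ** 0.75

def prevalence_below(intakes: Sequence[float], ear: float) -> float:
    """Intake assessment step: cut-point estimate of the prevalence of
    inadequate intakes (fraction of the population below the EAR)."""
    return sum(1 for x in intakes if x < ear) / len(intakes)

# Hypothetical adult value of 100 units, 70-kg adult, 20-kg child:
print(extrapolate_linear(100.0, 70.0, 20.0))     # ~28.6
print(extrapolate_metabolic(100.0, 70.0, 20.0))  # ~39.1
# The two defaults differ by roughly a third for the same child, which is
# the kind of inconsistency a side-by-side framework would surface.
```

The last two lines show why the choice of default matters: for the same hypothetical child, the two extrapolation rules yield values about a third apart.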
In summary, the risk assessment organizing framework is probably relevant to the development of reference values related to nutrient adequacy. It provides a systematic delineation of decision steps that enhances transparency and therefore increases usability. A risk assessment organizing framework could help coordinate the decisions related to adequacy with those related to excessive intake, thus reducing the likelihood of unintended inconsistencies or consequences that create challenges for users. The framework offers the flexibility to tailor the approach to different types of applications without losing the benefits of a common organizing structure. It emphasizes enhanced documentation and transparency and takes advantage of evolving scientific tools.

DISCUSSION: SYSTEMATIC EVIDENCE-BASED REVIEW; RISK ASSESSMENT

Discussant: Sanford Miller

The session moderator, Dr. Stephanie Atkinson, introduced the discussant and invited him to offer an opening remark.

Discussant Opening Remarks

Dr. Miller opened with the general observation that although we seem to be asking the same questions we asked years ago, we are learning to ask better questions. He noted that it is not surprising that study committees each derived their own approach to the problems they faced, given the lack of experience, structure, and formal guidance when the DRI process began. He suggested that risk assessment and systematic evidence-based review (SEBR) together provide an excellent framework to organize the process and the questions to be addressed, as well as the structure needed for transparency about how conclusions were reached and the rationale for any modified approach used by a particular committee. Dr. Miller then focused on the nature of the relationship between risk and dose–response.
The two risk curves associated with nutrients are each composed of families of curves, which in turn represent the components of the metabolic regulatory processes for absorption and excretion. If there is sufficient uncertainty around the curves, they will overlap, suggesting that the nutrient is unsafe at the same intakes at which it is required. For this reason, it is critical to carry out basic research on the processes by which a nutrient is used and regulated in order to reduce the level of uncertainty.

General Discussion

Drs. Lichtenstein and Yetley joined the discussant on the dais, and a brief group discussion took place. They agreed that the DRI approach was
a vast improvement over the previous RDA approach, and that these reference values should improve as science evolves and experience is gained. It was noted that the past 10 years of experience have led to a greater understanding of the breadth of potential uses for the DRIs, an understanding that will be important when making future revisions. When the discussion was opened to the workshop audience, comments were offered on several topics, including SEBRs and risk assessment.

SEBRs

One audience member suggested that although SEBRs are important, they may have limitations. They add time as well as cost, because the individuals performing them are not likely to be volunteers. Furthermore, the development of DRIs inherently requires scientific judgment, which in turn requires a wide range of information about the nutrient, whereas an SEBR focuses on a limited set of questions. A participant responded that individuals commissioned to generate SEBRs are not charged with offering the scientific judgment necessary for deriving the DRIs. The advantage of study committees working with an evidence-based practice group is that once the study committee defines the relevant questions for the targeted review, the practice group can examine the evidence objectively. Database limitations for most questions, and limits on the number of questions that can be addressed for each nutrient, mean that the judgment of the study committee is ultimately required. Thus, SEBRs would not be used to derive DRI reference values; rather, they would serve as one source of data for deriving them. A commenter remarked that SEBRs can be carried out by either paid panel members or unpaid volunteers who “work outside of their day jobs.” She then inquired about the professional expertise needed for SEBR panels as opposed to DRI study committees, and about the rewards for unpaid volunteers.
A participant responded that SEBRs are not work carried out in spare time; they must be done in a consistent manner and require considerable time, focus, and resources. Regarding why people are willing to take part in these activities, the discussant suggested that those who believe nutrition is fundamental to reducing the risk of disease will feel a responsibility to participate. Another participant noted that SEBRs do not replace the need for a DRI study committee, but instead serve as a tool to help document, collate, and synthesize the scientific evidence. This tool could lessen the burden on the study committees and allow them to focus on the challenges of defining DRI values. Nor does the SEBR compete with the risk assessment framework: the first step in the risk assessment approach is a literature review, and an SEBR would feed into the larger risk assessment activities. For this type of review, the study committee would help to define the
inclusion/exclusion criteria for the literature, the endpoints to be reviewed, and other considerations. In fact, using a risk assessment framework to help organize activities could make more efficient use of both staff time and volunteer time. Outside help with the literature review could relieve the study committee members, who are volunteers, of that burden and allow them more time for other needed deliberations.

Additional Comments

Other comments on risk assessment and SEBRs included the following: the risk assessment framework requires that uncertainties be dealt with, but the methods to be used are unspecified and can be determined case by case; the SEBR process is robust and not limited to a particular kind of study design; SEBRs can expose gaps in knowledge; and not every question addressed by a study committee would require an SEBR. With respect to the DRI framework, an audience member suggested that the EAR/RDA is related to measures of central tendency whereas the UL is not. He postulated that the UL is more analogous to the AI, in that the AI lies above the amount needed while the UL lies below the amount to be avoided. Furthermore, it would be possible to define a level for adequacy in a manner similar to that used for developing ULs; the value of doing so, and whether the data would support it, are important discussion points. One person noted that death from disease had not been mentioned as a marker for chronic disease risk in the DRI process, even though some nutrients may be associated with a reduction in death from disease. A participant responded that, for the DRIs, death is probably a less useful measure than appropriately validated biomarkers for the advent of the disease state. The final comment of the discussion related to the value of testing intake recommendations as they are being developed.
An audience member used the example of a reasonableness check for iron recommendations and its ability to better inform the process and thus lead to better outcomes.

PANEL DISCUSSION: IN WHAT WAYS COULD THE CONCEPTUAL FRAMEWORK BE ENHANCED?

Panel Members: Cutberto Garza, Mary L’Abbé, Irwin Rosenberg, Barbara Stoecker (later joined by Janet King and George Beaton)

The session moderator, Dr. Stephanie Atkinson, introduced the panel members and began the discussion by asking each panelist to offer an opening remark.
Panelist Opening Remarks

Dr. Garza highlighted three conclusions that he drew from the day’s discussions. One was the need to “keep it simple.” At the same time, he emphasized the need for enough sophistication that the simple solution does not turn out to be the wrong answer. He cautioned that such sophistication would be needed especially if the focus moves from preventing deficiency and diet-related diseases to enhancing performance; in that case, it may not be reasonable to expect (simplistically) that the same framework will be best equipped to deal with deficiency, chronic disease, and enhanced performance alike. Second, he suggested harmonizing the approaches for deriving the EAR and the UL to allow greater transparency globally and to enhance the rigor of the process, regardless of the degree of precision needed. He drew an analogy to the hazard analysis and critical control point (HACCP) approach used to ensure food safety, in which control points are identified throughout the process. Third, the dynamic nature of the field needs to be recognized, and the DRI framework should reflect that dynamism, which will dictate the type of evidence collected, the criteria for deciding when the numbers need revision, and even the format in which the DRIs are published.

Dr. L’Abbé touched on the importance of the underlying theme of transparency, specifically from the perspective of a government agency that uses the DRIs in a number of applications (e.g., food fortification, product evaluation, standards setting). She then underscored the need for DRIs to be relevant to public health risk. Finally, she pointed out that to apply the values effectively, regulators and government agencies need to understand the process and the approach to decision making used by the study committees.
Conversely, sponsoring government organizations bear the responsibility of defining the general questions to be answered through the process of DRI development if the end result is to be useful.

Dr. Rosenberg remarked that realizing the conceptual framework we seek holds considerable challenges. Essential to success will be consensus on the overall goal of the DRIs. If the goal is to sustain the health of the North American population, it must be recognized that the DRIs are not the only sustaining pillar of public health; others include the dietary guidelines and relevant reports from the Office of the Surgeon General. He cautioned that there is risk in using DRI values to cross into dietary guidance, which can spawn confusing concepts such as semi-quantitative AMDRs for nonessential nutrients. Moreover, despite recent assertions that we must “change” our paradigm to include chronic disease prevention, the goals for the reference values issued by the NRC and then the IOM have remained remarkably stable since 1941. These dietary recommendations have always been more than minimal allowances and have by implication included prevention
of chronic disease as part of the definition of good health maintenance. As a last point, Dr. Rosenberg addressed the issue of multiple endpoints. The current approach for DRIs uses different endpoints for different population groups (children, pregnant women, sometimes the elderly). However, having study committees issue, and users choose among, different endpoints for the same group would lead to misunderstandings and undermine the integrity of the process.

Dr. Stoecker noted that the public has a false sense of confidence about the knowledge available for setting the DRIs; in fact, the data on many nutrients are scarce. She noted particularly that dose–response data at intakes near the probable EAR are needed but may be difficult to obtain. Dr. Stoecker supported the use of the risk assessment approach as an organizing structure and agreed that SEBRs organize, document, and encourage transparency in the process. Furthermore, she suggested that nutrient requirements and chronic disease prevention could be dealt with in separate reports because of the multiple factors that affect chronic disease endpoints compared with nutrient requirement endpoints. She agreed with the conclusion that a single endpoint should be used for each age/gender group.

General Discussion

Cross-Panel Discussion

The cross-panel discussion covered several topics. It began with several comments on endpoints.

Endpoints

One participant suggested that a variety of endpoints are typically expected to be considered using all of the emerging science; then, based on clear criteria, the endpoint to serve as the basis for the reference value would be selected. However, the criteria for selecting an endpoint have not been clear. Rather, endpoints seem to have been determined primarily on the basis of data availability, which does not necessarily reflect health significance.
Another participant noted that she found comfort in discovering that often the same “ballpark” value could be derived using any one of a number of endpoints. She also suggested that chronic disease protection might result in a higher reference value. Another panel member responded that although there is likely to be substantial variation among nutrients, it is not clear that the amount of a nutrient required to reduce the risk of chronic disease is necessarily going to be higher than the amount to achieve some other endpoint. In response to a comment on the apparent inconsistency of the severity or the public health significance of the endpoints used for ULs, a participant
noted that in setting food fortification limits in Canada, information in the text of the DRI reports was used to elucidate the relative severity of the adverse effect and the margin of safety between the RDA/AI and the UL. She further noted that challenges were presented by nutrient substances, such as saturated fats or trans fats, that do not fit the classic threshold model for a UL. Another participant said we often fail to look at “population-attributable benefits” and asked: Is there a situation in which a UL could be set too low because a population-attributable benefit of greater public health significance than the mild physiological discomfort used to establish the UL was not taken into account? One participant suggested, from his experience with ULs, that the problem was the need to adapt a toxicological model appropriately for nutrient considerations, although there is nonetheless considerable opportunity for parallelism. He asserted that consistency is important but should not always be expected, because key considerations may vary by nutrient and need to be addressed in different ways. Another participant commented that we should not be hobbled by consistency, and that it should never preempt scientific rigor.

Precision

A panel member pointed out that if we are clear about the various uses of the reference values, we can better assess the degree of precision needed; too often we are driven by an obsession with the precision that our training requires but that the use does not demand. Assuming this hurdle is passed, the “biology of the nutrient” is the next component to consider, because an inability to specify the biological workings of the nutrient limits the establishment of meaningful reference values. From that point, the instruments and organizing approaches at our disposal become the focus.
Ranking evidence

In response to the suggestion that ranking evidence was a considerable leap for study committees, a panel member said study committee members did discuss the criteria for judging the evidence and did reject studies, so ranking evidence was not necessarily a challenge. Rather, a major shortcoming in the past was the failure to document discussions and the decision-making process.

Open Discussion

The cross-panel discussion was followed by a wide-ranging discussion between audience members and the panel on topics such as the appropriateness of AIs, limited data, updating DRIs, and the interest in harmonization. An audience member suggested a focus on terminology, asking participants to consider terms such as “critical effect,” as used in toxicology.
Appropriateness of AIs

A participant asked whether the AI system should be retained and, if not, whether an EAR should be approximated in some fashion. Another said it should be eliminated. An audience member asked whether any advances in either the application or the communication of the DRIs had resulted from incorporating the AI, and whether the AI belongs within the DRI framework. For some nutrients, an EAR could have been derived if a physiological function of the nutrient, rather than a chronic disease endpoint, had been used as the criterion. A participant reported that study committees were dissatisfied with the advent of the AI because they had been given the task of deriving a value based on an endpoint, and they did not feel confident that an AI was appropriate given this charge. In terms of the evolution of the AI, another participant noted that it was used initially to describe recommended intakes for infants, specifically exclusively breastfed infants, the easiest population for which to measure dietary intake. For the breastfed baby, the AI values are probably more solid than those for most other groups.

Limited data

The discussion turned to the “no decision is not an option” component of DRI development. One participant expressed concern that numbers developed in the face of limited data appear to take on the same level of significance and credibility as better-founded reference values. He recalled that when AIs were first discussed, there was some mention of adding table footnotes, color codes, or faint print as ways of communicating the level of confidence associated with the numbers. In his opinion, however, once a value was listed in a table, no one read the footnotes or went back to read the reports.
Another participant commented that we should first be clear on how the values will be used and then make decisions about how the data are presented in the table. In the case of ULs, there was considerable agreement on the need to create a reference value whenever the data supported doing so, because in the absence of such a value various misinterpretations have occurred, including the conclusion that there is no risk. One participant suggested that it would be helpful to set out explicitly the disagreements that occur, indicating the level of confidence in the values offered. Others agreed that the approach used to arrive at the ULs should be described more explicitly. There was concern that reaching far beyond the available data to establish a UL is undesirable; there is a distinction between inadequate data and limited data. Other discussion focused on obtaining data on the distribution of requirements in order to enhance the DRI process. One participant suggested that although it would be prohibitively expensive to explore the nature of the distributions in detail, we need at least general information, such as
breadth and skew. In turn, the study committees should be charged with providing such information. Users of the DRIs may prefer to use a value along the distribution curve rather than the RDA for a given application. Furthermore, it was suggested that the variance of the requirement distribution is not critical to most applications except those concerning individuals; the default CV of 10 or perhaps 20 percent may be adequate in terms of the precision practitioners need.

Updating DRIs

With respect to the future DRI process, one participant asked how corrections can be ensured for “mistakes” recognized after a study committee disbands. Another participant responded that the framework should address this, not only to correct mistakes, but also because new science will inevitably dictate changes in reference values. A panel member further suggested that the ability to issue the DRIs in the format of a downloadable, loose-leaf-type notebook is important so that any needed changes can be made and disseminated without engaging in an entire review. It was noted that using SEBR as part of the process would facilitate updating. One participant remarked that when some of the research questions listed in a report had been addressed, this could serve as a trigger for review. Another countered that a nutrient should be reviewed when the nature of the outcome will be important to public health. An audience member asked about guidelines for prioritizing the nutrients to be updated. A panel member suggested that setting criteria for revision and setting criteria for prioritization are different issues, and that criteria for revision would be addressed later in the workshop.

Harmonization

A discussion clarified that the harmonization referred to by Dr.
Garza was a harmonization of the approach for deriving the numbers rather than of the specific reference values. Dr. Garza suggested that the only values needed globally are the equivalents of the EAR and the UL, and presumably good science could be brought to bear on deriving them. If the approach for deriving these values could be harmonized, different countries could then derive their own relevant reference values within the context of their own public health protection considerations. Also with respect to harmonization, an audience member suggested that more international expertise should be included in the DRI process so countries could learn from each other, share information, and reduce costs; the benefit to countries not able to mount such a process on their own was also highlighted. Another participant remarked that the EC was working to create a framework for nutrient reference values and to harmonize nutrient recommendations across Europe. He suggested that there would be benefits in collaboration given the
apparent lack of an overall framework for the DRIs. While recognizing the value of collaboration, participants disagreed with the intimation that there was no overall framework for the North American DRI process. Rather, the day’s discussions demonstrated that there was a framework, but it may not have been structured or communicated as well as possible.