
Chapter 8: Opportunities to Generate Evidence
Pages 159-186

The Chapter Skim interface presents what has been algorithmically identified as the most significant excerpt from each page of the chapter.


From page 159...
... • Future research on obesity prevention, and in public health more generally, can employ a broad array of study designs that support valid inferences about the effects of policies and programs without experimentation, promoting transdisciplinary exchange.
• Published peer-reviewed reports on the results of obesity prevention efforts often lack useful information related to generalizability to other individuals, settings, contexts, and time frames, adding to the problem of incomplete evidence for decision making.
From page 160...
... The chapter begins by briefly reviewing existing evidence needs and outlining the need for new directions in evidence generation and transdisciplinary exchange. It then addresses the limitations in the way evidence is reported in scientific journals and the need to take advantage of natural experiments and emerging and ongoing interventions as sources of practice-based evidence to fill the gaps in the best available evidence.
From page 161...
... report Progress in Preventing Childhood Obesity, which is current through 2006, strongly emphasizes the need for evaluation of ongoing and emerging initiatives. The committee that produced the report found that, in response to the urgency of the problem of childhood obesity with respect to prevalence and economic costs, numerous efforts were being undertaken on the basis of what was already known from theory or practice.
From page 162...
... Consistent with the issues highlighted by the 2007 IOM committee, the NIH recommendations include the evaluation of existing promising programs, as well as the conduct of studies of multilevel and multicomponent interventions. The examples in Box 8-2 relate specifically to child obesity.
From page 163...
... . • Use appropriate study designs and methods, including natural experiments, quasi-experimental designs, and randomized designs; develop time-sensitive funding mechanisms for natural experiments.
From page 164...
... in biomedical science because of their advantages for drawing causal inferences: Are there good alternatives to randomized designs that will accomplish the same thing but can be implemented more flexibly in natural or field settings?
From page 165...
... Reflecting the type of transdisciplinary

TABLE 8-1 Factors Facilitating and Constraining Transdisciplinary Team Science

Focus on major problems
  Facilitating: PIs able to bring researchers together across disciplines and program-unifying themes
  Constraining: Some areas seen as unrealistic; lack of integrative research framework; few "how-to" models

Team members
  Facilitating: Possess complementary and intersecting skills; able to develop common language; positive, open attitude; appreciative of others' knowledge; shared understanding of scientific problem; mutual trust and respect; open to mentoring
  Constraining: See skills as competitive; tension between solo and collaborative work; power-prestige differences between social and medical sciences; worry about diffusion of focus and loss of identity; research seen as time-consuming/multiple projects; disincentive for practitioners; sharing credit affects promotion, tenure, publications, funding

Training
  Facilitating: Complementary training; mentored graduate students to participate in transdisciplinary research team; SERCA grants for training in new field
  Constraining: Historical barriers across fields; location of departments; funding limited

Institutions
  Facilitating: Support, promote, and fund centers, networks, and teams across disciplines, departments, and medical and social science facilities on same campus
  Constraining: Rigid university policies; centers lack funds

Technology
  Facilitating: Facilitate communication even when teams and researchers physically dispersed

Funding
  Facilitating: Foundations and government support network/team approach (e.g., MacArthur, NIH)
  Constraining: Grant applications more challenging, time-consuming

Publication
  Constraining: Journals discourage multiple authors; peer review hard to judge; need to frame more narrowly

NOTE: NIH = National Institutes of Health; PI = principal investigator; SERCA = Special Emphasis Research Career Award.
From page 166...
... . LIMITATIONS IN THE WAY EVIDENCE IS REPORTED IN SCIENTIFIC JOURNALS Decision makers (e.g., policy makers, professional caregivers, public health officials, and advocates)
From page 167...
... examined studies published between 1980 and 2004 that were controlled, long-term research trials with a behavioral target of either physical activity or healthful eating or both, together with at least one anthropometric outcome. Using review criteria for a study's generalizability to other individuals, settings, contexts, and time frames (external validity)
From page 168...
... 74
Program sustainability: 0
Attrition rate: 100
Differential attrition by condition tested: 21
Drop-out representativeness: 42

a External validity is defined according to Leviton (2001)
From page 169...
... to answer practical and locally specific questions, one alternative is to treat the myriad programs and policies being implemented across the country as natural experiments (Ramanathan et al., 2008)
From page 170...
... . Recently, a Robert Wood Johnson Foundation/CDC partnership conducted a major evaluability assessment that involved taking an inventory of purported innovative programs and policies in childhood obesity control, assessing their potential for evaluation, and providing resources to assist in their evaluation (RWJF, 2006, 2009)
From page 171...
... Continuous Quality Assessment of Ongoing Programs

Once interventions have been established as evidence-based (in the larger sense suggested in this chapter of a combination of evidence, theory, professional experience, and local participation), continuous assessment of the quality of implementation is necessary to ensure that the interventions do not slip into a pattern of compromised effort and attenuated resources and that they build on experience with changing circumstances, clientele, and personnel.
From page 172...
... In a prevention context, an RCT would be a study in a community or broader setting in which individuals or groups would be assigned by the researchers to experience or be exposed to different interventions. When a randomized experiment is properly implemented and its assumptions can be met, it produces results that are unrivaled in the certainty with which they support causal inferences in the specific research context in which the trial was conducted.
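The logic of randomization described in this excerpt can be illustrated with a small simulation (a hypothetical toy, not drawn from the report): a strong confounder influences the outcome, but because assignment is random and therefore independent of the confounder, a simple difference in group means recovers the true intervention effect.

```python
import random

random.seed(0)

TRUE_EFFECT = -2.0  # hypothetical: intervention lowers the outcome by 2 units

def simulate_trial(n=10_000):
    """Simulate a trial with a strong confounder and randomized assignment."""
    treated, control = [], []
    for _ in range(n):
        confounder = random.gauss(0, 1)             # e.g., baseline activity level
        baseline = 25 + 3 * confounder + random.gauss(0, 1)
        if random.random() < 0.5:                   # randomization: assignment is
            treated.append(baseline + TRUE_EFFECT)  # independent of the confounder
        else:
            control.append(baseline)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

print(round(simulate_trial(), 1))  # close to -2.0: randomization balances the confounder
```

The point of the sketch is the one the chapter makes: when randomization is properly implemented, the comparison is unbiased even for confounders the researcher never measured.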
From page 173...
... . The above limitations can result in RCTs having less impact in research on public policy relative to other, nonrandomized designs.
From page 174...
... Two perspectives provide useful, complementary approaches on which researchers can draw to strengthen causal inferences, including approaches that do not involve experimentation. In the behavioral sciences, Campbell and colleagues (Campbell, 1957; Campbell and Stanley, 1966; Cook and Campbell, 1979; Shadish and Cook, 2009; Shadish et al., 2002)
From page 175...
... . With respect to the level of certainty of causal inference, Campbell and colleagues emphasize that researchers need to consider only those threats to validity that are plausible given their specific design and the prior empirical research in their particular research context (Campbell, 1957; Shadish et al., 2002)
From page 176...
... , or intervention is introduced at another time point (threat: a change in measures coincides with the introduction of the intervention; design element: nonequivalent dependent measure)

Observational study
  Assumptions/threats: measured baseline variables equated; unmeasured baseline variables equated; differential maturation; baseline variables reliably measured
  Design elements: multiple control groups; nonequivalent dependent measures; additional pre- and postintervention measurements
  Analysis: propensity score analysis; sensitivity analysis; subgroup analysis; correction for measurement error

a Internal validity is defined as a study's level of certainty of the causal relationship between an intervention and the observed outcomes.
NOTE: SUTVA = stable unit treatment value assumption.
From page 177...
... , it emphasizes precise definition of the desired causal effect and specification of explicit, ideally verifiable assumptions that are sufficient to draw causal inferences for each research design. Rubin defines a causal effect as the difference between the outcomes for a single unit (e.g., person, community)
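Rubin's definition can be sketched in a few lines of code (an illustrative toy with hypothetical numbers, not from the report): each unit carries two potential outcomes, the causal effect is their difference for that same unit, and in practice only one of the two is ever observed, so individual effects must be estimated rather than measured directly.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """One unit (e.g., person, community) in the potential-outcomes framework."""
    y_treated: float   # Y(1): outcome if the unit receives the intervention
    y_control: float   # Y(0): outcome if it does not

    @property
    def causal_effect(self):
        # Defined as the difference in potential outcomes for the SAME unit;
        # only one of the two outcomes is observable for any real unit.
        return self.y_treated - self.y_control

# Hypothetical units with both potential outcomes known (possible only in a toy)
units = [Unit(22.0, 24.5), Unit(26.1, 26.9), Unit(19.8, 21.0)]
average_effect = sum(u.causal_effect for u in units) / len(units)
print(round(average_effect, 2))  # → -1.5, the average treatment effect over these units
```

Because only one potential outcome per unit is observable, the explicit assumptions the excerpt mentions (e.g., how units were assigned to conditions) are what license estimating this average from real data.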
From page 178...
... • Posttest observations − Nonequivalent dependent variables: measures that are not sensitive to the causal forces of the treatment but are sensitive to all or most of the confounding causal forces that might lead to false conclusions about treatment effects (if such measures show no effect while the outcome measures do show an effect, the causal inference is bolstered because the effect is less likely to be due to the confounds)
From page 179...
... a Internal validity denotes the level of certainty of the causal relationship between an intervention and the observed outcomes. SOURCE: Reprinted, with permission.
From page 180...
... compared the small set of randomized and nonrandomized studies that shared the same treatment group and the same measurement of the outcome variable. All cases in which an RCT was compared with a regression discontinuity or interrupted time series study design (see Appendix E for discussion of these study designs)
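An interrupted time series design of the kind mentioned here can be sketched minimally (hypothetical data; a real analysis would use segmented regression modeling pre- and post-intervention trends, not a bare mean shift): a stable series is observed before a policy takes effect, and the estimated effect is the level shift at the interruption.

```python
# Hypothetical monthly rates; the policy is introduced after month 6.
series = [10.1, 10.0, 10.2, 9.9, 10.1, 10.0,   # pre-intervention observations
          8.2, 8.0, 8.1, 7.9, 8.0, 8.1]        # post-intervention observations
break_point = 6

pre, post = series[:break_point], series[break_point:]
# Minimal estimate: mean level shift at the interruption, assuming a flat
# pre-intervention trend (a strong assumption made here for brevity).
level_shift = sum(post) / len(post) - sum(pre) / len(pre)
print(round(level_shift, 2))  # → -2.0, the estimated drop at the interruption
```

The design's strength, as the excerpt's comparison with RCTs suggests, comes from the many pre-intervention observations: an abrupt shift exactly at the interruption is hard to attribute to gradual confounding trends.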
From page 181...
... They found little difference in the estimates of causal effects after adjusting for an extensive set of baseline covariates in the observational study. The results of the small number of focused comparisons of randomized and nonrandomized designs to date are encouraging.
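The covariate-adjustment comparison described here can be illustrated with a toy simulation (hypothetical, not the cited study's data): a measured baseline covariate drives both exposure and outcome, biasing the naive comparison in observational data, while stratifying on that covariate recovers an estimate close to the true effect.

```python
import random

random.seed(1)

TRUE_EFFECT = -2.0  # hypothetical true intervention effect

data = []
for _ in range(20_000):
    high_risk = random.random() < 0.5                           # measured baseline covariate
    treated = random.random() < (0.8 if high_risk else 0.2)     # exposure confounded by risk
    outcome = (25 + (4 if high_risk else 0)                     # covariate raises the outcome
               + (TRUE_EFFECT if treated else 0) + random.gauss(0, 1))
    data.append((high_risk, treated, outcome))

def diff_in_means(rows):
    t = [y for _, tr, y in rows if tr]
    c = [y for _, tr, y in rows if not tr]
    return sum(t) / len(t) - sum(c) / len(c)

naive = diff_in_means(data)  # biased: treated units are disproportionately high risk
adjusted = sum(diff_in_means([r for r in data if r[0] == s])
               for s in (True, False)) / 2   # stratum-specific estimates, averaged
print(round(naive, 1), round(adjusted, 1))   # naive is badly biased; adjusted ≈ -2.0
```

This mirrors the excerpt's encouraging finding: when the important baseline covariates are actually measured, adjustment can bring an observational estimate close to the randomized benchmark; the sketch also shows why unmeasured confounders remain the design's key vulnerability.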
From page 182...
... 2006. Are there public health lessons that can be used to help prevent childhood obesity?
From page 183...
... American Journal of Preventive Medicine 35(2 Supplement)
From page 184...
... 1974. Estimating causal effects of treatments in randomized and nonrandomized studies.
From page 185...
... American Journal of Preventive Medicine 35(1, Supplement)
From page 186...
... 2000. Causal inference and generalization in field settings: Experimental and quasi-experimental designs.

