and the newly formed Integrated Assessment Society (http://www.tias-web.info/) are taking laudable steps in this direction.

SCIENCE-POLICY INTERFACE: BALANCING CREDIBILITY WITH SALIENCE

The appropriate interface between science and policy is frequently debated and requires deliberate negotiation at the outset of each assessment process (NRC 1983; Jasanoff 1987; Cash and Moser 2000). Interactions between scientists and policy makers in assessments can take different forms, ranging from efforts to insulate the scientific community from the policy-making process via boundary organizations such as the National Academies, to highly institutionalized collaboration and deliberation between the two groups, such as congressional hearings. Wherever along this spectrum the science-policy interface falls, each community “must maintain its self-identity and protect its sources of legitimacy and credibility” (Farrell et al. 2006).

Particularly careful boundary management is needed between the authorizing body (i.e., those requesting the assessment) and the assessment participants. The authorizing body needs to be involved in framing the goals and scope of the assessment to ensure that the most salient questions are addressed (NRC 1996), but legitimacy and credibility suffer when the authorizing body is perceived to control the assessment process (Jasanoff 1987; Cash and Moser 2000). At the same time, isolating scientists too completely from the authorizing body is likely to result in a loss of salience (NRC 1996). Negotiating this boundary is therefore a balancing act among credibility, legitimacy, and salience (Jasanoff 1987).

Based on its deliberations and on input from scholars and practitioners of assessments, the committee concludes that an explicit boundary is critical throughout the process, and above all during the review stage. A key determinant of credibility is the quality control applied in an assessment. Quality control is defined as the procedures designed to guarantee that the “substantive material contained in the assessment report agrees with underlying data and analysis, as agreed to by competent experts” (Farrell et al. 2006). Different criteria are used to define what constitutes expert opinion (e.g., material published in peer-reviewed journals or subjected to repeated review). For assessments that undergo government review, it is critical that the expert participants retain a “veto right” over the scientific content of the report (Watson 2006).
