
Appendix E
Program Evaluation of Environmental Policies: Toward Evidence-Based Decision Making

Cary Coglianese and Lori D. Snyder Bennear

Do environmental policies work? Although this question is simple and straightforward, for most environmental policies it lacks a solid answer. This is not because no answers are available. On the contrary, there is often an abundance of purported answers to be found—just a shortage of systematic, empirical support for these answers. Decision making over environmental policy has too often proceeded simply on the basis of trial and error, without adequate or systematic learning from either the trials or the errors. Decision makers often lack carefully collected evidence about what policies have accomplished in the past in order to inform deliberations about what new policies might accomplish in the future.

Obtaining systematic answers to the question of whether environmental policies work is vital. Any environmental policy should make a difference in the world, ideally changing environmental conditions for the better or at least preventing them from getting worse. Although intuitions and anecdotes may provide some reason for suspecting that a given policy has made or will make a difference, the only way to be confident in such suspicions is to evaluate a policy’s impact in practice. Program evaluation research provides the means by which analysts can determine with confidence what works, and what does not, in the field of environmental policy. The results of program evaluation research can then be used by others when deciding if they should retain existing policies or adopt new or modified ones.

Although important program evaluation research has examined the impact of some environmental policies, such research has been remarkably scarce relative to the overall volume of environmental policy decisions made at the state and federal level, as well as relative to the amount of evaluation research found in other fields, such as medicine, education, or transportation safety. A renewed and greatly expanded commitment to program evaluation of environmental policy would help move environmental decision making closer to an evidence-based practice.

In this paper, we begin by defining the role that empirical analysis can play in policy deliberation and decision making, distinguishing program evaluation research from other types of analysis, including risk assessment, cost-effectiveness analysis, and cost-benefit analysis. Although reliance on these other types of analysis has greatly expanded over the past several decades, most other forms of analysis take place before decisions are made; relatively little analysis takes place after decisions have been made and implemented, which is when program evaluation occurs. We argue that any policy process that takes analysis and deliberation seriously before decisions are made should also take seriously the need for research after decisions are made.

We next explain the kinds of methodological practices that program evaluation researchers should use to isolate the causal effect of a particular regulation or other policy initiative, that is, the change in outcomes that would not have occurred but for the program. Even if an environmental policy is correlated with a particular environmental or social outcome, this does not necessarily mean that there is a causal relationship between the policy initiative and the change in outcomes. Only by adhering to the type of methods we highlight here will researchers be able to isolate the effects of specific policy interventions and thereby inform environmental decision making.

Finally, we suggest that the present time is an especially ripe one for expanding program evaluations of environmental policies. Although program evaluation techniques have been available for decades and have certainly been advocated for use in the field of environmental policy, recent developments in policy innovation, government management, and data availability make the present time more conducive for an expanded program evaluation research agenda. During the past several decades, the U.S. Environmental Protection Agency (EPA) and the states have developed a variety of new approaches to environmental protection that are now ready for evaluation. The prevailing policy climate generally supports evaluation of government performance, as evidenced by the Office of Management and Budget’s new Program Assessment Rating Tool and legislation like the Government Performance and Results Act. Moreover, given the increasing ease of access to data made possible by the Internet, researchers will find it easier today to expand program evaluation in the field of environmental policy. Evidence-based deliberation and decision making over environmental policy are probably closer to becoming routine practices today than they have ever been before.

THE ROLE OF PROGRAM EVALUATION IN ENVIRONMENTAL POLICY

Since the overarching purpose behind environmental policies is to improve environmental conditions, and often thereby to improve human health, program evaluation can identify whether specific policies are serving their purposes and are having other kinds of effects, such as reducing environmental inequities, imposing economic costs, or promoting or inhibiting technological change. In this section we show how program evaluation research fits into the policy process and serves an important role in environmental decision making.1

Environmental Policy Making and Implementation

The policy process begins with the recognition of a potential environmental problem and a response by the policy maker, often the legislature (Brewer and deLeon, 1983). The response typically takes the form of a statute imposing requirements on industry or delegating authority to a regulatory agency, like the EPA or Fish and Wildlife Service, to create specific requirements that industry must follow or develop other programs to achieve legislative goals. Legislation is then implemented by federal, state, or local regulatory agencies. Implementation often requires these agencies to establish additional, more specific mandates. At the federal level, for example, environmental and natural resources agencies promulgate hundreds of new regulations each year. These regulations typically fill in gaps about the precise level of environmental protection to be achieved, the type of policy instruments to use to achieve statutory goals, and the time frame for compliance with new regulations.

Policy implementation includes other kinds of choices as well. It can include education, licensing, and grant programs. It also can include the selection of enforcement or other strategies to ensure compliance with policies. Regulatory agencies must make decisions about how they will target firms for enforcement: randomly, in reaction to complaints, based on past history, based on size or other criteria related to the regulatory problem to be solved, or some combination of these or other factors. Moreover, agency inspectors can be instructed to approach their work in an adversarial manner—that is, going “by the book” and issuing citations for any violations found—or in a more cooperative manner whereby regulatory inspectors work with regulated entities to solve problems (Bardach and Kagan, 1982; Scholz, 1984; Hutter, 1989).

FIGURE E-1 A simple model of the environmental policy process.

Regulatory policies are adopted, and then implemented and enforced, in order to change the behavior of a class of businesses or individuals. The ultimate aim of policy making and implementation is to create incentives for individuals and firms to change their behavior in ways that will solve the problems that motivated the adoption of public policy in the first place. If a policy works properly, the behavioral change it induces will in turn result in the desired changes in environmental conditions, public health, or other outcomes. A basic diagram of the environmental policy process is provided in Figure E-1.

Prospective Analysis of Environmental Policy

Empirical analysis can usefully inform several stages of the policy process. During both the policy making and the implementation stages, analysis can inform deliberation and decision making about whether anything should be done to address an environmental problem and, if so, what set of policy instruments or strategies should be used. Currently, there are several different analytical methods used extensively during both policy making and implementation, including risk assessment, cost-effectiveness analysis, and benefit-cost analysis (Stokey and Zeckhauser, 1978). Each of these types of analysis is used prospectively to inform the deliberative process leading up to policy decisions.

Risk assessment characterizes the health or ecological risks associated with exposure to pollution or other hazardous environmental substances or conditions (National Research Council, 1983). It seeks to identify the causal relationships between exposure to specific environmental hazards and specific health or ecological conditions. As such, risk assessment seeks to provide a scientific basis for understanding the potential range of benefits that can be attained from policies that aim to reduce exposure to environmental hazards.2

Benefit-cost analysis seeks to help policy makers identify both the benefits and the costs of specific environmental policies and implementation strategies. It compares different policy or implementation alternatives based on their net benefits—that is, total benefits minus total costs (Arrow et al., 1996). Such analysis is usually conducted in advance of policy making to try to identify regulatory options that will be the most efficient (Viscusi, 1996; Hahn, 1998). As such, benefit-cost analysis usually leads to estimates of expected net benefits from different alternatives.

Cost-effectiveness analysis seeks to identify the lowest-cost means of achieving a specific goal (U.S. Environmental Protection Agency, 2000). Unlike benefit-cost analysis that compares alternatives in terms of both costs and benefits, cost-effectiveness analysis compares alternatives simply in terms of how much they cost in order to achieve a given goal—regardless of whether there will be positive net benefits from achieving this goal. For example, imagine that policy makers seek to reduce carbon dioxide emissions by 20 percent and that several policies could be selected that would achieve this desired level of reduction. Regardless of whether the 20 percent reduction maximizes net benefits, cost-effectiveness analysis can be used to help ensure the lowest-cost means to attain the selected goal.
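
To make the criterion concrete, here is a minimal sketch of the cost-effectiveness comparison just described, in Python; the policy options and cost figures are entirely hypothetical:

```python
# Hypothetical abatement options, each achieving the same 20 percent
# reduction in carbon dioxide emissions. All figures are invented.
options = {
    "fuel tax": 40e6,             # annual cost in dollars
    "emissions trading": 25e6,
    "technology mandate": 70e6,
}

# Cost-effectiveness analysis: with the goal fixed, choose the
# lowest-cost means of attaining it, regardless of whether the goal
# itself maximizes net benefits.
best = min(options, key=options.get)
print(f"Lowest-cost option: {best} (${options[best]:,.0f} per year)")
```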

Economic analyses of costs and benefits, along with risk assessments, are typically used prospectively in the regulatory process, that is, before government officials make decisions. The prospective use of these analytic techniques has expanded greatly in the past 20 years due to evolving professional practices as well as executive orders mandating economic analysis preceding the adoption of new federal regulations that are anticipated to impose $100 million or more in annual compliance costs (Coglianese, 2002; Hahn and Sunstein, 2002). These executive orders have existed under every administration since Ronald Reagan, and government agencies have developed detailed guidance for conducting the required analyses (U.S. Environmental Protection Agency, 2000; U.S. Office of Management and Budget, 2003a).

Retrospective Analysis: Program Evaluation of Environmental Policy

In contrast to the prospective role played by risk assessment and benefit-cost analysis, program evaluation occurs retrospectively, as it seeks to determine the impact of a chosen policy or implementation strategy after it has been adopted. For example, Snyder (2004a) evaluated the impact of pollution prevention planning laws that 14 states adopted in the 1990s. These laws required industrial facilities using toxic chemicals to develop plans for reducing their use of these chemicals. By forcing facilities to plan, these laws were supposed to encourage industry to find opportunities to lower their production costs as well as improve environmental protection. But did they work? Drawing on more than a decade’s worth of data on toxic chemical releases by manufacturing plants in states with and without the planning laws, Snyder (2004a) found that the pollution planning laws had a measurable impact on plants’ environmental performance. The planning laws were associated with a roughly 30 percent decline in releases of toxic chemicals.

Other regulatory policies have been evaluated retrospectively, including hazardous waste cleanup laws (Hamilton and Viscusi, 1999; Revesz and Stewart, 1995), air pollution and other media-specific environmental regulations (U.S. Environmental Protection Agency, 1997; Davies and Mazurek, 1998; Harrington et al., 2000; Chay and Greenstone, 2003), and information disclosure requirements, such as the Toxics Release Inventory (TRI) (Hamilton, 1995; Konar and Cohen, 1997; Khanna et al., 1998; Bui and Mayer, 2003). A variety of innovations in environmental policy have also received retrospective study, including market-based instruments (Stavins, 1998), voluntary programs (Alberini and Segerson, 2002; Arora and Cason, 1995, 1996; Khanna and Damon, 1999), and regulatory contracting programs like EPA’s Project XL (Blackman and Mazurek, 2000; Marcus, Geffen, and Sexton, 2002). In addition, various procedural “policies” have been subject to retrospective evaluation, such as the use of benefit-cost analysis (Morgenstern, 1997; Farrow, 2000; Hahn and Dudley, 2004) and negotiated rule making (Coglianese, 1997, 2001; Langbein and Kerwin, 2000). Finally, researchers have evaluated the impact of various types of enforcement strategies (Shimshack and Ward, 2003; May and Winter, 2000).

Like the Snyder (2004a) study, such retrospective analyses have sought to ascertain what outcomes specific policies have actually achieved.3 Some of these outcomes are the ones the policy was intended to achieve, such as improvements in human health or the biodiversity of an ecosystem. However, program evaluation research also considers other effects, such as whether a policy has had unintended or undesirable consequences. Has it contributed to other problems similar or related to the one the policy was supposed to solve? What kinds of costs has the policy imposed? How are the costs and benefits of the policy distributed across different groups in society? Finally, program evaluation research can also focus on other outcomes including transparency, equity, intrusiveness, technological change, public acceptability, and conflict avoidance, to name a few.

By assessing the performance of environmental policies in terms of various kinds of impacts, retrospective evaluations can inform policy deliberations. Policy makers revisit regulatory standards periodically, sometimes at regular intervals specified in statutes or whenever industry or environmental groups petition for changes. More frequently, existing policies will be used as model solutions for new environmental problems, and so program evaluation of existing policies informs decisions about what policies to use in new situations. For this reason, program evaluation will also provide critical information for prospective analysis of new policy initiatives. By knowing what policies have accomplished in other contexts, prospective analyses—such as benefit-cost analysis—can be grounded in experience as well as theory and forecasting. The accuracy of the estimation strategies used in prospective analyses can also be refined by comparing ex ante estimates with the ex post outcomes indicated in program evaluations. Figure E-2 illustrates the role of program evaluation in the policy process.

FIGURE E-2 Program evaluation in the policy process.

METHODS OF PROGRAM EVALUATION

The goal of program evaluation is to ascertain the causal effect of a “treatment” on one or more “outcomes.” In the field of environmental policy the treatment will often include government-mandated regulations that take the form of a range of policy instruments (Harrington, Morgenstern, and Sterner, 2004; Hahn, 1998). These regulations include technology and performance standards (Coglianese, Nash, and Olmstead, 2003), market-based instruments like emissions trading (Stavins, 2003), information disclosure policies (Kleindorfer and Orts, 1998), and management-based policies such as those requiring firms to develop pollution prevention plans (Coglianese and Lazer, 2003). The treatment could also consist of a variety of implementation strategies, ranging from enforcement strategies and grant requirements to public recognition and waiver programs, including such innovations as the EPA’s Project XL, the National Environmental Performance Track, and the U.S. Department of the Interior’s Habitat Conservation Plans (de Bruijn and Norberg-Bohm, 2001). The treatment could even include international treaties and nongovernmental initiatives that are designed to affect the environment, such as trade association self-regulatory efforts like the chemical industry’s Responsible Care program or the wood and paper industry’s Sustainable Forestry Initiative.

For each treatment to be evaluated, the researcher must obtain reliable measures of outcomes. Outcome measures used in evaluations of environmental policies can include measures of facility or firm environmental performance (e.g., emissions of pollutants, energy use), human health impacts (e.g., days of illness, mortality or morbidity rates), or overall environmental impacts (e.g., acres of wetland, ambient air quality). When the ultimate outcome of concern cannot be directly measured, proxies must be used to assess the impact of a policy. For example, it sometimes is not possible to assess an environmental policy in terms of its impact on reductions in human health risk, but researchers can use measures of pollution reduction as a proxy for the ultimate outcome of risk reduction.

Isolating the Causal Effects of Treatments on Outcomes

The goal of program evaluation is to go beyond simple correlation to estimate the causal effect of the treatment on the outcomes selected for study. A treatment and outcome may be correlated, but the treatment has “worked” only if it has had a causal effect on the outcome. To see how a researcher isolates the causal effect of one policy from all of the other potential explanations for a given change in the outcome, consider a hypothetical government program designed to encourage plant participation in a voluntary program that offers firms incentives for reducing pollution to levels below those needed to comply with existing regulations. The treatment is participation in the program and the outcome measure consists of emissions of pollutants from industrial facilities. In an ideal world, the researcher would observe the level of pollution each facility emits when it does not participate in the voluntary program. Then the researcher—again in an ideal (and imaginary) world—would travel back in time, assign each facility to participate in the program while leaving all other features of the facility unchanged, and observe the level of pollution each produces after it has participated in the program. If the researcher could actually observe, for each facility, both potential outcomes (that is, the outcome with and without treatment), then the causal effect of the program would be a straightforward difference between the pollution levels with and without participation.

Of course, the fundamental problem of causal inference is that researchers cannot travel back in time and reassign facilities from one group to another. In reality, researchers never observe both potential outcomes for any individual plant. They observe only the pollution levels of participating facilities, given that they participated, and the pollution levels for nonparticipating facilities, given that they did not participate. The challenge for program evaluation research is to use observable data to obtain valid estimates of the inherently unobservable difference in potential outcomes between the treatment and nontreatment (or control) groups.
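
This challenge can be stated compactly in the standard potential-outcomes notation (a formalization we add here for exposition; the notation does not appear in the studies cited). For facility i, let Y_i(1) and Y_i(0) denote the outcome with and without treatment, and let D_i indicate actual participation:

```latex
\tau_i = Y_i(1) - Y_i(0) \quad \text{(never directly observed for any } i\text{)}

\underbrace{E[Y_i \mid D_i = 1] - E[Y_i \mid D_i = 0]}_{\text{observed difference in means}}
  = \underbrace{E[Y_i(1) - Y_i(0) \mid D_i = 1]}_{\text{effect on the treated}}
  + \underbrace{E[Y_i(0) \mid D_i = 1] - E[Y_i(0) \mid D_i = 0]}_{\text{selection bias}}
```

The selection-bias term vanishes when treatment is randomly assigned, which is why the experimental designs discussed next serve as the benchmark.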

Methods for Drawing Causal Inferences

How can researchers meet this fundamental challenge and draw reliable inferences about the causal effects of environmental policies?4 If possible, the best approach would be to conduct a policy experiment and rely on random assignment of the treatment. If regulated entities subject to a treatment are assigned at random, then other factors that determine potential outcomes are also likely to be randomly distributed between the treatment and the control group. For example, with random assignment, there should not be any systematic differences in the treatment and control groups in terms of such things as industry characteristics, size of firms, or publicly traded versus privately held ownership. In the case of random assignment, any differences in outcomes between the two groups of entities can be attributed to the treatment.

True random experimental designs are, of course, rare or nonexistent in environmental policy. Regulation, voluntary program participation, and other treatments of interest are not generally randomly assigned. Instead, regulatory status is frequently determined by factors that are also correlated with potential outcomes, such as the size of the facility, the facility’s pollution levels, the age of the facility, and so forth. For environmental policy analysis, researchers will generally be forced to use observational study designs—also referred to as quasi-experimental designs.5 Observational studies do not rely on explicit randomization; rather, they capitalize on “natural” treatment assignments (as a result these studies are also sometimes referred to as natural experiments). Because assignment to treatment is not random in observational studies, and treatment can be correlated with other determinants of potential outcomes, more sophisticated methods are required to isolate the causal effect of the treatment.

In observational studies where strict random assignment does not hold, there may be random assignment conditional on other observable variables. For example, imagine that one state’s legislature passes a new regulation on hazardous waste while another state’s does not. If the two states were quite similar—that is, they had the same types of facilities and the same socioeconomic and demographic variables—then the conditions of random assignment may be effectively met. If the states are not identical (that is, there are some differences in the types of facilities or community demographics), then observed differences in environmental performance across the states may be due to the difference in regulation or to the differences in these other variables. One state, for instance, may simply have larger or older industrial facilities, which affects how much hazardous waste they produce.

Variables that are correlated with the treatment and also with outcomes are called confounders—the presence of these variables confounds researchers’ ability to draw causal inferences from a simple difference in average outcomes. If the confounders can be quantified with available data, however, then they are “observable.”6 If all of the confounders are observable, then the causal effect of regulation could be estimated by examining the difference in outcomes, conditional on the confounding variables. In our hypothetical two-state example, a researcher could estimate the causal effect of the treatment by controlling for confounders such as the size or age of the facilities in both states. The researcher would essentially be comparing the environmental performance of facilities in the two states that have the same size, age, and other characteristics related to the generation of hazardous wastes.

Program evaluation researchers find analytic techniques such as regression and matching estimators to be useful when conditional random assignment holds. Regression analysis estimates a relationship between the outcome measure and a set of variables that may explain or be related to the outcome. One of these explanatory variables is the treatment variable, and the others are the confounders (also called control variables). Regression analysis isolates statistically the effect of the treatment holding all of the control variables constant.

To illustrate, imagine that Massachusetts passes a new law designed to lower pollution levels at all electronics plants. Connecticut also has many electronics plants, but these plants are not subject to the Massachusetts law. Plants in the two states are very similar except that plants in Massachusetts tend to be larger than plants in Connecticut. A regression of pollution levels on a variable that designates whether the plant is in Massachusetts and on another variable that measures plant size will yield an estimate of the effect of the Massachusetts regulation on pollution levels, holding the size of the plant fixed. If size were the only confounder, then this regression would yield a valid estimate of the causal effect of the Massachusetts regulation on pollution levels in electronics plants.
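
A minimal sketch of that regression, using Python’s statsmodels on simulated data for the hypothetical Massachusetts/Connecticut example (all variable names and parameter values are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated plants: the treatment dummy marks Massachusetts plants,
# which tend to be larger (size is the confounder).
in_ma = rng.integers(0, 2, n)
size = 50 + 30 * in_ma + rng.normal(0, 10, n)

# Invented data-generating process: the law cuts pollution by 5 units,
# and larger plants pollute more.
pollution = 100 - 5 * in_ma + 0.8 * size + rng.normal(0, 5, n)

# Regressing pollution on the treatment dummy while controlling for
# size isolates the law's effect holding plant size fixed.
X = sm.add_constant(np.column_stack([in_ma, size]))
fit = sm.OLS(pollution, X).fit()
print(fit.params[1])  # treatment coefficient, close to -5
```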

An alternative statistical technique would be to use a matching estimator. For each observation that is subject to the treatment (such as an industrial facility subject to a regulation), the researcher finds a “matching” observation that is not subject to the treatment. To illustrate, let us return to the hypothetical Massachusetts regulation. To implement a matching estimator in this case, the researcher would take each facility in Massachusetts and find a facility in Connecticut of the same size. The researcher would then calculate the difference in pollution levels for the Massachusetts facility and its matching facility in Connecticut. The average of these differences for all Massachusetts plants is the average effect of the regulation on pollution.

Finding a “match” is relatively easy when there is only one confounder (size of the plant in our example). But what if it is important to control not just for size, but also for age of the facility and socioeconomic characteristics of the community, such as the percent employed in manufacturing, population density, median household income, and so forth? To employ a matching estimator in this case, for each facility in Massachusetts the researcher would need to identify a facility in Connecticut of the same size and age and with the same socioeconomic characteristics. This may not be possible. This problem is often referred to as the “curse of dimensionality” because the number of dimensions (characteristics) on which facilities must be matched is large. One estimation technique that avoids the curse of dimensionality is matching on the propensity score (Rosenbaum and Rubin, 1983). The propensity score is simply the probability of being treated conditional on the control variables. Observations are then matched on the basis of their propensity to receive treatment, rather than on each individual control variable.
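
Here is a simplified sketch of matching on the propensity score, again on simulated data; the logistic regression for the propensity score and the one-to-one nearest-neighbor match are illustrative choices, not a prescription:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Simulated confounders: plant size, plant age, and a community covariate.
X = rng.normal(size=(n, 3))
# Treatment status depends on the confounders (invented coefficients).
treated = (X @ np.array([0.8, 0.5, 0.3]) + rng.normal(0, 1, n)) > 0
# Outcome: pollution depends on the confounders and drops 5 units with treatment.
y = 100 + X @ np.array([3.0, 2.0, 1.0]) - 5 * treated + rng.normal(0, 1, n)

# Step 1: estimate the propensity score, the probability of treatment
# conditional on the confounders, collapsing three dimensions into one.
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated plant to the untreated plant with the
# nearest propensity score, then average the outcome differences.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
effects = []
for i in t_idx:
    j = c_idx[np.argmin(np.abs(pscore[c_idx] - pscore[i]))]
    effects.append(y[i] - y[j])
print("Estimated treatment effect:", np.mean(effects))  # close to -5
```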

Regression and matching estimators assume that all of the confounders are observable. However, there are frequently cases in which unobservable factors are correlated with the treatment as well as with potential outcomes. For example, facilities whose managers have a strong personal commitment to the environment may be more likely to participate in certain types of treatment, such as voluntary or so-called “beyond compliance” programs established by government agencies. However, the managers’ commitment, which will likely be unobservable to the researcher, is also likely to be correlated with the facility’s environmental performance regardless of participation in the program (Coglianese and Nash, 2001). When there are unobservable confounders, standard regression and matching estimators will fail to provide a fully valid estimate of the causal effect of the treatment. In voluntary programs, for example, an ordinary regression estimate will be biased because it reflects not only the effect of the voluntary program but also the effect of managers’ personal commitment to the environment, without being able to separate the two causal factors.

In such cases, alternative estimation strategies need to be used. An estimator known as the differences-in-differences estimator can yield a valid estimate of causal effects if the unobservable differences between the treated and nontreated entities are constant over time. For example, imagine that the researcher has data on two sets of facilities: one that participates in a voluntary environmental program and one that does not. However, these two sets of facilities do not have identical indicators of environmental performance before the program is created. In fact, let us suppose that the facilities that participate in the program have, on average, lower pollution levels even before participation. This is depicted graphically in Figure E-3. It is clear from the figure that it would be incorrect to characterize the difference in environmental performance after the program as the causal effect of the program, since some of that difference existed before the program came into existence. The differences-in-differences estimator assumes that, in the absence of treatment, the difference in environmental performance would have been the same between the two sets of facilities. The dashed line in Figure E-3 represents the hypothetical pollution levels of the “treatment” plants if they never participated in the program. The causal effect of the program is correctly estimated as the incremental decrease in pollution in the posttreatment period, labeled “treatment effect” in Figure E-3.
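
A minimal numerical sketch of the differences-in-differences calculation, with invented group averages:

```python
# Hypothetical average pollution levels (all numbers invented).
participants    = {"before": 80.0, "after": 60.0}
nonparticipants = {"before": 100.0, "after": 95.0}

# A naive after-program comparison conflates the program's effect with
# the preexisting difference between the two groups:
naive = participants["after"] - nonparticipants["after"]  # -35

# Differences-in-differences nets out the preexisting gap, assuming the
# gap would have stayed constant absent the program:
did = (
    (participants["after"] - participants["before"])
    - (nonparticipants["after"] - nonparticipants["before"])
)  # (-20) - (-5) = -15
print(f"Naive estimate: {naive}, DiD estimate: {did}")
```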

Figure E-3 assumes that the unobservable differences are constant over time. If there is good reason to think that they are not, then other estimation strategies will be required. One frequently used estimation technique in such circumstances is the instrumental variables method. To illustrate how this method works, return to the example of a voluntary program where participation is determined, in part, by facility managers’ personal commitment to the environment, something that we assume is generally unobservable to the researcher.

FIGURE E-3 Graphical illustration of the differences-in-differences estimator.

For the sake of illustration, imagine that the regulatory agency administering the voluntary program sent letters inviting facilities to participate and did so to a completely random sample of facilities. Furthermore, assume that, on average, facilities that received the letter were more likely to participate than facilities that did not receive the letter; however, some facilities that received the letter did not participate and some facilities that did not receive a letter nonetheless chose to participate. In such a circumstance, the participation decision is not randomly assigned, and traditional statistical estimates of the effect of participation on outcome measures will be biased by the unobservable differences between participants and nonparticipants.

The instrumental variables estimator capitalizes on the fact that the government agency randomly assigned facilities to receive the invitation letter. Some set of facilities would participate if they received a letter and would not participate if they did not.7 For these facilities only, participation is effectively randomly assigned because the letters were randomly sent. The instrumental variables estimator isolates the effect of participation for those facilities whose participation decisions were determined by whether or not they received a letter.
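
The following sketch illustrates this logic with the simplest instrumental variables estimator for a binary instrument (the Wald estimator); the simulated letter experiment and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Unobserved manager commitment raises participation AND lowers pollution.
commitment = rng.normal(0, 1, n)
# Instrument: the invitation letter, mailed to a random half of facilities.
letter = rng.integers(0, 2, n)
# Participation depends on the letter and on unobserved commitment.
participate = (0.8 * letter + commitment + rng.normal(0, 1, n)) > 0.5
# True effect of participation is a 5-unit pollution reduction.
pollution = 100 - 5 * participate - 3 * commitment + rng.normal(0, 1, n)

# The naive comparison is biased by the unobserved confounder:
naive = pollution[participate].mean() - pollution[~participate].mean()

# Wald/IV estimate: the letter's effect on pollution, scaled by the
# letter's effect on participation (valid because letters were random).
reduced_form = pollution[letter == 1].mean() - pollution[letter == 0].mean()
first_stage = participate[letter == 1].mean() - participate[letter == 0].mean()
print("Naive:", naive, "IV:", reduced_form / first_stage)  # IV close to -5
```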

Although we have only highlighted a few methods for estimating causal effects, it is clear that these methods are fairly well developed and available for use in evaluating the impacts of environmental policies. These methods, however, have not been widely used in the field of environmental policy, even though they are frequently relied on for evaluation research in other fields. Any effort to increase the role of program evaluation in environmental decision making should therefore seek to encourage research that makes use of these kinds of methods so that reliable inferences can be drawn about the causal effects of environmental policies.

Data Availability and Program Evaluation of Environmental Policies

All of the program evaluation methods we have reviewed here depend on valid and reliable data on environmental outcomes and other nonpolicy determinants of environmental outcomes (such as economic and technological factors). In other fields of policy analysis, researchers have available to them longstanding national surveys such as the Current Population Survey, the National Longitudinal Survey of Youth, and the Panel Study of Income Dynamics. For the most part, these kinds of independent longitudinal datasets have not existed for environmental program evaluation.

Much of the data collected on environmental performance are built into the regulations themselves. Thus, researchers have toxics release data available from the TRI, but only on facilities that are subject to the TRI regulations and only for the years during which these regulations have been in effect. Similarly, data are reported by regulated facilities on their air emissions, water discharges, and hazardous waste generation, but these data exist only for the facilities that are regulated under the relevant statutes and for the years in which the regulations have been in effect. This close connection between data and regulation necessarily limits researchers’ ability to evaluate the effects of these regulations as a treatment, since the mandated data are not available for unregulated facilities (the control group). However, these data can be used to evaluate the impact of other policies (e.g., voluntary programs, enforcement strategies) by comparing outcomes for regulated firms subject to the treatment with outcomes for regulated firms not subject to it.

Longitudinal data are available on some ambient environmental conditions (such as air quality), but it is extremely difficult to pinpoint the effects of specific policy changes using these indicators. In most cases it is impossible to use them to identify the effects on individual firms or facilities. Researchers seeking measures of individual firm performance have often used TRI data because they are readily accessible for many, but by no means all, regulated firms. But these data have their limitations too. Most obviously, they do not capture all the impacts firms have on the environment, as the data only cover releases of certain toxic pollutants. Furthermore, these data are self-reported, not adjusted for risk, and reported only by facilities that exceed the established reporting thresholds. All of these factors have been shown to affect the valid use of TRI data as outcome measures for policy evaluation (Graham and Miller, 2001; Snyder, 2004b).8

Researchers have sometimes used other measures of environmental impacts, such as biological oxygen demand or total suspended solids levels in water (Gunningham, Kagan, and Thornton, 2003) or levels of water use (Olmstead, Hanemann, and Stavins, 2003). However, obtaining these measures has generally required intensive collection efforts on the part of researchers that have so far limited the use of these data. To a large extent, the future of program evaluation in environmental policy will therefore be married to the future of environmental reporting and performance measurement (Esty, 2001; Metzenbaum, 1998).9 As we discuss in the next section, this future looks more hopeful than ever before, in part because of new, more uniform and accessible sets of government data on environmental performance.

THE FUTURE OF PROGRAM EVALUATION OF ENVIRONMENTAL POLICY

The idea of subjecting policies to program evaluation research is certainly not new. At about the same time that environmental issues emerged on the federal policy agenda in the 1960s and early 1970s, the federal government also began to emphasize the use of performance evaluations as part of the budgetary process, through efforts such as the Planning, Programming, and Budgeting System, Management by Objectives, and Zero-Based Budgeting. These and other attempts to encourage program evaluations of government programs certainly have spilled over into the field of environmental policy from time to time. Yet, compared with other types of government programs, environmental policy has generated a paucity of systematic program evaluation research. Research funding for environmental program evaluation has lagged behind. Nevertheless, three factors give renewed urgency and ripeness to efforts to expand program evaluation in the field of environmental policy.

First, numerous policy innovations have been implemented in the past 15 years and are now ripe for evaluation. After an initial round of environmental policy making in the early 1970s established the main framework for environmental regulation in the United States, there followed an extended period of concentrated efforts to implement these framework laws. By the late 1980s and early 1990s, however, a variety of factors led to a burst of innovative projects and policies implemented in Washington, D.C., and in the states. This later time period saw the introduction of EPA’s “bubble policy,” the TRI, and state pollution prevention laws, as well as more recent experimentation with a host of so-called voluntary, public recognition, and regulatory contract programs (such as the EPA’s Project XL, the National Environmental Performance Track, and the U.S. Department of the Interior’s Habitat Conservation Planning program).

Many of these programs have now been in place long enough for their results to be measured through sustained empirical inquiry. Importantly, many of these programs apply selectively to a subset of all facilities within an industry or sector. Thus, these policies often make it feasible to compare the behavioral responses of participants and nonparticipants (the treatment and control groups). Of course, this does not imply that isolating the causal effect of these policies will be straightforward. The causal effect of voluntary programs is almost always confounded by differences in facilities that explain the decision to participate in the program in the first place—so-called selection effects. But as we discussed above, methods exist to correct for these confounding factors, and some limited evaluation of these programs that addresses these issues is already under way.10 More research in this area is likely to be highly productive in informing policy makers about which features of different policy initiatives have been successful at improving environmental outcomes, and which have not.

Second, the present climate of government management is reasonably conducive to environmental program evaluation. The Government Performance and Results Act (GPRA) requires that all federal agencies devise specific performance goals and report on their achievement of these goals using performance measures. This focus on performance measures, rather than on administrative measures, such as numbers of inspections or numbers of voluntary participants, increases the need for outcome-based evaluation (Sparrow, 2000). Furthermore, the Office of Management and Budget has developed the Program Assessment Rating Tool (PART) and required that 20 percent of government programs use this tool to evaluate whether they are resulting in significant progress toward public goals (Office of Management and Budget, 2003b). Just as executive orders on the ex ante use of economic analysis for major regulations have given greater prominence to those analytic tools within government agencies, we might expect that GPRA and PART will increase demand within environmental and natural resources agencies for program evaluation research.

Finally, increasing availability and quality of environmental performance data will make it easier for researchers to conduct systematic evaluations of environmental policies. Although the EPA has collected data on air emissions, water discharges, hazardous waste generation, and toxics releases for several decades, in the past these data were collected and maintained separately by the respective program offices within the agency. As a result, each office generated its own metadata and, importantly, its own numbering system for identifying facilities. Thus, the same facility was assigned an Aerometric Information Retrieval System (AIRS) identifier for the air office, a Permit Compliance System identifier for the water office, a TRI identifier for the office of information, and so forth. Researchers hoping to combine data from more than one source were forced to match facilities by hand—usually by name and address. Recently, however, the EPA has instituted a common Facility Registry System identifier. This identifier has been added to all existing EPA databases, allowing researchers to more easily match data on a facility from multiple sources.
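
As an illustration of what the common identifier enables, here is a hypothetical cross-database merge in Python’s pandas; the column names and records are invented, and the actual schemas of EPA’s data systems differ:

```python
import pandas as pd

# Invented example extracts from two separate program offices.
air = pd.DataFrame({
    "frs_id": ["110000001", "110000002"],
    "nox_tons": [120.5, 88.0],
})
tri = pd.DataFrame({
    "frs_id": ["110000001", "110000003"],
    "toxic_releases_lbs": [15000, 7200],
})

# With a shared Facility Registry System identifier, combining data is a
# simple key-based merge rather than error-prone matching on facility
# names and addresses.
combined = air.merge(tri, on="frs_id", how="outer")
print(combined)
```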

Another recent development that is likely to improve environmental policy evaluations is the EPA’s Risk-Screening Environmental Indicators (RSEI) model. The RSEI model combines data on toxics releases from the TRI with scientific indicators of the effect of these releases on health risks. By weighting TRI data, RSEI allows researchers to draw inferences about the health effects of policy interventions.
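
A toy illustration of why such weighting matters (the chemicals are real, but the weights here are invented; RSEI’s actual scores also incorporate fate-and-transport modeling and exposed populations):

```python
# Pounds released (hypothetical) and invented relative toxicity weights.
releases_lbs = {"toluene": 10_000, "benzene": 500}
weights = {"toluene": 1.0, "benzene": 55.0}

# Raw pounds would rank toluene as the bigger problem; the risk-weighted
# scores reverse that ranking (benzene: 27,500 vs. toluene: 10,000).
for chem in releases_lbs:
    print(chem, releases_lbs[chem] * weights[chem])
```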

The EPA has also expanded data on regulatory compliance. The Integrated Data for Enforcement Analysis, Enforcement and Compliance History Online, and Online Tracking Information System provide researchers with easier access to certain kinds of data on enforcement and compliance behavior. From 1998 to 2004, EPA went still further in integrating enforcement, compliance, and environmental performance data through the Sector Facility Indexing Project (SFIP). For five industry sectors—automobile assembly, pulp and paper, petroleum refining, iron and steel production, and metal smelting and refining—the SFIP provided one-stop access to data on the number of inspections, compliance with federal regulations, enforcement actions, toxic release levels, and spills. The SFIP database also provided information about the facility, including production capacity and demographic characteristics of the surrounding area.

Although there is much more work to be done to develop and categorize meaningful metrics (Metzenbaum, 2003), recent developments appear headed in a valuable direction for the future of program evaluation research.11 In Table E-1, we provide information on key types of data currently available for program evaluation of environmental policies. Improvements in data quality and data access, combined with the ripeness of a variety of innovative regulatory instruments and the managerial pressure to evaluate the effectiveness of government programs, suggest that the coming years could be exceptionally promising for program evaluation research on environmental policy.

CONCLUSION

Program evaluation research provides valuable information for policy decision making. The staff and political officials in state and federal regulatory agencies, legislatures, and other oversight bodies (such as the Office of Management and Budget) need to design and implement policies that work to achieve their goals. With information from retrospective evaluations of policies, policy makers will be better able to determine what policies to adopt (and how to design them) in the future. Policy evaluation research can also help identify ways to change existing policies to make them more beneficial.

To be sure, when research shows that policies having intuitive appeal do not yield the anticipated or desired results, some decision makers may remain faithful to their intuitions rather than to what the evidence shows. Resistance to research findings can also occur when actors in the policy process have interests at stake in certain policies. Although these are real considerations, it should be noted that the same was (and still is to a certain extent) the case even in other areas like medicine or education. However, the value of evidence-based practice is only made more compelling when one acknowledges the biases that can otherwise affect decision making.

More program evaluation research should help counteract the skeptical responses to research in the policy process. If a single study demonstrates that a program is effective or ineffective, those who are predisposed to think otherwise may be quick to dismiss the findings. With multiple program evaluation studies on environmental policies, such dismissals will become more difficult to sustain. If several studies reach consistent results, then over time the preponderance of the empirical evidence will be more likely to affect the decisions of policy makers.

Moreover, the reality is that some regulatory officials are receptive to research that can tell them about what works and what does not work. For example, the EPA has recently released a strategy document on environmental management systems that gives priority to the need for careful program evaluation of initiatives in this area (U.S. Environmental Protection Agency, 2004). Both the EPA and the Multi-State Working Group on Environmental Management Systems have sponsored research conferences on management-based strategies for improving environmental performance that have brought together leading researchers from economics and political science (Coglianese and Nash, 2001, 2004).

Only with more efforts to give priority to program evaluation research will decision making over environmental policy come to rest more on careful deliberation than on rhetorical and political contestation. To be sure, program evaluation research probably will neither end political conflict altogether nor immunize policy makers from all error. But it can help sharpen the focus of policy deliberation as well as inform government’s choices about how to allocate scarce resources more effectively. Making program evaluation of environmental policy a priority will be a necessary step toward an evidence-based approach to environmental decision making.

ACKNOWLEDGMENTS

We are grateful for the helpful comments we received from Garry Brewer, Terry Davies, David Heath, Shelley Metzenbaum, Jennifer Nash, Paul Stern, and two anonymous reviewers.

NOTES

1. By the phrase “environmental decision making” we mean to include all policy decisions related to the environment. Although most of the examples throughout this appendix draw on federal pollution-oriented environmental policies in the United States, our discussion applies equally to any type of environmental or natural resources policy decision making at the local, state, federal, and international levels.

2. Risk assessment is not exclusively a scientific enterprise, however, as it often involves making certain policy judgments for which public deliberation may be appropriate (National Research Council, 1996).

3. Sometimes program evaluation researchers distinguish between the “outcomes” and “outputs” of a program. For example, a new enforcement initiative might increase the number of enforcement actions that a regulatory agency brings (an output), but the program evaluation researcher would want to ask whether this new initiative (and the corresponding increase in enforcement actions) actually reduced pollution (an outcome).

4. A comprehensive answer to the question is, of course, beyond the scope of this appendix. For an extensive discussion of the methods of program evaluation research, see Cook and Campbell (1979). King, Keohane, and Verba (1994) also provide a thorough treatment of the methods of qualitative causal inference. Rossi and Freeman (1993) discuss the uses of evaluation methods in the policy process.

5. Rosenbaum (2002) provides a detailed description of a wide range of observational study designs. Angrist and Krueger (1999) offer an excellent summary of program evaluation methods, as applied to labor policies, including substantially more detail on each of the estimation methods discussed here.

6. When we use the terms “observable” and “unobservable” here, we mean what is observable and unobservable from the perspective of the researcher.

7. In the parlance of the instrumental variables literature, these facilities are labeled compliers. This contrasts with always-takers (facilities that would have participated regardless of whether or not they received the letter), never-takers (facilities that would not have participated regardless of whether they received the letter), and defiers (facilities that would have participated if they did not receive a letter, but would not have participated if they did receive a letter). The instrumental variables method provides a valid estimate of the causal effect of the treatment for compliers (Angrist, Imbens, and Rubin, 1996).

8. A more recent concern is that data may be restricted due to concerns about their potential use by terrorists. For the moment, TRI data continue to be publicly available despite these concerns.

9. In addition to requiring good metrics on outcomes (i.e., environmental performance) for both the treatment and the control groups, policy evaluation also requires data on other potential determinants of environmental performance. These include key variables describing the regulated entities (e.g., production processes, production levels, or market characteristics). Although important work on corporate management has begun to emerge (Andrews, 2003; Prakash, 2000; Reinhardt, 2000), data on the behavior of firms remain an area in need of further development.

10. See Alberini and Segerson (2002) for a survey of evaluations of voluntary programs in the environmental policy area; the survey provides detailed references for evaluations that have addressed issues of selection bias.
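As one illustration of how self-selection can be addressed, the sketch below (entirely synthetic) reweights observations by an estimated propensity score in the spirit of Rosenbaum and Rubin (1983): facilities that volunteer differ systematically from those that do not, so a raw comparison is biased while the weighted comparison, under a selection-on-observables assumption, is not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

size = rng.normal(0, 1, n)                   # observed facility trait
# Assumed self-selection: larger facilities volunteer more often.
p_true = 1.0 / (1.0 + np.exp(-1.5 * size))
d = (rng.random(n) < p_true).astype(int)     # 1 = joined the voluntary program

# Assumed outcomes: emissions rise with size; the program cuts them by 10.
y = 100 + 20 * size - 10 * d + rng.normal(0, 5, n)

naive = y[d == 1].mean() - y[d == 0].mean()  # badly confounded by size

# Estimate the propensity score, then apply inverse-propensity weighting.
X = size.reshape(-1, 1)
p_hat = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
ipw = np.mean(d * y / p_hat) - np.mean((1 - d) * y / (1 - p_hat))
print(f"naive: {naive:.1f}, IPW: {ipw:.1f}")  # IPW close to -10
```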

11. In addition to developments in the EPA's data management, promising nongovernmental efforts to study and improve different kinds of environmental metrics have also emerged in recent years (O'Malley, Cavender-Bares, and Clark, 2003; Clark, 2002; Esty and Cornelius, 2002; National Academy of Engineering, 1999).

REFERENCES

Alberini, A., and K. Segerson 2002 Assessing voluntary programs to improve environmental quality. Environmental and Resource Economics 22:157-184.

Andrews, R.N.L. 2003 Environmental Management Systems: Do They Improve Performance? (Final Report of the National Database on Environmental Management Systems.) Chapel Hill, NC: University of North Carolina.

Angrist, J.D., and A.B. Krueger 1999 Empirical strategies in labor economics. In The Handbook of Labor Economics, Vol. 3, O. Ashenfelter, and D. Card, eds. Amsterdam, Holland: Elsevier Science.

Angrist, J.D., G.W. Imbens, and D.B. Rubin 1996 Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434):444-472.

Arora, S., and T.N. Cason 1995 An experiment in voluntary environmental regulation: Participation in EPA’s 33/50 program. Journal of Environmental Economics and Management 28(3):271-286.

1996 Why do firms volunteer to exceed environmental regulation? Understanding participation in EPA’s 33/50 program. Land Economics 72(4):413-432.

Arrow, K.J., M.L. Cropper, G.C. Eads, R.W. Hahn, L.B. Lave, R.G. Noll, P.R. Portney, M. Russell, R. Schmalensee, V.K. Smith, and R.N. Stavins 1996 Is there a role for benefit-cost analysis in environmental, health, and safety regulation? Science 272:221-222.


Bardach, E., and R.A. Kagan 1982 Going by the Book: The Problem of Regulatory Unreasonableness. Philadelphia, PA: Temple University Press.

Blackman, A., and J. Mazurek 2000 The Cost of Developing Site-Specific Environmental Regulations: Evidence from EPA’s Project XL. (Discussion Paper 99-35-REV.) Washington, DC: Resources for the Future.

Brewer, G., and P. deLeon 1983 The Foundations of Policy Analysis. Homewood, IL: The Dorsey Press.

Bui, L.T.M., and C.J. Mayer 2003 Regulation and capitalization of environmental amenities: Evidence from the toxics release inventory in Massachusetts. Review of Economics and Statistics 85(3):693-708.


Chay, K.Y., and M. Greenstone 2003 Air Quality, Infant Mortality and the Clean Air Act of 1970. (NBER Working Paper No. w10053.) Cambridge, MA: National Bureau of Economic Research.

Clark, W.C. 2002 The State of the Nation’s Ecosystems: Measuring the Lands, Waters, and Living Resources in the United States. Cambridge, England: Cambridge University Press.

Coglianese, C. 1997 Assessing consensus: The promise and performance of negotiated rulemaking. Duke Law Journal 46:1255-1349.

2001 Assessing the advocacy of negotiated rulemaking. New York University Environmental Law Journal 9:386-447.

2002 Empirical analysis and administrative law. University of Illinois Law Review 2002:1111-1137.

Coglianese, C., and D. Lazer 2003 Management-based regulation: Prescribing private management to achieve public goals. Law and Society Review 37(4):691-730.

Coglianese, C., and J. Nash 2001 Regulating from the Inside: Can Environmental Management Systems Achieve Policy Goals? Washington, DC: Resources for the Future.

2004 Leveraging the Private Sector: Management-Based Strategies for Improving Environmental Performance. (RPP Report No. 6.) Cambridge, MA: Regulatory Policy Program, Center for Business and Government at the John F. Kennedy School of Government, Harvard University.

Coglianese, C., J. Nash, and T. Olmstead 2003 Performance-based regulation: Prospects and limitations in health, safety, and environmental regulation. Administrative Law Review 55:741-764.

Cook, T.D., and D.T. Campbell 1979 Quasi-Experimentation: Design and Analysis Issues for Field Settings. Boston: Houghton Mifflin Company.


Davies, J.C., and J. Mazurek 1998 Pollution Control in the United States: Evaluating the System. Washington, DC: Resources for the Future Press.

de Bruijn, T., and V. Norberg-Bohm 2001 Voluntary, Collaborative, and Information-Based Policies: Lessons and Next Steps for Environmental and Energy Policy in the United States and Europe. (RPP Report No. 2.) Cambridge, MA: Regulatory Policy Program, Center for Business and Government at the John F. Kennedy School of Government, Harvard University.


Esty, D.C. 2001 Toward data driven environmentalism: The environmental sustainability index. Environmental Law Reporter News and Analysis 31(5):10603-10612.

Esty, D., and P.K. Cornelius, eds. 2002 Environmental Performance Measurement: The Global Report, 2001-2002. Oxford, England: Oxford University Press.


Farrow, S. 2000 Improving Regulatory Performance: Does Executive Office Oversight Matter? AEI-Brookings Joint Center on Regulatory Studies Working Paper 04-01. Available: http://www.aei.brookings.org/publications/related/oversight.pdf [February 28, 2005].


Graham, M., and C. Miller 2001 Disclosure of toxic releases in the United States. Environment 43(8):9-20.

Gunningham, N., R.A. Kagan, and D. Thornton 2003 Shades of Green: Business, Regulation, and Environment. Stanford, CA: Stanford University Press.


Hahn, R., and P.M. Dudley 2004 How Well Does the Government Do Cost-Benefit Analysis? AEI-Brookings Joint Center on Regulatory Studies Working Paper 04-01. Available: http://www.aei-brookings.org/admin/authorpdfs/page.php?id=317 [February 14, 2005].

Hahn, R.W. 1998 Government analysis of the benefits and costs of regulations. Journal of Economic Perspectives 12(4):201-210.

Hahn, R.W., and C.R. Sunstein 2002 A new executive order for improving federal regulation? Deeper and wider cost-benefit analysis. University of Pennsylvania Law Review 150:1489-1552.

Hamilton, J.T. 1995 Pollution as news: Media and stock market reactions to the toxic release inventory data. Journal of Environmental Economics and Management 28(1):98-113.

Hamilton, J.T., and W.K. Viscusi 1999 Calculating Risks?: The Spatial and Political Dimensions of Hazardous Waste Policy. Cambridge, MA: MIT Press.

Harrington, W., R.D. Morgenstern, and P. Nelson 2000 On the accuracy of regulatory cost estimates. Journal of Policy Analysis and Management 19(2):297-322.

Harrington, W., R.D. Morgenstern, and T. Sterner 2004 Choosing Environmental Policy: Instruments and Outcomes in the United States and Europe. Washington, DC: Resources for the Future.

Hutter, B. 1989 Variations in regulatory enforcement styles. Law and Policy 11(2):153-174.


Khanna, M., and L.A. Damon 1999 EPA's voluntary 33/50 program: Impact on toxic releases and economic performance of firms. Journal of Environmental Economics and Management 37(1):1-25.

Khanna, M., W.R.H. Quimio, and D. Bojilova 1998 Toxics release information: A policy tool for environmental protection. Journal of Environmental Economics and Management 36(3):243-266.

King, G., R.O. Keohane, and S. Verba 1994 Designing Social Inquiry. Princeton, NJ: Princeton University Press.

Kleindorfer, P., and E. Orts 1998 Informational regulation of environmental risks. Risk Analysis 18:155-170.

Konar, S., and M.A. Cohen 1997 Information as regulation: The effect of community right to know laws on toxic emissions. Journal of Environmental Economics and Management 32(2):109-124.


Langbein, L., and C. Kerwin 2000 Regulatory negotiation versus conventional rule making: Claims, counterclaims, and empirical evidence. Journal of Public Administration Research and Theory 10:599-632.


Marcus, A., D.A. Geffen, and K. Sexton 2002 Reinventing Environmental Regulation: Lessons from Project XL. Washington, DC: Resources for the Future.

May, P., and S. Winter 2000 Reconsidering styles of regulatory enforcement: Patterns in Danish agro-environmental inspection. Law and Policy 22:143-173.

Metzenbaum, S. 1998 Making Measurement Matter: The Challenge and Promise of Building a Performance-Focused Environmental Protection System. (Report No. CPM-92-2.) Washington, DC: Brookings Institution Center for Public Management.

2003 More nutritious beans. Environmental Forum April/May:18-41.

Morgenstern, R., ed. 1997 Economic Analyses at EPA: Assessing Regulatory Impact. Washington, DC: Resources for the Future.


National Academy of Engineering 1999 Industrial Environmental Performance Metrics: Challenges and Opportunities. Committee on Industrial Environmental Performance Metrics. Washington, DC: National Academy Press.

National Research Council 1983 Risk Assessment in the Federal Government: Managing the Process. Committee on the Institutional Means for Assessment of Risks to Public Health. Commission on Life Sciences. Washington, DC: National Academy Press.

1996 Understanding Risk: Informing Decisions in a Democratic Society. P.C. Stern and H.V. Fineberg, eds. Committee on Risk Characterization. Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.


Office of Management and Budget, Office of Information and Regulatory Affairs 2003a Circular A-4: Regulatory Analysis. Available: http://www.whitehouse.gov/omb/circulars/a004/a-4.pdf [February 14, 2005].

2003b Testimony of the Honorable Clay Johnson III, Deputy Director for Management, before the Committee on Government Reform, U.S. House of Representatives, September 18, 2003. Available: http://www.whitehouse.gov/omb/legislative/testimony/cjohnson/030918_cjohnson.html [February 14, 2005].

Olmstead, S.M., W.M. Hanemann, and R.N. Stavins 2003 Does Price Structure Matter?: Household Water Demand Under Increasing-Block and Uniform Prices. New Haven, CT: School of Forestry and Environmental Studies, Yale University.

O'Malley, R., K. Cavender-Bares, and W.C. Clark 2003 Providing "better" data: Not as simple as it might seem. Environment 45(May):8-18.


Prakash, A. 2000 Greening the Firm: The Politics of Corporate Environmentalism. Cambridge, England: Cambridge University Press.


Reinhardt, F. 2000 Down to Earth: Applying Business Principles to Environmental Management. Boston: Harvard Business School Press.

Revesz, R.L., and R.B. Stewart, eds. 1995 Analyzing Superfund: Economics, Science and Law. Washington, DC: Resources for the Future.

Rosenbaum, P.R. 2002 Observational Studies. New York: Springer-Verlag.

Rosenbaum, P.R., and D.B. Rubin 1983 The central role of the propensity score in observational studies for causal effects. Biometrika 70(1):41-55.

Rossi, P.H., and H.E. Freeman 1993 Evaluation: A Systematic Approach, Fifth Edition. Newbury Park, CA: Sage Publications.


Scholz, J.T. 1984 Cooperation, deterrence and the ecology of regulatory enforcement. Law & Society Review 18:179-224.

Shimshack, J.P., and M.B. Ward 2003 Enforcement and Environmental Compliance: A Statistical Analysis of the Pulp and Paper Industry. Medford, MA: Tufts University.

Snyder, L.D. 2004a Are management-based regulations effective? Evidence from state pollution prevention programs. In Essays on Facility Level Response to Environmental Regulation. Ph.D. dissertation. Cambridge, MA: Harvard University.

2004b Are the TRI Data valid measures of facility-level environmental performance?: Reporting thresholds and truncation bias. In Essays on Facility Level Response to Environmental Regulation. Ph.D. dissertation. Cambridge, MA: Harvard University.

Sparrow, M. 2000 The Regulatory Craft: Controlling Risks, Solving Problems, Managing Compliance. Washington, DC: Brookings Institution Press.

Stavins, R.N. 1998 What can we learn from the grand policy experiment? Positive and normative lessons from SO2 allowance trading. Journal of Economic Perspectives 12(3):69-88.

2003 Experience with market-based environmental policy instruments. In Handbook of Environmental Economics, Volume 1 Environmental Degradation and Institutional Responses, K.-G. Maler, and J.R. Vincent, eds. Amsterdam, Holland: North-Holland Press.

Stokey, E., and R. Zeckhauser 1978 Project evaluation: Benefit-cost analysis. In A Primer for Policy Analysis. New York: W.W. Norton & Company.


U.S. Environmental Protection Agency, EMS Permits and Regulations Workgroup 2004 EPA’s Strategy for Determining the Role of Environmental Management Systems in Regulatory Programs. Washington, DC: U.S. Environmental Protection Agency.

U.S. Environmental Protection Agency, Office of Air and Radiation 1997 Final Report to Congress on Benefits and Costs of the Clean Air Act, 1970 to 1990. (EPA 410-R-97-002.) Washington, DC: U.S. Environmental Protection Agency.

U.S. Environmental Protection Agency, Office of the Administrator 2000 Guidelines for Preparing Economic Analyses. (EPA 240-R-00-003.) Washington, DC: U.S. Environmental Protection Agency.


Viscusi, W.K. 1996 Regulating the regulators. University of Chicago Law Review 63:1423-1461.


TABLE E-1 Data Sources for Program Evaluation of Environmental Policy

DATA ON OUTCOMES

Toxics and Hazardous Waste

Toxics Release Inventory (TRI)
Source: Self-reported by facilities.
Description: Contains data on pounds of chemicals released to air, water, and land, injected underground, and transferred off site. Also includes data on pollution prevention activities and recycling.
Coverage: Manufacturing facilities that meet certain reporting thresholds.

Comprehensive Environmental Response, Compensation and Liability Information System (CERCLIS)
Description: Contains data on Superfund sites, including whether they are on the National Priorities List, ownership information, and dates and descriptions of actions taken.
Coverage: Superfund sites.

Records of Decision
Description: Provides *.pdf files of decisions regarding Superfund sites.
Coverage: Superfund sites.

Resource Conservation and Recovery Act Information (RCRAInfo)
Description: Contains data on hazardous waste generation for large-quantity generators of hazardous waste and disposal information for all treatment, storage, and disposal facilities. Replaces two previously maintained databases, the Biennial Reporting System and the Resource Conservation and Recovery Information System.
Coverage: Generators of hazardous waste and hazardous waste treatment, storage, and disposal facilities.

Water

Permit Compliance System (PCS)
Source: Discharge data are self-reported by facilities; other information is entered and maintained by either the EPA or the states.
Description: Contains data on permit limits, discharge levels, enforcement, and inspection activities.
Coverage: All National Pollutant Discharge Elimination System permit holders.

Safe Drinking Water Information System
Source: Maintained by the EPA or designated states.
Description: Contains data on drinking water contaminant violations and enforcement actions.
Coverage: Public drinking water systems.

Air

Aerometric Information Retrieval System (AIRS) Facility Subsystem
Source: Self-reported by facilities.
Description: Contains data on permits, emissions, inspections, and compliance with air quality standards.
Coverage: All air permit holders.

Compliance and Enforcement

Enforcement and Compliance History Online
Source: Combined enforcement and compliance data from PCS, AIRS, and RCRAInfo.
Description: Contains data on inspection and compliance for water, air, and hazardous waste permit holders.
Coverage: Same as the underlying PCS, AIRS, and RCRAInfo databases.

Integrated Data for Enforcement Analysis
Source: Combined enforcement and compliance data from PCS, AIRS, and RCRAInfo.
Description: Contains data on inspection and compliance for water, air, and hazardous waste permit holders.
Coverage: Same as the underlying PCS, AIRS, and RCRAInfo databases.

DATA ON COVARIATES

Firm Data

Compustat
Source: Standard and Poor's.
Description: Contains income, balance sheet, and cash flow data.
Coverage: Publicly held companies. Data are available by subscription.

Dun & Bradstreet Million Dollar Database
Source: Dun & Bradstreet.
Description: Contains data on sales, employment, industry, and ownership.
Coverage: 1.6 million U.S. and Canadian companies, both private and public. Data are proprietary and available by subscription only.

Plant Data

Dun & Bradstreet Million Dollar Database
Source: Dun & Bradstreet.
Description: Contains employment information at the plant and firm level.
Coverage: 1.6 million U.S. and Canadian companies, both private and public. Data are proprietary and available by subscription only.

Longitudinal Research Database
Source: U.S. Census Bureau.
Description: Contains data from the Census of Manufactures and the Annual Survey of Manufactures, including employment, product classes, and shipments.
Coverage: Available only by approved research proposal at one of eight regional data centers.
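As a purely illustrative sketch of how sources like these might be combined in practice, the snippet below links facility-level outcome data (in the spirit of the TRI) with firm-level covariates (in the spirit of Compustat). All column names, identifiers, and values are invented for demonstration; real extracts use their own identifiers, and matching facilities to parent firms typically requires a separate crosswalk.

```python
import pandas as pd

# Hypothetical facility-year outcome records (TRI-like).
tri = pd.DataFrame({
    "facility_id": ["F1", "F2", "F3"],
    "parent_firm": ["A", "A", "B"],
    "year": [1995, 1995, 1995],
    "releases_lbs": [12_000, 4_500, 30_000],   # outcome measure
})

# Hypothetical firm-year covariates (Compustat-like).
compustat = pd.DataFrame({
    "parent_firm": ["A", "B"],
    "year": [1995, 1995],
    "sales_musd": [850.0, 120.0],              # firm-level covariate
})

# Attach firm-level covariates to each facility-year observation.
panel = tri.merge(compustat, on=["parent_firm", "year"], how="left")
print(panel)
```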
