Decadal Science Strategy Surveys: Report of a Workshop (2007)

4
Lessons Learned and Implications for the Next Decadal Surveys

The afternoon session on November 15 (Session 3) was devoted to the assumptions on which decadal surveys have been or should be based, and the final session on the morning of November 16 (Session 4) featured lessons learned from the preceding workshop sessions and how they might be incorporated into future surveys. (See Appendix A for a detailed agenda and lists of panelists.) Speakers said their observations reflected not only their own opinions but also the views of colleagues in their respective research communities. For example, Megan Urry noted that the NRC Committee on Astronomy and Astrophysics, which she formerly co-chaired, had devoted more than a year to gathering input from relevant committees, experts, agency representatives, and other sources and had used those perspectives to draft a white paper concerning the next astronomy and astrophysics survey. The last two sessions reinforced and elaborated on points introduced in the first two workshop sessions, and their highlights are presented below.

SUSTAINING THE OVERALL VALUE OF SURVEYS

All workshop participants appeared to share the view that these surveys continue to be important to their respective communities and to those who use them to make programmatic, policy, and budget decisions. Forgoing decadal surveys was not viewed as an option, for without them there would be no way to maintain community coherence on priorities; the alternative, in short, would be extensive parochial lobbying for individual programs. Several agency representatives mentioned that they receive plenty of conflicting advice, so a stable, long-term perspective in the form of a decadal survey is very important for their planning.

The discussions indicated that future surveys should perform functions that have been important in the past—namely, they should (1) reflect broad community consensus; (2) present ranked priorities; (3) integrate all aspects of a research program, including measurements, theory, data analysis, technology development, training, and so on; (4) demonstrate the field’s benefits to society; and (5) include plans to market the survey to users.

The discussions identified four important audiences for survey reports:

  • Practitioners in the research community, who expect fairness, wisdom, and advocacy;

  • Students, who expect to see a big picture of the field and how the pieces fit;

  • Federal agencies, which expect direction and judicious advocacy; and

  • OMB and Congress, which expect unambiguous rankings.

Speakers asserted that it is at least as important for policy makers in senior offices of the executive branch (i.e., OSTP and OMB) to understand the consensus within the community (as embodied in a survey report) as it is for the federal agencies themselves. Furthermore, participants argued that it is very important that members of Congress and their staff be fully aware of survey results and have a chance to question survey committee members freely and directly, because the best communicators and advocates for the surveys are the community members themselves.

Participants noted that strong community ownership of a survey is a key to success. Such ownership is possible only if community members, including emerging leaders who will be important players in succeeding decades, have had ample chance to provide input and give guidance to the survey panels. In his comments, workshop panel member Joseph Burns applied his experience as chair of the 1994 Committee on Planetary and Lunar Exploration to the challenge of building broad community support and acceptance. Insisting that solidarity is essential, he suggested that the way to build (and communicate) that solidarity is (1) to use large survey panels, which engage more people who then become agents for the survey; (2) to invite white papers, which engages the entire community; (3) to organize community forums where people can offer views in person; (4) to use Internet tools effectively; and (5) to have an active outreach program after the study is complete. Conversely, issuing decrees from on high, publicizing the outcome, and then ignoring feedback would fail to secure community buy-in.

In response to questions about ensuring independence and objectivity in the survey process, participants tended to accept that absolute independence and objectivity are generally not possible on survey panels. They argued that the goal should be to have people on panels who are highly engaged and passionately interested in the programs, and that the best one can do is to balance vested interests. Surveys will probably always involve some trade-offs and even some overt advocacy, so the NRC and the survey committees need mechanisms to keep that advocacy in check. An effective way to do this, some noted, would be to draw a portion of the survey committee's membership from outside the discipline community: experts who could look at the committee's conclusions dispassionately.

SURVEY DESIGN, STRUCTURE, AND TIMING

Surveys require a lot of work, and there seemed to be general agreement among the workshop participants that roughly 2 years are needed to complete a survey and to ensure community coherence on recommendations. Participants commented on the need for the survey charge to be clear and focused and to avoid open-ended tasks. Surveys also need to be comprehensive (e.g., astronomy surveys should include both ground- and space-based efforts and should span the interests of NASA, NSF, and DOE). The survey charge needs to be fully vetted at the outset to ensure community and agency buy-in and to avoid later delays. Surveys also need to reflect the possibility that their recommendations may affect the programs of agencies other than the survey sponsors, a possibility that survey committees will have to weigh carefully.

Participants noted that the survey committee membership must adequately represent the scientific field but should also include experts in areas beyond the science (e.g., hardware development, program and project management, systems engineering and operations, cost estimation, and policy). Most surveys have used panels to explore subfields, and that approach was widely accepted by workshop participants, who also noted that the panel structure is what enables cross-prioritization, something that some fields have had trouble doing. Finally, some participants emphasized that each field has unique aspects that a survey's design needs to accommodate; a one-size-fits-all approach would not be sensible.

Many participants seemed to feel that 10 years is probably the right interval between surveys. Some suggested that there is value in synchronizing surveys with other key planning activities. For example, agency roadmaps and strategic plans tend to be revised every 3 years and to look out over about 10 years. Hence, decadal surveys can be most useful if they also look out over a decade and are available when agencies begin a cycle of roadmapping and strategic planning. Agency roadmapping and implementation planning are likely to be done more frequently than the surveys, but there needs to be a way to keep agency plans consonant with a survey's strategic priorities.

Providing clear and convincing advice to multiple agencies was noted as being especially challenging for a survey. As one workshop speaker put it with tongue in cheek: “Asymmetries in agency sizes and budgets can promote fear, loathing, and jealousy.” A survey needs, therefore, to identify the right role for each agency and to tailor advice to each agency’s culture. Many appeared to agree that if surveys can rise to that challenge, they have great potential to promote efficiency in science programs and even to improve interagency cooperation.

PRIORITIES, QUEUING, BALANCE, AND PORTFOLIO MIX

Workshop panelists noted that surveys should lead with the science. Every survey report needs a compelling exposition of the science: where the field stands, where it hopes to go, and how and why the science is exciting. Panelists recognized how hard it is to market science and agreed that more emphasis needs to be placed on science advocacy. Scientific communities clearly need to make a better public argument for their science, and the surveys are a good way to do so.

The deliverables from a research program cannot be just another set of interesting questions for scientists to ponder. Thus, while surveys must stay focused on the best science, the recommended programs or missions can still contribute substantially to economic and societal applications. Participants argued that good science underpins sound applications, and surveys have a duty to spell out clearly how important applications can be drawn from basic science missions and programs. They also seemed to accept that for disciplines such as the Earth sciences, applications are the central motivating goals of the field. Applications tend to be interdisciplinary by nature, they said, pushing the envelope of science and forcing a look at the entire system being studied, and they do produce something useful (beyond journal articles). Thus, speakers proposed that a decadal survey’s broad science questions be articulated in a way that is comprehensible and appealing to the public and that, among other things, explains why taxpayer money should be invested in an effort.

Participants fully agreed with the need to establish priorities in the surveys, but there was a range of opinions about exactly how to do that. Most of the discussion was focused on what would be most useful for the agencies—that is, it asked which of the following a survey should aim to present:

  • A single integrated priority list or parallel lists for projects of different size or scope.

  • Lists of specific initiatives or lists of priority science objectives.

  • Recommendations for specific project queues or delegation of queuing to the agencies.

Some participants noted that agencies are in a better position to decide on the queuing of projects than are the survey committees. Several agency representatives mentioned that in today’s funding-strapped world, something will probably have to be given up before something new is started, and they encouraged the surveys to address this issue of triage explicitly. The need is particularly acute because operational costs are now commensurate with development costs for many projects.

Most participants appeared to share the view that surveys should be as broadly inclusive as possible. That is, all elements of the discipline should be evaluated and ranked in the survey, and all aspects of the discipline’s program, from smallest to largest and from major missions and facilities to core research infrastructure, should be placed into scientific and programmatic context and their roles should be made clear to government decision makers and the public. The failure of some recent decadal surveys to explicitly endorse the Explorer program and to include it in priority rankings was cited as an example of how devastating the consequences could be when such program elements are assumed to be sacrosanct and are not treated explicitly. In the Earth sciences, the recent dramatic de-scoping of the National Polar-orbiting Operational Environmental Satellite System (NPOESS), which had once been the principal platform for obtaining climate data from space, was cited as another case where unquestioned assumptions led to major headaches and reassessments for a survey committee.

Similarly, most participants appeared to agree that a priority recommended by an earlier survey but not yet initiated by the time of a new survey should be explicitly reconsidered and reevaluated. If a legacy project has started up and continues to have high merit, it should be fully carried out. But priorities can change over a 5- to 10-year period, so a legacy project that has lost relevance or been superseded should not be allowed to block progress on more important efforts.

Participants also noted that there is value in discussing the synergies between different missions or initiatives that are recommended; queuing and the relative phasing of recommended missions; and the benefits, if any, of conducting particular missions simultaneously, both within a given field and between related fields.

It was clear from the discussion that participants believed the surveys should consider the balance between small, medium, and large projects and the portfolio mix of missions and mission-enabling elements of a comprehensive program—that is, what it takes to do a complete project: a mission, its data analysis, technology development, training, and theory. Participants emphasized that all recommended priorities and portfolio mixes should be driven by, and traceable to, science priorities. A number of speakers noted that larger projects cannot be accomplished without key supporting research and technology efforts.

There were differing views about how surveys should characterize project balance. For example, should a survey recommend a specific mix, such as 60 percent large, 30 percent medium, and 10 percent small projects? To some, such levels would be arbitrary and difficult to defend. To others, they establish metrics for the needed balance and can be used to review whether project cost growth is significantly affecting portfolio balance. There appeared to be considerable interest in having surveys set forth decision rules (principles by which departures from survey recommendations could be decided or implementation decisions could be guided) that allow agencies to cope with exigencies that might threaten program balance. Participants did note that the ability to have a balanced portfolio varies by agency, with some suggesting that NASA is now more focused on human spaceflight than on science.
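
The decision-rule idea lends itself to a toy illustration. The sketch below, in Python, was not presented at the workshop; the 60/30/10 target mix echoes the example in the text, while the tolerance threshold, function name, and cost figures are hypothetical assumptions.

```python
# Illustrative sketch of a portfolio-balance decision rule of the kind
# discussed above. The 60/30/10 target mix echoes the example in the
# text; the 10-point tolerance and the cost figures are hypothetical.

TARGET_MIX = {"large": 0.60, "medium": 0.30, "small": 0.10}
TOLERANCE = 0.10  # maximum tolerated deviation from a target share


def balance_flags(costs_by_class):
    """Return warnings for size classes whose share of total cost
    deviates from the target mix by more than TOLERANCE."""
    total = sum(costs_by_class.values())
    flags = []
    for size_class, target in TARGET_MIX.items():
        actual = costs_by_class.get(size_class, 0.0) / total
        if abs(actual - target) > TOLERANCE:
            flags.append(f"{size_class}: {actual:.0%} share vs. target {target:.0%}")
    return flags


# Example: cost growth on a flagship inflates the large-project share,
# which decision rules of this kind would flag as a trigger for review.
print(balance_flags({"large": 3200.0, "medium": 900.0, "small": 300.0}))
```

A real rule would, of course, embed the survey's science priorities and negotiated thresholds rather than a single number; the sketch shows only the bookkeeping that such a metric implies.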

There was an extended discussion of the fact that the human exploration of space is now a major national goal. Some participants argued that human spaceflight should not be ignored in future decadal surveys. Human spaceflight is relevant in some fields—for example, planetary science and solar and space physics—specifically from the standpoint of how research in those fields will enable future human exploration missions. There are cases, most notably the Hubble Space Telescope (HST), where human spaceflight has directly expanded the scientific capability of a science mission. Human space exploration missions might even enable entirely new research.

Two views were expressed. Some participants were concerned about the field-of-dreams approach, whereby if human exploration goes forward people will feel compelled to gravitate toward those programs to use them as research opportunities. These concerns recall the rueful experience of the scientific community when NASA promised abundant research opportunities from the space shuttle and the International Space Station. Other participants argued that while the scientific opportunities that might accompany human spaceflight should not be ignored, they should be considered from a science-first perspective. That is, science that is enabled by human spaceflight should be prioritized along with other science and against the same criteria, not given higher priority simply because it is aligned with current NASA policy.

There was also a sense that science to enable human space exploration is an appropriate topic for future decadal surveys but that, in those cases, the science and the applications should be treated separately: priorities for human exploration applications should not be intermixed with priorities for fundamental science.

RESILIENCE, EXECUTION, TECHNOLOGY READINESS, AND RISK

The panelists and other participants discussed at length (1) how to make surveys more resilient to future (scientific, technical, programmatic, or political) changes, (2) how much survey recommendations should push the likely programmatic, technological, and budget limits, and (3) how to assess the feasibility of a project and its difficulty of execution. Recent dramatic changes in agency planning environments bring a basic instability to the system that cannot be eliminated but that must be managed. One thing is predictable: Changes will occur. Participants cited changing political agendas (the announcement of the Vision for Space Exploration), apparent changes in agency priorities (removal of “protect the home planet” from the NASA mission statement), budget pressures, technical challenges (restructuring of NPOESS), unanticipated disasters (the Columbia accident and Hurricane Katrina), and unanticipated scientific results (e.g., discovery of the ozone hole, compelling evidence for dark energy). Past surveys have had to deal with issues such as these but lacked robust ways to address them. Therefore, many participants agreed that a balanced portfolio of missions provides resilience so long as the survey includes some decision rules on what to do if missions get in trouble or if there are radical shifts in budgets or politics. It may be difficult to predict so far in advance what these decision rules should include, but it was clear from the discussions that participants believed having them would help keep a survey flexible in the face of change.

Many participants argued that including competitively selected, smaller missions (e.g., Explorer, Discovery, and Earth Probe missions) also makes the surveys resilient. Small missions are of strategic value because they can respond more quickly to new discoveries or changes in available resources, contribute to a higher mission launch rate, accept more technological risk, provide an effective vehicle for training students and new engineers and scientists in space project design and management, and facilitate testing and demonstration of new technologies. Participants also agreed that insisting on sufficient technology development and mission planning can substantially improve the resilience of future surveys and ensure that immature mission concepts are avoided. They also seemed to agree that it is quite unlikely that a new large mission would arise ab initio and be ready for initiation in the 2 years it takes to carry out a survey.

Speakers also noted that it is worth working with survey sponsors and affected agencies to get the best sense of realistic budget and policy environments (i.e., the likely funding wedge) during the period when the survey recommendations are to be implemented. That said, many thought that better cost estimating was more important than a better planning wedge, since projected budget wedges come and go so quickly in the annual federal budgeting process. This will require effective communication during the survey period, which is important for ensuring strong partnership between the survey committee and various stakeholders (e.g., agencies and Congress).

There were different views about providing multiple budget scenarios as a means of building resilience into the surveys. Some participants thought that including multiple scenarios was a good idea, but others suspected that the lowest-cost alternative would tend to be selected, thereby undermining the broad scientific reach of a survey.

Most participants seemed to feel that decadal surveys should be held in high regard and should not (“like the Constitution”) be revised arbitrarily, largely because of the arduous, carefully balanced process of securing broad community buy-in on their recommendations. However, some participants said that there can surely come a point when cost growth or policy changes warrant revisiting a survey; such a trigger point should be built into the survey if possible. Speakers cited examples such as a flagship mission’s cost growth or a major event like a space shuttle accident or the deletion of climate observation from NPOESS, any of which could be viewed as destroying the balance of the overall program recommended by a survey. It was pointed out that seldom is there a specific trigger; more likely it is a slippery slope that leads to big change (e.g., the James Webb Space Telescope (JWST) project had spent more than $1.2 billion before a good preliminary design review (PDR) cost estimate could be obtained).

Some past survey chairs said that NASA’s internal roadmapping activities or advisory committees had deviated from survey priorities and intent and that there should be a mechanism to follow up on an agency’s execution of a survey. Many participants voiced strong support for keeping survey committees together for the 10-year life of a survey so they can monitor implementation of the survey’s recommendations and participate in any needed revisions. They noted that standing oversight committees (e.g., the Committee on Astronomy and Astrophysics and the Committee on Earth Studies) should be able to address specific issues that arise between surveys. Most of the agency representatives strongly agreed with the need for decision rules and trigger points in surveys.

COST ESTIMATES

The session panelists and other participants discussed how surveys should handle cost estimates and who should prepare them. A clear message was that cost-estimating problems plague every agency (e.g., NASA, DOD, NOAA, and NSF) and that the quality of the estimates depends on a project’s design maturity and size. One speaker noted that there is a “conspiracy of optimism” to get new initiatives into the queue but that priorities need to be matched with technological readiness and mature cost estimates. The impacts of poor cost estimates included loss of balance in project sizes, erosion of support for the core research program, limited capacity to accomplish survey recommendations, and loss of scientifically important overlap of mission phases.

While growth in the JWST cost estimates was seen as having crowded out other important investments, the mission is still seen as holding great scientific promise and supporting a healthy research and data analysis program. Some participants asked whether the community would have supported JWST at the time of the 2000 decadal survey if it had suspected the potential for cost growth and the likely current budget environment. How can a survey committee foresee such an outcome? A NASA representative said that the 2000 survey had not helped NASA evaluate the opportunity cost of the JWST when it started to overrun its budget significantly, nor did it provide guidance on whether there should be a point in JWST costs beyond which its priority should change vis-à-vis the other astronomy program elements.

Many participants noted that it is unlikely that one can get credible cost estimates until a PDR is completed. This poses a dilemma, because most projects that are considered in a decadal survey are far from the PDR stage. Indeed, they are more likely to be mission concepts than defined missions at that time. Some speakers suggested that the surveys should either insist on better cost estimates or set objectives and define cost caps tied to a level of effort determined by science priorities. Getting better estimates requires that a mission be well defined, which can be very time consuming to achieve. Setting objectives and cost caps is more feasible and provides agencies with mission design flexibility.

Independent cost estimates, it was agreed, would add value to surveys, and a cost uncertainty index would help define the risk of cost growth. Cost estimates tend to be treated as gospel, so surveys need to be very careful with them; where there are doubts, cost ranges are preferred. NASA and NSF representatives said that life-cycle costs are becoming ever more important and must be considered by a survey if possible. Several agency representatives added that while survey committees do not need to be project managers, they do need to help establish the scope of missions and provide rough guidance on what to do if problems arise. Uniform cost-estimating tools should be used within a survey to facilitate cost comparisons between initiatives, and uncertainties should be identified wherever possible (i.e., by estimating confidence levels and ranges rather than specific costs).
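
The last point lends itself to a small worked example. The sketch below shows one way a survey’s cost tables could carry ranges and confidence levels instead of point estimates; the CostEstimate class, mission names, and dollar figures are invented for illustration and do not come from the workshop.

```python
# Hypothetical sketch of expressing cost estimates as ranges with
# confidence levels rather than point values, per the discussion above.
from dataclasses import dataclass


@dataclass
class CostEstimate:
    mission: str
    low: float         # lower bound of the estimate, in $M
    high: float        # upper bound of the estimate, in $M
    confidence: float  # probability that the actual cost falls in [low, high]

    def describe(self) -> str:
        return (f"{self.mission}: ${self.low:.0f}M to ${self.high:.0f}M "
                f"at {self.confidence:.0%} confidence")


# One uniform representation across a survey makes initiatives directly
# comparable, as the participants suggested.
for estimate in (CostEstimate("Mission A", 450, 700, 0.70),
                 CostEstimate("Mission B", 900, 1600, 0.70)):
    print(estimate.describe())
```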

INTERAGENCY COLLABORATION

Panel members were asked to consider how interagency plans, missions, and mission opportunities should be treated in decadal surveys. There were many examples of interagency projects that became very problematic in surveys. Most notably, while both NASA and NOAA support climate observations on NPOESS, the program is strongly driven by DOD, which has a compelling interest in short-term meteorology but relatively little interest in climate change research. Sometimes important agencies have not even been part of the survey sponsorship; for example, DOE was not a sponsor of the astronomy and astrophysics survey in 2000. There was also much discussion about the disconnect between NASA and NOAA: the former focuses on research, the latter on observational monitoring. Yet, speakers noted that NOAA cannot really perform its observational mission without NASA research, and that NASA either has failed to see the value of long-term monitoring for science or applies a double standard, in that HST, a pure science mission, has monitored the skies for more than 16 years. There are no clear lines between research and operations for either NASA or NOAA.

INTERNATIONAL COLLABORATION

Participants suggested that international collaboration is valuable and that many missions have been enabled or enhanced by international participation. However, they also suggested that the management complications that arise in such collaborations might limit any cost savings, as might the extra demands imposed by International Traffic in Arms Regulations restrictions, visa restrictions, international coordination, and differences in the ways that foreign agencies make plans and budget commitments. Speakers cited the need to avoid duplicating or competing with foreign missions, and they noted that international participation might help stabilize a program (because, for example, an agency would be reluctant to drop a project with international partners).
