The Future of Computing Performance: Game Over or Next Level? (2011)

Appendix B

Biographies of Committee Members and Staff

Samuel H. Fuller (Chair), NAE, is the CTO and vice president of research and development at Analog Devices, Inc. (ADI) and is responsible for its technology and product strategy. He also manages university research programs and advanced development initiatives and supports the growth of ADI product-design centers around the world. Dr. Fuller has managed the development of EDA tools and methods and the design of digital signal processors, and he has sponsored the development of advanced optoelectronic integrated circuits. Before joining ADI in 1998, Dr. Fuller was vice president of research at Digital Equipment Corporation, where he built the company’s corporate research programs, which included laboratories in Massachusetts, California, France, and Germany. While at Digital, he initiated work in local-area networking, RISC processors, distributed systems, and Internet search engines. He was also responsible for research programs with universities; the Massachusetts Institute of Technology’s Project Athena was one of the major programs. Earlier, Dr. Fuller was an associate professor of computer science and electrical engineering at Carnegie Mellon University, where he led the design and performance evaluation of experimental multiprocessor computer systems. He holds a BS in electrical engineering from the University of Michigan and an MS and a PhD from Stanford University. He is a member of the board of Zygo Corporation and the Corporation for National Research Initiatives and serves on the Technology Strategy Committee of the Semiconductor Industry Association. Dr. Fuller has served on several National Research Council studies, including the one that produced Cryptography’s Role in Securing the Information Society, and was a founding member of the Research Council’s Computer Science and Telecommunications Board. He is a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science and a member of the National Academy of Engineering.

Luiz André Barroso is a Distinguished Engineer at Google Inc., where his work has spanned a number of fields, including software-infrastructure design, fault detection and recovery, power provisioning, networking software, performance optimizations, and the design of Google’s computing platform. Before joining Google, he was a member of the research staff at Compaq and Digital Equipment Corporation, where his group did some of the pioneering work on processor and memory-system design for commercial workloads (such as database and Web servers). The group also designed Piranha, a scalable shared-memory architecture based on single-chip multiprocessing; the work on Piranha has had an important impact on the microprocessor industry, helping to inspire many of the multicore central processing units that are now in the mainstream. Before joining Digital, he was one of the designers of the USC RPM, an FPGA-based multiprocessor emulator for rapid hardware prototyping. He has also worked at IBM Brazil’s Rio Scientific Center and lectured at PUC-Rio (Brazil) and Stanford University. He holds a PhD in computer engineering from the University of Southern California and a BS and an MS in electrical engineering from the Pontifícia Universidade Católica, Rio de Janeiro.

Robert P. Colwell, NAE, was Intel’s chief IA32 (Pentium) microprocessor architect from 1992 to 2000 and managed the IA32 Architecture group at Intel’s Hillsboro, Oregon, facility through the P6 and Pentium 4 projects. He received the 2005 Eckert-Mauchly Award and was elected to the National Academy of Engineering in 2006 “for contributions to turning novel computer architecture concepts into viable, cutting-edge commercial processors.” He was named an Intel fellow in 1996 and a fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2006. Previously, Dr. Colwell was a central processing unit architect at the VLIW minisupercomputer pioneer Multiflow Computer, a hardware-design engineer at the workstation vendor Perq Systems, and a member of technical staff at Bell Labs. He has published many technical papers and journal articles, is an inventor or coinventor on 40 patents, and has participated in numerous panel sessions and invited talks. He is the Perspectives editor for IEEE’s Computer magazine, wrote its At Random column from 2002 to 2005, and is the author of The Pentium Chronicles, a behind-the-scenes look at modern microprocessor design. He is currently an independent consultant. Dr. Colwell holds a BSEE from the University of Pittsburgh and an MSEE and a PhD from Carnegie Mellon University.

William J. Dally, NAE, is the Willard R. and Inez Kerr Bell Professor of Engineering at Stanford University and chair of the Computer Science Department. He is also chief scientist and vice president of NVIDIA Research. He has done pioneering development work at Bell Telephone Laboratories, the California Institute of Technology, and the Massachusetts Institute of Technology, where he was a professor of electrical engineering and computer science. At Stanford University, his group has developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations. Dr. Dally has worked with Cray Research and Intel to incorporate many of those innovations into commercial parallel computers and with Avici Systems to incorporate the technology into Internet routers, and he cofounded Velio Communications to commercialize high-speed signaling technology and Stream Processors to commercialize stream-processor technology. He is a fellow of the Institute of Electrical and Electronics Engineers and of the Association for Computing Machinery (ACM) and has received numerous honors, including the ACM Maurice Wilkes Award. He has published more than 150 papers and is an author of the textbooks Digital Systems Engineering (Cambridge University Press, 1998) and Principles and Practices of Interconnection Networks (Morgan Kaufmann, 2003). Dr. Dally is a member of the Computer Science and Telecommunications Board (CSTB) and was a member of the CSTB committee that produced the report Getting Up to Speed: The Future of Supercomputing.

Dan Dobberpuhl, NAE, cofounder, president, and CEO of P. A. Semi, has been credited with fundamental breakthroughs in the evolution of high-speed and low-power microprocessors. Before starting P. A. Semi, Mr. Dobberpuhl was vice president and general manager of the broadband processor division of Broadcom Corporation. He came to Broadcom through its 2000 acquisition of SiByte, Inc., the company he had founded in 1998. Before that, he worked for Digital Equipment Corporation for more than 20 years, where he was credited with some of the most fundamental breakthroughs in microprocessor technology. In 1998, EE Times named Mr. Dobberpuhl one of the “40 forces to shape the future of the Semiconductor Industry.” In 2003, he was awarded the prestigious IEEE Solid-State Circuits Award for “pioneering design of high-speed and low-power microprocessors.” In 2006, Mr. Dobberpuhl was elected to the National Academy of Engineering for “innovative design and implementation of high-performance, low-power microprocessors.” Mr. Dobberpuhl holds 15 patents and has written many publications related to integrated circuits and central processing units, and he is a coauthor of the seminal textbook Design and Analysis of VLSI Circuits (Addison-Wesley, 1985). He holds a bachelor’s degree in electrical engineering from the University of Illinois.

Pradeep Dubey is a senior principal engineer and director of the Parallel Computing Lab, part of Intel Labs at Intel Corporation. His research focuses on computer architectures that can efficiently handle new application paradigms for the future computing environment. Dr. Dubey previously worked at IBM’s T. J. Watson Research Center and at Broadcom Corporation. He was one of the principal architects of the AltiVec multimedia extension to the PowerPC architecture. He also worked on the design, architecture, and performance issues of various microprocessors, including the Intel i386, i486, and Pentium processors. He holds 26 patents and has published extensively. Dr. Dubey received a BS in electronics and communication engineering from the Birla Institute of Technology, India, an MSEE from the University of Massachusetts at Amherst, and a PhD in electrical engineering from Purdue University. He is a fellow of the Institute of Electrical and Electronics Engineers.

Mark D. Hill is a professor in both the Computer Sciences Department and the Electrical and Computer Engineering Department at the University of Wisconsin-Madison. Dr. Hill’s research targets the memory systems of multiple-processor and single-processor computer systems. His work emphasizes quantitative analysis of system-level performance. His research interests include parallel computer-system design (for example, memory-consistency models and cache coherence), memory-system design (for example, caches and translation buffers), computer simulation (for example, parallel systems and memory systems), and software (page tables and cache-conscious optimizations for databases and pointer-based codes). He is the inventor of the widely used 3C model of cache behavior (compulsory, capacity, and conflict misses). Dr. Hill’s current research is mostly part of the Wisconsin Multifacet Project that seeks to improve the multiprocessor servers that form the computational infrastructure for Internet Web servers, databases, and other demanding applications. His work focuses on using the transistor bounty predicted by Moore’s law to improve multiprocessor performance, cost, and fault tolerance while making these systems easier to design and program. Dr. Hill is a fellow of the Association for Computing Machinery (ACM) (2004) for contributions to memory-consistency models and memory-system design and a fellow of the Institute of Electrical and Electronics Engineers (2000) for contributions to cache-memory design and analysis. He was named a Wisconsin Vilas Associate in 2006, was a co-winner of the best-paper award in VLDB in 2001, was named a Wisconsin Romnes fellow in 1997, and won a National Science Foundation Presidential Young Investigator award in 1989. He is a director of ACM SIGARCH, coeditor of Readings in Computer Architecture (2000), and coinventor on 28 U.S. patents (several coissued in the European Union and Japan). He has held visiting positions at Universidad Politecnica de Catalunya (2002-2003) and Sun Microsystems (1995-1996). Dr. Hill earned a PhD in computer science from the University of California, Berkeley (UCB) in 1987, an MS in computer science from UCB in 1983, and a BSE in computer engineering from the University of Michigan-Ann Arbor in 1981.

Mark Horowitz, NAE, is the associate vice provost for graduate education, working on special programs, and the Yahoo! Founders Professor of the School of Engineering at Stanford University. In addition, he is chief scientist at Rambus Inc. He received his BS and MS in electrical engineering from the Massachusetts Institute of Technology in 1978 and his PhD from Stanford in 1984. Dr. Horowitz has received many awards, including a 1985 Presidential Young Investigator Award, the 1993 ISSCC Best Paper Award, the ISCA Most Influential Paper Award in 2004 (for a paper published in 1989), and the 2006 Don Pederson IEEE Technical Field Award. He is a fellow of the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery and is a member of the National Academy of Engineering. Dr. Horowitz’s research interests are quite broad, spanning the use of electrical engineering and computer science analysis methods on problems in molecular biology and the creation of new design methods for analog and digital very-large-scale integration (VLSI) circuits. He has worked on many processor designs, from early RISC chips to some of the first distributed shared-memory multiprocessors, and he is currently working on on-chip multiprocessor designs. Recently, he has worked on a number of problems in computational photography. In 1990, he took leave from Stanford to help start Rambus Inc., a company designing high-bandwidth memory-interface technology, and he has continued work in high-speed I/O at Stanford. His current research includes multiprocessor design, low-power circuits, high-speed links, computational photography, and applying engineering to biology.

David Kirk, NAE, was NVIDIA’s chief scientist from 1997 to 2009 and is now an NVIDIA fellow. His contributions include leading NVIDIA graphics-technology development for today’s most popular consumer entertainment platforms. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, he was chief scientist and head of technology for Crystal Dynamics, a video-game manufacturing company. From 1989 to 1991, Dr. Kirk was an engineer for the Apollo Systems Division of Hewlett-Packard Company. He is an inventor on 50 patents and patent applications related to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds a BS and an MS in mechanical engineering from the Massachusetts Institute of Technology and an MS and a PhD in computer science from the California Institute of Technology.

Monica Lam is a professor of computer science at Stanford University, having joined the faculty in 1988. She has contributed to research on a wide array of computer-systems topics, including compilers, program analysis, operating systems, security, computer architecture, and high-performance computing. Her recent research focus is to make computing and programming easier. In the Collective Project, she and her research group developed the concept of a livePC: on each reboot, subscribers to a livePC automatically run the latest published PC virtual image. That approach allows computers to be managed scalably and securely. In 2005, the group started a company called moka5 to transfer the technology to industry. In another research project, her program-analysis group has developed a collection of tools for improving software security and reliability. They developed the first scalable context-sensitive inclusion-based pointer analysis and a freely available tool called BDDBDDB that allows programmers to express context-sensitive analyses simply by writing Datalog queries. Other tools include Griffin, a static and dynamic analysis for finding security vulnerabilities, such as SQL injection, in Web applications; a static and dynamic program query language called PQL; a static memory-leak detector called Clouseau; a dynamic buffer-overrun detector called CRED; and a dynamic error-diagnosis tool called DIDUCE. Previously, Dr. Lam led the Stanford University Intermediate Format Compiler project, which produced a widely used compiler infrastructure known for its locality optimizations and interprocedural parallelization. Many of the compiler techniques that she developed have been adopted by industry. Her other research projects included the architecture and compiler for the CMU Warp machine, a systolic array of very-long-instruction-word processors, and the Stanford DASH distributed shared-memory machine. In 1998, she took a sabbatical leave from Stanford University to help start Tensilica Inc., a company that specializes in configurable processor cores. She received a BSc from the University of British Columbia in 1980 and a PhD in computer science from Carnegie Mellon University in 1987.

Suggested Citation:"Appendix B: Biographies of Committee Members and Staff." National Research Council. 2011. The Future of Computing Performance: Game Over or Next Level?. Washington, DC: The National Academies Press. doi: 10.17226/12980.
×

Kathryn S. McKinley is a professor at the University of Texas at Austin. Her research interests include compilers, runtime systems, and architecture. Her research seeks to enable high-level programming languages to achieve high performance, reliability, and availability. She and her collaborators have developed compiler optimizations for improving memory-system performance, high-performance garbage-collection algorithms, scalable explicit-memory-management algorithms for parallel systems, and cooperative dynamic optimizations for improving the performance of managed languages. She leads the compiler effort for the TRIPS project, which explores scalable performance improvements through explicit dataflow graph execution architectures. Her honors include being named an Association for Computing Machinery (ACM) Distinguished Scientist and receiving a National Science Foundation Career Award. She is co-editor-in-chief of ACM Transactions on Programming Languages and Systems (TOPLAS). She is active in increasing minority-group participation in computer science and, for example, co-led the CRAW/CDC Programming Languages Summer School with Daniel Jimenez in 2007. She has published over 75 refereed articles and has supervised eight PhDs. Dr. McKinley holds a BA (1985) in electrical engineering and computer science and an MS (1990) and a PhD (1992) in computer science, all from Rice University.

Charles Moore is an Advanced Micro Devices (AMD) corporate fellow and the CTO of AMD’s Technology Group. He is the chief engineer of AMD’s next-generation processor design. His responsibilities include interacting with key customers to understand their requirements, identifying important technology trends that may affect future designs, and architectural development and management of the next-generation design. Before joining AMD, Mr. Moore was a senior industrial research fellow at the University of Texas at Austin, where he did research on technology-scalable computer architecture. Before that, he was a distinguished engineer at IBM, where he was the chief engineer on the POWER4 project. Earlier, he was the coleader of the first single-chip POWER architecture implementation and the coleader of the first PowerPC implementation, used by Apple Computer in its PowerMac line of personal computers. While at IBM, he was elected to the IBM Academy of Technology and was named an IBM master inventor. He has been granted 29 US patents and has several others pending. He has published numerous conference papers and articles on a wide array of subjects related to computer architecture and design. He is on the editorial board of IEEE Micro magazine and on the program committees of several important industry conferences. Mr. Moore holds a master’s degree in electrical engineering from the University of Texas at Austin and a bachelor’s degree in electrical engineering from Rensselaer Polytechnic Institute.

Katherine Yelick is a professor in the Computer Science Division of the University of California, Berkeley. The main goal of her research is to develop techniques for obtaining high performance on a wide array of computational platforms and to ease the programming effort required to obtain that performance. Dr. Yelick is perhaps best known for her work on global address space (GAS) languages, which attempt to present the programmer with a shared-memory model for parallel programming. Those efforts led to the design of Unified Parallel C (UPC), which merged some of the ideas of three shared-address-space dialects of C: Split-C, AC (from IDA), and PCP (from Lawrence Livermore National Laboratory). In recent years, UPC has gained recognition as an alternative to message-passing programming for large-scale machines. Compaq, Sun, Cray, HP, and SGI are implementing UPC, and she is currently leading a large effort at Lawrence Berkeley National Laboratory to implement UPC on Linux clusters and IBM machines and to develop new optimizations. The language provides a uniform programming model for both shared-memory and distributed-memory hardware. She has also worked on other global-address-space languages, such as Titanium, which is based on Java. She has done notable work on single-processor optimizations, including techniques for automatically optimizing sparse-matrix algorithms for memory hierarchies. She has also worked on architectures for memory-intensive applications, in particular the use of logic mixed with DRAM on a single chip, which avoids off-chip accesses to DRAM and thereby gains bandwidth while lowering latency and energy consumption. In the IRAM project, a joint effort with David Patterson, she developed an architecture to take advantage of that technology. The IRAM processor is a single-chip system designed for low power and high performance in multimedia applications and achieves an estimated 6.4 gigaops per second in a 2-W design. Dr. Yelick received her bachelor’s degree (1985), master’s degree (1985), and PhD (1991) in electrical engineering and computer science from the Massachusetts Institute of Technology.

STAFF

Lynette I. Millett is a senior program officer and study director at the Computer Science and Telecommunications Board (CSTB), National Research Council of the National Academies. She currently directs several CSTB projects, including a study to advise the Centers for Medicare and Medicaid Services on future information-systems architectures and a study examining opportunities for computing research to help meet sustainability challenges. She served as the study director for the CSTB report Social Security Administration Electronic Service Provision: A Strategic Assessment. Ms. Millett’s portfolio includes substantial portions of CSTB’s recent work on software, identity systems, and privacy. She directed, among other projects, those that produced Software for Dependable Systems: Sufficient Evidence?, an exploration of fundamental approaches to developing dependable mission-critical systems; Biometric Recognition: Challenges and Opportunities, a comprehensive assessment of biometric technology; Who Goes There? Authentication Through the Lens of Privacy, a discussion of authentication technologies and their privacy implications; and IDs—Not That Easy: Questions About Nationwide Identity Systems, a post-9/11 analysis of the challenges presented by large-scale identity systems. She has an MSc in computer science from Cornell University, where her work was supported by graduate fellowships from the National Science Foundation and the Intel Corporation, and a BA with honors in mathematics and computer science from Colby College, where she was elected to Phi Beta Kappa.

Shenae Bradley is a senior program assistant at the Computer Science and Telecommunications Board of the National Research Council. She currently provides support for several committees, including the Committee on Sustaining Growth in Computing Performance, the Committee on Wireless Technology Prospects and Policy Options, and the Computational Thinking for Everyone: A Workshop Series Planning Committee. Before that, she served as an administrative assistant for the Ironworker Management Progressive Action Cooperative Trust and managed a number of apartment rental communities for Edgewood Management Corporation in the Maryland/DC/Delaware metropolitan areas. Ms. Bradley is pursuing a BS in family studies at the University of Maryland at College Park.
