Appendix B

White Papers

Six workshop participants, through a keynote presentation and associated white paper, were tasked with presenting a vision that would help guide the deliberations of the workshop participants. Each discussed a key component of earthquake engineering research—community, lifelines, buildings, information technology, materials, and modeling and simulation—and considered the four cross-cutting dimensions—community resilience, pre-event prediction and planning, design of infrastructure, and post-event response and recovery. The white papers were distributed to all participants prior to the workshop, and they are published here in their original form. Final responsibility for their content rests entirely with the individual author.



TRANSFORMATIVE EARTHQUAKE ENGINEERING RESEARCH AND SOLUTIONS FOR ACHIEVING EARTHQUAKE-RESILIENT COMMUNITIES

Laurie A. Johnson, PhD, AICP
Principal, Laurie Johnson Consulting | Research

Summary

This paper is prepared for the National Science Foundation–sponsored, and National Research Council–led, Community Workshop to describe the Grand Challenges in Earthquake Engineering Research, held March 14–15, 2011, in Irvine, California. It offers ideas to help foster workshop discussions on transformative earthquake engineering research and achieving earthquake resilience in communities. Over the next 50 years, America's population will exceed 400 million, and much of it will be concentrated in the earthquake-prone mega-regions of the Northeast, Great Lakes, Pacific Northwest, and northern and southern California. To achieve an earthquake-resilient nation, as envisioned by the National Earthquake Hazards Reduction Program, earthquake professionals are challenged to strengthen the physical resilience of our communities' buildings and infrastructure while simultaneously addressing the environmental, economic, social, and institutional resilience of these increasingly dense, complex, and interdependent urban environments. Achieving community resilience will require a whole host of new, innovative engineering solutions, as well as significant and sustained political and professional leadership and will, an array of new financial mechanisms and incentives, and concerted efforts to integrate earthquake resilience into other urban design and social movements.

There is tremendous need and opportunity for networked facilities and cyberinfrastructure in support of basic and applied research on community resilience. Key ideas presented in this paper include developing better models of community resilience in order to establish a baseline and to measure resilience progress and effectiveness at an urban scale; developing more robust models of building risk/resiliency and aggregate inventories of community risk/resiliency for use in mitigation, land use planning, and emergency planning; enhancing efforts to upgrade the immense inventory of existing structures and lifelines to be more earthquake-resilient; developing a broader understanding of resiliency-based performance objectives for building and lifeline design and construction; building the next generation of post-disaster damage assessment tools and emergency response and recovery "dashboards" based upon sensing networks; and sustaining systematic monitoring of post-disaster response and recovery activities for extended periods of time.

Envisioning Resilient Communities, Now and in the Future

The National Earthquake Hazards Reduction Program (NEHRP) envisions: A nation that is earthquake-resilient in public safety, economic strength, and national security. The White House National Security Strategy, released in May 2010, offers the following definition of resilience: the ability to prepare for, withstand, and rapidly recover from disruption, and adapt to changing conditions (White House, 2010). The first part of this definition encapsulates the vast majority of work that has been done under NEHRP and as part of modern earthquake engineering research and practice: strengthening the built environment to withstand earthquakes with life-safety standards and codes for new buildings and lifeline construction, developing methods and standards for retrofitting existing construction, and preparing government institutions for disaster response. The second half of this definition captures much of the recent learning and research in earthquake engineering: codes and standards that consider post-disaster performance with minimal to no disruption, as well as the linkages between building and lifeline performance and business, macro-economic, societal, and institutional recovery. But there is much more work yet to be done, particularly in translating research into practice.

What the 1994 Northridge, 1995 Kobe, and 2010 Chile earthquakes and the 2005 Hurricane Katrina disaster have in common is that they all struck relatively dense, modern urban settings, and collectively they illustrate varying degrees of resilience in modern societies. Resilient communities need more than physical resilience, which is best characterized by the physical condition of communities' buildings, infrastructure, and hazard defenses. They need to have environmental, economic, social, and institutional resilience as well. They also need to do more than withstand disruption; resilient communities need to be able to rapidly recover and adapt to the new conditions created by a disaster.

We are now familiar with the physical vulnerabilities of New Orleans' levee system, but Hurricane Katrina struck a city that lacked resilience across these other dimensions as well; conditions that likely influenced New Orleans' lack of adaptive capacity and slow recovery in the five years following the disaster (Public Strategies Group, 2011). Prior to Hurricane Katrina, New Orleans' population (455,000 people in 2005) had been in decline for 40 years, resulting in 40,000 vacant lots or abandoned residences, a stagnant economy, and local budgetary challenges that severely affected the maintenance of local services, facilities, and infrastructure, most notably the school, water, and sewer systems (Olshansky and Johnson, 2010). In addition, New Orleans' social fabric was also very fragile. In 2005, the city's median household income of $27,000 was well below the national average of $41,000, as were the homeownership and minimum literacy rates of 46 and 56 percent, respectively (compared with the national averages of 68 and 75 percent, respectively) (U.S. Census Bureau, 2000; U.S. Department of Education, 2003). The city's poverty rate of 23.2 percent was also much higher than the national rate of 12.7 percent, and 29 percent of residents didn't own cars (U.S. Census Bureau, 2004).

Although, in aggregate, these statistics might seem like an extreme case in community vulnerability, they are not dissimilar from some of the conditions of, at least portions of, many of our most earthquake-prone communities in southern and northern California, the Pacific Northwest, and the central and eastern United States. And, with the exception of a few pockets in northern and southern California, and Seattle, none of the most densely urbanized and vulnerable parts of our earthquake-prone communities have been impacted by a recent, large, damaging earthquake. Our modern earthquake experience, like most of our disaster experience in the United States, has largely been a suburban experience, and our engineering and preparedness efforts of the past century have not yet been fully tested by a truly catastrophic, urban earthquake.

In April 2010, the U.S. Census officially marked the country's resident population at 308,745,538, and we are expected to add another 100 million in the next 50 years (U.S. Census Bureau, 2011). This population growth is expected to be accommodated in the country's fifth wave of migration, a wave of re-urbanism that began in the 1980s (Fishman, 2005). By the time the fifth migration is complete, it is expected that 70 percent of the country's population will be concentrated within 10 "mega-regions" of the country (Barnett, 2007; Lang and Nelson, 2007). Half of these mega-regions are in earthquake-prone regions: the Northeast (from Washington, D.C., to Boston); the Great Lakes (Cleveland, Cincinnati, Detroit, and Chicago/Milwaukee); Cascadia (Seattle and Portland); northern California (San Francisco Bay Area); and southern California.

As these metropolitan areas continue to grow, it is predicted that development patterns will get increasingly dense as older urban cores are revitalized and the suburban land use patterns of the last half of the 20th century become more costly to both inhabit and serve (Barnett, 2007). These assumptions are based upon expected increases in energy costs, an emphasis on transportation and climate change policies that promote more centralized development, and the significant fiscal challenges that local agencies are likely to have in supporting distributed patterns of services. The demographics of these regions are also likely to shift as more affluent younger professionals, aging empty-nesters, and immigrant populations concentrate in the metropolitan cores, a trend that is already advanced in Boston, New York, Chicago, Los Angeles, and San Francisco/Oakland (Nelson and Lang, 2007). In general, our population will be older and more diverse than in previous decades, adding to the social vulnerabilities of metropolitan areas.

To accommodate the next 100 million people, 70 million housing units will need to be added to the current stock of 125 million; 40 million are likely to be new housing units, while the remaining 30 million are likely to replace damaged or demolished units on existing property (Nelson and Lang, 2007). Also, to accommodate these growing mega-economies, 100 billion square feet of nonresidential space will likely be added; 30 billion of which is likely to be new square footage, and 70 billion square feet will be rebuilt or replaced (Lang and Nelson, 2007). These statistics were developed before "The Great Recession" slowed housing starts from annual rates of more than 2 million in 2005 and 2006 to 0.5 million in 2009 and 2010, and pushed annual foreclosure rates to more than 3 million (U.S. Census Bureau, 2011). The recent recession has also dramatically slowed commercial development and postponed the upgrade of local facilities and infrastructure, much of which was already in sore need of modernization and maintenance before the recent fiscal crisis.

To achieve community resilience, now and in the foreseeable future, we must take a more holistic approach to our work as earthquake professionals. With physical resilience as the foundation of our communities' resilience, we also need to focus on the environmental, economic, social, and institutional resilience of our increasingly dense, complex, and interdependent communities. Also, as past as well as future projections suggest, physical resilience can't be achieved through expected rates of new construction and redevelopment. It is going to require a whole host of new, innovative engineering solutions, as well as significant and sustained political and professional leadership and will, an array of new financial mechanisms and incentives, and concerted efforts to integrate earthquake resilience into other urban design and social movements. Otherwise, "an earthquake-resilient nation" will remain an idealistic mantra of our profession, and the expected earthquakes of the 21st century will cause unnecessary human, socioeconomic, and physical hardship for the communities they strike.

A "Sputnik Moment" in Earthquake Engineering Research

In his 2011 State of the Union address, President Obama referred to recent news of technological advances by other nations as this generation of Americans' "sputnik moment," and he called for a major national investment "in biomedical research, information technology, and especially clean energy technology—an investment that will strengthen our security, protect our planet, and create countless new jobs for our people" (White House, 2011). Following the Soviet Union's launch of the "sputnik" satellite into space in 1957, the United States responded with a major sustained investment in research—most visibly through the establishment of the National Aeronautics and Space Administration (NASA)—and education. The National Defense Education Act of 1958 dramatically increased federal investment in education and made technological innovation and education into national-security issues (Alter, 2011).

It is well known that disasters are focusing events for public policy agenda setting, adoption, and change (Birkland, 1997). The September 11, 2001, terrorist attacks put man-made threats at the forefront of disaster policy making, management, and program implementation. September 11 has also been described by some as the major focusing event that significantly expanded the size and scope of the federal government as well as its debt (Stone, 2010). Similarly, Hurricane Katrina has been another focusing event for hazard and disaster management policy and program implementation. To some extent, it has reversed some of the trends started after September 11, but disaster recovery and mitigation have yet to regain their former status as officially preferred disaster policy responses (Waugh, 2006).

For earthquake engineering research and seismic policy making, adoption, and change in the United States, the 1971 San Fernando earthquake has been the most recent and salient focusing event. It led to the formation of the California Seismic Safety Commission in 1975, the passage of the National Earthquake Hazards Reduction Act in 1977, and the formation of NEHRP thereafter (Birkland, 1997). But was the 1971 earthquake, or any other more recent U.S. earthquake, a sputnik moment for the United States? The pairing of the 1994 Northridge and 1995 Kobe earthquakes may well have been a sputnik moment for Japan. The tremendous human loss, economic consequences, and, in some cases, surprising causes and levels of damage to structures and infrastructure all contributed to Japan's major investment in earthquake engineering and disaster management research, education, and policy reform over the past decade. Will we have to wait until a major catastrophic urban earthquake strikes the United States, causing unprecedented human and economic losses, to have our sputnik moment in earthquake engineering research, practice, and policy reform?

If some of the underlying motivations of a sputnik moment stem from shock as well as a sense of being surpassed, then is there any way for our earthquake professional community to better communicate the lessons from Chile versus Haiti and other disasters around the world to compel a more focused policy and investment in earthquake engineering and risk reduction research, education, and action? What can we learn from the biomedical, high-tech, and "green" engineering movements, as examples, which may have recently had, or may currently be a part of, sputnik moments in which policy makers and private investors are motivated to take action in ways that earthquake preparedness has not been able to match in growth trajectory and enthusiasm?

Ideas for Transformative Earthquake Engineering Research and Solutions

The remainder of this paper presents ideas to help foster workshop discussions on transformative earthquake engineering research and on achieving earthquake-resilient communities. It is organized around four dimensions: community resilience, pre-event prediction and planning, design of infrastructure, and post-event response and recovery. Particular emphasis is given to community-level ideas that might utilize the networked facilities and cyberinfrastructure of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES).

Community Resilience

Drawing upon the research literature of several social science disciplines, Norris et al. (2008) define community resilience as a process linking a network of adaptive capacities in "economic development, social capital, information and communication, and community competence." To build collective resilience, they recommend that "communities must reduce risk and resource inequities, engage local people in mitigation, create organizational linkages, boost and protect social supports, and plan for not having a plan, which requires flexibility, decision-making skills, and trusted sources of information that function in the face of unknowns" (Norris et al., 2008). To achieve earthquake resilience, we, as earthquake engineering researchers and professionals, need to look beyond earthquakes to other disasters, and even outside of disasters, to understand how our work fits in and how to link our work with other initiatives to build adaptive capacities and incite resiliency-related policy and actions.

In 2006, earthquake professionals and public policy advocates joined forces to develop a set of policy recommendations for enhancing the resiliency of existing buildings, new buildings, and lifelines in San Francisco (SPUR, 2009). The San Francisco Planning and Urban Research Association's (SPUR's) "Resilient City Initiative" chose to analyze the "expected" earthquake, rather than the "extreme" event, because it is a large event that can reasonably be expected to occur once during the useful life of structures and lifeline systems in the city. It also defined a set of performance goals—as target states of recovery within hours, days, and months following the expected earthquake—in terms of four community clusters: critical response facilities and support systems; emergency housing and support systems; housing and neighborhood infrastructure; and community recovery (SPUR, 2009).

Lacking a theoretical model or set of quantifiable measures of community resilience, SPUR relied on expert opinion to set the target states of recovery for San Francisco's buildings and infrastructure and to assess the current performance status of the cluster elements. For example, SPUR set a target goal to have 95 percent of residences available for "shelter-in-place" by their inhabitants within 24 hours after an expected earthquake; it also estimated that it would take 36 months for the current housing stock to be available for "shelter-in-place" in 95 percent of residences. But is 95 percent an appropriate target for ensuring an efficient and effective recovery in the city's housing sector following an expected earthquake? Does San Francisco really need to achieve all the performance targets defined by SPUR to be resilient following an expected earthquake? Which target should be worked on first, second, and so forth? And, given all the competing community needs, when is the most appropriate time to promote an earthquake resiliency policy agenda?
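
The questions above are, at bottom, prioritization questions, and even a very simple representation of SPUR-style targets makes them easier to pose. The sketch below pairs each community cluster with a target recovery time and an expert-estimated current recovery time and ranks the gaps. It is a minimal illustration only: every number except the 24-hour and 36-month shelter-in-place figures quoted above is hypothetical, and it is not part of SPUR's methodology.

    # Minimal sketch of SPUR-style recovery targets vs. estimated current
    # performance. Only the shelter-in-place figures (24 hours target,
    # 36 months estimated) come from the text; the rest are hypothetical.
    from dataclasses import dataclass

    HOURS_PER_MONTH = 30 * 24

    @dataclass
    class RecoveryGoal:
        cluster: str            # one of SPUR's four community clusters
        metric: str             # target state being measured
        target_hours: float
        estimated_hours: float  # expert-opinion estimate of current performance

    goals = [
        RecoveryGoal("housing and neighborhood infrastructure",
                     "95% of residences usable for shelter-in-place",
                     24, 36 * HOURS_PER_MONTH),
        RecoveryGoal("critical response facilities and support systems",
                     "emergency operations center functional",   # hypothetical metric
                     4, 72),                                      # hypothetical values
    ]

    # Rank goals by how far current performance lags the target.
    for g in sorted(goals, key=lambda g: g.estimated_hours / g.target_hours, reverse=True):
        lag = g.estimated_hours / g.target_hours
        print(f"{g.cluster}: {g.metric} -> about {lag:.0f}x slower than target")

Which gap to close first, and at what public cost, is exactly the policy question posed above; the data structure only makes the trade-off explicit.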

There is tremendous need for networked facilities and cyberinfrastructure in support of basic and applied research on community resilience. This includes:

• Developing an observatory network to measure, monitor, and model the earthquake vulnerability and resilience of communities, including communities' adaptive capacities across many physical, social, economic, and institutional dimensions. Clearer definitions, metrics, and timescales are needed to establish a baseline of resilience and to measure resilience progress and effectiveness on an urban scale (one simple form such a metric could take is sketched after this list).
• Collectively mapping and modeling the individual and organizational motivations to promote earthquake resilience, the feasibility and cost of resilience actions, and the removal of barriers and building of capacities to achieve successful implementation. Community resilience depends in large part on our ability to better link and "sell" physical resilience with environmental, economic, social, and even institutional resilience motivations and causes.
• Developing the quantitative models or methods that prioritize and define when public action and subsidy are needed (and how much) to fund seismic rehabilitation of certain building or infrastructure types, groups, or systems that are essential to a community's resilience capacity versus ones that can be left to market forces, attrition, and private investment to address.
• Developing a network of community-based earthquake resiliency pilot projects to apply earthquake engineering research and other knowledge to reduce risk, promote risk awareness, and improve community resilience capacity. Understanding and developing effective, alternative methods and approaches to building local resilience capacity are needed because earthquake-prone communities have varying cultures, knowledge, skills, and institutional capacities.
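
One simple way to give "baseline" and "progress" quantitative form, as referenced in the first bullet above, is to track a community functionality curve Q(t) after an event and integrate its shortfall from full function over the recovery window: the smaller the shortfall, the more resilient the community. The sketch below is only an illustration of that bookkeeping; the recovery curve in it is entirely hypothetical, and the paper itself does not prescribe any particular metric.

    # Illustrative resilience bookkeeping: integrate the loss of community
    # functionality Q(t) over a recovery window. The curve is hypothetical.
    t_days = [0, 7, 30, 90, 180, 365]       # days after the earthquake
    q_pct  = [40, 55, 70, 85, 95, 100]      # assumed percent of normal function

    # Resilience loss = area between full function (100%) and Q(t),
    # in percent-days, via trapezoidal integration. Lower is better.
    resilience_loss = 0.0
    for (t0, q0), (t1, q1) in zip(zip(t_days, q_pct), zip(t_days[1:], q_pct[1:])):
        resilience_loss += (t1 - t0) * ((100 - q0) + (100 - q1)) / 2

    # Normalize to a 0-1 score over the observation window (1 = no loss of function).
    score = 1 - resilience_loss / (100 * (t_days[-1] - t_days[0]))

    print(f"resilience loss: {resilience_loss:.0f} percent-days")
    print(f"normalized resilience score: {score:.2f}")

Tracking the same quantity for the same community before and after mitigation investments is one way an observatory network could report progress against a baseline.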

Pre-Event Prediction and Planning

To date, much of the pre-event research and practice has focused on estimating the physical damage to individual structures and lifeline systems, creating inventories and scenarios for damage and loss estimation, and preparing government institutions for disaster response. Ideas for "operationalizing" a vision of community-level resiliency include:

• Developing more robust earthquake forecasting and scenario tools that address multiple resiliency performance objectives and focus on community-level resilience impacts and outcomes.
• Developing more holistic models of individual building risk/resiliency that extend structural simulations and performance testing to integrate information on soil and foundation interaction, non-structural systems, and lifeline systems with the structure and contents information, and that model post-disaster building functionality, lifeline dependency and interdependency and how these affect building functionality, the time required to recover various levels of building functionality, and other economic and social resilience factors.
• Developing aggregate inventories and models of community or regional risk/resiliency that can be used in mitigation, land use planning, and emergency planning. Local building and planning review processes and emergency management practices need tools to assess the incremental changes in community risk/resiliency over time caused by new construction, redevelopment, and implementation of different mitigation policies and programs (a toy version of this kind of aggregation is sketched after this list). Real estate property valuation and insurance pricing also need better methods to more fully reflect risk and resilience in risk transfer transactions. Within current decision frameworks and practices, redevelopment of a low-density, low-rise, but structurally vulnerable neighborhood into a high-density, high-rise development is likely to be viewed as a lowering of earthquake risk and an increase in economic value to the community. But is it really? Tools are needed that more accurately value the aggregation of risk across neighborhoods, incremental changes in community resiliency, the effects of aging and densification of the urban environment and the accumulation of risk over time, and the dynamics of the adaptive capacity of a community post-disaster.
• Developing models of the effects of institutional practices and governance on community resilience in terms of preparedness, recovery, and adaptive capacities. This includes modeling the effects of building and land use planning regulatory regimes, emergency decision-making processes, institutional leadership and improvisational capacities, and post-disaster financing and recovery management policies.
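
The aggregation tools called for in the third bullet above can be illustrated with a deliberately small example: summing expected annual earthquake loss over a neighborhood inventory before and after a notional redevelopment in which vulnerable low-rise stock is replaced by code-conforming high-rise stock of much greater exposed value. Every number below is invented for illustration; real tools would draw on cataloged inventories and empirically derived vulnerability models.

    # Toy aggregation of earthquake risk across a neighborhood inventory,
    # before and after a hypothetical redevelopment. All numbers are invented.

    def expected_annual_loss(buildings, annual_event_prob=0.01):
        """Exposed value x mean damage ratio x annual chance of the scenario
        event, summed over every building in the inventory."""
        return sum(b["value_usd"] * b["mean_damage_ratio"] * annual_event_prob
                   for b in buildings)

    before = [  # structurally vulnerable low-rise stock
        {"id": f"lowrise-{i}", "value_usd": 2e6, "mean_damage_ratio": 0.40}
        for i in range(20)
    ]

    after = [   # code-conforming high-rise replacement: far less vulnerable,
                # but far more value (and population) concentrated on the block
        {"id": "highrise-1", "value_usd": 300e6, "mean_damage_ratio": 0.08},
    ]

    print(f"aggregate expected annual loss, before: ${expected_annual_loss(before):,.0f}")
    print(f"aggregate expected annual loss, after:  ${expected_annual_loss(after):,.0f}")

In this contrived case the per-building vulnerability drops sharply while the aggregate dollar risk rises, which is precisely the "But is it really?" tension the bullet describes.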

Design of Infrastructure

Achieving community resilience will require enhanced efforts to upgrade the immense inventory of existing structures and lifelines to be more earthquake-resilient and a broader understanding of resiliency-based performance objectives for building and lifeline design and construction. Ideas include:

• Developing enhanced methods for evaluating and retrofitting existing buildings and lifeline systems. These methods need to reliably model the expected responses of existing buildings and lifelines to different levels of ground motions and multiple resiliency performance objectives. Methods need to go beyond estimating the costs to retrofit toward developing more robust models that consider the full range of resiliency benefits and costs of different mitigation policy and financing strategies. These alternatives need to think creatively of ways to reuse existing stock, cost-effectively piggy-back on other rehabilitation efforts, and incentivize and ease the burden of retrofitting existing stock. Current and future political challenges are likely to include pressures to preserve historic and cultural integrity, and resistance to investing limited capital resources in seismic rehabilitation projects. Concepts of a federal infrastructure bank might be expanded to include all seismically vulnerable structures and infrastructure, and new public-private financing mechanisms may need to be developed. Mechanisms to effectively communicate the vulnerability of existing structures and lifeline systems to owners, occupants, and policy makers to incite and reward action, such as an earthquake-resilience certification system, also need to be carefully assessed. Sustained efforts are also needed to build consensus for standards and actions to evaluate and retrofit existing building and lifeline systems, develop guidelines, and transfer knowledge and technology to building officials, owners, and engineers; utility owners, operators, regulators, and engineers; and other policy and decision makers.
• Advancing performance-based earthquake engineering for buildings, lifelines, geotechnical structures, and hazard defenses. Performance-based engineering needs to take a broader look at the integrated performance of a structure as well as the layers of substructures, lifeline systems, and surrounding community infrastructure that it depends upon. For example, a 50-plus-story residential high-rise is, in fact, a neighborhood or community turned vertical, making lifeline conveyance and social resilience attributes of a single structure. Even if the structure is deemed safe following a disaster, lifeline disruptions may impact evacuations and render the structure uninhabitable, with sealed windows and lack of elevator service, as examples.
• To have resilient communities, we cannot think of building-specific performance only. Community-level performance-based engineering models are needed. These may require a systems approach to consider the complex interactions of lifeline systems, critical network vulnerability and dependencies, and dependencies between physical, social, economic, and institutional sectors of a community, and to develop guidelines and "urban-level design standards" for community-level performance.
• Making seismic risk reduction and resilience an integral part of many professional efforts to improve the built environment, and building new alliances and coalitions with interest groups working on these goals. This includes the Green Building Council and the Council's Leadership in Energy and Environmental Design (LEED) program; architects and engineers developing green, adaptive building "skins," construction materials, and sensing networks; and urban designers working on sustainable community standards and practices. Current efforts to build new "smart" buildings and cities could potentially benefit from the networked facilities and cyberinfrastructure that the earthquake engineering community has developed for managing and processing sensing data. In turn, earthquake engineering could potentially assist in developing better and more cost-effective "sensing retrofits" of existing structures and lifeline systems, making them "smarter" and better integrating disaster resilience into green building and sustainable community standards and practices. In November 2010, the Green Building Council reached a major milestone in its short 10-year life span, having certified more than 1 billion square feet of commercial building space (Koch, 2010). Since it was introduced in 2000, the Council's LEED program has had more than 36,000 commercial projects and 38,000 single-family homes participating in the program, of which 7,194 commercial projects and 8,611 homes have been completed and certified as LEED compliant (Koch, 2010). Although the costs for becoming LEED certified may be substantially lower than the costs for enhancing seismic performance, that is not a full explanation for the program's comparative national and marketing successes. Minimizing damage and reducing the deconstruct/construct cycles of development with higher building performance levels should also be considered as benefits in building valuation.

Post-Event Response and Recovery

To date, much of the post-event research and practice has focused on estimating the physical damage and economic losses caused by earthquakes and aiding government institutions in disaster response. Ideas for enhancing community-level capabilities to rapidly recover from disruption and adapt to changing conditions include:

• Creating a more integrated multi-disciplinary network and information management system to capture, distill, integrate, and disseminate information about the geological, structural, institutional, and socioeconomic impacts of earthquakes, as well as post-disaster response and recovery. This includes the creation and maintenance of a repository for post-earthquake reconnaissance data.

• Developing the next generation of post-disaster damage assessments. Post-disaster safety assessment programs are now well institutionalized in local emergency management and building inspection departments, with legal requirements, procedures, and training. The next generation of post-disaster assessments might integrate the sensing networks of "smart" buildings and lifeline systems to make it more quickly possible for emergency responders, safety inspectors, and system operators, as well as residential and commercial building occupants themselves, to understand the post-disaster conditions of buildings or systems and resume occupancy and operation safely or seek alternatives. The next generation of assessments could also take a more holistic view of the disaster impacts and losses, focusing on the economic and social elements as well as the built environment. Just like physical damage assessments, these socioeconomic, or resilience, assessments need to be done quickly after a disaster, and also iteratively, so that more-informed, and appropriately timed, program and policy responses can be developed. Such assessments need to look at the disaster-related losses, including the ripple effects (i.e., lost wages, tax revenue, and income); the spectrum of known available resource capital (both public and private wealth and disaster financing resources) for response and recovery; social capital; and the potential unmet needs, funding gaps, and shortfalls, to name a few.
• Developing the next-generation emergency response and recovery "dashboard" that uses sensing networks for emergency response and recovery, including impact assessment, resource prioritization and allocation, and decision making. Research from recent disasters has reported on the use of cell phones, social networking, and Internet activity as a validation of post-disaster human activity. Researchers also caution that sensing networks need to be designed to be passive and part of the act of doing something else, rather than requiring deliberate reporting or post-disaster surveys. They also need to be reasonable and statistically active, culturally appropriate, and conscious of the "digital divide" between different socioeconomic and demographic groups. These systems can also push, and not just pull, information that can be valuable in emergency response management and communication.
• Sustaining documentation, modeling, and monitoring of emergency response and recovery activities, including the mix of response and recovery activities; multi-organizational and institutional actions, funding, interdependencies, and disconnections that both facilitate and impede recovery; and resiliency outcomes at various levels of community (i.e., household, organizational, neighborhood, and regional levels). This is longitudinal research requiring sustained efforts for 5 to 10 years and possibly even longer, which does not fit well with existing research funding models.

Acknowledgments

This paper was developed with the support of the National Research Council and the organizers of Grand Challenges in Earthquake Engineering Research: A Community Workshop. The ideas and opinions presented in this paper are those of the author and also draw upon the work of the National Research Council's Committee on National Earthquake Resilience—Research, Implementation, and Outreach, of which the author was a member, and its final report (NRC, 2011). In addition, the author acknowledges the work of members of the San Francisco Planning and Urban Research Association's "Resilient City Initiative." The author also appreciates the suggestions and detailed review of this paper provided by workshop Co-chair Chris Poland and session moderator Arrietta Chakos; Liesel Ritchie for her sharing of ideas; and Greg Deierlein and Reggie DesRoches (authors of companion papers for the workshop) for their helpful discussion while preparing the paper. Appreciation is also extended to the conference Co-chairs, Greg Fenves and Chris Poland, and the NRC staff for their leadership and efforts in organizing this workshop.

References

Alter, J. 2011. A script for "Sputnik": Obama's State of the Union challenge. Newsweek. January 10. Available at www.newsweek.com/2011/01/05/obama-sputnik-and-the-state-of-the-union.html.
Barnett, J. 2007. Smart growth in a changing world. Planning 73(3):26-29.
Birkland, T. 1997. After Disaster: Agenda Setting, Public Policy, and Focusing Events. American Governance and Public Policy. Washington, DC: Georgetown University Press.
Fishman, R. 2005. The fifth migration. Journal of the American Planning Association 71(4):357-366.
Koch, W. 2010. U.S. Green Building Council certifies 1 billion square feet. USAToday.com, Green House. November 15. Available at www.content.usatoday.com/communities/greenhouse/post/2010/11/green-building-council-one-billion-square-feet-/1.
Lang, R. E., and A. C. Nelson. 2007. The rise of the megapolitans. Planning 73(1):7-12.
Nelson, A. C., and R. E. Lang. 2007. The next 100 million. Planning 73(1):4-6.
Norris, F. H., S. P. Stevens, B. Pfefferbaum, K. F. Wyche, and R. L. Pfefferbaum. 2008. Community resilience as a metaphor, theory, set of capacities, and strategy for disaster readiness. American Journal of Community Psychology 41(1):127-150.
NRC (National Research Council). 2011. National Earthquake Resilience—Research, Implementation, and Outreach. Washington, DC: The National Academies Press.
Olshansky, R. B., and L. A. Johnson. 2010. Clear as Mud: Planning for the Rebuilding of New Orleans. Washington, DC: American Planning Association.
Public Strategies Group. 2011. City of New Orleans: A Transformation Plan for City Government. PSG, March 1. Available at media.nola.com/politics/other/NOLA%20Transformation%20Plan.pdf.
SPUR (San Francisco Planning and Urban Research Association). 2009. The Resilient City: A New Framework for Thinking about Disaster Planning in San Francisco. The Urbanist. Policy Initiative. San Francisco, CA. Available at www.spur.org/policy/the-resilient-city.
Stone, D. 2010. Hail to the chiefs: The presidency has grown, and grown and grown, into the most powerful, most impossible job in the world. Newsweek. November 13.
U.S. Census Bureau. 2000. Census of Housing. Available at www.census.gov/hhes/www/housing/census/historic/owner.html.
U.S. Census Bureau. 2004. Statistical Abstract of the United States. Available at www.census.gov/prod/www/abs/statab2001_2005.html.
U.S. Census Bureau. 2011. Census Bureau Home Page. Available at www.census.gov.
U.S. Department of Education. 2003. State and County Estimates of Low Literacy. Institute of Education Sciences. Available at nces.ed.gov/naal/estimates/StateEstimates.aspx.
Waugh, W. L. 2006. The political costs of failure in the Katrina and Rita disasters. The Annals of the American Academy of Political and Social Science 604(March):10-25.
White House. 2010. National Security Strategy. May 27. Washington, DC. Available at www.whitehouse.gov/sites/default/files/rss_viewer/national_security_strategy.pdf.
White House. 2011. Remarks of President Barack Obama in State of the Union Address—As Prepared for Delivery. January 25. Available at www.whitehouse.gov/the-press-office/2011/01/25/remarks-president-barack-obama-state-union-address.

GRAND CHALLENGES IN LIFELINE EARTHQUAKE ENGINEERING RESEARCH

Reginald DesRoches, PhD
Professor and Associate Chair, School of Civil & Environmental Engineering
Georgia Institute of Technology

Summary

Lifeline systems (transportation, water, waste disposal, electric power, gas and liquid fuels, and telecommunication) are intricately linked with the economic well-being, security, and social fabric of the communities they serve, and the nation as a whole. The mitigation of earthquake risks for lifeline facilities presents a number of major challenges, primarily because of the vast inventory of facilities, their wide range in scale and spatial distribution, the fact that they are partially or completely buried and are therefore strongly influenced by soil-structure interaction, their increasing interconnectedness, and their aging and deterioration. These challenges will require a new set of research tools and approaches to adequately address them. The increasing access to high-speed computers, inexpensive sensors, new materials, improved remote sensing capabilities, and infrastructure information modeling systems can form the basis for a new paradigm for lifeline earthquake engineering in the areas of pre-event prediction and planning, design of the next-generation lifeline systems, post-event response, and community resilience. Traditional approaches to lifeline earthquake engineering have focused on component-level vulnerability and resilience. However, the next generation of research will also have to consider issues related to the impact of aging and deteriorating infrastructure, sustainability considerations, increasing interdependency, and system-level performance. The current generation of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) was predicated on large-coupled testing equipment and has led to significant progress in our understanding of how lifeline systems perform under earthquake loading. The next generation of NEES can build on this progress by adapting the latest technological advances in other fields, such as wireless sensors, machine vision, remote sensing, and high-performance computing.

Introduction: Lifelines—The Backbone of American Competitiveness

The United States is served by an increasingly complex array of critical infrastructure systems, sometimes referred to as lifeline systems. For the purposes of the paper, lifeline systems include transportation, water, waste disposal, electric power, gas and liquid fuels, and telecommunication systems. These systems are critical to our economic competitiveness, national security, and overall quality of life. Water and wastewater systems support population growth, industrial growth, and public health. Power systems provide lighting to homes, schools, and businesses and energize communications. Transportation systems are the backbone of mobility and commerce and connect communities. Telecommunications systems provide connectivity on the local, national, and global scale.

Lifeline systems are the basis for producing and delivering goods and services that are key to economic competitiveness, emergency response and recovery, and overall quality of life. Following an earthquake, lifeline systems provide a critical link to communities and individuals, including water for putting out fires, roads for evacuation and repopulation of communities, and connectivity for emergency communications. The resilience of lifeline systems has a direct impact on how quickly a community recovers from a disaster, as well as the resulting direct and indirect losses.

Challenges in Lifeline Earthquake Engineering

The mitigation of earthquake hazards for lifeline facilities presents a number of major challenges, primarily because of the vast inventory of facilities, their wide range in scale and spatial distribution, the fact that they are partially or completely buried and strongly influenced by interactions with the surrounding soil, and their increasing interconnectedness. Because of their spatial distribution, they often cannot avoid crossing landslide areas, liquefaction zones, or faults (Ha et al., 2010).

One of the challenges in the area of lifeline systems, when it comes to testing, modeling, or managing these systems, is the vast range of scales. Testing or modeling of new innovative materials that might go into bridges or pipelines could occur at the nano (10^-9 m), micro (10^-6 m), or milli (10^-3 m) level, while assessment of the transportation network or fuel distribution system occurs at the mega (10^6 m) scale. Multi-scale models required for lifeline systems involve trade-offs between the detail required for accuracy and the simplification needed for computational efficiency (O'Rourke, 2010).

A second challenge related to the assessment of the performance of lifeline systems is that many lifeline systems have a substantial number of pipelines, conduits, and components that are completely below ground (e.g., water supply, gas and liquid fuel, electric power) or partially underground (e.g., bridge or telecommunication tower foundations) and are heavily influenced by soil-structure interaction, surface faulting, and liquefaction. Hence, a distinguishing feature in evaluating the performance of lifelines is establishing a thorough understanding of the complex soil-structure interaction.

A third and critical challenge related to lifeline systems is their vast spatial distribution and the interdependency between lifeline systems—either by virtue of physical proximity or via operational interaction. Damage to one infrastructure component, such as a water main, can cascade into damage to a surrounding lifeline component, such as electrical or telecommunications cables, because they are often co-located. From an operational perspective, the dependency of lifelines on each other complicates their coupled performance during an earthquake (Duenas-Osorio et al., 2007), as well as their post-event restoration. For example, electrical power networks provide energy for pumping stations and control equipment for transmission and distribution systems for oil and natural gas.

A fourth challenge is the aging of lifeline systems. Many lifelines were designed and constructed 50-100 years ago without special attention to earthquake hazards and are deteriorating (ASCE, 2009). Moreover, many lifeline systems have demands placed on them that are much higher than they were originally designed for. Many lifeline systems are already damaged prior to an earthquake, which increases their vulnerability.

Recent Advances in Lifeline Earthquake Engineering

The field of lifeline earthquake engineering has experienced significant progress over the past decade. Early studies in lifeline earthquake engineering focused on component behavior and typically used simple system models. They often looked at the effects of earthquakes on the performance of sub-components within each infrastructure system (e.g., columns in a bridge). As more advanced experimental and computational modeling facilities came online via the NEES program, the effects of larger systems (e.g., an entire bridge) and coupled soil-structure systems (e.g., a pile-supported wharf) were assessed (McCullough et al., 2007; Johnson et al., 2008; Kim et al., 2009). Most recently, advances in modeling and computation have led to the ability to study entire systems (e.g., transportation networks, power networks, etc.), including the local and regional economic impact of earthquake damage to lifeline systems (Kiremidjian et al., 2007; Gedilkli et al., 2008; Padgett et al., 2009; Romero et al., 2010; Shafieezadeh et al., 2011).

Transformative Research in Lifeline Earthquake Engineering

A new set of research tools is needed to adequately address the critical challenges noted above, namely the vast range in scales, the complex mix of soil-structure-equipment systems, interdependencies, and aging and deteriorating lifeline systems. Modeling and managing interdependent systems, such as electric power, water, gas, telecommunications, and transportation systems, require testing and simulation capabilities that can accommodate the many geographic and operational interfaces within a network, and among the different networks.

The increasing access to high-speed computers, closed-form techniques for near-real-time network analysis, inexpensive sensors, new materials, improved remote sensing capabilities, and building or bridge information modeling (BIM or BrIM) systems can form the basis for a new paradigm for lifeline earthquake engineering in the areas of pre-event prediction and planning, design of the next-generation lifeline systems, post-event response, and community resilience.

Although the current generation of NEES is predicated on large-coupled testing equipment and has led to significant progress in our understanding of how lifeline systems perform under earthquake loading, the next generation of NEES can build on this progress by adapting the latest technological advances in other fields, such as wireless sensors, machine vision, remote sensing, and high-performance computing. In addition, the coupling of seismic risk mitigation with other pressing global needs, such as sustainability, will require a different way of thinking about lifeline earthquake engineering. Sustainability, in this paper, is defined as the ability to meet the needs of current and future generations by being resilient, cost-effective, environmentally viable, and socially equitable. Lifeline systems account for 69 percent of the nation's total energy use, and more than 50 percent of greenhouse gas emissions are from lifeline systems, so their continued efficient performance is critical for sustainable development (NRC, 2009).

Pre-Event Prediction and Planning

Because earthquakes are low-probability, high-consequence events, effective planning is critical to making informed decisions given the risk and potential losses. One of the key tools in pre-event planning is the use of seismic risk assessment and loss estimation methodologies, which combine the systematic assessment of regional hazards with infrastructure inventories and vulnerability models through geographic information systems.

The performance of lifeline systems is strongly a function of the seismic hazard and the geological conditions on which the lifeline systems are sited. Lifeline systems are strongly affected by peak ground deformation, which often comes from surface faulting, landslides, and soil liquefaction. Development of approaches to quantitatively predict various ground motion parameters, including peak ground displacement, will be important for understanding the performance of lifeline systems. This quantitative assessment has traditionally been performed using costly and time-consuming approaches that are typically done only on a local scale. The advent of advanced remote sensing products, from air- and spaceborne sensors, now allows for the exploration of land surface parameters (i.e., geology, temperature, moisture content) at different spatial scales, which may lead to new approaches for quantifying soil conditions and properties (Yong et al., 2008).
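
A minimal sketch of the risk assessment chain described at the start of this subsection (scenario ground motion at each site, a component vulnerability model, and an inventory) is given below. The lognormal fragility form is one common convention, and all medians, dispersions, ground motions, and repair costs here are hypothetical placeholders; regional programs of the HAZUS class implement far richer versions of the same chain.

    # Minimal regional loss-estimation sketch: scenario ground motion at each
    # site -> probability of damage from a lognormal fragility -> expected
    # repair cost, summed over a hypothetical lifeline inventory.
    from math import log
    from statistics import NormalDist

    def damage_probability(pga_g, median_g, beta):
        """Lognormal fragility: P(damage state is reached | PGA)."""
        return NormalDist().cdf(log(pga_g / median_g) / beta)

    # Hypothetical fragility parameters (median PGA in g, dispersion) and
    # repair costs by component type.
    fragility = {
        "bridge":      (0.60, 0.50, 2_500_000),
        "pipeline_km": (0.45, 0.60,   150_000),
        "substation":  (0.80, 0.45, 4_000_000),
    }

    # Hypothetical scenario: peak ground acceleration (in g) at each asset site.
    assets = [("bridge", 0.35), ("bridge", 0.55),
              ("pipeline_km", 0.50), ("substation", 0.30)]

    total = 0.0
    for kind, pga in assets:
        median, beta, cost = fragility[kind]
        p = damage_probability(pga, median, beta)
        total += p * cost
        print(f"{kind:12s} PGA={pga:.2f}g  P(damage)={p:.2f}")

    print(f"expected scenario loss: ${total:,.0f}")

The same structure extends naturally to a GIS setting, where the ground-motion field, the inventory, and the fragility assignments are all spatially referenced.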

One of the main challenges in regional risk assessment is the lack of reliable and consistent inventory data. Research is needed in finding better ways to acquire data on the vast inventories contained within a lifeline network. Although researchers have effectively deployed remote sensing technologies following natural disasters (such as LiDAR), research is needed in developing ways that these technologies can be effectively used in acquiring inventory data, including physical attributes of different lifeline systems and at different scales.

Pre-event planning will require that we learn from past earthquake events. This will require us to vastly improve post-earthquake information acquisition and management. Comprehensive and consistent information on the earthquake hazard, geological conditions and responses, structural damage, and economic and social impacts observed in previous earthquakes is invaluable in planning for future events. This will provide unprecedented information on the performance of individual lifelines but will also provide critical information on the interaction among lifeline systems. A major effort of the future NEES program should be to develop a comprehensive effort among professional, governmental, and academic institutions to systematically collect, share, and archive information on the effects of significant earthquakes, including on the built and natural infrastructures, society, and the economy. The information will need to be stored, presented, and made available in structured electronic data management systems. Moreover, the data management systems should be designed with input from the communities that they are intended to serve.

The use of regional seismic risk assessment is key to pre-event planning for various lifeline systems under conditions of uncertainty. For example, detailed information on the performance of the bridges in a transportation network, coupled with traffic flow models, can inform decision makers on the most critical bridges for retrofit, and on which routes would best serve as evacuation routes following an earthquake (Padgett et al., 2010). Significant progress has been made in understanding the seismic performance of lifeline components (e.g., bridges) via component and large-scale testing and analysis; however, much less is known about the operability of these components, and of the system as a whole, as a function of various levels of damage. The use of sensors and data management systems would better allow us to develop critical relationships between physical damage, spatio-temporal correlations, and operability.

Finally, as infrastructure systems continue to age and deteriorate, it will be necessary to quantify the in situ condition of these systems so that we can properly assess the increased vulnerability under earthquake loading. A dense network of sensors, coupled with advanced prognostic algorithms, will enable the assessment of in situ conditions, which will allow for better predictions of the expected seismic performance (Kim et al., 2006; Lynch and Loh, 2006; Glaser et al., 2007).

Performance-Based Design of Lifeline Systems

The earthquake performance of a lifeline system is often closely correlated with the performance of a lifeline component (e.g., pipes, bridges, substations). Significant progress has been made in understanding the performance of lifeline components using the current generation of NEES facilities (Johnson et al., 2008; O'Rourke, 2007; Abdoun et al., 2009; Ivey et al., 2010; Shafieezadeh et al., 2011). However, additional work is needed in designing these systems, considering their role within a larger network, the interdependent nature of lifeline systems, and the trade-offs in terms of cost and safety associated with various design decisions. One critical tool for performing this type of analysis is regional risk assessment programs, such as HAZUS or REDARS. These programs have traditionally been used to assess and quantify risks; however, they can also be the foundation for design of infrastructure systems based on system performance. One key element that goes into these analyses is a fragility or vulnerability curve. Fragility curves are critical not only for comparing the relative vulnerability of different systems, but also for serving as input to cost-benefit studies and life-cycle cost (LCC) analyses. Although cost-benefit analyses are often conducted for scenario events or deterministic analyses, probabilistic cost-benefit analyses are more appropriate for evaluating the anticipated return on investment in a novel high-performance system, because they consider the risk and cost associated with potential seismic damage. Additionally, LCC analyses provide an effective approach for characterizing the lifetime investment in a system. Models often incorporate costs associated with construction, maintenance, upgrade, and at times deconstruction (Frangopol et al., 1997).

The LCC models can be enhanced to also include costs associated with lifetime exposure to natural hazards (Chang and Shinozuka, 1996). Such models offer viable approaches for evaluating the relative performance of different structural systems. Given the increased emphasis on sustainability, the next generation of LCC models can also include aspects of environmental impacts (both in terms of materials usage and construction, and deconstruction resulting from earthquake damage) and weigh them against resilience. For example, although greater material investment is often required to make infrastructure systems more resilient, this may make them less sustainable. Conducting this systems-level design will require access to data on structural parameters (e.g., bridge configuration) as well as environmental and operational data (such as traffic flows). One research challenge will be how to design our infrastructure systems using an "inverse-problem" paradigm. For example, a goal in design might be to have power and telecommunications restored within four hours of an earthquake event. Using this information as a constraint, the systems (and subsystems) can be designed to achieve these targets.
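
As a concrete illustration of the probabilistic cost-benefit and LCC reasoning in the preceding paragraphs, the sketch below compares a conventional and a higher-performance version of the same component by adding discounted expected annual seismic losses to up-front cost. The coarse three-point hazard curve, the fragility parameters, the costs, and the discount rate are all hypothetical; the structure of the calculation, not the numbers, is the point.

    # Sketch of a probabilistic life-cycle cost (LCC) comparison for one
    # component: construction cost plus discounted expected annual seismic
    # loss. All inputs are hypothetical placeholders.
    from math import log
    from statistics import NormalDist

    # Simplified hazard: annual exceedance probability at a few PGA levels (g).
    hazard = [(0.2, 0.020), (0.4, 0.005), (0.6, 0.001)]

    def p_damage(pga, median, beta):
        return NormalDist().cdf(log(pga / median) / beta)

    def expected_annual_loss(median, beta, damage_cost):
        # Coarse sum over discrete hazard levels; a real analysis would
        # integrate over the full hazard curve and several damage states.
        return sum(p_ann * p_damage(pga, median, beta) * damage_cost
                   for pga, p_ann in hazard)

    def life_cycle_cost(build_cost, median, beta, damage_cost, years=50, discount=0.03):
        eal = expected_annual_loss(median, beta, damage_cost)
        present_value = sum(eal / (1 + discount) ** t for t in range(1, years + 1))
        return build_cost + present_value

    conventional = life_cycle_cost(1.0e6, median=0.45, beta=0.5, damage_cost=3.0e6)
    high_perf    = life_cycle_cost(1.3e6, median=0.80, beta=0.5, damage_cost=3.0e6)

    print(f"conventional design LCC:     ${conventional:,.0f}")
    print(f"high-performance design LCC: ${high_perf:,.0f}")

Whether the higher-performance option pays off depends entirely on the hazard, the consequence model, and the discount rate, which is why probabilistic rather than scenario-only comparisons matter; restoration-time goals, such as the four-hour example above, can be layered on the same formulation as design constraints.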

The next generation of BIM or BrIM systems will provide unprecedented information that can be used in the performance-based seismic design community (Holness, 2008). Building information modeling and associated data acquisition sensors (e.g., 3-D scanning and map-

A BUILT ENVIRONMENT FROM FOSSIL CARBON

John W. Halloran
Professor, Department of Materials Science and Engineering
University of Michigan

Transformative Materials

These white papers are intended to stimulate discussion in this Workshop on Grand Challenges in Earthquake Engineering Research. I am a materials engineer with no expertise at all in earthquake engineering, so what can I possibly have to say that could, in some way, stimulate the discussions of the experts at this workshop? This workshop seeks "transformative solutions" for an earthquake-resilient society involving, among other important issues, the design of the physical infrastructure. I will address points relating to the design of the infrastructure, in terms of the most basic materials from which we build our infrastructure. I hope to address transformational change in infrastructural materials that could make our society more resilient not only to the sudden shaking of the ground, but also to the gradual changing of the climate. If you seek a change in the infrastructure large enough to be considered transformational for earthquake resilience, can you also make the change large enough to make a fundamental change in sustainability?

I want to attempt to stimulate discussion not only on how to use steel and concrete to make us less vulnerable to shock, but also on how to make us less vulnerable to climate change. I hope to be provocative to the point of being outrageous, because I want you to think about abandoning steel and concrete. I am going to suggest designing a built environment with more resilient, lighter, stronger, and more sustainable materials based on fossil carbon.

The phrase "sustainable materials based on fossil carbon" seems like an oxymoron. To explain this, I must back up. Fossil resources are obviously not renewable, so they are not sustainable in the very long run. But in the shorter run, the limit to sustainability is not the supply of fossil resources but the damage the fossil resources do to our climate. The element of particular concern is fossil-carbon, which was removed from the ancient atmosphere by living creatures and stored as fossil-CO2 in carbonate rocks and as reduced fossil-carbon in hydrocarbons like gas, oil, and coal. The difficulty is that industrial society liberates the fossil carbon to the atmosphere at rates much faster than contemporary photosynthesis can deal with it. It is more sustainable to use fossil resources without liberating fossil-CO2. Can this be done?

A Modern World Based on Fossil Life

Modern industrial society enjoys prosperity in large part because we are using great storehouses of materials fossilized from ancient life. As the term "fossil fuels" implies, much of our energy economy exploits the fossil residue of photosynthesis. For the energy economy, we all realize that coal and petroleum and natural gas consist of chemically reduced carbon and reduced hydrogen from ancient biomass. Coal and oil are residues of the tissues of green creatures. It is clear that burning fossil-carbon returns to today's atmosphere the CO2 that was removed by photosynthesis eons ago.

We do not often consider that our built environment is also predominantly created from fossils. Cement is made from carbonate limestone, consisting of ancient carbon dioxide chemically combined with calcium oxide in fossil shells. Limestone is one of our planet's great reservoirs of geo-sequestered carbon dioxide, removed from the air long ago. The reactivity of Portland cements comes from the alkaline chemical CaO, which is readily available because limestone fossil rocks are abundant, and the CaO can be obtained by simple heating of CaCO3. However, this liberates about a ton of CO2 per ton of cement. This is fossil-CO2, returned to the atmosphere after being sequestered as a solid. Limestone-based cement—the mainstay of the built environment—is not sustainable.

Steel is cheap because we have enormous deposits of iron oxide ore. These iron ores are a kind of geochemical fossil that accumulated during the Great Oxygenation Event 2 billion years ago, as insoluble ferric oxide formed from oxygen produced by photosynthesis. Red iron oxide is a reservoir of the oxygen exhaled by ancient green creatures. We smelt iron ore using carbon from coal, so that carbon fossilized for 300 million years combines with oxygen fossilized for 2,000 million years to flood our current atmosphere with CO2. Every ton of steel is associated with about 1.5 tons of fossil-CO2 liberated into the modern atmosphere. We could, at some cost, capture and sequester the CO2 from limekilns and blast furnaces, or we could choose to smelt iron ore without carbothermic reduction. I prefer to consider a different way to use fossil resources, both in the energy economy and the built environment. We need something besides steel to resist tensile forces, and something besides concrete for resisting compression.

In this white paper, I consider using the fossil-hydrogen for energy, and the fossil-carbon for durable materials, an approach called HECAM—Hydrogen Energy and Carbon Materials. This involves simple pyrolysis of coal, oil, or gas to extract fossil-hydrogen for use as a clean fuel. The fossil-carbon is left in the condensed state, as solid carbon or as carbon-rich resins. Production of enough fossil-hydrogen to satisfy the needs of the energy economy creates a very large amount of solid carbon and carbon-rich resins, which can satisfy the needs of the built environment.

Fossil Hydrogen for Energy, Fossil Carbon for Materials

stop burning limestone for lime. Coal, petroleum, and gas are used as "hydrogen ores" for energy and "carbon ores" for materials, as presented in more detail previously (Halloran, 2007). Note that this necessarily means that only a fraction of the fossil fuel is available for energy. This fraction ranges from about 60 percent for natural gas to about 20 percent for coal. This might seem like we are "wasting" 40 percent of the gas or 80 percent of the coal—but it is wasted only if a valuable co-product cannot be found. If the solid carbon is used as a building material, the residual carbon could have more value as a solid than it did as a fuel.

Natural gas is a rich hydrogen ore, offering about 60 percent of its high heating value (HHV) from hydrogen combustion. It is a relatively lean carbon ore, and the hydrogen can be liberated by simple thermal decomposition: CH4 = 2H2 + C. The solid carbon is deposited from the vapor. Such vapor-deposited carbons are usually sooty nanoparticles, such as carbon black, which might be of limited use in the built environment. However, it may be possible to engineer new processes to produce very high strength and high-value vapor-deposited carbons, such as fibers, nanotubes, or pyrolytic carbons (Halloran, 2008). Pyrolytic carbon has exceptional strength. Vapor-derived carbon fibers could be very strong. Carbon nanotubes, which are very promising, are made from the vapor phase, and the large-scale decomposition of millions of tons of hydrocarbon vapors might provide a path for mass production.

An independent path for methane-rich hydrocarbon gases involves dehydrogenating the methane to ethylene: 2CH4 = 2H2 + C2H4. Conversion to ethylene liberates only half the hydrogen for fuel use, reducing the hydrogen energy yield to about 30 percent of the HHV of methane. However, the ethylene is a very useful material feedstock. It can be polymerized to polyethylene, the most important commodity polymer. Polyethylene is mostly carbon (87 wt percent C). Perhaps it is more convenient to sequester the fossil-carbon with some of the fossil-hydrogen as an easy-to-use white resin rather than as more difficult-to-use black elemental carbon. We can consider the polyethylene route as "white HECAM," with the elemental carbon route as "black HECAM."

Polyethylene from white HECAM could be very useful in the built environment as a thermoplastic resin, the matrix for fiber composites, or a binder for cementing aggregates. It should not be a difficult challenge to produce thermoset grades, to replace hydraulic-setting concretes with chemically setting composites. Moreover, if cost-effective methods can be found for producing ultrahigh molecular weight polyethylene (UHMWPE) fibers, we could envision construction materials similar to the current Spectra™ or Dyneema™ fibers, which are among the strongest of all manufactured materials. The tensile strength of these UHMWPE fibers is on the order of 2,400 MPa, more than 10 times higher than the typical yield strength of grade 60 rebar steel. The density of steel is 8 times as large as that of polyethylene (PE), so the specific strength can be 80 times better for UHMWPE.

Petroleum as a hydrogen ore offers about 40 percent of its HHV from hydrogen, and is an exceptionally versatile carbon ore. The petrochemical industry exists to manipulate the C/H ratio of many products, producing many structural materials. Indeed, carbon fibers—the premier high-strength composite reinforcement—are manufactured from petroleum pitch. Fabricated carbons—the model for masonry-like carbon-building materials—are manufactured from petroleum coke.

Coal, with an elemental formulation around CH0.7, is the leanest hydrogen ore (but the richest carbon ore). When coal is heated in the absence of air (pyrolyzed), it partially melts to a viscous liquid (metaplast). Hydrogen and hydrocarbon gases evolve, which swells the viscous metaplast into foam. In metallurgical coke, the metaplast solidifies as a spongy cellular solid. Coke is about as strong as ordinary Portland cement concrete (OPC), but only about one-third the density of OPC. A stronger, denser solid carbon can be obtained by controlling the foaming during pyrolysis by various methods, or by pulverizing the coke and forming a carbon-bonded-carbon with coal tar pitch (a resin from coal pyrolysis). Wiratmoko has demonstrated that these pitch-bonded cokes, similar to conventional carbon masonry, can be 3-10 times as strong as OPC, and stronger than high-strength concrete or fired clay brick, at less than half the density of OPC and 60 percent the density of clay brick (Wiratmoko and Halloran, 2009). Like petroleum pitch, coal tar pitch can be used as a precursor for carbon fibers.

Although less than 20 percent of the HHV of coal comes from the burning of the hydrogen, coal pyrolysis still can be competitive for the manufacture of hydrogen for fuel. Recently, Guerra conducted a thorough technoeconomic analysis of HECAM from coal, using a metallurgical coke plant as a model (Guerra, 2010). Hydrogen could be produced by coal pyrolysis with much less CO2 emission, compared to hydrogen from the standard method of steam reforming of natural gas. The relative hydrogen cost depends on the price of natural gas, the price of coal, and the market value of the solid carbon co-product. Assuming that the solid carbon could be manufactured as a product comparable to concrete masonry blocks, the hydrogen from coal pyrolysis is cheaper if carbon masonry blocks would have about 80 percent of the market value of concrete blocks (based on 2007 coal and gas prices).

Since the fossil-carbon is not burned in the HECAM process, the carbon dioxide emission is much lower. However, much more coal has to be consumed for the same amount of energy. This carbon, however, is not wasted, but rather put to productive use. Because HECAM combines energy and materials, comparisons of environmental impact are more complicated. For one example (Halloran and Guerra, 2011), we considered producing a certain quantity of energy (1 TJ) and a certain volume of building materials (185 cubic meters). HECAM with hydrogen energy and carbon building materials emitted 47 tons of CO2 and required 236 m3 to

72 GRAND CHALLENGES IN EARTHQUAKE ENGINEERING RESEARCH be removed by mining. Burning coal and using OPC for the States, carbon dioxide from power plants, steel mills, and material emitted 150 tons of CO2 and required 245 m3 to be cement kilns is vented to the atmosphere at no monetary removed from quarries and mines. cost to the manufacturer. It is likely in the future that climate change abatement and greenhouse gas control will become a concern for the building industry. Can Carbon and Carbon-Rich Solids Be Used in the Infrastructure? Could Carbon-Rich Materials Be Better for Earthquake The mechanical properties of carbons appear to be Resilience? favorable. For compression-resistors, carbons have been made that offer significant advantages in strength and A non-specialist like me, at a workshop of experts like strength/density ratio. Carbons are quite inert with respect to this, should not offer an opinion on this question. I simply aqueous corrosion and should be durable. But of course the do not know. However, two of the strongest and lightest properties of these carbons in real environments have never structural materials available for any type of engineering are been tested. For tensile-resistors, carbon-fiber composites or carbon fibers and UHMWPE fibers, which are fossil-carbon UHMWPE-fiber composites should be more than adequate, sequestering materials we contemplate for HECAM. The because they should be stronger and much less dense than specific strength and specific stiffness of fiber composites steel. Durability has yet to be demonstrated. Fire resistance based on fossil-carbon based materials should easily exceed is an obvious issue. The ability to manufacture these mate- the requirements of structural steel. Masonry based on fossil- rials in realistic volumes has yet to be demonstrated. An carbon might easily exceed the performance of ordinary analogue to the setting of cement has not been demonstrated, Portland cement concrete, and could be stronger, lighter, and although conventional chemical cross-linking appears to be more durable. Would this enable a more earthquake-resilient viable. Construction methods using these materials have yet built infrastructure? Would it make a more environmentally to be invented. sustainable infrastructure? I hope this is discussed in this The cost of HECAM materials is not clearly defined, workshop. largely because the materials cost is related to the value of the energy co-product, in the same way that the energy cost The 19th Century as a Guide for the 21th Century is related to the value of the materials co-product. Guerra’s preliminary analysis looks favorable, with each co-product When contemplating any grand challenge, it is useful to subsidizing the other. Fundamentally, durable materials such look back into history. One hundred years ago, concrete and as concrete and steel are worth much more per ton than coal, steel construction was still quite new. One hundred fifty years and about the same as natural gas. Values in 2003 were about ago, structural steel was not available for construction. Two $70/ton for OPC, $220/ton for rebar, $44/ton for coal, and hundred years ago, there was no modern cement. So let me $88/ton for natural gas (Halloran, 2007). Carbon-rich solids go back two centuries, to 1811, and consider what was avail- are lower in density (and stronger), so figuring on the basis able for the built environment. There was no structural steel of volume suggests that converting some of the fossil fuels in 1811. 
Steel was then a very costly engineering material, into construction materials should be economically feasible. used in swords and tools in relatively small quantities. Steel But none of this has been demonstrated. was simply not available in the quantity and the cost needed Similar carbon materials and composites are known for use as a structural material. There was no Portland cement in aerospace technologies as high-performance, very high- concrete in 1811. Joseph Aspdin did not patent Portland cost materials. Clearly aerospace-grade graphite fibers or cement until 1824. SpectraTM fibers would not possibly be affordable in the But a great deal changed in a rather short time. In tonnages required for infrastructure. For example, the tensile 1810, the Tickford Iron Bridge was built with cast iron, not strength of about 1,400 MPa has been reported for carbon steel. By 1856, Bessemer had reduced the cost of steel, and fibers produced from coal tar after relatively cheap process- Siemens had produced ductile steel plate. By 1872, steel was ing (Halloran, 2007). These are not as strong as aerospace- used to build the Brooklyn Bridge. The Wainwright Build- grade graphite fibers (5,600 MPa), but are comparable in ing in 1890 had the first steel framework. The first glass and strength to the glass fibers now used in construction, which steel building (Peter Behrens’ AEG Turbine Factory Berlin) have a tensile strength of 630 MPa as strands. So we must arrived in 1908. I.K. Brunel used Portland cement in his stretch our imagination to envision construction-grade car- Thames Tunnel in 1828. Joseph Monier used steel-reinforced bon fibers and UHMWPE fibers, perhaps not as strong but concrete in 1867, and the first concrete high-rise was built not nearly as costly as aerospace grade. In the same sense, in Cincinnati in 1893. Perhaps a similar change can occur in the steel used in rebar is not nearly the cost (or the quality) the 21st century, and perhaps our descendents will think us of the steel used in aircraft landing gear. fools to burn useful materials like carbon. Much will also depend on when (or if) there will be an effective cost for CO2 emissions. At present in the United

References

Guerra, Z. 2010. Technoeconomic Analysis of the Co-Production of Hydrogen Energy and Carbon Materials. Ph.D. Thesis, University of Michigan, Ann Arbor.
Halloran, J. W. 2007. Carbon-neutral economy with fossil fuel-based hydrogen energy and carbon materials. Energy Policy 53:4839-4846.
Halloran, J. W. 2008. Extraction of hydrogen from fossil resources with production of solid carbon materials. International Journal of Hydrogen Energy 33:2218-2224.
Halloran, J. W., and Z. Guerra. 2011. Carbon building materials from coal char: Durable materials for solid carbon sequestration to enable hydrogen production by coal pyrolysis. Pp. 61-71 in Materials Challenges in Alternative and Renewable Energy: Ceramic Transactions, Vol. 224, G. G. Wicks, J. Simon, R. Zidan, E. Lara-Curzio, T. Adams, J. Zayas, A. Karkamkar, R. Sindelar, and B. Garcia-Diaz, eds. John Wiley & Sons, Inc., Hoboken, NJ.
Wiratmoko, A., and J. W. Halloran. 2009. Fabricated carbon from minimally-processed coke and coal tar pitch as a carbon-sequestering construction material. Journal of Materials Science 34(8):2097-2100.

74 GRAND CHALLENGES IN EARTHQUAKE ENGINEERING RESEARCH UNCERTAINTY QUANTIFICATION AND EXASCALE equally drawn from a consideration of the stochastic forward COMPUTING: OPPORTUNITIES AND CHALLENGES problem, or the stochastic optimization problem. FOR EARTHQUAKE ENGINEERING Uncertainty Quantification: Opportunities and Challenges Omar Ghattas Departments of Mechanical Engineering and Geological Perhaps the central challenge facing the field of com- Sciences putational science and engineering today is: how do we Institute for Computational Engineering & Sciences quantify uncertainties in the predictions of our large-scale The University of Texas at Austin simulations? For many societal grand challenges, the “single point” deterministic predictions issued by most contempo- Introduction rary large-scale simulations of complex systems are just a first step: to be of value for decision making (optimal design, In this white paper we consider opportunities to ex- optimal allocation of resources, optimal control, etc.), they tend large-scale simulation-based seismic hazard and risk must be accompanied by the degree of confidence we have analysis from its current reliance on deterministic earth - in the predictions. This is particularly true in the field of quake simulations to those based on stochastic models. The earthquake engineering, which historically has been a leader simulations we have in mind begin with dynamic rupture, in its embrace of stochastic modeling. Indeed, Vision 2025, proceed through seismic wave propagation in large regions, American Society of Engineers’ (ASCE’s) vision for what and ultimately couple to structural response of buildings, it means to be a civil engineer in the world of the future, as- bridges, and other critical infrastructure—so-called “rupture- serts among other characteristics that civil engineers (must) to-rafters” simulations. The deterministic forward problem serve as . . . managers of risk and uncertainty caused by alone—predicting structural response given rupture, earth, natural events. . . . (ASCE, 2009). Once simulations are and structural models—requires petascale computing, and is endowed with quantified uncertainties, we can formally pose receiving considerable attention (e.g., Cui et al., 2010). The the decision-making problem as an optimization problem inverse problem—given observations, infer parameters in governed by stochastic partial differential equations (PDEs) source, earth, or structural models—increases the computa- (or other simulation models), with objective and/or constraint tional complexity by several orders of magnitude. Finally, ex- functions that take the form of, for example, expectations, tending the framework to the stochastic setting—where un- and decision variables that represent design or control certainties in observations and parameters are quantified and parameters (e.g., constitutive parameters, initial/boundary propagated to yield uncertainties in predictions—demands conditions, sources, geometry). the next major prefix in supercomputing: the exascale. 
Uncertainty quantification arises in three fundamental Although the anticipated arrival of the age of exascale ways in large-scale simulation: computing near the end of this decade is expected to provide the raw computing power needed to carry out stochastic • Stochastic inverse problem: Estimation of probability r upture-to-rafters simulations, the mere availability of densities for uncertain parameters in large-scale sim- O(1018) flops per second peak performance is insufficient, ulations, given noisy observations or measurements. by itself, to ensure success. There are two overarching chal- • Stochastic forward problem: Forward propagation of lenges: (1) can we overcome the curse of dimensionality the parameter uncertainties through the simulation to to make uncertainty quantification (UQ) for large-scale issue stochastic predictions. earthquake simulations tractable, even routine; and (2) can • Stochastic optimization: Solution of the stochastic we design efficient parallel algorithms for the deterministic optimization problems that make use of statistics of forward simulations at the heart of UQ that are capable of these predictions as objectives and/or constraints. scaling up to the expected million nodes of exascale systems, and that also map well onto the thousand-threaded nodes Although solution of stochastic inverse, forward, or that will form those systems? These two questions are wide optimization problems can be carried out today for smaller open today; we must begin to address them now if we hope models with a handful of uncertain parameters, these tasks to overcome the challenges of UQ in time for the arrival of are computationally intractable for complex systems char- the exascale era. acterized by large-scale simulations and high-dimensional We illustrate several of the points in this white paper with p arameter spaces using contemporary algorithms (see, examples taken from wave propagation, which is just one e.g., Oden et al., 2011). Moreover, existing methods suffer component of the end-to-end rupture-to-rafters simulation, from the curse of dimensionality: simply throwing more but typically the most expensive (some comments are made processors at these problems will not address the basic diffi- on the other components). Moreover, we limit the discussion culty. We need fundamentally new algorithms for estimation of UQ to the stochastic inverse problem. Despite the narrow- and propagation of, and optimization under, uncertainty in ing of focus, the conclusions presented here could have been large-scale simulations of complex systems.

To focus our discussion, in the remainder of this section we will consider challenges and opportunities associated with the first task above, that of solving stochastic inverse problems, and employing Bayesian methods of statistical inference. This will be done in the context of the modeling of seismic wave propagation, which typically constitutes the most expensive component in simulation-based rupture-to-rafters seismic hazard assessment.

The problem of estimating uncertain parameters in a simulation model from observations is fundamentally an inverse problem. The forward problem seeks to predict output observables, such as seismic ground motion at seismometer locations, given the parameters, such as the heterogeneous elastic wave speeds and density throughout a region of interest, by solving the governing equations, such as the elastic (or poroelastic, or poroviscoelastoplastic) wave equations. The forward problem is usually well posed (the solution exists, is unique, and is stable to perturbations in inputs), causal (later-time solutions depend only on earlier-time solutions), and local (the forward operator includes derivatives that couple nearby solutions in space and time). The inverse problem, on the other hand, reverses this relationship by seeking to estimate uncertain (and site-specific) parameters from (in situ) measurements or observations. The great challenge of solving inverse problems lies in the fact that they are usually ill-posed, non-causal, and non-local: many different sets of parameter values may be consistent with the data, and the inverse operator couples solution values across space and time.

Non-uniqueness in the inverse problem stems in part from the sparsity of data and the uncertainty in both measurements and the model itself, and in part from non-convexity of the parameter-to-observable map (i.e., the solution of the forward problem to yield output observables, given input parameters). The popular approach to obtaining a unique "solution" to the inverse problem is to formulate it as an optimization problem: minimize the misfit between observed and predicted outputs in an appropriate norm while also minimizing a regularization term that penalizes unwanted features of the parameters. This is often called Occam's approach: find the "simplest" set of parameters that is consistent with the measured data. The inverse problem thus leads to a nonlinear optimization problem that is governed by the forward simulation model. When the forward model takes the form of PDEs (as is the case with the wave propagation models considered here), the result is an optimization problem that is extremely large-scale in the state variables (displacements, stresses or strains, etc.), even when the number of inversion parameters is small. More generally, because of the heterogeneity of the earth, the uncertain parameters are fields, and when discretized they result in an inverse problem that is very large scale in the inversion parameters as well.

Estimation of parameters using the regularization approach to inverse problems as described above will yield an estimate of the "best" parameter values that simultaneously fit the data and minimize the regularization penalty term. However, we are interested in not just point estimates of the best-fit parameters, but also a complete statistical description of all the parameter values that is consistent with the data. The Bayesian approach does this by reformulating the inverse problem as a problem in statistical inference, incorporating uncertainties in the measurements, the forward model, and any prior information on the parameters. The solution of this inverse problem is the so-called "posterior" probability density of the parameters, which reflects the degree of credibility we have in their values (Kaipio and Somersalo, 2005; Tarantola, 2005). Thus we are able to quantify the resulting uncertainty in the model parameters, taking into account uncertainties in the data, model, and prior information. Note that the term "parameter" is used here in the broadest sense—indeed, Bayesian methods have been developed to infer uncertainties in the form of the model as well (so-called structural uncertainties).

The Bayesian solution of the inverse problem proceeds as follows. Suppose the relationship between observable outputs y and uncertain input parameters p is denoted by y = f(p, e), where e represents noise due to measurement and/or modeling errors. In other words, given the parameters p, the function f(p) invokes the solution of the forward problem to yield y, the predictions of the observables. Suppose also that we have the prior probability density πpr(p), which encodes the confidence we have in prior information on the unknown parameters (i.e., independent of information from the present observations), and the likelihood function π(yobs|p), which describes the conditional probability that the parameters p gave rise to the actual measurements yobs. Then Bayes' theorem of inverse problems expresses the posterior probability density of the parameters, πpost, given the data yobs, as the conditional probability

    πpost(p) = π(p|yobs) = k πpr(p) π(yobs|p)        (1)

where k is a normalizing constant. The expression (1) provides the statistical solution of the inverse problem as a probability density for the model parameters p.

Although it is easy to write down expressions for the posterior probability density such as (1), making use of these expressions poses a challenge, because of the high dimensionality of the posterior probability density (which is a surface of dimension equal to the number of parameters) and because the solution of the forward problem is required at each point on this surface. Straightforward grid-based sampling is out of the question for anything other than a few parameters and inexpensive forward simulations. Special sampling techniques, such as Markov chain Monte Carlo (MCMC) methods, have been developed to generate sample ensembles that typically require many fewer points than grid-based sampling (Kaipio and Somersalo, 2005; Tarantola, 2005). Even so, MCMC approaches will become intractable as the complexity of the forward simulations and the dimension of the parameter spaces increase.

When the parameters are a (suitably discretized) field (such as density or elastic wave speeds of a heterogeneous earth), and when the forward PDE requires many hours to solve on a supercomputer for a single point in parameter space (such as seismic wave propagation in large regions), the entire MCMC enterprise collapses dramatically.

The central problem in scaling up the standard MCMC methods for large-scale forward simulations and high-dimensional parameter spaces is that this approach is purely black-box, i.e., it does not exploit the structure of the parameter-to-observable map f(p). The key to overcoming the curse of dimensionality, we believe, lies in effectively exploiting the structure of this map to implicitly or explicitly reduce the dimension of both the parameter space as well as the state space. The motivation for doing so lies in the fact that the data are often informative about just a fraction of modes of the parameter field, because of ill-posedness of the inverse problem. Another way of saying this is that the Jacobian of the parameter-to-observable map is typically a compact operator, and thus can be represented effectively using a low-rank approximation—that is, it is sparse with respect to some basis (Flath et al., 2011). The remaining dimensions of parameter space that cannot be inferred from the data are typically informed by the prior; however, the prior does not require solution of forward problems, and is thus cheap to compute. Compactness of the parameter-to-observable map suggests that the state space of the forward problem can be reduced as well. A number of current approaches to model reduction for stochastic inverse problems show promise. These range from Gaussian process response surface approximation of the parameter-to-observable map (Kennedy and O'Hagan, 2001), to projection-type forward model reductions (Galbally et al., 2010; Lieberman et al., 2010), to polynomial chaos approximations of the stochastic forward problem (Narayanan and Zabaras, 2004; Ghanem and Doostan, 2006; Marzouk and Najm, 2009), to low-rank approximation of the Hessian of the log-posterior (Flath et al., 2011; Martin et al., In preparation). In the remainder of this section, as just one example of the above ideas, we illustrate the dramatic speedups that can be achieved by exploiting derivative information of the parameter-to-observable map, and in particular the properties of the Hessian.

Exploiting derivative information has been the key to overcoming the curse of dimensionality in deterministic inverse and optimization problems (e.g., Akçelik et al., 2006), and we believe it can play a similar critical role in stochastic inverse problems as well. Using modern adjoint techniques, gradients can be computed at a cost of a single linearized forward solve, as can actions of Hessians on vectors. These tools, combined with specialized solvers that exploit the fact that many ill-posed inverse problems have compact data misfit operators, often permit solution of deterministic inverse problems in a dimension-independent (and typically small) number of iterations. Deterministic inverse problems have been solved for millions of parameters and states in tens of iterations, for which the (formally dense, of dimension in the millions) Hessian matrix is never formed, and only its action on a vector (which requires a forward/adjoint pair of solves) is required (Akçelik et al., 2005). These fast deterministic methods can be capitalized upon to accelerate sampling of the posterior density πpost(p), via Langevin dynamics. Long-time trajectories of the Langevin equation sample the posterior, and integrating the equation requires evaluation of the gradient at each sample point. More importantly, the equation can be preconditioned by the inverse of the Hessian, in which case its time discretization is akin to a stochastic Newton method, permitting us to recruit many of the ideas from deterministic large-scale Newton methods developed over the past several decades.

This stochastic Newton method has been applied to a nonlinear seismic inversion problem, with the medium parameterized into 65 layers (Martin et al., In preparation). Figure 1 indicates just O(10^2) samples are necessary to adequately sample the (non-Gaussian) posterior density, while a reference (non-derivative) MCMC method (Delayed Rejection Adaptive Metropolis) is nowhere near converged after even O(10^5) samples. Moreover, the convergence of the method appears to be independent of the problem dimension when scaling from 65 to 1,000 parameters. Although the forward problem is still quite simple (wave propagation in a 1D layered medium), and the parameter dimension is moderate (up to 1,000 parameters), this prototype example demonstrates the considerable speedups that can be had if problem structure is exploited, as opposed to viewing the simulation as a black box.

Exascale Computing: Opportunities and Challenges

The advent of the age of petascale computing—and the roadmap for the arrival of exascale computing around 2018—bring unprecedented opportunities to address societal grand challenges in earthquake engineering, and more generally in such fields as biology, climate, energy, manufacturing, materials, and medicine (Oden et al., 2011). But the extraordinary complexity of the next generation of high-performance computing systems—with hundreds of thousands to millions of nodes, each having multiple processors, each with multiple cores, heterogeneous processing units, and deep memory hierarchies—presents tremendous challenges for scientists and engineers seeking to harness their raw power (Keyes, 2011). Two central challenges arise: how do we create parallel algorithms and implementations that (1) scale up to and make effective use of distributed memory systems with O(10^5)-O(10^6) nodes and (2) exploit the power of shared memory massively multi-threaded individual nodes?

Although the first challenge cited is a classical difficulty, we can at least capitalize on several decades of work on constructing, scaling, analyzing, and applying parallel algorithms for distributed memory high-performance computing systems.
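The Hessian-preconditioned Langevin step described above (the "stochastic Newton" idea) has a compact algebraic form: propose from a Gaussian centered at the Newton update, with covariance equal to the inverse Hessian of the negative log-posterior. The sketch below is a schematic illustration under simplifying assumptions (a small dense Hessian and a quadratic toy posterior); it is not the authors' implementation, in which the Hessian is never formed and is applied matrix-free through forward/adjoint solves and low-rank approximation.

```python
import numpy as np

def stochastic_newton_proposal(p, grad, hess, rng):
    """One Hessian-preconditioned (stochastic Newton-style) proposal:
    draw from N(p - H^{-1} g, H^{-1})."""
    cov = np.linalg.inv(hess)        # toy-sized dense inverse; matrix-free in practice
    L = np.linalg.cholesky(cov)      # H^{-1} = L L^T
    newton_step = np.linalg.solve(hess, grad)
    return p - newton_step + L @ rng.normal(size=p.size)

# Toy quadratic negative log-posterior: 0.5 * p^T A p - b^T p
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian (constant for a quadratic)
b = np.array([1.0, 2.0])
rng = np.random.default_rng(0)

p = np.zeros(2)
samples = []
for _ in range(1000):
    g = A @ p - b                        # gradient of the negative log-posterior
    p = stochastic_newton_proposal(p, g, A, rng)
    samples.append(p)

print("sample mean:", np.mean(samples, axis=0))
print("exact posterior mean:", np.linalg.solve(A, b))
```

For this Gaussian toy target the proposal coincides with the posterior itself, so the Metropolis accept/reject correction that a general non-Gaussian problem would require is omitted here.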

Seismic wave propagation, in particular, has had a long history of being at the forefront of applications that can exploit massively parallel supercomputing, as illustrated, for example, by recent Gordon Bell Prize finalists and winners (Bao et al., 1996; Akçelik et al., 2003; Komatitsch et al., 2003; Burstedde et al., 2008; Carrington et al., 2008; Cui et al., 2010). To illustrate the strides that have been made and the barriers that remain to be overcome, we provide scalability results for our new seismic wave propagation code. This code solves the coupled acoustic-elastic wave equations in first order (velocity-strain) form using a discontinuous Galerkin spectral element method in space and explicit Runge-Kutta in time (Wilcox et al., 2010). The equations are solved in a spherical earth model, with properties given by the Preliminary Reference Earth Model. The seismic source is a double couple point source with a Ricker wavelet in time, with central frequency of 0.28 Hz. Sixth-order spectral elements are used, with at least 10 points per wavelength, resulting in 170 million elements and 525 billion unknowns. Mesh generation is carried out in parallel prior to wave propagation, to ensure that the mesh respects material interfaces and resolves local wavelengths. Table 1 depicts strong scaling of the global seismic wave propagation code to the full size of the Cray XT5 supercomputer (Jaguar) at Oak Ridge National Laboratory (ORNL). The results indicate excellent strong scalability for the overall code (Burstedde et al., 2010). Note that mesh generation costs about 25 time steps (tens of thousands of which are typically required), so that the cost of mesh generation is negligible for any realistic simulation. Not only is online parallel mesh generation important for accessing large memory and avoiding large input/output (I/O), but it becomes crucial for inverse problems, in which the material model changes at each inverse iteration, resulting in a need to remesh repeatedly. The results in Table 1 demonstrate that excellent scalability on the largest contemporary supercomputers can be achieved for the wave propagation solution, even when taking meshing into account, by careful numerical algorithm design and implementation. In this case, a high-order approximation in space (as needed to control dispersion errors) combined with a discontinuous Galerkin formulation (which provides stability and optimal convergence) together provide a higher computation to communication ratio, facilitating

Figure 1 Left: Comparison of number of points taken for sampling the posterior density for a 65-dimensional seismic inverse problem to identify the distribution of elastic moduli for a layered medium, from reflected waves. DRAM (black), unpreconditioned Langevin (blue), and Stochastic Newton (red) sampling methods are compared. The convergence indicator is the multivariate potential scale reduction factor, for which a value of unity indicates convergence. Stochastic Newton requires three orders of magnitude fewer sampling points than the other methods. Right: Comparison of convergence of the stochastic Newton method for 65 and 1,000 dimensions suggests dimension independence. SOURCE: Courtesy of James Martin, University of Texas at Austin.

Table 1 Strong scaling of discontinuous Galerkin spectral element seismic wave propagation code on the Cray XT-5 at ORNL (Jaguar), for a number of cores ranging from 32K to 224K.

# proc cores    meshing time* (s)    wave prop per step (s)    par eff wave    Tflops
32,460          6.32                 12.76                     1.00            25.6
65,280          6.78                 6.30                      1.01            52.2
130,560         17.76                3.12                      1.02            105.5
223,752         47.64                1.89                      0.99            175.6

*Meshing time is the time for parallel generation of the mesh (adapted to local wave speed) prior to wave propagation solution; wave prop per step is the runtime in seconds per time step of the wave propagation solve; par eff wave is the parallel efficiency associated with strong scaling; and Tflops is the double precision flop rate in teraflops/s.
SOURCE: Courtesy of Carsten Burstedde, Georg Stadler, and Lucas Wilcox, University of Texas at Austin.
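The parallel efficiencies in Table 1 follow directly from the per-time-step solve times: measured speedup divided by the ideal speedup implied by the increase in core count. The short script below recomputes them from the table's own numbers (small differences in the last digit come from rounding of the reported times).

```python
# Recompute strong-scaling speedup and parallel efficiency from the
# per-time-step wave-propagation times reported in Table 1.
table1 = {            # cores: seconds per time step
    32460:  12.76,
    65280:   6.30,
    130560:  3.12,
    223752:  1.89,
}

base_cores = 32460
base_time = table1[base_cores]

for cores, t in table1.items():
    speedup = base_time / t
    ideal = cores / base_cores
    efficiency = speedup / ideal
    print(f"{cores:>7} cores: speedup {speedup:5.2f}x, parallel efficiency {efficiency:4.2f}")
```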

78 GRAND CHALLENGES IN EARTHQUAKE ENGINEERING RESEARCH better latency tolerance and scalability to O(105) cores, while alone on large clusters and supercomputers. This trend will also resulting in dense local operations that ensure better only continue to accelerate. cache performance. Explicit time integration avoids a global Current high-end GPUs are capable of a teraflop per system solve, while the space filling curve-based ordering of second peak performance, which offers a spectacular two the mesh results in better locality. orders of magnitude increase over conventional CPUs. The However, if we consider full rupture-to-rafters simu- critical question, however, is: can this performance be ef- lations beyond wave propagation, new and greater chal - fectively harnessed by scientists and engineers to acceler- lenges arise. Rupture modeling may require dynamic ate their simulations? The new generation of many-core mesh adaptivity to track the evolving rupture front, and and accelerated chips performs well on throughput-oriented historically this has presented significant challenges on tasks, such as those supporting computer graphics, video highly parallel systems. In recent work, we have designed gaming, and high-definition video. Unfortunately, a different scalable dynamic mesh refinement and coarsening algo- picture emerges for scientific and engineering computations. rithms that scale to several hundred thousand cores, present Although certain specialized computations (such as dense little overhead relative to the PDE solution, and support matrix problems and those with high degrees of locality) map complex geometry and high-order continuous/discontinuous well onto modern many-core processors and accelerators, discretization (Burstedde et al., 2010). Although they have the mainstream of conventional scientific and engineering not yet been applied to dynamic rupture modeling, we ex - simulations—including the important class of PDE solvers— pect that the excellent scalability observed in Table 1 will be involve sparse operations, which are memory bandwidth- retained. On the other hand, coupling of wave propagation bound, not throughput-bound. As a result, the large increases with structural response presents much greater challenges, in the number of cores on a processor, which have occurred because of the need to solve the structural dynamics equa - without a concomitant increase in memory bandwidth (be- tions with an implicit method (earthquakes usually excite cause of the large cost and low demand from the consumer structures in their low modes, for which explicit methods market), deliver little increase in performance. Indeed, sparse are highly wasteful). Scalability of implicit solvers to hun - matrix computations often achieve just 1-2 percent of peak dreds of thousands of cores and beyond remains extremely performance on modern GPUs (Bell and Garland, 2009). challenging because of the global communication required Future peak performance increases will continue to come in by effective preconditioners, though progress continues to the form of processors capable of massive hardware multi- be made (Yang, 2006). Finally, adding nonlinear constitu- threading. 
It is now up to scientists and engineers to adapt tive models or finite deformations into the soil or structural to this new architectural landscape; the results thus far have behavior greatly increases parallel complexity, because of been decidedly mixed, with some regular problems able to the need for dynamic load balancing and possibly local time achieve large speedups, but many sparse unstructured prob- stepping. It is fair to say that the difficulties associated with lems unable to benefit. scaling end-to-end rupture-to-rafters simulations are formi - Here we provide evidence of the excellent GPU perfor- dable, but not insurmountable if present rates of progress mance that can be obtained by a hybrid parallel CPU-GPU can be sustained (and accelerated). implementation of the discontinuous Galerkin spectral On the other hand, the second challenge identified element seismic wave propagation code described above above—exploiting massive on-chip multithreading—has (Burstedde et al., 2010). The mesh generation component emerged in the past several years and presents new and perni- remains on the CPU, because of the complex, dynamic data cious difficulties, particularly for PDE solvers. A sea change structures involved, while the wave propagation solution has is under way in the design of the individual computer chips been mapped to the GPU, capitalizing on the local dense that power high-end supercomputers (as well as scientific blocks that stem from high-order approximation. Table 2 pro- workstations). These chips have exploded in complexity, and vides weak scaling results on the Longhorn GPU cluster at now support multiple cores on multiple processors, with deep the Texas Advanced Computing Center (TACC), which con- memory hierarchies and add-on accelerators such as graphic sists of 512 NVIDIA FX 5800 GPUs, each with 4GB graph- processing units (GPUs). The parallelism within compute ics memory, and 512 Intel Nehalem quad core processors nodes has grown remarkably in recent years, from the single connected by QDR InfiniBand interconnect. The combined core processors of a half-decade ago to the hundreds of cores mesh generation–wave propagation code is scaled weakly of modern GPUs and forthcoming many-core processors. from 8 to 478 CPUs/GPUs, while maintaining between 25K- These changes have been driven by power and heat dissipa- 28K seventh-order elements per GPU (the adaptive nature tion constraints, which have dictated that increased perfor- of mesh generation means we cannot precisely guarantee mance cannot come from increasing the speed of individual a fixed number of elements). The largest problem has 12.3 cores, but rather by increasing the numbers of cores on a million elements and 67 billion unknowns. As can be seen chip. As a result, computational scientists and engineers in- in the table, the scalability of the wave propagation code is creasingly must contend with high degrees of fine-grained exceptional; parallel efficiency remains at 100 percent in parallelism, even on their laptops and desktop systems, let weak scaling over the range of GPUs considered. Moreover,

the wave propagation solver sustains around 80 gigaflops/s (single precision), which is outstanding performance for an irregular, sparse (albeit high-order) PDE solver.

Table 2 Weak scaling of discontinuous Galerkin spectral element seismic wave propagation code on the Longhorn cluster at TACC.

#GPUs    #elem         mesh (s)    transf (s)    wave prop    par eff wave    Tflops (s.p.)
8        224048        9.40        13.0          29.95        1.000           0.63
64       1778776       9.37        21.3          29.88        1.000           5.07
256      6302960       10.6        19.1          30.03        0.997           20.3
478      12270656      11.5        16.2          29.89        1.002           37.9

#elem is the number of 7th-order spectral elements; mesh is the time to generate the mesh on the CPU; transf is the time to transfer the mesh and other initial data from CPU to GPU memory; wave prop is the normalized runtime (in µsec per time step per average number of elements per GPU); par eff wave is the parallel efficiency of the wave propagation solver in scaling weakly from 8 to 478 GPUs; and Tflops is the sustained single precision flop rate in teraflops/s. The wall-clock time of the wave propagation solver is about 1 second per time step; meshing and transfer time are thus completely negligible for realistic simulations.
SOURCE: Courtesy of Tim Warburton and Lucas Wilcox.

Although these results bode well for scaling earthquake simulations to future multi-petaflops systems with massively multi-threaded nodes, we must emphasize that high-order-discretized (which enhance local dense operations), explicit (which maintain locality of operations) solvers are in the sweet spot for GPUs. Implicit sparse solvers (as required in structural dynamics) are another story altogether: the sparse matrix-vector multiply alone (which is just the kernel of an iterative linear solver, and much more readily parallelizable than the preconditioner) often sustains only 1-2 percent of peak performance in the most optimized of implementations (Bell and Garland, 2009). Adding nonlinear constitutive behavior and adaptivity for rupture dynamics further complicates matters. In these cases, the challenges of obtaining good performance on GPU and like systems appear overwhelming, and will require a complete rethinking of how we model, discretize, and solve the governing equations.

Conclusions

The coming age of exascale supercomputing promises to deliver the raw computing power that can facilitate data-driven, inversion-based, high-fidelity, high-resolution rupture-to-rafters simulations that are equipped with quantified uncertainties. This would pave the way to rational simulation-based decision making under uncertainty in such areas as design and retrofit of critical infrastructure in earthquake-prone regions. However, making effective use of that power is a grand challenge of the highest order, owing to the extraordinary complexity of the next generation of computing systems. Scalability of the entire end-to-end process—mesh generation, rupture modeling (including adaptive meshing), seismic wave propagation, coupled structural response, and analysis of the outputs—is questionable on contemporary supercomputers, let alone future exascale systems with three orders of magnitude more cores. Even worse, the sparse, unstructured, implicit, and adaptive nature of rupture-to-rafters deterministic forward earthquake simulations maps poorly to modern consumer-market-driven, throughput-oriented chips with their massively multithreaded accelerators. Improvements in computer science techniques (e.g., auto-parallelizing and auto-tuning compilers) are important but insufficient: this problem goes back to the underlying mathematical formulation and algorithms. Finally, even if we could exploit parallelism on modern and emerging systems at all levels, the algorithms at our disposal for UQ suffer from the curse of dimensionality; entirely new algorithms that can scale to large numbers of uncertain parameters and expensive underlying simulations are critically needed.

It is imperative that we overcome the challenges of designing algorithms and models for stochastic rupture-to-rafters simulations with high-dimensional random parameter spaces that can scale on future exascale systems. Failure to do so risks undermining the substantial investments being made by federal agencies to deploy multi-petaflops and exaflops systems. Moreover, the lost opportunities to harness the power of new computing systems will ultimately have consequences many times more severe than mere hardware costs. The future of computational earthquake engineering depends critically on our ability to continue riding the exponentially growing curve of computing power, which is now threatened by architectures that are hostile to the computational models and algorithms that have been favored. Never before has there been as wide a gap between the capabilities of computing systems and our ability to exploit them. Nothing less than a complete rethinking of the entire end-to-end enterprise—beginning with the mathematical formulations of stochastic problems, leading to the manner in which they are approximated numerically, the design of the algorithms that carry out the numerical approximations, and the software that implements these algorithms—is imperative in order that we may exploit the radical changes in architecture with which we are presented, to carry out the stochastic forward and inverse simulations that are essential for rational decision making.

This white paper has provided several examples—in the context of forward and inverse seismic wave propagation—of the substantial speedups that can be realized with new formulations, algorithms, and implementations. Significant work lies ahead to extend these and other ideas to the entire spectrum of computations underlying simulation-based seismic hazard and risk assessment.

Acknowledgments

Figure 1 is the work of James Martin; Table 1 is the work of Carsten Burstedde, Georg Stadler, and Lucas Wilcox; and Table 2 is the work of Tim Warburton and Lucas Wilcox. This work was supported by AFOSR grant FA9550-09-1-0608, NSF grants 0619078, 0724746, 0749334, and 1028889, and DOE grants DEFC02-06ER25782 and DE-FG02-08ER25860.

References

Akçelik, V., J. Bielak, G. Biros, I. Epanomeritakis, A. Fernandez, O. Ghattas, E. J. Kim, J. Lopez, D. R. O'Hallaron, T. Tu, and J. Urbanic. 2003. High resolution forward and inverse earthquake modeling on terascale computers. In SC03: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Akçelik, V., G. Biros, A. Draganescu, O. Ghattas, J. Hill, and B. Van Bloemen Waanders. 2005. Dynamic data-driven inversion for terascale simulations: Real-time identification of airborne contaminants. In Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, Seattle, 2005.
Akçelik, V., G. Biros, O. Ghattas, J. Hill, D. Keyes, and B. Van Bloemen Waanders. 2006. Parallel PDE constrained optimization. In Parallel Processing for Scientific Computing, M. Heroux, P. Raghavan, and H. Simon, eds., SIAM.
ASCE (American Society of Civil Engineers). 2009. Achieving the Vision for Civil Engineering in 2025: A Roadmap for the Profession. Available at content.asce.org/vision2025/index.html.
Bao, H., J. Bielak, O. Ghattas, L. F. Kallivokas, D. R. O'Hallaron, J. R. Shewchuk, and J. Xu. 1996. Earthquake ground motion modeling on parallel computers. In Supercomputing '96, Pittsburgh, PA, November.
Bell, N., and M. Garland. 2009. Implementing sparse matrix-vector multiplication on throughput-oriented processors. In SC09: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Burstedde, C., O. Ghattas, M. Gurnis, E. Tan, T. Tu, G. Stadler, L. C. Wilcox, and S. Zhong. 2008. Scalable adaptive mantle convection simulation on petascale supercomputers. In SC08: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Burstedde, C., O. Ghattas, M. Gurnis, T. Isaac, G. Stadler, T. Warburton, and L. C. Wilcox. 2010. Extreme-scale AMR. In SC10: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Carrington, L., D. Komatitsch, M. Laurenzano, M. M. Tikir, D. Michéa, N. L. Goff, A. Snavely, and J. Tromp. 2008. High-frequency simulations of global seismic wave propagation using SPECFEM3D GLOBE on 62K processors. In SC08: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Cui, Y., K. B. Olsen, T. H. Jordan, K. Lee, J. Zhou, P. Small, D. Roten, G. Ely, D. K. Panda, A. Chourasia, J. Levesque, S. M. Day, and P. Maechling. 2010. Scalable earthquake simulation on petascale supercomputers. In SC10: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Flath, H. P., L. C. Wilcox, V. Akçelik, J. Hill, B. Van Bloemen Waanders, and O. Ghattas. 2011. Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations. SIAM Journal on Scientific Computing 33(1):407-432.
Galbally, D., K. Fidkowski, K. Willcox, and O. Ghattas. 2010. Nonlinear model reduction for uncertainty quantification in large-scale inverse problems. International Journal for Numerical Methods in Engineering 81:1581-1608.
Ghanem, R. G., and A. Doostan. 2006. On the construction and analysis of stochastic models: Characterization and propagation of the errors associated with limited data. Journal of Computational Physics 217:63-81.
Kaipio, J., and E. Somersalo. 2005. Statistical and Computational Inverse Problems. Applied Mathematical Sciences, Vol. 160. New York: Springer-Verlag.
Kennedy, M. C., and A. O'Hagan. 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society, Series B (Statistical Methodology) 63:425-464.
Keyes, D. E. 2011. Exaflop/s: The why and the how. Comptes Rendus Mécanique 339:70-77.
Komatitsch, D., S. Tsuboi, C. Ji, and J. Tromp. 2003. A 14.6 billion degrees of freedom, 5 teraflops, 2.5 terabyte earthquake simulation on the Earth Simulator. In SC03: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.
Lieberman, C., K. Willcox, and O. Ghattas. 2010. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing 32:2523-2542.
Martin, J., L. C. Wilcox, C. Burstedde, and O. Ghattas. In preparation. A stochastic Newton MCMC method for large scale statistical inverse problems with application to seismic inversion.
Marzouk, Y. M., and H. N. Najm. 2009. Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems. Journal of Computational Physics 228:1862-1902.
Narayanan, V. A. B., and N. Zabaras. 2004. Stochastic inverse heat conduction using a spectral approach. International Journal for Numerical Methods in Engineering 60:1569-1593.
Oden, J. T., O. Ghattas, et al. 2011. Cyber Science and Engineering: A Report of the NSF Advisory Committee for Cyberinfrastructure Task Force on Grand Challenges. Arlington, VA: National Science Foundation.
Tarantola, A. 2005. Inverse Problem Theory and Methods for Model Parameter Estimation. Philadelphia, PA: SIAM.
Wilcox, L. C., G. Stadler, C. Burstedde, and O. Ghattas. 2010. A high-order discontinuous Galerkin method for wave propagation through coupled elastic-acoustic media. Journal of Computational Physics 229:9373-9396.
Yang, U. M. 2006. Parallel algebraic multigrid methods—high performance preconditioners. Pp. 209-236 in Numerical Solution of Partial Differential Equations on Parallel Computers, A. Bruaset and A. Tveito, eds., Lecture Notes in Computational Science and Engineering, Vol. 51. Heidelberg: Springer-Verlag.