A Positron Named Priscilla: Scientific Discovery at the Frontier

CLOCKS IN THE EARTH? The Science of Earthquake Prediction

by Addison Greenwood

Earthquake country. In America this epithet usually evokes images of western California, where the San Andreas fault—perhaps our most notorious geological feature and probably the most intensely studied by scientists—appears for most of its length as the thinnest of lines in the dirt. The fault was indubitably the cause, however, of the earthquake and consequent raging fire in 1906 that devastated San Francisco (see Figure 6.1). After this natural disaster, at least 700 lay dead (the San Francisco city archivist recently reexamined many of the city's records and concluded that the number of deaths may well have been many hundreds more) and many of the city's structures were toppled. More than a mere memorial to past events, the San Andreas fault is a harbinger. For the present population of San Francisco, for millions of other Californians who also live along the fault zone, and for hundreds of millions more throughout the world living near other plate boundaries, the past is prologue. The strongest natural forces on our planet, earthquakes are not accidental but incidental
to the seething, ceaseless dynamics of the great heat engine that is the earth's interior. The next major earthquake in California, "The Big One" natives call it cavalierly, is as inevitable a natural phenomenon as the sunrise, though unfortunately not as predictable. When it arrives, the dead could well number in the tens of thousands, a scenario that charges modern earthquake scientists, called seismologists, with a mission and a sense of purpose far beyond the pursuit of pure science.

THE RIDDLES AND RHYTHMS OF 1992: EARTHQUAKE SCIENCE IN THE MIRROR

In 1992 the American seismological community was galvanized and challenged by several dramatic events. On the morning of June 28, a meeting attended by many of America's most eminent seismologists was about to get under way at the University of California-Santa Cruz. Their purpose was to review and begin to evaluate a U.S. Geological Survey (USGS)-run project nearly a decade in the making: the Parkfield Earthquake Prediction Experiment. For reasons that will be explained shortly, some seismologists were virtually "certain" (in statistical terms they expressed a 95 percent confidence) that a major quake was due to occur near a small town in Central California within a few years. Writing in Science in 1985, Bill Bakun and Al Lindh of the USGS predicted that an earthquake would occur before 1993 near Parkfield. As of June 1992 the predicted quake had not arrived. But the project had focused on that particular section of the San Andreas fault a level of scientific attention and study that was unprecedented in America; many valuable insights had been collected, and the scientists were meeting to consider these and other implications of the Parkfield experience. Then on the morning the Parkfield meeting was to convene, the earth began to shake.
The quake was not the magnitude (M) 6 earthquake predicted for Parkfield, however, but one many times larger (see the Box on p. 158), an M 7.3 quake emanating from a small desert community called Landers, dozens of miles to the south. The Landers quake relegated to a distant second place (in terms of the magnitude of energy released) the state's most famous recent quake—the Loma Prieta M 7.1 quake in 1989, which, as the World Series telecast had just begun, millions experienced vividly via television. As California's largest quake in 40 years, Landers was big news to scientists not because of its devastation—fortunately,
its effects remained largely in the desert—but because it occurred in the south.

FIGURE 6.1 San Francisco, on the morning of April 18, 1906. This famous photograph by Arnold Genthe shows Sacramento Street and the approaching fire in the distance. (Photograph courtesy of the Fine Arts Museums of San Francisco, Achenbach Foundation for Graphic Arts.)

The Loma Prieta quake had given seismologists important information about the San Andreas fault around San Francisco, but Landers provided a message to Los Angelenos, whose city's fate rests upon a network of underground faults that share with Northern California only the fundamental fact that all are part of the major fault system defining the boundary between the Pacific and North American plates. Why does the occurrence of a quake many miles from Los Angeles carry a prophetic message for the nation's second-largest city? Will "The Big One" strike there or at San Francisco, or somewhere else along the hundreds of miles of faults throughout California? Will the Parkfield prediction pan out, and when the quake does arrive, will the unprecedented experimental effort devoted to that region pay off? What is happening underground during an earthquake, and why, and, most importantly, where and when? These questions all point to a bottom line that few would dispute, for from a successful methodology of earthquake prediction could come the conservation of billions of dollars and the survival of many people who otherwise might perish in these unavoidable natural disasters. And why unavoidable? The question
belongs in the same category as, "Why does the sun rise in the east?" The exigencies of a lawful universe, as elucidated by Newton and many more since, provide the scientific context for a sunrise. So too are earthquakes, on Earth at least, inevitable. And many find it ironic that though so much closer and theoretically more accessible to empirical examination than the sun, the internal machinations of the earth remain shrouded in mystery. "The Big One" obviously concerns scientists in a vital way, since predicting it and its effects could save many lives and billions of dollars. But the mythology of "The Big One" (a boon to newspaper sales if ever there was one) has obscured from the public some of the scientific clarity developed in the last decade about earthquakes. Nonetheless, while major strides have been taken to understand the genesis of earthquakes, nature may not be so willing to deliver up her secrets. It is by no means certain that scientists will ever be able to accurately forecast an earthquake, says geophysicist Bill Ellsworth (USGS-Menlo Park), though he and colleague Duncan Agnew (UC-San Diego) recently compiled a state-of-the-art survey of the current methods and models used to predict earthquakes. Still, each new quake pulls back a veil, and, as was the case when news of Landers arrived at the Parkfield meeting, the excitement among seismologists is palpable that behind this next one may be some scientific signal of a breakthrough in prediction, or at least confirmation for their models about the mechanics of earthquakes.

THE EARTH ITSELF

Geology is the study of how the earth has changed through time.
Evidence of such changes during the earth's 4.6-billion-year history is preserved and embedded in the rocks that, like an old rusting car bumping its way through city life, will contain clues of their journey through the earth's interior and down through aeons of time. Geological time is a euphemism for millions and billions of years. The journeys of the smallest of rocks and the largest of continents both ride the same juggernaut: internal currents of scorching, coursing heat that actually move masses of earth like great underground river currents, though at a pace of only inches a year. A more accurate image than a river, suggests Ellsworth, is a boiling pot of water, where the sea of bursting bubbles on the surface indicates that convection cells have developed in the pot. For the same reasons of basic physics—convection—hotter rock in the earth's deeper mantle
Earthquake Measurement Scales

Besides where and when, a legitimate prediction must say how big—a parameter known as an earthquake's magnitude. Earthquakes can devastate human society, and the earliest efforts to catalog their size reflect this anthropocentric emphasis, focusing on, according to seismologist Bruce Bolt, damage to structures of human origin, the amount of disturbance to the surface of the ground, and the extent of animal reaction to the shaking. The size of nineteenth-century (and earlier) earthquakes is more than an arcane bit of data, however, even today. Recurrence models developed by seismologists to predict future earthquakes based on the pattern of past ones employ not only the date but also the magnitude of a preinstrumental earthquake. To make use of an intensity scale, first locate all written reports and eyewitness observations whose times and dates are known sufficiently to cluster them into widely placed reactions to the same event. Next, decipher each separate report according to the scale, assign the individual data points an intensity value from I to XII, and plot them on a geographical map of the region. Connect the dots corresponding to each value, and you've constructed an isoseismal map. With the twentieth century came a more scientific measure of magnitude. Seismographs (or seismometers) originated as elaborate and sensitive pendulum-like devices with a "pen" mounted over a continuous roll of paper, producing a jagged line drawing that reflects the most delicate shaking of the earth's surface due to seismic waves. More modern seismographs use film and digital technology, and their readings are plugged into equations to quantify the energy released in an earthquake. The most famous of these equations was developed by American seismologist Charles F.
Richter, who, writes Bolt, defined the magnitude (ML) of a local earthquake as "the logarithm to base ten of the maximum seismic-wave amplitude (in thousandths of a millimeter) recorded on a standard seismograph at a distance of 100 kilometers from the earthquake epicenter" (Bolt, 1988, p. 112). The enormous range of energies involved requires a logarithmic measure, where a rise of one digit yields a tenfold increase in displacement. Earthquakes have been recorded as low as -2 and as high as 9 on the scale, a span of 11 magnitude units, corresponding to an amplitude range of 10^11. Compare, for example, the 1966 Parkfield quake (M 6) to the 1989 Loma Prieta (M 7.1) event that hit the San Francisco area. Loma Prieta released about 30 times as much energy as Parkfield.
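The bookkeeping behind these comparisons can be checked with a few lines of arithmetic. The sketch below is illustrative rather than drawn from the chapter; it assumes the standard conventions that each whole magnitude unit corresponds to a tenfold increase in recorded amplitude and to roughly a 10^1.5 (about 32-fold, often rounded to 30) increase in radiated energy.

```python
def amplitude_ratio(m1, m2):
    """Ratio of maximum seismogram amplitudes between two magnitudes.
    By Richter's definition, each magnitude unit is a tenfold
    increase in recorded displacement."""
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    """Approximate ratio of radiated seismic energy, using the
    standard convention of a factor of 10**1.5 per magnitude unit."""
    return 10 ** (1.5 * (m1 - m2))

# Loma Prieta (M 7.1) versus the 1966 Parkfield quake (M 6.0):
print(round(amplitude_ratio(7.1, 6.0), 1))  # 12.6 times the displacement

# One whole magnitude unit in energy terms:
print(round(energy_ratio(7.0, 6.0)))        # 32, the "30 times" rule of thumb

# The full recorded span of the scale, M -2 up to M 9:
print(f"{amplitude_ratio(9, -2):.0e}")      # 1e+11, eleven orders of magnitude
```

Note that the energy relation here is the common textbook approximation; modern catalogs based on seismic moment refine these ratios, but the logarithmic structure is the same.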
FIGURE 6.2 Modified Mercalli Intensity Scale (1956 version from Richter, 1958, pp. 137–138). (From Earthquakes by Bruce A. Bolt. Copyright © 1988 by W. H. Freeman and Company. Reprinted with permission.)

Intensity value and description:

I. Not felt. Marginal and long-period effects of large earthquakes.

II. Felt by persons at rest, on upper floors, or favorably placed.

III. Felt indoors. Hanging objects swing. Vibration like passing of light trucks. Duration estimated. May not be recognized as an earthquake.

IV. Hanging objects swing. Vibration like passing of heavy trucks; or sensation of a jolt like a heavy ball striking the walls. Standing cars rock. Windows, dishes, doors rattle. Glasses clink. Crockery clashes. In the upper range of IV, wooden walls and frame creak.

V. Felt outdoors; direction estimated. Sleepers awakened. Liquids disturbed, some spilled. Small unstable objects displaced or upset. Doors swing, close, open. Shutters, pictures move. Pendulum clocks stop, start, change rate.

VI. Felt by all. Many frightened and run outdoors. Persons walk unsteadily. Windows, dishes, glassware broken. Knickknacks, books, etc., off shelves. Pictures off walls. Furniture moved or overturned. Weak plaster and masonry D cracked. Small bells ring (church, school). Trees, bushes shaken visibly, or heard to rustle.

VII. Difficult to stand. Noticed by drivers. Hanging objects quiver. Furniture broken. Damage to masonry D, including cracks. Weak chimneys broken at roof line. Fall of plaster, loose bricks, stones, tiles, cornices, also unbraced parapets and architectural ornaments. Some cracks in masonry C. Waves on ponds, water turbid with mud. Small slides and caving in along sand or gravel banks. Large bells ring. Concrete irrigation ditches damaged.

VIII. Steering of cars affected. Damage to masonry C; partial collapse. Some damage to masonry B; none to masonry A. Fall of stucco and some masonry walls. Twisting, fall of chimneys, factory stacks, monuments, towers, elevated tanks. Frame houses moved on foundations if not bolted down; loose panel walls thrown out. Decayed piling broken off. Branches broken from trees. Changes in flow or temperature of springs and wells. Cracks in wet ground and on steep slopes.

IX. General panic. Masonry D destroyed; masonry C heavily damaged, sometimes with complete collapse; masonry B seriously damaged. General damage to foundations. Frame structures, if not bolted, shifted off foundations. Frames racked. Serious damage to reservoirs. Underground pipes broken. Conspicuous cracks in ground. In alluviated areas, sand and mud ejected, earthquake fountains, sand craters.

X. Most masonry and frame structures destroyed with their foundations. Some well-built wooden structures and bridges destroyed. Serious damage to dams, dikes, embankments. Large landslides. Water thrown on banks of canals, rivers, lakes, etc. Sand and mud shifted horizontally on beaches and flat land. Rails bent slightly.

XI. Rails bent greatly. Underground pipelines completely out of service.

XII. Damage nearly total. Large rock masses displaced. Lines of sight and level distorted. Objects thrown into the air.

* To avoid ambiguity of language, the quality of masonry, brick or otherwise, is specified by the following lettering: Masonry A—Good workmanship, mortar, and design; reinforced, especially laterally, and bound together by using steel, concrete, etc.; designed to resist lateral forces. Masonry B—Good workmanship and mortar; reinforced, but not designed in detail to resist lateral forces. Masonry C—Ordinary workmanship and mortar; no extreme weaknesses like failing to tie in at corners, but neither reinforced nor designed against horizontal forces. Masonry D—Weak materials, such as adobe; poor mortar; low standards of workmanship; weak horizontally.
region (just above the white-hot liquid outer core) moves upward, displacing some of the cooler upper-mantle material (which dives back down to deeper regions), creating a kind of great circular movement. Corresponding to the surface of the pot in this model is the interface where the earth's crustal plates (generally 40 kilometers or so thick, though nearly twice that under some mountains) can actually separate from the earth just below. The chemistry and rheology of this upper-mantle region, called the asthenosphere, permit the convection of heat in the earth to be transmitted to the rigid plates above in the form of movement. The crustal plates that form the earth's surface can be said to slide around on the viscous surface, reacting to these internal heat currents in the earth somewhat as small plastic poker chips might to the convection cells in the boiling pot of water. Instead of water bubbles bursting in the pot and releasing vapor into the air, the rising mantle material cools and forms new crust, as the plates slide around and jam into each other at their edges. The energy from this motion is temporarily stored elastically in the rock sections that press and lock together under friction, though "straining" to continue the motion powered by the heat currents. Eventually, the strain energy stored in this system overcomes the frictional resistance of the rock being crammed together, and the system "breaks," releasing the energy in the form of an earthquake. This model emerges from the reigning paradigm of plate tectonics that has revolutionized modern geology. About a dozen major plates constitute the surface of the earth. The visible continents we inhabit are themselves merely passengers frozen in place atop these slowly shifting rocky rafts of earth.
As an entire plate moves only a few centimeters in a year, it is not surprising that people hundreds or thousands of kilometers from a plate's edge, say in Dubuque, Iowa, can remain largely oblivious to the implications of the land beneath their feet not—relative to the center of the earth—being definitively anchored. But those on the very edge of a plate, including millions of Californians, experience plate tectonics viscerally, as patches of rock at the western edge of the North American plate undergo stress, accumulate elastic strain, and eventually and inevitably break loose from rock of the adjacent Pacific plate that is being propelled in a different direction. All such underground shifts are earthquakes, though most occur at levels undetectable except by instruments. Understanding exactly how this occurs, where and when it is most likely, and how much energy will manifest at the surface is the focus of Earth scientists from a number of specialties who come together in search of the keys to earthquake prediction.
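The arithmetic of "a few centimeters a year" makes the elastic rebound picture above concrete. The numbers in this sketch are assumptions rather than figures from the chapter: a long-term slip rate of roughly 3.5 centimeters per year is commonly quoted for the San Andreas fault, and several meters of slip is typical of a great rupture.

```python
SLIP_RATE_CM_PER_YEAR = 3.5  # assumed long-term San Andreas slip rate
COSEISMIC_SLIP_M = 4.0       # assumed slip released in one great rupture

# Years of steady plate motion needed for the locked fault to store
# a slip deficit equal to one great rupture's worth of offset:
years_between_ruptures = COSEISMIC_SLIP_M * 100 / SLIP_RATE_CM_PER_YEAR
print(round(years_between_ruptures))  # about 114 years of strain accumulation
```

The century-scale answer is why seismologists speak of recurrence intervals on individual fault segments rather than of earthquakes as isolated accidents.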
Elementary school children now hear this fairly straightforward description of why earthquakes are inevitable, and yet 30 years ago geophysicists were just beginning to piece it together, marking a new era in seismology, a field born at the end of the nineteenth century with the development of recording instruments designed to measure the shaking of the earth (seismometers or seismographs), and the "pictures" they produce (see Box on p. 162). But even the seismograms don't reveal the picture of plate tectonics described above. They simply don't contain that information. "Very clever people," says Ellsworth, "working with very sparse information, cracked the plate tectonics puzzle in the early 1960s, and the dramatic advances in seismology since then rest on better data gathering through advances in tools and technology, the enhanced power to compute complex numerical solutions, and advances in the theory." What Agnew and Ellsworth call the first "modern" theory of earthquake prediction, by G. K. Gilbert in 1883, was conceived without benefit of the plate tectonics model. Gilbert's notion, as idealized in the block-and-spring model, was developed by another American, Harry Fielding Reid, into what has come to be known as the elastic rebound theory. Reid's study of the fault region that broke during San Francisco's notorious 1906 earthquake provided the first solid evidence for the elastic rebound of the crust after an earthquake. After Reid, scientists began to look at individual earthquakes as passing through a seismic cycle that they could examine and analyze quantitatively, searching for patterns that could be used to develop a prediction of the next rupture. As a generalization, stress accumulates slowly over time on a given fault, it reaches a critical point of failure, the fault slips, and energy is released as an earthquake.
Go into the lab—or, more often these days, boot up your computer for a simulation—attach a spring to a block, and exert a pull, and you can watch the basic idea on which Gilbert and Reid built the earthquake cycle hypothesis in action. You will find that the amount of force required before the block slips and the distance the block moves are predictable. But the crucial question must be probed: Predictable in what sense? Ellsworth says that in the late 1970s, as elaborate experiments with block and spring models were being conducted in laboratories around the world (this construction of analog earthquake models is still going strong in such places as the Institute for Theoretical Physics in Santa Barbara), an important conceptualization came from Japanese seismologists Shimazaki and Nakata. They realized that, assuming the spring is being pulled at a constant rate, if the block always moves when a given force is
Seismic Waves

Seismologist Thorne Lay from UC-Santa Cruz defines seismology as "the study of elastic waves and the various sources that excite such waves in the earth and other celestial bodies" (Lay, 1992, p. 153). Tectonic forces develop from convective recycling of the earth's interior, heat conducted to the crust that is converted to energy stored in rocks, under strain, which eventually fracture: an earthquake. Seismographs reveal that earthquakes are constantly occurring, several hundred thousand each year. Most rumble at levels undetectable by our unaided senses. Seismographs also show that the energy waves emitted from these earthquakes take several distinguishable forms, each traveling at characteristic speeds. One type, the body waves, travels through the earth, with the primary or P-wave able to traverse both the liquid and solid parts of the interior. The other type of body wave is slower. Called secondary waves (S-waves), these waves shear material sideways as they travel and therefore cannot propagate through the liquid zones of the earth's depths. The other major type of wave moves across the surface of the planet and also comes in two types, the Love wave and the Rayleigh wave. Basic models and equations were developed to recognize and analyze the travel times each of these waves took—beginning at the source where rocks fracture in the earth's crust—to arrive at various recording seismographs spread over the surface of the planet. By triangulation of the times that a specific wave arrived at several stations, a given event could be located within the earth.
By comparing discrepancies between the times the various types of characteristic waves were predicted to—and then actually did—arrive at a given station, scientists gleaned a three-dimensional model of the earth's interior (Figure 6.3), and computer power has further enhanced the process, refining the approach now called seismic tomography. For instance, shear waves will not propagate through liquid, but primary body waves will; waves of any type pass more quickly through a colder region than a hotter one; and bulk mineral material adopts patterns of crystal alignment in its atoms that can enhance or retard the flow of waves. "These are just a few examples," said Lay (1992, p. 172), "of the ways in which seismological models have provided boundary conditions or direct constraints on models of planetary composition and physical state."
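The triangulation described above can be sketched in a few lines. This is a deliberately simplified illustration, not the chapter's method: it assumes straight ray paths at constant average crustal speeds (roughly 6 km/s for P waves and 3.5 km/s for S waves, both assumed values), whereas real location routines use full travel-time tables for a layered earth.

```python
V_P = 6.0  # assumed average P-wave speed in the crust, km/s
V_S = 3.5  # assumed average S-wave speed in the crust, km/s

def distance_km(s_minus_p_seconds):
    """Distance to an event from the S-minus-P arrival gap at one station.
    With constant speeds, t_s - t_p = d/v_s - d/v_p, which rearranges to
    d = (t_s - t_p) * v_p * v_s / (v_p - v_s)."""
    return s_minus_p_seconds * V_P * V_S / (V_P - V_S)

# One station constrains the event to a circle of this radius; circles
# from three stations intersect at the epicenter -- the triangulation
# described in the text.
print(distance_km(10.0))  # a 10-second S-P gap puts the source 84.0 km away
```

The slower S wave lags farther behind the P wave the farther both have traveled, which is why a single seismogram already carries a distance estimate.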
FIGURE 6.3 Paths of earthquake waves through the earth's mantle from earthquake sources at F to seismographs at the surface. All the seismic paths pass through the shaded region and provide a tomographic scan of it. (From Earthquakes by Bruce A. Bolt. Copyright © 1988 by W.H. Freeman and Company. Reprinted with permission.)

reached, one can predict when this will occur—what they called time-predictable behavior. Conversely, if the block always springs back to the same point, the result is slip predictable, though it can't be said when (over the course of exerting the force by pulling the spring) this will happen. In terms of prediction, says Ellsworth dryly, the slip-predictable model is very pessimistic. On the other hand, as geophysicists continue to improve their ability to actually measure the tectonic stresses that build up in fault zones, the time-predictable model could one day lead to a more precise means of forecasting.

THE LANGUAGE OF EARTHQUAKE PREDICTION

What is an earthquake prediction? "Any serious prediction," say Agnew and Ellsworth, "must include not only some statement about time of occurrence but also delineate the expected magnitude range [see Box on p. 158] and location." Without these elements, a prediction could not with certainty distinguish between what is being foreseen and what might actually occur, given how much seismicity occurs near plate boundaries, so seismologist and writer Bruce Bolt of the University of California-Berkeley adds a fourth element: "A statement of the odds that an earthquake of the predicted kind would occur by chance alone and without reference to any special evidence" (Bolt, 1988, p. 160). Seismicity means the distribution of earthquakes in space and over time.
"We currently record an average of 40 earthquakes per day in Southern California," said Tom Heaton (USGS-Pasadena), "and there is every reason to believe that there are many more that are too small for us to record."
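The block-and-spring reasoning of Gilbert, Reid, and later Shimazaki and Nakata can be animated in a few lines. This is a toy sketch under stated assumptions, not anyone's published model: the spring loads at a constant rate, and the block slips back to zero stored force whenever a fixed failure threshold is reached, which is the time-predictable case in its purest form.

```python
def stick_slip(load_rate=1.0, failure_force=22.0, steps=100):
    """Toy earthquake cycle: constant tectonic loading, with a slip
    event whenever stored force reaches a fixed failure threshold.
    Returns the time steps at which slip events occur."""
    force, events = 0.0, []
    for t in range(1, steps + 1):
        force += load_rate          # the spring is pulled at a constant rate
        if force >= failure_force:  # friction overcome: the "fault" slips
            events.append(t)
            force = 0.0             # complete stress drop (the toy assumption)
    return events

events = stick_slip()
print(events)  # [22, 44, 66, 88] -- a fixed threshold makes timing predictable
```

Relax either assumption and the predictability degrades: a variable failure threshold with a complete stress drop gives the slip-predictable case, where the size of the next event is knowable but, as Ellsworth notes, its timing is not.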
The current definitive overview of seismicity in California, edited by geologist Robert Wallace, was published recently by the USGS as Professional Paper 1515. The San Andreas Fault System, California, describes the geological (rocks and their formation), geomorphic (surface features), geophysical (the behavior of matter and forces), geodetic (surveys of the surface), and seismological (waves emanating from earthquakes and explosions in the earth) aspects of this network of faults that extends along much of the length of California, about 1300 kilometers, and perhaps as wide as 150 to 200 kilometers. There is actually an even larger fault system of which the San Andreas is but a part, running up the edge of the Pacific plate from the Gulf of California all the way north to Alaska. The San Andreas fault was discovered in the 1890s by Andrew Lawson of Berkeley. The northern end of the fault and the zone around it begin about 280 kilometers beyond San Francisco in the Pacific Ocean. It runs south-southeasterly just off the California coast, reaching landfall north of the city, and continues under metropolitan San Francisco, not far from the heart of the city. Continuing to the south it runs parallel to the coast about 60 kilometers inland until reaching the San Gabriel Mountains east of Los Angeles and then curls inland a bit more to the south, perhaps 150 kilometers east of San Diego. From an aerial or satellite photo, Wallace has said, the San Andreas looks like "a linear scar across the landscape," anywhere from several hundred meters to a kilometer in width, an effect of the erosion of rock that has been broken underground and then settled, over the aeons. The San Andreas fault proper is the principal plane where the local horizontal displacements in the earth's crust have occurred.
In the zone surrounding this shear plane, geophysicists use such terms as fault branches, splays, strands, and segments to describe details in the fault geometry. Whatever shows at the surface is called the fault trace. With the fault terrain thus delineated, a legitimate earthquake prediction must locate where on a given fault an event is expected. Most large earthquakes that occur at plate boundaries are categorized as shallow-focus earthquakes, occurring less than 70 kilometers deep. The word focus is usually replaced by the seismologist's term hypocenter, meaning the point where the rupture begins. Projecting the hypocenter directly above to the earth's surface locates an earthquake's epicenter, which is usually what appears on most seismic maps. The much less frequent intermediate-focus earthquakes occur from 70 to 300 kilometers deep, and the fewer still deep-focus earthquakes below that, though none are known to have originated deeper than about 680 kilometers.
drama of evolution could have been played. The fact that earthquake stress drops appear to be much smaller "is why there is life on Earth. Otherwise, everything would be dead." Heaton illustrates this prophecy with what physics tells us about a world where the smaller forces (that seem to be observed) were not the operative ones. The fault would slip 100 meters (compared to around 2 meters, for example, at Loma Prieta). The ground itself during this tear would move about 10 meters per second. Heaton hypothesizes: "If your building was strong enough to survive this motion—say a bomb shelter—then certainly you would be killed by a wall running into you at 25 miles an hour. … Nothing would survive the experience. The point is, that although earthquakes seem violent to humans they actually involve stress changes that are small (typically about 15 bars) compared with the confining stress at the depth of earthquakes." The measurements of accumulating strain in the crust over several decades also support this premise, says Heaton: "The rate at which shear strain builds on the San Andreas fault is about 2 × 10^-7 per year, or less than 1 bar of shear stress per decade." This is the first part of the dilemma: There would appear to be much more stress at the depths where earthquakes originate than is released during the event. But the essential stress number seismologists want to know is the amount at which the rock actually fails, and thus a second part of the stress dilemma must be explained: Many distinct lines of evidence suggest rocks are shearing at stresses much lower than would theoretically prevail at such depths. One such indicator comes from what is known about friction. If the shear stress was anywhere near 1 kilobar, laboratory experiments indicate, the rocks would melt during the rupture and enormous quantities of heat would accompany the event.
But "despite extensive research on this problem, no such heat flow anomalies have been detected," said Heaton. Another piece of data comes from direct observation, for example, at a hole some 3.5 kilometers deep excavated at Cajon Pass, adjacent to the San Andreas. In 1987 Mark Zoback (of Stanford University) and others found the shear stresses at this depth to be relatively low, probably less than several hundred bars, said Heaton. Heaton poses the essential question these anomalies raise: "Clearly there is a problem. … It is impossible that stress on the San Andreas fault steadily builds until a yield stress of more than a kilobar is reached. Why are faults so weak? In laboratory experiments, we have yet to observe yield in rocks at such low shear stresses and such high confining pressures." Possible answers to this conundrum take the form of two
candidate solutions, each of which, emphasizes Heaton, seriously complicates the simple picture assumed for the characteristic earthquake model. First, the confining pressure in these environments may actually be less than that calculated at the engineer's desk, because of the effects of fluids under high pressure that may be present. Second, his own work and that of others on the physics of dynamic rupture suggest that earthquakes can occur at relatively low stresses. Early in the careers of many of these scientists, only about a decade after plate tectonics began to take hold, seismology appeared to be in the grip of another revolution, based on the dilatancy-fluid diffusion model. In the early 1970s Scholz and many others reported laboratory experiments that appeared to explain some idiosyncrasies in 1969 Russian seismological data. As Bolt explains, "It has been found that, under some circumstances, wet rocks under shear strain increase their volume rather than decrease it. This increase in volume … is called dilatancy. There is evidence that the volume increase from pressure arises from opening and extending the many microcracks in the rocks. The ground water that then moves into the microcracks is much less compressible than air, so the rocks no longer close easily under pressure" (Bolt, 1988, p. 65). While this observation has not proven to be the breakthrough in prediction that it once appeared it might be, most geologists believe that the pressure effects of liquids are a crucial ingredient in local faulting. Said Heaton: "Understanding the role of fluids in fault zones is an important problem for which we currently have few certain answers."
However, he added, important evidence that fluids are present in fault zones comes from straightforward geological analyses (like those conducted by Rick Sibson and colleagues in New Zealand) of faults from quakes millions to billions of years old, faults that were buried but later uplifted and exposed and that contain hydrothermally deposited mineral veins. Physics demonstrates that a column of liquid reaching to earthquake-focus depths exerts a hydrostatic pressure, but one smaller than the lithostatic pressure exerted by a comparable column of rock. Ergo you have buoyancy, precluding the possibility that fluids migrate downward. Rather, he continues, the hypothesis is that fluids migrate upward from a source deep in the crust, probably due to the metamorphism of hydrated minerals. Upward-rushing fluids would encounter fairly large pressure gradients at such depths, and Heaton does not rule out the possibility that such a powerful confluence of forces could trigger an earthquake by decreasing the effective confining stress. If this were so, he continues, it would be necessary to understand the plumbing of a fault zone in order to predict when earthquakes occur. Heaton's USGS
colleague James Byerlee and Harvard's Jim Rice have looked into this fluid problem extensively. Grappling with the stress dilemma puts geophysicists into a kind of would-be Walt Disney state of mind, because if only they could be present, like some darting, impervious dervish of an animated creature, in the midst of a dynamic rupture, many feel confident one or several Rosetta stones would be revealed. In fact, the armamentarium of instruments sunk into and placed about the surface at Parkfield is the closest attempt yet mounted to descend into precisely that sort of adventure. Meanwhile, they must content themselves with laboratory simulations, based on the best mix of theory and intuition. But these experiments are hounded by three crucial issues of scaling: whether simulations can actually match, or at least transfer to, dynamic subterranean conditions; whether the compressed time of laboratory experiments distorts processes occurring over much longer periods; and whether the lower range of seismic phenomena, about which there are more (and more accessible) data, is qualitatively different from what may be happening in M 6 or larger earthquakes. "After all," observed McNutt, "earthquakes occur all the time." Heaton has already noted that the 40 or so earthquakes presently cataloged in California each day are indicative less of total seismicity than of the limits of the instruments. Adds Ellsworth: "So far as we know, earthquakes continue down in size indefinitely. There is some debate about this in the community, as to whether they bottom out at around magnitude zero. But it's a logarithmic scale, obviously. There is evidence of much smaller earthquakes that are seen in very near proximity of sensors that are put in deep bore holes." 
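Ellsworth's point about the logarithmic scale can be made concrete with the standard Gutenberg-Richter magnitude-energy relation — a textbook formula, not a calculation from the chapter:

```python
# Standard Gutenberg-Richter magnitude-energy relation (energy in joules):
#   log10(E) = 1.5 * M + 4.8
# A sketch of what "a logarithmic scale" means for the earthquakes discussed.

def radiated_energy_joules(magnitude):
    """Approximate radiated seismic energy for a given magnitude."""
    return 10 ** (1.5 * magnitude + 4.8)

e_m0 = radiated_energy_joules(0.0)  # a "magnitude zero" event: tens of kJ
e_m6 = radiated_energy_joules(6.0)  # a damaging shock

# Each whole magnitude step multiplies the energy by 10**1.5, about 31.6,
# so an M 6 releases roughly a billion times the energy of an M 0.
print(f"M 0 energy: {e_m0:.2e} J")
print(f"M 6 / M 0 energy ratio: {e_m6 / e_m0:.1e}")
```

This is why "earthquakes continue down in size indefinitely" is compatible with only the larger ones mattering: the energy falls off by orders of magnitude with each unit drop in M.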
So, reasons McNutt, "predicting when they occur is not the question so much as predicting when they're going to stop, since that makes the difference between the big and the little earthquakes." To answer this question, replied Heaton, we need to understand rupture dynamics.

Rupture Dynamics—Propagating Pulses of Slip?

When an earthquake will stop, then, is the focus of Heaton's scrutiny as much as when it will commence. "My question is not when these earthquakes will occur (they happen all the time), but instead, which of these will be a big earthquake? There is far more to the physics than simply initiating rupture. … Big earthquakes are ones in which the rupture area is large. Thus, predicting which of the many small earthquakes will be a big one is a matter of predicting which rupture will
propagate a large distance." Looked at as a question of energetics, the problem is almost a tautology. "As long as the elastic energy released by sliding exceeds the work necessary to slide the fault surfaces past each other, the rupture will continue to propagate. … If the earthquake releases more energy than it absorbs, then it continues to run." Once the effort to overcome frictional resistance absorbs a crucial amount of energy, in essence diverting it from the work required to keep the slip propagating, "then it stops." That condition, where it stops, concludes Heaton, is really controlled by dynamic frictional sliding. Heaton's enthusiasm comes from his belief that traditional models of friction during slip are inadequate. Slipping friction is exceedingly important, he insists: it can control whether a rupture continues to propagate or dies out, yet it is often assumed to be a constant value. The stress dilemma reminds us that the coefficient of friction for a given material is a function of its confining pressure, but—in the local environment of the rupture—this confining pressure is not known for certain, nor are its complexities fully understood by geophysicists, though Terry Tullis of Brown University has thrown much light on this area. Heaton also believes that the slipping friction may actually be a complex function of the slip history itself. The complexity he refers to could arise from several phenomena: from the fluids that may be present, from dilatancy, and from dynamic stress variations that would accompany the elastic sound waves that must also be present. It also begins to suggest a solution to the stress dilemma. Given these considerations, he explains, it is not surprising that ruptures can propagate with relatively low driving shear stresses. 
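Heaton's energy bookkeeping can be caricatured in a few lines: a rupture front keeps advancing while the elastic energy released per step exceeds the frictional work absorbed, and arrests where strong patches eat up the surplus. This toy model and its numbers are invented for illustration; it is not Heaton's actual formulation.

```python
# Toy energy-balance picture of rupture arrest (illustrative only): the
# front advances patch by patch while its running energy budget stays
# non-negative, and dies at the first patch where dissipation wins.

def rupture_length(energy_release_per_step, frictional_work):
    """Advance the rupture front patch by patch; return how far it gets."""
    budget = 0.0
    for i, work in enumerate(frictional_work):
        budget += energy_release_per_step - work  # net energy this step
        if budget < 0:  # absorbed more than released: the rupture dies
            return i
    return len(frictional_work)

# A fault discretized into patches with heterogeneous (invented) friction.
fault = [0.5, 0.6, 0.4, 0.7, 0.5, 1.8, 2.0, 0.4, 0.3]

small = rupture_length(0.8, fault)  # weak drive: stopped by strong patches
big = rupture_length(2.5, fault)    # strong drive: runs the whole fault

print(small, big)  # → 6 9
```

The same heterogeneous friction profile yields a small quake or a fault-spanning one depending only on the energy balance — McNutt's point that the real question is when ruptures stop.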
The important point, Heaton emphasizes, is that sliding friction is a complex phenomenon, not entirely understood, and certainly not some simple number in a table in an engineer's handbook. Add to this x factor the fact that the underlying forces are tectonic, straining the materials in ways that are also understood only very generally. Once a slip begins, these two phenomena interact to permit the slip to propagate or not. This is a feedback system that is likely to be unstable, he says; that is, dynamic friction and the slip may be expected to vary (perhaps chaotically) as the rupture propagates along the fault. Notwithstanding the many uncertainties of tectonic drive and dynamic slip, Heaton has developed a candidate slip model, for which there is an elegant and simple analog first conceived by Jim Brune, who directs the seismological laboratory at the University of Nevada-Reno. "Carpet installers have long realized that it is much easier to move a rug by introducing a wrinkle than it is to slide the entire surface. … The theoretical calculations for
slip in a continuum show that if a pulse of slip with low frictional stress is introduced into a medium, it will tend to propagate. … These theoretical solutions have only recently been recognized by seismologists," said Heaton, but are now under study. By reexamining the seismogram records of major earthquakes, such as those in California at Imperial Valley in 1979 (M 6.5) and Morgan Hill in 1984 (M 6.2), Heaton and his colleague Stephen Hartzell saw that many ruptures were like a stony zipper, being opened and then rezipped from the rear, with only a very small portion of the fault actually slipping at any brief isolated moment. Their model accounts not only for the few direct observations by people but for some of the anomalies of the stress dilemma as well. Heaton's major premise—not hard for many to grant—is that dynamic friction is complex and variable, that is, heterogeneous throughout the local environment of the fault. Heterogeneous slip distributions indicate, he reasons, that variations in dynamic friction are to be expected. Given this premise, he believes that (for any of a number of possible reasons, already explained) dynamic friction on the fault is low near the crack front. The rupture begins to radiate, but frictional resistance, which increases at the outer edges of the slip, slows the slip velocity toward the rear of the opening. With less force propelling the rupture from the back, the slip "heals itself," and thus the actual slipping area that proceeds down the fault takes the form of a pulse whose front has had time to move only a small distance forward. The pulse propagates only as far down the fault as the variations in dynamic friction it encounters will permit. This model provides Heaton with what he believes is the final nail in the coffin of the characteristic earthquake model. 
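The zipper image suggests a simple cartoon: if each point on the fault slips only for a short "healing time" after the rupture front passes it, then at any instant the actively slipping region is a narrow band traveling down the fault, like Brune's wrinkle moving through a carpet. The sketch below is my own illustration of that kinematics, not Heaton and Hartzell's actual model; all the parameters are invented.

```python
# Cartoon of a "self-healing" slip pulse (illustrative kinematics only):
# each fault cell slips for a fixed healing time after the rupture front
# reaches it, so the slipping band stays narrow as it propagates.

FRONT_SPEED = 1   # cells the rupture front advances per time step (assumed)
HEAL_TIME = 3     # steps a cell keeps slipping before it "heals" (assumed)
FAULT_CELLS = 12

def slipping_cells(t):
    """Cells actively slipping at time t: those the front reached
    within the last HEAL_TIME steps."""
    front = FRONT_SPEED * t
    return [c for c in range(FAULT_CELLS)
            if front - HEAL_TIME < c <= front]

for t in range(8):
    print(t, slipping_cells(t))  # the band never grows wider than HEAL_TIME
```

Only a HEAL_TIME-wide band ever slips at once, however far the pulse travels — the kinematic signature Heaton and Hartzell read out of the Imperial Valley and Morgan Hill seismograms.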
The implications of dynamic friction unpredictability as indicated by heterogeneous slip behavior pose major problems for the characteristic earthquake model, concludes Heaton: "If these earthquakes were to repeat themselves over several recurrence cycles, then large slips would accumulate in some regions, but other regions would have very little slip. This is clearly an untenable situation. It suggests that future earthquakes will have different slip distributions, and have large slips in regions of small slip during past earthquakes. In this case the earthquakes would not be characteristic." Rather than look for a specific recurrence interval, Heaton believes that "predicting the time of a large earthquake requires that we must be able to predict the nature of this dynamic friction," about which much is unknown, though clearly it "has interesting and complex physics." A predictable and mathematical model of recurrence times is not feasible here, because the actual cause of earthquakes is not in any sense linear. According to Heaton and his
colleagues' models, "very long and very short recurrence intervals may be observed for any section of the fault. The key to knowing when a given section will rupture is the ability to predict the nature of dynamic rupture. This may be a very difficult task," he concedes, "perhaps impossible." But Heaton seems to prefer searching the earth itself for direct clues rather than cruising the earthquake catalog for meaningful statistical patterns. If no such underlying pattern exists—as he clearly believes—then successful models will be founded on basic geophysics, and Laboratory Earth, as well as the labs where the physical processes are probed through simulation, is where the secrets are buried.

THE PARKFIELD "OBSERVATORY"

As this piece is being written, 1993 has just begun, ringing down the curtain on the 10-year window predicted for the Parkfield quake. When the Parkfield Working Group met in June, on the morning of the Landers quake, two of its members later wrote, "the long wait for the Parkfield earthquake could lead to a high degree of frustration in the Working Group. However, perhaps because of the significant gains that have and remain to be made, this attitude did not prevail at the meeting" (Michael and Langbein, 1992, p. 9). They invoked the words of G. K. Gilbert, who, 85 years earlier, writing just a year after the great San Francisco quake, had said:

It is the natural and legitimate ambition of a properly constituted geologist to see a glacier, witness an eruption, and feel an earthquake. The glacier is always there ready, awaiting his visit; the eruption has a course to run, and alacrity only is needed to catch its more important phases; but the earthquake, unheralded and brief, may elude him through his entire lifetime.

Gilbert's words were more wistful than dejected, as he in fact had observed earthquakes. 
And thus it should be emphasized that the science going on at Parkfield, which—after years of development—now might fairly be regarded as a National Earthquake Prediction Laboratory, is not overly reliant on when the next Parkfield quake will arrive. To be sure, the extensive attention to measuring the rupture will provide irrefutable "facts" that the characteristic hypothesis must be able to accommodate. Ellsworth feels that characteristic earthquakes are but one possible phenomenon and in fact thinks that Heaton's zipper model could well be fortified or disconfirmed by the Parkfield quake. And,
most importantly, the many phenomena that, almost since the birth of seismology, have been reported as "short-term precursors" of major earthquakes can be evaluated with more scientific forethought and validity than ever before. Wayne Thatcher, a USGS seismologist, clarified the scientific goals of the Parkfield experiment several years ago. Since "no single, simple set of events preceding earthquakes" has been identified, it cannot be claimed (of any precursory signals that might be identified at Parkfield, after the quake) that such a set "would universally precede earthquakes. However, given the strongly focused nature and high sensitivity of the Parkfield monitoring networks, there can be little doubt that new and unexpected features of the earthquake mechanism will be uncovered, and that significant constraints will be placed on the mechanics of the precursory process" (Thatcher, 1988, p. 78). A catalog of these experiments indicates the leading candidates for a precursor. The subject of precursors, says Joann Stock of Caltech, provokes heated controversy and could be explored at great length, but the phenomena involved may only correlate with earthquakes and, if so, nobody knows exactly how. If there is such a thing, it will be recorded at Parkfield with much greater forethought and precision than any of the serendipitous, and therefore hard-to-rely-on, discoveries that have preceded other earthquakes in the past. The incessant flood of data is recorded continuously in real time and analyzed, much of it with computer software designed to detect anomalies and to notify some of the many scientists who are tethered to the experiment by beepers. Six principal networks are in place:

Seismicity. More than 20 advanced seismometers are located within 25 kilometers of Parkfield (three of them buried in boreholes). 
They provide continuous, high-gain, high-frequency seismic information for quakes down to the incredibly small M 0.25 (about as much energy as it takes to light a parking lot). Machines to record larger M 3–4 shocks, called accelerometers, are also in place. All of these are hooked up to a complex data-processing array, providing an unprecedentedly fine-grained picture of seismic waves in the area.

Creep. Thirteen creepmeters continuously monitor the slip at the surface, sending an update every 10 minutes to Menlo Park. The movement of the surface has been analyzed and recorded intensively almost since the 1966 event.

Strain. As the crust is being deformed, an array of strainmeters and dilatometers at various depths is detecting "principal strains, shear
strains, directions of maximum shear, areal strain, and various other strain parameters," said Bakun. Tilt, one of the more difficult changes to record because of solid-earth tides and the instability of near-surface materials, is still part of the search, with four closely spaced shallow borehole tiltmeters near Parkfield.

Water wells. Eighteen wells, installed at 13 sites near the fault, are sampled and measured continuously for changes in the level or chemistry of the groundwater. One well near the expected epicenter should be able to detect pore pressure changes and dilatancy if and when they occur.

Magnetic field. Magnetometers are in place to detect changes in the earth's magnetic field due to stress changes in the crust.

Electrical resistivity and radio frequency currents. Among the most often cited precursory phenomena are electrical signals recorded just before quakes, and the fluid changes that many think precede rupture could well explain changes in electrical earth currents. These may actually provide the most useful results, says Stock, and they have been the subject of much attention in Russia and China.

On October 20, 1992, this array of measuring devices picked up an M 4.7 earthquake, a potential foreshock to the anticipated event and large enough to trigger the automatic warning system that had been established at Parkfield to give California's Office of Emergency Services an opportunity to prepare people for an impending quake. The system was more or less automatic. That is, seismologists had agreed that M 4.5 shocks (which occur at Parkfield roughly every 4 to 5 years) precede shocks of M 6 or more about one time in three. Thus, they had agreed years before, in establishing the generic system at Parkfield, that a shock of that size would trigger a warning in the context of this history, to wit: there was a 37 percent chance of the quake arriving in the next 72 hours. 
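The arithmetic behind such an alert can be sketched with round numbers. This is my back-of-envelope illustration, not the actual USGS calculation (the published 37 percent came from a fuller statistical analysis): the one-in-three foreshock rate is from the text, while the 22-year mean Parkfield recurrence interval is an assumed outside figure.

```python
# Back-of-envelope version of the Parkfield alert logic (illustration only).
# Compare the background chance of the mainshock striking in any given
# 72-hour window with the chance conditioned on an M 4.5 shock just recorded.

HOURS = 72
recurrence_years = 22  # assumed mean Parkfield repeat time, not from the text
windows_per_cycle = recurrence_years * 365 * 24 / HOURS

p_background = 1 / windows_per_cycle  # any given 72-hour window
p_after_foreshock = 1 / 3             # ~1 in 3 M 4.5 shocks precede an M 6

gain = p_after_foreshock / p_background
print(f"background:  {p_background:.5f}")
print(f"after M 4.5: {p_after_foreshock:.2f} (~{gain:.0f}x the background odds)")
```

The point is not the exact percentage but the enormous jump: a single moderate foreshock raises the short-term odds of the mainshock by nearly three orders of magnitude, which is why a generic, automatic trigger was considered worthwhile.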
This "warning system" clearly reveals the foreshock dilemma. With most larger quakes, seismograms reveal a series of ruptures near the hypocenter, leading up to and falling away from the main shock over time. In hindsight, the dilemma disappears, as the main shock nearly always stands out clearly as the largest event, the unambiguous spike in the graph. But in the present the dilemma makes even the most seasoned seismologist nervous, because only time will tell whether, say, an M 4.7 event is an isolated quake or part of an escalating series of shocks that could grow into a much larger earthquake, even "The Big One." In October it turned out to be part of a smaller series, and the long-awaited Parkfield characteristic earthquake remains a no-show. To the pack of
national journalists who descended on Parkfield, rummaging around for comprehension as well as flamboyant quotes, the event provided another opportunity to becloud the public's appreciation of the subtleties, and the success, of modern American seismology. In point of fact, the Parkfield Working Group in June had real cause to celebrate the project's success, calling it an excellent observatory site for testing monitoring methods and expressing the belief that it had become, in reality, a National Earthquake Prediction Observatory. Citing a number of significant scientific and sociological results, they believe it should be staffed, funded, and supported as such. An instrument cluster is now being installed in the San Francisco Bay area, based on empirical experience gained at Parkfield. Greater subtlety in the relationship of wave structure to fault behavior has been developed from the experiment and recognized to be operative on other faults in California. The USGS pamphlet distributed to millions after Loma Prieta was modeled on a mailing that had been sent to Parkfield residents. And the alert structure being refined at Parkfield (through experiences like that of October 1992) provides an invaluable model for public planners.

The Future

Civilization was built atop a planet that, until the nineteenth century, even scientists believed to be permanent in form and recent in origin. And though its mass is conserved, the plate tectonics paradigm portrays the earth as a cauldron boiling with internal turmoil, whose surface volcanoes and earthquakes remind us what a heat engine our planet actually is. The tale told here—of dueling models and warring hypotheses—is testament to the intellectual excitement that fuels modern Earth science. Al Lindh and Bill Ellsworth believe that Parkfield and the search for pattern have already borne rich fruit. 
Probably not, say Jim Brune and Tom Heaton, bristling with their own belief that earthquake genesis is random and inherently unpredictable. And Kerry Sieh continues to break paths and upset applecarts (and the occasional colleague) with one intriguing find after another; just recently he unearthed evidence of previously unknown faults running right under the heart of Los Angeles. In 1993, theories of earthquake genesis and prediction are still embryonic and fairly rough, but it is precisely this inviting frontier—waiting to be explored and mapped—and the vital interests of millions of people that fire the energy, enthusiasm, and commitment of this band of modern scientists, engaged in nothing less than the effort to better decipher the working drawings of humanity's one common hearth and home.
BIBLIOGRAPHY

Ad Hoc Working Group on the Probabilities of Future Large Earthquakes in Southern California. 1992. Future Seismic Hazards in Southern California; Phase I: Implications of the 1992 Landers Earthquake Sequence. California Division of Mines and Geology, Sacramento, November.

Agnew, D. C., and W. L. Ellsworth. 1991. Earthquake prediction and long-term hazard assessment. Reviews of Geophysics (Supplement to U.S. National Report to International Union of Geodesy and Geophysics 1987–1990). (April):877–889.

Agnew, D. C., and K. E. Sieh. 1978. A documentary study of the felt effects of the great California earthquake of 1857. Bulletin of the Seismological Society of America 68(6):1717–1729.

Bakun, W. H., and A. G. Lindh. 1985. The Parkfield, California, earthquake prediction experiment. Science 229:619–624.

Bolt, B. A. 1988. Earthquakes. W.H. Freeman, New York.

Brune, J., S. Brown, and P. Johnson. 1991. Rupture mechanism and interface separation in foam rubber models of earthquakes: A possible solution to the heat flow paradox and the paradox of large overthrusts. In Proceedings of the International Workshop on New Horizons in Strong Motion, Santiago, June 4–7, 1991. Tectonophysics, special issue (1993): "New Horizons in Strong Motion: Seismic Studies and Engineering Practice," Ed., Fernando Lunt, Vol. 218, nos. 1–3, pp. 59–67.

DeMets, C., R. G. Gordon, S. Stein, and D. F. Argus. 1987. A revised estimate of Pacific-North America motion and implications for western North America plate boundary zone tectonics. Geophysical Research Letters 14(9):911–914.

Ellsworth, W. L. 1990. Earthquake history, 1769–1989. Pp. 153–181 in The San Andreas Fault System, California. Robert E. Wallace, ed. U.S. Government Printing Office, Washington, D.C.

Ellsworth, W. L., A. G. Lindh, W. H. Prescott, and D. G. Herd. 1981. The 1906 San Francisco earthquake and the seismic cycle. 
In Earthquake Prediction—An International Review, Maurice Ewing Series 4. American Geophysical Union, Washington, D.C.

Heaton, T. H. 1990. Evidence for and implications of self-healing pulses of slip in earthquake rupture. Physics of the Earth and Planetary Interiors 64:1–20.

Heaton, T. H. 1990. The calm before the quake? Nature 343:511–512.

Heppenheimer, T. A. 1990. The Coming Quake: Science and Trembling on the California Earthquake Frontier. Paragon House, New York.

Lay, T. 1992. Theoretical seismology. In Encyclopedia of Earth System Science, Vol. 4, pp. 153–173. Academic Press, New York.

McPhee, J. 1993. Assembling California. Farrar, Straus and Giroux, New York.

Michael, A. J., and J. Langbein. 1992. Parkfield: Learning While Waiting. Summary of the 1992 Santa Cruz Review Meeting. USGS Working Paper, November 24 draft.

Rice, J. 1991. Fault stress states, pore pressure distributions, and the weakness of the San Andreas fault. Pp. 475–503 in Fault Mechanics and Transport Properties in Rocks. B. Evans and T. F. Wong, eds. Academic Press, New York.
Richter, C. F. 1958. Elementary Seismology. W.H. Freeman, San Francisco.

Scholz, C. H. 1990. The Mechanics of Earthquakes and Faulting. Cambridge University Press, Cambridge.

Shimazaki, K., and T. Nakata. 1980. Time-predictable recurrence model for large earthquakes. Geophysical Research Letters 7:279–282.

Sibson, R., F. Robert, and K. H. Poulsen. 1988. High-angle reverse faults, fluid pressure cycling, and mesothermal gold deposits. Geology 16:551–555.

Sieh, K. 1984. Lateral offsets and revised dates of large prehistoric earthquakes at Pallett Creek, Southern California. Journal of Geophysical Research 89:7641–7670.

Sieh, K., M. Stuiver, and D. Brillinger. 1989. A more precise chronology of earthquakes produced by the San Andreas fault in Southern California. Journal of Geophysical Research 94(B1):603–623.

Thatcher, W. 1988. Scientific goals of the Parkfield Earthquake Prediction Experiment. Earthquakes and Volcanoes 20(2):78.

U.S. Geological Survey. 1988. Configuration and Uses of Existing Networks. Publ. No. 1031. U.S. Government Printing Office, Washington, D.C.

U.S. Geological Survey. 1989. Lessons Learned from the Loma Prieta, California, Earthquake of October 17, 1989. Publ. No. 1045. U.S. Government Printing Office, Washington, D.C.

U.S. Geological Survey. 1992. Goals, Opportunities, and Priorities for the USGS Earthquake Hazards Reduction Program. Publ. No. 1079. U.S. Government Printing Office, Washington, D.C.

U.S. Geological Survey. 1988. Parkfield: The prediction and the promise. Earthquakes and Volcanoes 20(2). U.S. Government Printing Office, Washington, D.C.

U.S. Geological Survey. 1989. The Loma Prieta earthquake of October 17, 1989. Earthquakes and Volcanoes 21(6). U.S. Government Printing Office, Washington, D.C.

Wallace, R. E., ed. 1990. The San Andreas Fault System, California. U.S. Geological Survey Professional Paper 1515. U.S. 
Government Printing Office, Washington, D.C.

Working Group on California Earthquake Probabilities. 1988. Probabilities of Large Earthquakes Occurring in California on the San Andreas Fault. U.S. Geological Survey Open-File Report 88–398. U.S. Government Printing Office, Washington, D.C.

Working Group on California Earthquake Probabilities. 1990. Probabilities of Large Earthquakes in the San Francisco Bay Region, California. U.S. Geological Survey Circular 1053. U.S. Government Printing Office, Washington, D.C.