Condensed-Matter and Materials Physics: The Science of the World Around Us

4 What Is the Physics of Life?

The study of living matter poses special challenges for condensed-matter and materials physics (CMMP) because the constituent biomolecules are far more complex than the atoms or molecules that form most materials. Researchers are just beginning to see how understanding of materials can be extended to living systems and to recognize the organizing principles that govern living matter. Already, burgeoning understanding is leading to an unprecedented degree of collaboration between CMMP scientists and biologists, on problems ranging from why proteins misfold and form unwanted structures in diseased tissues, as in Alzheimer's disease, to how the brain works. CMMP will continue to catalyze advances in biology and medicine by providing new methods for quantitative measurement, from rapid genome-sequencing techniques to novel medical diagnostics. At the same time, the study of biological systems broadens the horizons of physics. The unparalleled specificity and robust functioning of biomolecular systems, such as those that enable viruses to assemble or cancer cells to spread, generate new theoretical ideas and inspire the creation of novel materials and devices. Finally, a fundamental characteristic of physics, especially CMMP, is its ability to analyze complex systems by identifying their essential and general features. This conceptual approach has the potential to be indispensable in sifting through the vast trove of accumulating data to tackle the origins of the ultimate emergent phenomena: life and consciousness.

OVERVIEW

Physics and biology were not always separate subjects. In the 19th century, Helmholtz, Maxwell, Rayleigh, Ohm, and others moved freely among problems
that are now distinguished as being parts of physics, chemistry, biology, and even psychology. At the beginning of the 21st century, there is a renewed sense of excitement at the interface between physics and biology. Many biologists believe that we stand on the threshold of another revolution, in which biology will become a more quantitative science, a science more like physics itself. At the same time, many physicists have come to view the profound challenges posed by the phenomena of life as much more than opportunities for the “application” of known physics to biology; rather, these striking phenomena encourage the stretching of the boundaries of CMMP and physics itself. This is leading to the blossoming of biological physics as a branch of physics, confronting the phenomena of life from the physicist’s unique point of view. The Committee on CMMP 2010 emphasizes that the challenge here is not for physicists to become biologists, but rather for the physics community to reclaim for its own some of the most inspiring parts of the natural world and to seek an understanding of living systems that parallels the profound understanding of the inanimate world. One cannot overstate the role that the experimental methods of physics have played in contributing to the solution of problems posed by biologists, especially in the second half of the 20th century, and the committee expects that these developments will continue (see Chapter 8). Looking ahead, however, the most important opportunities at the interface of physics and biology arise because physicists bring a different perspective to the phenomena of life. Faced with these same phenomena, biologists and physicists ask different questions and expect different kinds of answers.
The goal in this chapter is to communicate the excitement that now surrounds these physicists’ questions about life and to point toward the areas where the most dramatic future developments might occur. Placed in the context provided by the interactions between physics and biology in the 20th century, these developments are likely to reshape fundamental understanding of some of the most striking natural phenomena and to expand our ability to exploit this understanding in solving practical human problems. Given the extremely broad range of problems currently being addressed at the interface between physics and biology, the committee cannot pretend to give a complete account of the current state of the field. Instead, in what follows, it focuses on a few conceptual themes and explores how these themes run through a wide variety of phenomena. The committee’s choices are intended to be illustrative, not canonical.

AN INTRODUCTORY EXAMPLE: HIGH FIDELITY WITH SINGLE MOLECULES

One of the central problems faced by any organism is to transmit information reliably at the molecular level. This problem was phrased beautifully by Schrödinger in “What Is Life?,” a series of lectures given in 1943 in the wake of the discovery
that genetic information—our very identity as individual humans and the inheritable identity of each individual organism—really is encoded by structures on the scale of single molecules, not large collections of molecules as one might have expected. The history of these ideas and the involvement of physics and physicists in the rise of molecular biology have been recounted many times. The focus here is on problems that remained unsolved long after the basic facts of deoxyribonucleic acid (DNA) structure and the genetic code were established. As described below, the resolution of these issues involved the appreciation that there is a common physics problem that the organism needs to solve in order to carry out its most basic functions.

As everyone now learns in high school, genetic information is stored in the DNA molecule. DNA is a polymer made from four types of subunits: the “bases” adenine (A), thymine (T), cytosine (C), and guanine (G); the sequence of bases along the polymer defines the genome, the full instructions for building the organism. In the double-helical structure of DNA there are two intertwined strands, and the structures of the bases are such that if one strand contains the base A, then on the other strand it is most favorable to find the base T; similarly, if one strand contains C, the other strand will most likely contain G. This structural complementarity provides the basis for the information stored in DNA to be copied each time a cell divides. The idea of a molecular template involves a wonderful interplay among biology, chemistry, and physics, with implications reaching from DNA replication to modern attempts at the self-assembly of nanostructured materials. But structural complementarity cannot be the whole story.
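The template logic of complementarity is simple enough to state in a few lines of code. The sketch below is purely schematic (it ignores strand orientation and all of the real chemistry); it shows only the pairing rules and the fact that copying a copy recovers the original sequence:

```python
# Watson-Crick pairing rules: A pairs with T, C pairs with G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def template_copy(strand: str) -> str:
    """Return the complementary strand implied by the pairing rules."""
    return "".join(COMPLEMENT[base] for base in strand)

print(template_copy("ATGC"))                  # -> TACG
# Copying the copy recovers the original sequence:
print(template_copy(template_copy("ATGC")))   # -> ATGC
```

The second line is the essential point of templating: because the pairing map is its own inverse, one strand fully determines the other, and replication can proceed from either strand.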
Looking carefully at the bond energies involved, the difference between a correct (A with T or C with G) pairing of bases and an incorrect pairing can be as small as 10 times the thermal energy at room temperature. All else being equal, this means that if polymerization of DNA proceeds to equilibrium, the probability of an incorrect base pairing is exp(−10), or roughly 5 × 10⁻⁵ (about one mistake in every 20,000 pairings). But this would be a disaster: since even simple bacteria have millions of bases in their DNA, each time a bacterium divided to make two bacteria there would be hundreds of errors. For humans, with billions of base pairs in our genome, there would be roughly one hundred thousand mistakes as we pass our genetic inheritance to our children. In fact, error rates can be as low as one mistake per generation.

The problem of precision also arises when the cell needs to “read” the information coded in the DNA. Now a single strand of DNA provides a template for the synthesis of a slightly different polymer, ribonucleic acid (RNA), but the idea is the same. The messenger RNA also has a sequence of bases, and these are read once more, now in groups of three, to determine the sequence of amino acids in proteins, yet another kind of polymer inside the living cell. In each of these several steps from DNA to protein, the cell must distinguish one molecule from another
to find the correct base or the correct amino acid to incorporate into the growing polymer; in each case the structures and bond energies are not enough to explain the very low error rates that are achieved in living organisms. That these different systems are spoken of as having a common problem of precision in molecular synthesis is itself an insight from physics, specifically from the work of Hopfield in 1974. Historically, DNA replication, the transcription of DNA into RNA, and the several steps of translation from RNA to protein all were different subfields, each with its own complex phenomenology. Hopfield realized that there was a common problem that cuts across these different layers of biological function. In a sense, the problem that he identified is the old problem of Maxwell’s demon. In the 19th century, Maxwell imagined a container filled with gas (as with the air in the room where you are sitting), in which was a little demon who would see individual molecules arriving at the wall of the container, would measure their speed, and if they were moving fast enough would open a trapdoor into another chamber. After a while, the second chamber would be filled only with molecules moving at high speed, and thus it would be warmer than the original container; this temperature difference then could be used to do useful work. Apparently the little demon has made something out of nothing, violating the second law of thermodynamics. More generally, any demon who could sort molecules—not just by their speed, but also by their identity—with perfect accuracy would violate the second law, and we know (since the second law really is true!) that the demon must therefore pay some cost in energy in exchange for the ability to sort molecules.
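The numbers above can be combined into a back-of-the-envelope calculation. With a discrimination energy of about 10 times the thermal energy, equilibrium alone leaves an error probability of exp(−10) per incorporated base; a second, energy-paid discrimination stage that applies the same Boltzmann factor again drives the error toward the square of that number. The genome sizes below are illustrative round numbers, and the "one extra stage" model is a deliberately crude sketch of the idea, not a full kinetic scheme:

```python
import math

kT_units = 10                # discrimination energy, in units of k_B * T
f = math.exp(-kT_units)      # equilibrium error probability, ~4.5e-5

# Illustrative round numbers for genome sizes (bases or base pairs)
genomes = {"bacterium": 4_000_000, "human": 3_000_000_000}

for name, bases in genomes.items():
    naive = bases * f        # expected copying errors without any correction
    proofread = bases * f**2 # one extra energy-dissipating stage: error ~ f**2
    print(f"{name}: ~{naive:.0f} errors/copy uncorrected, "
          f"~{proofread:.2g} with one extra discrimination stage")
```

The uncorrected numbers reproduce the "hundreds of errors" per bacterial division and "roughly one hundred thousand" per human generation quoted above, while squaring the error brings the human figure down to order one, the scale actually observed. Paying energy to discriminate twice, rather than once, is the quantitative content of the demon's bargain.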
Hopfield realized that what cells needed to solve their problem of accuracy was something like Maxwell’s demon, with the attendant energy cost—living organisms must pay, in energy, for their ability to convey information so precisely from one molecule to another. It had been known before 1974 that each example of accurate molecular synthesis involved a complex sequence of chemical reactions. Hopfield argued that this complexity could be organized once one understood that the key step was energy dissipation: in each case, a chemical reaction that seemingly wastes energy really is essential in allowing the system to proofread, and thus to correct errors. Further, because errors are unlikely, the processes that allow for error correction seem like rather minor side branches in the overall flow of molecules, until one realizes their essential function. Hopfield’s theory of kinetic proofreading, along with the related ideas of Jacques Ninio, made many successful experimental predictions, and the essential idea of kinetic proofreading has proven correct. Subsequent theorists have suggested yet more examples in which biology achieves paradoxically precise function at a molecular level and thus in which some version of proofreading may be at work, from the specificity of cellular signal transduction to the untangling of the strands of the double helix. Recent work has seen a dramatic demonstration of proofreading, with the
direct observation of the individual molecular events in which mistakes are corrected. This work is part of the continuing development of techniques for observing biological processes at the single molecule level, often using optical trapping methods (Figure 4.1). Several generations of improvements in the design of such optical traps have made possible the observation of stepping motions of the RNA polymerase as it reads the DNA to synthesize messenger RNA; the steps are 3.4 angstroms (Å), the distance from one base to the next along the DNA double helix. Even more dramatically, this stepwise walking can reverse, and the frequency of such backtracking is increased when the surrounding solution contains higher concentrations of the “wrong” nucleotide bases. Almost certainly this reflects the elementary steps of proofreading. Techniques based on fluorescence energy transfer between nearby molecules are giving glimpses of the proofreading steps in translation from messenger RNA to protein, again at the level of single molecules.

FIGURE 4.1 Watching a single molecule read the genetic code. A high-precision, double optical trap (left) makes it possible to observe a single ribonucleic acid (RNA) polymerase (RNAP) molecule as it “walks” along deoxyribonucleic acid (DNA) to synthesize the messenger RNA. A strong trap holds a bead attached to one end of the DNA molecule, and a weak trap holds a bead with the RNAP molecule attached; the distance between the beads changes as the molecule moves and is monitored by interferometry. In the two panels to the right, it can be seen that the motion occurs in steps close to the 3.4 Å spacing between bases along DNA (indicated by the dashed lines), and there are occasional backward steps. Backward motion is enhanced under conditions in which errors are more likely, as would be expected if these backward steps were associated with proofreading. SOURCE: E.A. Abbondanzieri, W.J. Greenleaf, J.W. Shaevitz, R. Landick, and S.M. Block, “Direct Observation of Base-Pair Stepping by RNA Polymerase,” Nature 438, 460-465 (2005).

ORGANIZING OUR THOUGHTS AND OPPORTUNITIES

The great discovery of modern biology has been the existence of universality at the molecular level: All organisms use the same rules for reading the information coded in their DNA, and most if not all of the functional molecules act as interchangeable parts. This is a startling confirmation of François Jacob’s dictum that “what is true for [the bacterium] E. coli is true for the elephant,” and appeals to the physicist’s desire for universal explanations. But what makes life alive is not
single molecules—it is the way they interact and work together. The next challenge thus is to discover universal principles at this system level. This problem has captured the imagination of many physicists, who are looking at systems ranging from bacteria to brains. As in the example of kinetic proofreading, this search cuts across many different levels of biological organization. In the subsections that follow, two of these conceptual challenges and how they appear in different contexts are discussed: noise in biological systems and balancing fine-tuning and robustness. Because physicists still are searching for principles, these examples are intended to be illustrative rather than exhaustive.

NOISE IS NOT NEGLIGIBLE

The great poetic images of classical physics are those of determinism and clockwork. They come from the era of science that produced the first precision machines, and to a large extent modern efforts begin by constructing elements—for example, the submicron circuit elements on a modern chip—that approach the image of clockwork and determinism as closely as possible. Strikingly, life operates far from this limit. Interactions between molecules involve energies of just a few times the thermal energy, and biological motors, including the molecular components of muscles, move on the same scale as Brownian motion. Biological signals often are carried by just a few molecules, and these molecules inevitably arrive randomly at their targets. Human perception can be limited by noise in the detector elements of sensory systems, and individual elements in the brain, such as the synapses that pass signals from one neuron to the next, are surprisingly noisy. How do the obviously reliable functions of life emerge from under this cloud of noise?
Are there principles at work that select, out of all possible mechanisms, the ones that maximize reliability and precision in the presence of noise?

Molecule Counting in Chemotaxis

It is a remarkable fact that single-celled organisms such as bacteria are endowed with sensory systems that allow them to move in response to a variety of signals from the environment, including the concentrations of various chemicals. This process, called chemotaxis, has been an enormously productive meeting ground for physics and biology. In particular, problems of noise are central to understanding chemotaxis. To begin, one has to understand how bacteria swim. Fluid mechanics predicts that changing the size of a moving object is equivalent to changing the viscosity of the fluid; what matters is a combination of parameters (size, speed, and viscosity) called the Reynolds number. A micron-sized bacterium swimming in water has the same Reynolds number as that of a human swimming through
(slightly) wet concrete. Thus, life at low Reynolds number is very different from everyday experience. As always in physics, interesting problems in one domain have deep connections to other fields. For example, the problem of self-propulsion at low Reynolds number has a formulation in terms of gauge theories of the sort used to describe the interactions of elementary particles, and a deeper understanding of low Reynolds number flow has been critical to the development of practical microfluidic devices.

Bacteria are propelled by long, wispy appendages called flagella (Figure 4.2).

FIGURE 4.2 Bacterial motility. (Upper left) Fluorescently labeled flagella can be seen emanating from the cell bodies of Escherichia coli. (Right) A schematic view of the rotary motor, which is located at the base of each flagellum. The motor contains hundreds of parts, assembled from more than 20 different protein components, some of which are labeled here. The inset shows an image of the motor reconstructed from cryoelectron microscopy. (Lower left) When the motors spin together in the counterclockwise direction (as viewed from outside the cell), a stable bundle forms and propels the cell forward. When one or more of the motors spin in the clockwise direction, the bundle is disrupted and the cell tumbles and makes little forward progress. The switching between forward and tumbling motion causes the cell to undergo a (potentially biased) random walk. SOURCES: (Upper left and right [drawing]) H.C. Berg, Harvard University. (Right [inset]) D.J. DeRosier, Brandeis University. (Lower left) L. Turner, Harvard University.

One of the most startling discoveries about the physics of life was that these flagella are not “waving” in the water, but actually rotating, driven by nanometer-sized protein motors; power is provided by the difference in chemical potential for hydrogen ions between the inside and outside of the cell. This might seem like a curiosity;
after all, muscles move because of linear (not rotary) displacements of one molecule relative to another, and the energy for this motion is provided by adenosine triphosphate (ATP) (not protons)—a molecule that is widely used in biochemical reactions that need energy in order to go forward. But all living cells synthesize ATP using the chemical potential difference for protons across a membrane, and the enzyme that carries out this synthesis rotates as it performs its function. Thus, proton-driven rotation is the central step in energy flow for all cellular processes. The intellectual path from the peculiarities of low Reynolds number fluid mechanics to the essence of energy flow in cells provides a remarkable example of the interaction between physics and biology. But why is noise so important? On the scale of bacteria, the transport of molecules is dominated by diffusion, that is, by random motion. Because the concentrations of interesting molecules in the environment are small, and differences in concentration over the scale of microns are even smaller, this randomness is not negligible: even if a bacterium can count every molecule that arrives at its surface, its estimate of whether its environment is rich or poor will be of limited precision. Bacteria themselves are so small that they become disoriented by their own rotational Brownian motion within roughly 10 seconds. Thus, these cells need to make decisions quickly, which limits their ability to average out the effects of noise. The result is that bacteria do not have access to a reliable signal that, for example, would allow them to measure whether the concentration of sugar is higher at their right or their left. The only available strategy is to measure changes of concentration in time along their path, to see if things are getting better or worse.
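Both scales quoted above, the tiny Reynolds number and the roughly 10-second disorientation time, follow from textbook fluid mechanics. The sketch below uses illustrative round numbers and treats the bacterium as a 1-micrometer sphere, so it should be read as an order-of-magnitude check rather than a fitted E. coli calculation:

```python
import math

rho, mu = 1000.0, 1.0e-3      # water: density (kg/m^3) and viscosity (Pa*s)
v, L = 30e-6, 1e-6            # assumed swimming speed ~30 um/s, size ~1 um

# Reynolds number Re = rho*v*L/mu: far below 1, so inertia is irrelevant.
Re = rho * v * L / mu
print(f"Reynolds number ~ {Re:.0e}")

# Rotational diffusion of a sphere of radius a (Stokes-Einstein-Debye):
# D_rot = k_B*T / (8*pi*mu*a^3).  The swimming direction decorrelates as
# exp(-2*D_rot*t), so the disorientation time is ~1/(2*D_rot).
kT = 4.1e-21                  # thermal energy at room temperature (J)
a = 1e-6                      # effective radius (sphere approximation)
D_rot = kT / (8 * math.pi * mu * a**3)
tau = 1.0 / (2.0 * D_rot)
print(f"disorientation time ~ {tau:.1f} s")
```

With these round numbers the Reynolds number comes out of order 10⁻⁵ and the decorrelation time a few seconds, the same orders of magnitude as the figures quoted in the text; a real cell's shape and speed shift the numbers but not the conclusions.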
Direct observation of bacterial swimming using a tracking microscope revealed that they indeed swim along roughly straight paths, just long enough to see the disorienting effects of Brownian motion. These straight paths (runs) are interrupted by discrete events (tumbles), after which the cell picks a new direction almost at random. When runs take the cells up the trail of an attractive chemical (or down the trail of a repellent), the time until the next tumble is longer. Quantitative analysis demonstrates that reliable changes in run length occur when the differences in concentration along the run are just slightly larger than the random shot noise in the arrival of molecules via diffusion. This means, in effect, that the cell is counting every single molecule that arrives at its surface, reaching the ultimate in sensitivity. Running and tumbling correspond to different directions of rotation of the flagellar motor, which can be observed more directly in tethered cells. Such experiments show clearly that cells are responding to changes in time, and that the arrival of one additional molecule at the cell surface produces a substantial (~10 percent) change in the probability of clockwise rotation, which triggers tumbles. This probability depends on a “processed” version of the input chemical signals, comparing an average of the concentration over the last second to the average over the previous 4 or 5 seconds. If the cell averaged for much less time, noise would
then dominate the signal, while much longer times would be useless because of the disorienting effects of Brownian motion. In this sense, the bacterium’s behavioral strategy is matched to the physics of its environment. Genetic techniques have identified a cascade of molecular events that lead from receptors at the cell surface to the rotational bias of the flagellar motor. Combining methods from molecular biology and physics, one can engineer bacteria that produce fluorescent analogs of these molecules and then use a variety of optical methods to observe the individual steps of this amplifying cascade. At the front end of the system, theorists have proposed that sensitivity can be enhanced further by collective interactions among the receptor molecules, and these ideas are being tested through experiments using fluorescence resonance energy transfer to detect associations between pairs of molecules in the cascade. At the output, the motor responds directly to a particular protein; using fluorescence correlation spectroscopy one can measure the absolute concentration of these molecules (~1,000 per bacterium) and show that the motor is extraordinarily sensitive, making a complete transition from clockwise to counterclockwise rotation in response to just a 10 percent change in concentration. The committee expects that, in the next decade, the collaboration between theory and experiment will result in a clear physical picture of how bacteria achieve their ultimate sensitivity to individual molecular events.

Noise in the Regulation of Gene Expression

Every cell in a person’s body has the same DNA; what makes the cells in the brain different from the cells in the liver is that different sets of genes are expressed as proteins.
As an embryo develops, each cell makes decisions to express or not to express each gene, and the sequence of these decisions determines the identity of the cell in the adult organism. Once the organism has developed, each cell has to modulate the expression level of certain genes, making more or less of particular proteins depending on the circumstances, and this same problem arises even in single-celled organisms. To a large extent, the signals that the cell uses to regulate the expression of genes are themselves proteins, called transcription factors, which bind to specific sites along the DNA and enhance or repress the transcription (reading of DNA into messenger RNA) of nearby genes. For many years it has been clear that noise must be interesting in these processes because the relevant molecules are present in very small numbers. In the present decade, a convergence of ideas and methods from physics and biology has made it possible to turn these vague suspicions into precise experiments. If one looks at a large number of bacteria in a test tube, expression levels of particular genes vary widely, but some of this variation may be in response to uncontrolled signals in the environment. What one really wants to know is how accurately a single bacterium (for example) can adjust the expression level of one
gene, given that all inputs are fixed. One major experimental advance with respect to this question was to engineer a bacterium that uses the same transcription factor to regulate the expression of two genes, one coding for a green fluorescent protein and one coding for a red fluorescent protein (Figure 4.3). Genuine noise in the response to the transcription factor thus is converted into a change in color of the bacterium, shifting the red/green balance, while changes in other variables contribute only to “common mode” variations in the overall brightness of the cells, not to changes in color. Observing these genetically engineered bacteria with state-of-the-art light microscopy demonstrated that the intrinsic noise in gene expression could be as small as 10 percent. This was startling, since the broad distribution of expression levels seen in populations of cells had reinforced the notion that biological systems are very noisy, perhaps so noisy that cells could do no more than to turn genes on and off. In fact, cells have access to a “dial” that can be set accurately to many distinct expression levels. Quantitative measurements of the way in which noise levels vary as a function of the mean expression level have driven theoretical efforts to dissect the contribution of different noise sources. Dynamic measurements are revealing the correlation times of different noise sources, with important consequences for theory. Different experimental strategies provide fluorescence signals related not just to protein concentration but also to the concentration of messenger RNA (mRNA); this has made it possible to see the transcription of mRNA molecules one by one and the resulting bursts of translation of mRNA into protein, giving a very detailed view of noise in these distinct steps.
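The logic of the two-color experiment can be sketched in a toy simulation. In the hypothetical model below, every cell carries one shared ("extrinsic") expression factor, while each reporter is sampled independently ("intrinsic"); the estimators are the standard two-reporter ones, intrinsic² = ⟨(r−g)²⟩ / (2⟨r⟩⟨g⟩) and extrinsic² = (⟨rg⟩ − ⟨r⟩⟨g⟩) / (⟨r⟩⟨g⟩). All parameter values are illustrative, not measured:

```python
import random, statistics

def two_reporter(mean_expr=100.0, ext_cv=0.3, cells=20000, seed=2):
    """Toy two-color experiment: both reporters in a cell share one
    'extrinsic' factor; independent (approximately Poisson) sampling of
    each reporter supplies the 'intrinsic' part.  Parameters illustrative."""
    rng = random.Random(seed)
    r, g = [], []
    for _ in range(cells):
        ext = max(rng.gauss(1.0, ext_cv), 0.05)  # shared cell-to-cell factor
        lam = ext * mean_expr
        # Normal approximation to Poisson(lam), adequate for lam >> 1
        r.append(max(rng.gauss(lam, lam ** 0.5), 0.0))
        g.append(max(rng.gauss(lam, lam ** 0.5), 0.0))
    mr, mg = statistics.fmean(r), statistics.fmean(g)
    eta_int2 = statistics.fmean([(ri - gi) ** 2 for ri, gi in zip(r, g)]) / (2 * mr * mg)
    eta_ext2 = (statistics.fmean([ri * gi for ri, gi in zip(r, g)]) - mr * mg) / (mr * mg)
    return eta_int2, eta_ext2

eta_int2, eta_ext2 = two_reporter()
print(f"intrinsic noise ~ {eta_int2 ** 0.5:.2f}, extrinsic noise ~ {eta_ext2 ** 0.5:.2f}")
```

With a mean of 100 molecules, the intrinsic noise comes out near 10 percent, the scale quoted in the text, while the extrinsic estimate recovers the assumed 30 percent cell-to-cell variation: correlated fluctuations move both colors together and only the uncorrelated part changes the color balance.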
When cells divide, proteins are apportioned to the daughter cells at random; observation of this randomness makes it possible to get an absolute count of the number of molecules in the cell, putting the whole discussion of noise on an absolute scale. Given such absolute measurements, physicists can start to ask if the precision of gene expression is limited by physical principles, as with the example of molecule counting in chemotaxis. Many genes are part of regulatory feedback loops, leading in some cases to multiple stable states that correspond to different “lifestyles” of the cell; noise causes spontaneous transitions among these states, and there is a substantial effort to characterize the interplay between deterministic and noisy dynamics in such systems. In testimony to the unity of physics, the theoretical methods used to understand spontaneous transitions among states of genetic regulatory circuits are borrowed from the instanton methods that were developed in statistical physics and quantum field theory. Multistable systems can be thought of as memory devices, and this “epigenetic” memory can be transmitted from generation to generation. Explorations in this field thus will characterize the capacity for information transmission beyond the genome itself. While the initial studies of gene regulation focused on the “simplest” systems (bacteria), more complex biological systems now seem accessible to physics
FIGURE 4.3 Intrinsic and extrinsic noise in gene expression. Escherichia coli are engineered to produce two different fluorescent proteins with different colors, but the expression of both proteins is controlled by the same signals. (a) In a population of bacteria under nominally fixed conditions, there are wide variations in both the total brightness and the color, suggesting that there is a substantial amount of noise in the system. (b) Under conditions leading to greater expression, and hence brighter images, the variations in color are smaller, indicating that the control over expression level has become much tighter. (c) The noise is quantified by the fractional standard deviation in protein concentration, and this is separated into extrinsic components (that change the expression of both proteins) and intrinsic components. As explained in the text, the intrinsic noise can be quite low, so that expression levels can be set by the cell with roughly 10 percent accuracy. SOURCE: M.B. Elowitz, A.J. Levine, E.D. Siggia, and P.S. Swain, “Stochastic Gene Expression in a Single Cell,” Science 297, 1183-1186 (2002). Reprinted with permission from the American Association for the Advancement of Science.
Condensed-Matter and Materials Physics: The Science of the World Around Us experiments. Perhaps the most dramatic examples are in the problem of embryonic development. One of the great discoveries in the modern era of molecular biology is that the spatial structure of organisms has its origins in spatial patterns of expression for particular genes that are visible in the embryo at very early times after fertilization of the egg (Figure 4.4). For a physicist, these results pose many questions: How is it possible for these patterns to scale with the size of the embryo, so that the different parts of the adult organisms remain in correct proportion even as the whole organism varies in size? What mechanisms guarantee that the spatial patterns of gene expression are reproducible from embryo to embryo? How reliably can neighboring cells in the embryo respond to the small differences in the concentration of the signaling molecules that drive these beautiful patterns? Is it possible, for example, that the precision with which a developing organism can draw boundaries between the different parts of its body is limited by the fundamental physical rules that govern the counting of individual molecules? Answers to these and other challenging questions will emerge in the next decade. FIGURE 4.4 Genes and development in the fruit fly embryo. Images of the whole embryo, roughly 0.5 millimeter long. Two and one half hours after fertilization of the egg (top left), the thousands of cells on the surface of the embryo appear identical in an electron micrograph. Minutes later (top middle), the embryo undergoes gastrulation, folding in on itself along the bottom and making the cephalic furrow that defines the boundary of what will become the organism’s head. Signals that determine the location of these structures are detectable as spatial patterns in the expression levels of different genes, two of which are made visible here in orange and green (top right). 
A close-up (bottom left) reveals the tight registration between the patterns of gene expression and the structural patterns such as the cephalic furrow. Additional measurements demonstrate that the stripes of gene expression correspond to the segments of the larval fly’s body (bottom right). SOURCE: Images courtesy of E.F. Wieschaus, Princeton University.
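The question of how molecule counting limits boundary precision can be made concrete with a toy calculation; all numbers below are invented for illustration. Cells read an exponentially decaying morphogen gradient with Poisson counting noise and classify themselves by comparing their count to a threshold. Cells sitting near the nominal boundary decide unreliably, while cells a short distance away decide almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, c0 = 100.0, 400.0                      # decay length (um), mean count at x = 0
x_boundary = 250.0                          # nominal boundary position (um)
threshold = c0 * np.exp(-x_boundary / lam)  # mean count exactly at the boundary

# Each cell counts molecules (Poisson noise) and calls itself "anterior"
# if its count exceeds the threshold. How reliable is that call?
for x in (200, 230, 245, 255, 270, 300):
    counts = rng.poisson(c0 * np.exp(-x / lam), size=20_000)
    says_anterior = counts >= threshold
    correct = says_anterior if x < x_boundary else ~says_anterior
    print(f"x = {x} um: fraction of correct decisions = {correct.mean():.3f}")
```

The positional error here is set by the Poisson fluctuation in the count divided by the local slope of the gradient, so making the boundary sharper requires either more molecules or time averaging.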
Signals and Noise in the Brain

Sitting quietly in a dark room, a person can detect the arrival of individual photons at his or her retina. This observation has a beautiful history, with its roots in a suggestion by Lorentz in 1911. The exploration of photon counting in vision reaches from the analysis of human behavior down to the level of single molecules, with many physics problems at every level. For example, it is not just that one can count single photons; the reliability of counting seems to be set by “dark noise” in the photodetector cells of the retina, and this noise in turn is dominated by thermal activation of the molecule (rhodopsin) that absorbs the light and whose structural response to photon absorption is the initial trigger for vision. The ability of the visual system to count single photons, down to the limits of thermal noise, inspires a consideration of the reliability of our perceptions more generally. Absorption of a photon by one of the roughly 1 billion rhodopsin molecules in a photodetector cell results in a small pulse of current flowing across the cell membrane. The path from rhodopsin to the current involves a cascade of biochemical reactions that can be thought of as “molecule multipliers”: when one rhodopsin molecule is activated by absorbing a photon, it catalyzes the conversion of many molecules from one state to another, these molecules act as catalysts for another reaction, and so on, until there is a relatively macroscopic change in concentration of a small molecule that binds to channels in the cell membrane and gates the flow of ions (see the discussion of ion channels below). But the number of molecular events that rhodopsin catalyzes depends on the time that it spends in the activated state, and this time must vary randomly each time a photon is absorbed by a different individual molecule.
If rhodopsin left its activated state in a single random step, as in radioactive decay, this lifetime would fluctuate by roughly 100 percent from photon to photon. In fact, the variability of the current that flows in response to a single activated molecule is relatively small (~10 to 20 percent). Perhaps the most interesting possibility is that this problem is solved before it starts—rather than having rhodopsin exit its activated state by a single, necessarily random step, the shutoff of rhodopsin activity could involve many steps, so that the random times required for each of the multiple steps average out to generate a much more precisely defined lifetime. The rhodopsin molecule in fact has a tail with multiple sites that can be modified by the attachment of a phosphate group, and it has long been known that this phosphorylation is critical to the shutoff of rhodopsin activity. One can engineer rhodopsin molecules that have different numbers of phosphorylation sites, and reinsert the genes for these engineered molecules into the genome so that the photodetector cells of the retina actually make the modified molecules. The result is that the variability of the single-photon response is larger in molecules that have fewer sites, and quantitatively the variance is inversely proportional to the number of sites, as predicted theoretically. This shows how nature has selected a very special, and seemingly complex, mechanism to do something simple—changing the state of a molecule in many steps rather than just one—in just such a way as to enhance the precision of the system beyond naïve
expectations based on the random behavior of single molecules. Similarly subtle strategies for noise reduction in other processes remain to be discovered. The example of photon counting encourages researchers to look for other examples in which the function of the nervous system may approach fundamental limits set by noise. Recent work along these lines includes the demonstration that, under certain conditions, the fly’s visual system can estimate motion with a precision limited by noise in the receptor cells of the compound eye, and that this noise in turn is dominated by photon shot noise. Making optimal estimates in the presence of noise requires some prior hypotheses about what to expect, and it has been suggested that illusions—in particular, illusory motion percepts—can be understood, perhaps even quantitatively, as violations of these hypotheses. Effective priors are matched to the statistical structure of real-world signals, and this has led to a flowering of interest both in the characterization of these statistics—an effort that borrows heavily from the conceptual tools of CMMP—and in the neural response to such rich dynamic inputs. In a different direction, it has been suggested that human strategies for movement may be determined by the need to minimize the impact of noise in the control of muscles, optimizing the precision with which one can reach for a target. Noise also limits the capacity of neurons to carry information about the outside world, and it has been suggested that the coding strategies adopted by the brain may serve to make maximum use of this capacity; again this involves a matching of neural computation to the structure of its inputs.
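The inverse-variance argument behind the multistep rhodopsin shutoff is easy to check numerically. In this sketch, with units and trial counts chosen arbitrarily, the activated lifetime is a sum of n exponentially distributed steps of equal mean, and the fractional variance of the total lifetime falls as 1/n:

```python
import random
import statistics

def shutoff_times(n_steps, mean_lifetime=1.0, trials=50_000, seed=1):
    """Lifetime of activated rhodopsin when shutoff requires n_steps
    sequential random (exponentially distributed) steps of equal mean."""
    rng = random.Random(seed)
    step_mean = mean_lifetime / n_steps
    return [sum(rng.expovariate(1.0 / step_mean) for _ in range(n_steps))
            for _ in range(trials)]

for n in (1, 4, 16):
    t = shutoff_times(n)
    cv2 = statistics.variance(t) / statistics.mean(t) ** 2
    print(f"{n:2d} shutoff steps: fractional variance {cv2:.3f}  (1/n = {1/n:.3f})")
```

A single step gives fractional variance near 1 (the "radioactive decay" case); sixteen steps give near 1/16, mirroring the phosphorylation-site experiments described above.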
Miniaturization of the brain’s components creates additional problems of noise, and hence there is pressure to make efficient use of the available space; several groups are exploring the possibility that brains may literally be shaped by the search for optimal wiring and packing. All of these different ideas of noise, optimization, and matching to the environment are under active investigation; the committee expects substantial developments over the next decade. Many simple sensory inputs (such as a skeleton drawing of a cube) have multiple interpretations, and perceptions switch at random among these, presumably driven by noise somewhere in the nervous system. The origin of this noise is unknown, but the fact that perception fluctuates when the physical input to the visual system is constant has made possible a new kind of experiment, in which one searches for neurons whose dynamics are correlated with the subject’s conscious impression of the image rather than the physical image itself. This is a far cry from Lorentz’s original thoughts about photon counting, but perhaps not so far from Helmholtz’s grand dream of a natural science that unifies the objective and subjective views of the physical world.

FINE-TUNING VERSUS ROBUSTNESS

Living systems occupy a special corner in the space of all possible physical systems. On the one hand, random combinations of the microscopic parameters will
not work: random sequences of amino acids will not make proteins that fold into functional structures, random biochemical networks would have chaotic dynamics, and randomly connected networks of neurons are unlikely to correspond to a brain that thinks and remembers. On the other hand, surely not every parameter needs to be finely tuned; alternatively, if fine-tuning is necessary, then there must be some as-yet-uncharacterized layer of dynamics that achieves this tuning more robustly. Physicists have identified this tension between fine-tuning and robustness in several different biological contexts.

Protein Folding and the Space of Sequences

Proteins are polymers built from amino acids. A typical protein is approximately 200 monomers in length, and with 20 different amino acids, the number of possible proteins is enormous (20^200, or roughly 10^260). Over the entire history of life on Earth, only a vanishingly small fraction of these possible polymers has been made. Unlike most polymers, proteins “fold” into compact, nearly unique three-dimensional structures. One formulation of the “protein folding problem” is to predict the three-dimensional structure that is adopted by proteins with a particular amino acid sequence. This is a physics problem, but a complex one, because there are many degrees of freedom and the interactions themselves are highly structured. But there are more general questions: Why do proteins adopt a well-defined, compact structure at all? Is the limited set of proteins observed in nature an accident of evolutionary history, or derivable from physical principles? Some amino acids are polar or even charged, and hence have favorable (hydrophilic) interactions with water. Other amino acids are nonpolar, or oil-like, and have unfavorable (hydrophobic) interactions with water.
These interactions drive proteins toward a state in which the hydrophobic amino acids are packed in a dense core, while the hydrophilic amino acids form a shell contacting the surrounding water. But this simple structure can be frustrated by the fact that the different amino acids are linked, covalently, along the length of the polymer. Indeed, a result from statistical mechanics states that a random polymer in which different monomers have competing interactions will be so frustrated that it forms a glass, with many distinct structures having nearly equal energy. This suggests that the ability of proteins to fold into unique structures already restricts dramatically the set of allowed sequences. In the theory of disordered systems, a core area of CMMP, the intuitive notion of “frustration” has become a precise mathematical concept. Are the sequences that occur in natural proteins those which actually minimize frustration in this precise sense? If so, the dynamics leading from random unfolded structures to the compact folded structure will be a smooth, downhill slide on some coarse-grained, effective free-energy surface; rather than there being a specific pathway that the
protein must follow, many different paths all flow to the unique ground state. These ideas of a smooth “energy landscape” and minimal frustration grow directly out of investigations on problems in CMMP, and over the past decade this theoretical picture of protein folding has scored important successes in connecting directly to a wide variety of experiments. Ideas from statistical physics also have been important in defining the “inverse folding” problem: given a particular compact protein structure, is there an amino acid sequence that folds to this structure as its ground state? In simplified models, some structures are not realizable, others require very specific sequences, and a special set of structures comprises the ground states of many different sequences. These “highly designable” structures have much in common with real protein structures, and this line of research has generated predictions for new proteins that have since been synthesized. Taken together, these results suggest that the sequences and structures of proteins are not frozen accidents of history, but rather are predictable from physical principles: the minimizing of frustration and the maximizing of designability. The minimizing of frustration “tunes” the system to a particular set of possible sequences, but the choice of maximally designable structures means precisely that structures will be robust to substantial sequence variation. Thus, the protein folding problem provides a prototype for the understanding of fine-tuning versus robustness in biological systems. Importantly, modern experimental methods make it possible to explore, quantitatively, large ensembles of sequences; the committee thus expects a rich interaction between theory and experiment over the next decade.
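The flavor of these simplified designability calculations can be captured in a few lines. The sketch below is a toy two-letter (hydrophobic/polar, "HP") model on a 3×3 lattice, far smaller than the published studies: hydrophobic contacts lower the energy, a "structure" is identified with its contact map (which lattice symmetries leave unchanged), and its designability is the number of sequences for which it is the unique ground state.

```python
import itertools
from collections import defaultdict

N = 3                       # 3x3 square lattice; chain of 9 monomers
CELLS = [(x, y) for x in range(N) for y in range(N)]

def compact_walks():
    """All self-avoiding walks that fill the lattice (compact conformations)."""
    found = []
    def grow(path, used):
        if len(path) == N * N:
            found.append(tuple(path))
            return
        x, y = path[-1]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in used or not (0 <= nxt[0] < N and 0 <= nxt[1] < N):
                continue
            used.add(nxt); path.append(nxt)
            grow(path, used)
            path.pop(); used.discard(nxt)
    for start in CELLS:
        grow([start], {start})
    return found

def contact_map(path):
    """Non-bonded nearest-neighbor pairs (i, j): the energy-carrying contacts."""
    index = {cell: i for i, cell in enumerate(path)}
    pairs = set()
    for (x, y), i in index.items():
        for nb in ((x + 1, y), (x, y + 1)):
            j = index.get(nb)
            if j is not None and abs(i - j) > 1:
                pairs.add((min(i, j), max(i, j)))
    return frozenset(pairs)

# Distinct structures = distinct contact maps; symmetry-related
# conformations collapse automatically because they share a map.
structures = list(set(contact_map(p) for p in compact_walks()))

# Designability: for each H/P sequence, energy = -(number of H-H contacts);
# a structure is "designed" by a sequence if it is the unique ground state.
designability = defaultdict(int)
for seq in itertools.product((1, 0), repeat=N * N):      # 1 = H, 0 = P
    energies = [-sum(seq[i] * seq[j] for i, j in s) for s in structures]
    e0 = min(energies)
    ground = [k for k, e in enumerate(energies) if e == e0]
    if len(ground) == 1:
        designability[ground[0]] += 1

counts = sorted(designability.values(), reverse=True)
print(f"{len(structures)} distinct structures; top designabilities: {counts[:5]}")
```

Even at this toy scale, the designability counts are strongly uneven across structures, which is the qualitative signature of the "highly designable" structures described above.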
Ion Channels and the Computational Function of Neurons

In the wires leading to a home appliance or in the chips in a computer, electrical currents are carried by electrons moving through solids (metals and semiconductors). In biological systems, currents are carried by ions, such as potassium, sodium, calcium, and chloride, moving through water. Every living cell has a membrane that surrounds it, defining what is “inside” the cell and making it possible for the cell to control its chemical environment. The physical properties of the membrane are such that it cannot be easily penetrated by ions, and hence it provides a huge resistance to the flow of current. But there are special protein molecules in the membrane, called channels, that allow specific ions to flow into and out of the cell. Most channel molecules can take on several different structures; some structures are “open” and some are “closed,” so that current flow can be gated on or off. Further, the membrane also contains pumps, so that the concentration of ions is different on the inside and outside of the cell, forming an effective battery that will drive currents once the channels open. Finally, the opening and closing
of channels is sensitive to the voltage or electric field across the membrane. The combination of ionic batteries, voltage-gated channels, and the capacitance of the cell membrane creates complex electrical circuits that are capable of many functions. These circuits can filter and amplify incoming signals, they can oscillate to generate rhythms, and they can generate pulses that propagate along the fingerlike extensions (axons) that reach from one cell to another; these pulses are called action potentials or spikes, and provide the brain’s internal language for communication. These dynamics are generated in different cells using different combinations of channels, leading researchers to ask whether particular functions depend on the fine-tuning of these combinations. It is worth noting that the path to our modern understanding of ion channels depended upon many ideas and methods from physics: The concept of channels emerged from mathematical models of the electrical circuit formed by the axon membrane. These models made quantitative predictions about the flow of particular ions, which were tested using radioactive tracers. For channels to open and close in response to voltage across the membrane, thermodynamics requires that the open and closed states have different charge distributions, so that changes in the state of the channel itself should be accompanied by small currents, and these were observed. Because the channels are individual molecules, they each can make random transitions between their open and closed states, and this should generate an electrical noise in the cell with properties that can be predicted from statistical mechanics; this noise was observed. Finally, with advances in electronics, it was possible to measure the currents that flow through a single channel molecule and to observe the discrete transitions as these molecules open and close.
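The statistical-mechanical character of channel noise can be illustrated with a toy simulation; the rates and channel count below are invented. Independent two-state channels flicker between closed and open at fixed rates, and the fluctuating number of open channels has the binomial variance N p(1 - p) that such predictions start from:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, dt, steps = 400, 1e-4, 40_000     # channels, time step (s), steps
k_open, k_close = 20.0, 80.0            # assumed opening/closing rates (1/s)
p_open = k_open / (k_open + k_close)    # stationary open probability

state = rng.random(n_ch) < p_open       # start in the stationary distribution
open_count = np.empty(steps)
for t in range(steps):
    r = rng.random(n_ch)
    # each channel independently flips closed <-> open with rate * dt
    flip = np.where(state, r < k_close * dt, r < k_open * dt)
    state = state ^ flip
    open_count[t] = state.sum()

mean, var = open_count.mean(), open_count.var()
print(f"mean open channels ~ {mean:.1f}  (N p         = {n_ch * p_open:.1f})")
print(f"count variance     ~ {var:.1f}  (N p (1 - p) = {n_ch * p_open * (1 - p_open):.1f})")
```

Because each open channel passes a fixed unit of current, this count noise appears directly as current noise across the membrane, which is how the existence of discrete channels was inferred before single-channel recordings existed.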
In parallel with these advances, x-ray diffraction has revealed the detailed atomic structure of these remarkable molecules. By 1990, the understanding of channels had matured to the point that scientists could write down essentially exact equations describing a neuron’s functional electrical behavior. These dynamics depend on how many channel molecules are present in the membrane; a cell might contain 9 or 10 different kinds of channels, chosen from many hundreds of channel proteins encoded in the genome. Perhaps surprisingly, the mathematical models predict that small changes in the balance among the different kinds of channels can lead to qualitatively different electrical behavior, for example converting a neuron that generates a simple rhythm into one that generates a complex syncopated beat. This is disturbing, not least because the parameters of the model cannot be set by evolution, or once and for all during the course of the brain’s development; cells are always “choosing” how many molecules of each protein to make and how these molecules should be distributed throughout the cell. A team of physicists and biologists took these problems seriously and argued that neurons must have mechanisms to monitor their own electrical activity and feed these signals back to the processes that regulate
the expression of particular channel molecules and/or their placement in the membrane. They showed theoretically that plausible implementations of this idea could provide robust stabilization of the functional values of the cell’s parameters. Very quickly this general idea was confirmed by experiments showing that neurons could be ripped from their natural environment and placed into solutions with very different concentrations of ions, and then after some time they would recover their original rhythms by expressing a different combination of channels. Quite literally, it seems as if the cell “knows” its function and can check to see that it is doing the right thing. The idea of feedback to stabilize the correct numbers of ion channel proteins has launched a whole new field of experiments. In addition to exploring molecular mechanisms of this feedback, analogous phenomena have been discovered in the dynamics of the synapses or connections between neurons. More generally one can think of learning mechanisms in the brain as operating not just to master genuinely new things, but also to “tune” neural circuits continuously so that they maintain their proper functions. New generations of physical measurements that allow researchers to see directly the molecular events at synapses are probing the rules that underlie such continuous learning (Figure 4.5). Recent excitement has focused on the fact that these learning rules are sensitive to the detailed timing of action potentials, responding differently to sequences of spikes that have different causal relations. Theoretical approaches, often grounded in the methods of CMMP, are being used to understand the consequences of these rules for whole networks of neurons. For (seemingly!)
simple problems—such as the ability of the nervous system to remember how much force must be applied to the eyes to compensate for rotation of the head—there are major experimental efforts under way to make connections all the way from the submicron events at single synapses, to the collective dynamics of the relevant networks, to the plasticity of behavior in the whole organism, with each step having important interactions with theory. It is reasonable to expect that, in the next decade, the dichotomy of robustness versus fine-tuning will be replaced by a dynamical model of how the brain robustly tunes itself.

Adaptation

As one steps from a dark room into bright sunlight, one’s eyes respond with an enormous transient that can be literally blinding; after a few moments, however, the world comes back into focus; the same phenomenon happens in reverse as one steps back into the dark. Over some range, one’s image of the world is largely invariant to the absolute intensity of light, instead highlighting variations in space and time. Similarly, a person feels a sudden pressure on the skin, but if this pressure remains constant, the person gradually becomes unaware of it. Although the human brain makes important contributions to these percepts, important aspects
FIGURE 4.5 Physicists have developed a wide variety of methods for making the electrical and chemical activity of neurons literally visible under the microscope. Here, molecules dissolved in the cell membrane generate second harmonics when stimulated by long-wavelength light, but the efficiency of second harmonic generation (SHG) depends on the voltage difference across the membrane. (a) Image of a single neuron isolated from the sea slug Aplysia and growing in a dish. When the cell generates an action potential, this can be monitored by an electrode placed in the cell body (red traces in b), but the resulting electrical signals also are visible by measuring the intensity of second harmonics (black traces in b). Electrical signals in the cell body (Position 1) are carried for hundreds of microns (to Position 5), but not to the farthest reaches of the cell (Position 6). Turning to a cell from the cortex of a mammalian brain (c), one sees the cell body (now impaled by an electrode) and a tree of dendrites. (d) Zooming in shows the bulb or spine along the dendrite where another cell connects to this one. Observing electrical and chemical signals in these submicron structures gives direct access to the messages that cells use in changing the strengths of their interconnections, the fundamental step in learning. SOURCE: G.J. Stuart and L.M. Palmer, “Imaging Membrane Potential in Dendrites and Axons of Single Neurons,” Pflügers Archiv 453, 403 (2006).
of adaptation can be seen in the responses of the single cells that are responsible for the initial conversion of sensory input into the electrical impulses that constitute the internal language of the brain. Strikingly analogous phenomena occur even in bacteria as they migrate in chemical gradients (chemotaxis, as discussed above). A sudden increase, for example, in the concentration of sugar results in a large change in the way that bacteria swim, but eventually this change dissipates; thus, bacteria never mistake a good life for one that is getting better. In all these systems, the relaxation to an “adapted” state is a competition between some rapid process that embodies the response to a sudden change and a slower process that acts with opposite sign. In the case of bacteria, 30 years of genetics, biochemistry, and molecular biology brought researchers to the point at which these processes can be identified with the actions of particular protein molecules. But then the strengths of the competing excitation and adaptation processes will depend on how many molecules of each protein are present. Does cancellation between excitation and adaptation then require the bacterium to have exactly the right numbers of these molecules? Does this mean that adaptation can never be perfect? Remarkably, quantitative experiments indicate that adaptation in bacterial behavior really is perfect. Closer examination of the biochemical network that processes the chemotactic signals showed that, if the proteins involved had certain specific properties, it would be possible to achieve perfect adaptation without fine-tuning of the number of proteins. By combining experimental physics methods for quantitative observation of bacterial swimming with molecular biology methods for engineering bacteria that make different amounts of the relevant proteins, researchers confirmed this prediction of “robust perfect adaptation.”
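The logic of robust perfect adaptation can be illustrated with a minimal integral-feedback model. This is a cartoon, not the actual chemotaxis biochemistry, and all parameters are invented: the output responds quickly to a stimulus step, but an internal variable integrates the deviation of the output from its set point and cancels it exactly, for any feedback strength.

```python
import numpy as np

def adapt(k_feedback, y0=1.0, dt=0.01, n_steps=6000, step_at=2000,
          s_low=1.0, s_high=4.0):
    """Integral feedback in cartoon form: activity y follows the stimulus s
    quickly, while an internal variable z integrates any deviation of y
    from the set point y0 and subtracts it back out."""
    y = np.empty(n_steps)
    z = 0.0
    for i in range(n_steps):
        s = s_low if i < step_at else s_high    # sudden four-fold stimulus step
        yi = s - k_feedback * z                 # fast response
        z += dt * (yi - y0)                     # slow integral feedback
        y[i] = yi
    return y

for k in (0.5, 2.0):    # the feedback strength need NOT be fine-tuned
    y = adapt(k)
    print(f"k = {k}: peak after step ~ {y[2000:2100].max():.2f}, "
          f"final activity ~ {y[-1]:.3f} (set point 1.0)")
```

Because the internal variable keeps accumulating as long as the output differs from the set point, the only possible steady state has the output back at the set point regardless of the stimulus level or the feedback strength; only the speed of relaxation depends on the parameters.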
Encouraged by the example of adaptation, a number of theoretical physicists have explored the problem of robustness in other biochemical networks. In the developing embryo, for example, one would like to know how it is possible for networks of genetic regulatory interactions to generate reproducible spatial structures despite inevitable variations in external conditions. In a similar spirit, the cycle that leads to cell division has been analyzed to identify the classic “states” described by cell biologists as robust attractors of the underlying dynamics. In a slightly different direction, theorists have explored statistical mechanics in the space of network parameters, with the “energy” of reproducing known functional behaviors trading against the entropy that quantifies our intuition about robustness. Interestingly, a similar energy/entropy trade-off has been used to describe the mathematics of learning in brains and machines. Perhaps the ideas of robustness will lead, over the next decade, to a more precise theory, in the spirit of statistical physics, of how evolution, learning, and other regulatory mechanisms select functional biological networks out of the vast range of possible networks.
FULFILLING THE PROMISE

We are in the midst of an explosion of activity at the interface of physics and biology. More than in any previous generation, today’s physicists are learning “the facts of life” and asking new and different questions about these remarkable phenomena. As in other areas of physics, technically challenging, quantitative experiments are making precise our qualitative impressions of these phenomena, and this new experimental power provides fertile ground to test increasingly sophisticated theories. The breadth of this activity is enormous, from the dynamics of single molecules to perception and learning in the brain and from networks of biochemical reactions in single cells to the dynamics of evolution. We have passed the point at which the interaction between physics and biology can be viewed as “merely” the application of known physics. Rather, the conceptual challenges of the phenomena of life are driving the emergence of a biological physics that is genuinely a subfield of physics. Guiding the growth of this field and taking full advantage of the enormous range of opportunities are major challenges for the research community, not least in terms of how it educates itself and its students (see Chapter 8). The committee is optimistic that the coming decade will see at least the outlines of a “physics of life” that brings together the many exciting threads of current research at the borders of physics and biology. The goal is nothing less than the unification in understanding of the animate and inanimate worlds, fulfilling the dreams of our intellectual ancestors.