
6
New Tools For Research

Scientific progress is predicated on the observation of new phenomena, and there are two basic paradigms for making scientific observations. The first is the Galilean paradigm, which calls for building a better tool, such as a telescope, to investigate a familiar object, which in Galileo's case was Jupiter. The second could be called the Columbian paradigm, which calls for using existing technology, such as a small fleet of ships with the best available equipment, to investigate previously uncharted waters. Most of this report is dedicated to the Columbian paradigm and its technological consequences. This chapter summarizes where the Galilean paradigm has led us in the last 10 years and where building better tools might lead in the coming decade.

What we are really dealing with are new ways of seeing what has been there all along. The suite of small- to large-scale facilities that have enabled condensed-matter physicists to image atoms and electrons is as essential to the condensed-matter enterprise as the network of telescopes and detectors probing optical and cosmic-ray spectra is to astronomy and cosmology. For more than a century, the condensed-matter suite has included small apparatus such as magnetometers and calorimeters. During the last few decades the suite has expanded to include synchrotrons and free-electron lasers, which produce highly coherent light at wavelengths from the far infrared to hard x-rays; nuclear reactors optimized for neutron yields; proton accelerators with targets for neutron and meson production; electron microscopes; and scanning-probe microscopes sensitive to everything from electron densities to magnetization at surfaces. Other exploratory tools include machines for subjecting matter to extreme conditions such as high magnetic and electric fields, high pressures, or ultralow temperatures. Finally, in the past decade, direct computation or simulation has become an increasingly routine and reliable method for seeing and understanding condensed matter.



This chapter consists of sections devoted to each of the tools noted. Each section describes specific accomplishments these tools made possible in the last decade as well as opportunities and challenges for the future. Even though the sections deal with quite distinct facilities and techniques, there are certain overarching themes.

An excellent example of an important scientific contribution over the last 10 years has been the effort to unravel the astonishing properties of the high-Tc cuprates and their siblings. It would be very difficult to imagine where our knowledge of the cuprates would be without the atomic coordinates given by neutron diffraction carried out at proton accelerators, the electronic bands given by photoemission at synchrotron sources, the defects found by electron microscopy, the magnetic order and fluctuations discovered using both reactor- and accelerator-based neutron sources, the charge transport measured in extreme pressures or magnetic fields, and the computer calculations of electronic energy levels. The experience with the cuprates shows that each of the facilities used is both unique and indispensable, and that their power is vastly amplified by combining data from the entire suite.

Another overarching theme, beyond the specific scientific problems addressed, is the invigorating effect of new facilities, be they large national resources such as the Advanced Photon Source, the new hard x-ray synchrotron at Argonne National Laboratory; medium-scale installations such as the newly formed National High Magnetic Field Laboratory operated in Florida and New Mexico; or electron microscopes and surface-characterization equipment in central materials research facilities. The commercial availability of increasingly powerful workstations, electron microscopes, piezoelectric scanning-probe tools, and superconducting magnets has played an equally important but different role—namely, that of democratizing access to atomic resolution and high magnetic fields by giving individual investigators with small laboratories extraordinary capabilities formerly limited to those with access to large facilities.

A final thread linking the tools is a direct product of the information revolution seeded by condensed-matter physics and discussed at length elsewhere in this report—specifically, the proliferation of information the tools provide and the increasingly quantitative nature of that information. The most obvious manifestation is the trend away from simple black-and-white x-y plots and toward digital color images as experimental outcomes. Such images were exotic and laboriously produced 10 years ago. (The original scanning-tunneling microscopy images of silicon surfaces by Binnig and Rohrer were actually photographs of cardboard models constructed from chart-recorder traces.) Today, color images are a routine feature of output from all of the techniques and facilities described below.

The future holds many opportunities and challenges, including raising probe particle brilliance, improving instrumental resolution, extending spectral ranges,
and diagnosing increasingly complex phenomena in areas from ceramic processing to biology. Less obvious but equally important is the need to continue to collect and take full advantage of the large and quantitative data sets that the tools of today and tomorrow promise. This implies a broad program including elements such as quoting results that had hitherto been considered qualitative in absolute units, modeling strong probe-sample interactions, and taking advantage of the most advanced data-collection and display technologies available.

Atomic Visualization Through Microscopy

A quick glance at the illustrations in this report confirms that atomic visualization underpins much of condensed-matter and materials physics. Knowledge of the arrangements of atoms is a prerequisite for understanding and controlling the physical properties of solids. The techniques needed to visualize atoms in solids themselves challenge our scientific and engineering capabilities. Research in atomic visualization techniques has often led to improved manufacturing technologies, for example, in semiconductor fabrication and quality control. Tools used for atomic visualization are small enough to fit into an average-sized laboratory and are inexpensive enough to fit into the budget of a small-instrumentation grant, but cooperative usage (as facilities) and especially cooperative instrument development can be invaluable.

Our ability to see atomic arrangements and identify local electronic structure has progressed dramatically in the last decades. The 1986 Nobel Prize in Physics recognized the development of the two most important techniques for this purpose—scanning-tunneling microscopy and transmission-electron microscopy (TEM) (see Table O.1). Since then there has been astounding progress. The tunneling microscope has given birth to a burgeoning industry of versatile "scanning-probe" microscopes that, while sharing many characteristics with the scanning-tunneling microscope, do not rely on vacuum tunneling for image formation. Whereas the tunneling microscope is sensitive to local electronic states, probe microscopies can examine chemical reactivity, magnetism, optical absorption, mechanical response, and a host of other properties of surfaces on a near-atomic scale. The United States is a leader in research with probe microscopes, and this is the only microscopy area in which we dominate commercially.

Probe microscopy is undoubtedly powerful, but it is to a large extent limited to surface imaging. There are interesting exceptions, such as ballistic-electron-emission microscopy (BEEM), in which fast electrons are injected into a layer and their propagation is influenced by interfacial structure. Other complementary surface-microscopy techniques that have grown in the last decade include low-energy electron microscopy (LEEM) and near-field scanning optical microscopy (NSOM). TEM, however, remains the dominant tool for the microstructural characterization of thin films and bulk materials because its images are not confined to the surface. In the transmission-electron microscope, a high-energy
electron beam, guided by magnetic lenses, is scattered by a thin specimen. Diffraction makes it possible to study atomic structures inside solids and to examine microstructure on scales from 0.1 nm to 100 µm. One example of the innovations achieved in the last decade with TEM is the discovery and structural solution of carbon nanotubes and nanoparticles. There has also been significant progress within the TEM field itself, for example, improved resolution (now at about 1 Å). Resolution is likely to be improved even further using innovative aberration-correction techniques. Concomitant with improved spatial resolution in microscopy has been an improvement in the efficiency and resolution of spectroscopy with electrons, which has enabled atomic-scale characterization of electronic structure. These techniques are complementary to, and synergistic with, the improved neutron and x-ray tools described elsewhere.

Despite the undoubted value of improved resolution, a more important frontier in electron microscopy involves the ability to extract reliable quantitative information from images. An example is the use of fluctuation microscopy to go beyond the limits of diffraction in studying disordered materials. We anticipate much progress in the quantitative arena in the next decade. Although the proverb holds that a picture is worth a thousand words (no doubt true aesthetically), in science a few well-chosen words are sometimes worth a thousand pictures. This is because scientific questions demand precise answers, and pictures are by their nature imprecise. However, the theory of high-energy electron scattering is well developed, and continuing improvements in electron image detection and image analysis permit quantitative interpretation of images at the atomic level. We can expect that this capability will eventually reach a level at which nonexperts can use TEM as a quantitative structure-analysis tool. Similar progress can be expected in electron spectroscopy.

Local spectroscopy allows not only atomic visualization but also characterization of the electronic and chemical states of individual atoms or groups of atoms. Spectroscopy of surface atoms is a natural product of scanning-tunneling microscopy and can also be obtained (on groups of atoms) using TEM and surface-electron microscopy by means of electron energy-loss spectroscopy. Near-edge structure observed at characteristic x-ray energies can be used to determine band structure at buried interfaces, for example. Recent work has directly revealed the importance of metal-induced gap states in metal-ceramic bonding. One expects improvements both in the sensitivity of these techniques and in the quantitative modeling and data analysis needed to interpret their results. Ultimately, we need to obtain both atomic positional and chemical information for full structural characterization.

Although probe microscopes and some electron microscopes can flourish in the individual-investigator or small-facility setting, some instruments required for the future growth of atomic visualization will be of a scale such that they will need to be located in regional, if not national, centers. With computer network
access, remote control of the instruments is likely to become widespread. So even though instruments may be located in only a few institutions, accessibility will be universal. It remains desirable to maintain centers of excellence where experts in the appropriate techniques can be available for consultation and collaboration. Instrument and technique development could also be facilitated at the regional-center scale and should be encouraged; although it has historically been underemphasized, it is critical to scientific and technological success. In addition, centers facilitate education in instrumentation, which is so critical for industrial competitiveness.

Atomic Structure

Scanning-probe microscopes have made atomic-resolution imaging of surfaces almost routine, with tremendous impact on surface science. We are finally beginning to understand the important subject of thin-film growth, one atom at a time, and can observe how atomic steps can prevent atom migration in one direction compared with another, leading to undesirable roughness in deposited films. Here there is close interaction between experimental visualization and computer modeling.

A particularly exciting development in scanning-probe microscopy has been the imaging of chemical and biochemical molecules and the possibility of monitoring chemical reactions. By choosing one molecule as the tip of the atomic-force microscope (AFM), the forces between molecules can be directly measured and chemical reactions sensed with unprecedented molecular sensitivity. This has already led to new insights into the rheology of macromolecules (see Chapter 5), and we can expect great advances in the near future, especially in the biological sciences. For example, the use of "smart" tips would allow recognition of molecules using specific receptors adhered to the tip.

The scanning-tunneling microscope (STM) views the local electronic structure, so careful image simulations must be made to deduce atomic structure. In general, for structural studies on surfaces, the best results have been obtained by combining direct STM imaging with diffraction—for example, by x-rays or electrons. The highest directly interpretable spatial resolution for atomic structure has been obtained with TEM (see Box 6.1); instruments capable of resolving 1 Å have recently been demonstrated. The committee notes that, partly because of the ~$50 million price tag for these instruments and partly because of the damage accompanying the high accelerating voltages required, no such instrument can be found in the United States. Researchers' hopes are pinned on lower-accelerating-voltage approaches to improved TEM resolution, such as holographic reconstruction, focus variation, incoherent Z-contrast, and aberration correction. However, it is troubling that work in these areas is predominantly located in Europe and Japan; a notable exception is work on incoherent Z-contrast imaging (see Box 6.1). A relatively recent study of trends in atomic-resolution microscopy was published by the National Science Foundation.1 Advances in electron microscopy enable advances in related industrial technologies, especially semiconductors, so the value of U.S. investment in this area extends far beyond atomic visualization.

1. National Science Foundation Panel Report on Atomic Resolution Microscopy: Atomic Imaging and Manipulation (AIM) for Advanced Materials, U.S. Government Printing Office, Washington, D.C. (1993).

BOX 6.1 Being Certain About Atom Positions at Interfaces

Identification of atomic structure at interfaces has been one of the important applications of high-resolution transmission electron microscopy. Interfaces control mechanical strength in ceramics, electrical transport in transistors, corrosion problems in aircraft, tunneling currents in superconductor junctions, and a myriad of other aspects of practical materials behavior. Yet, with rare exceptions, interfaces are not amenable to diffraction analysis because they are very thin and not usually uniform. Figure 6.1.1 shows an example of a high-resolution transmission-electron microscope image, using "Z-contrast," of a grain boundary in MgO (courtesy of Oak Ridge National Laboratory), in which atomic columns at the boundary are revealed. Images like this are beginning to be analyzed in a quantitative manner, using accurate measurements of intensity, simulations of electron propagation, and computational modeling of atomic structure, to achieve unprecedented reliability in the analysis of interfaces.

Figure 6.1.1 High-resolution transmission electron micrograph, using Z-contrast, of a grain boundary in MgO.

A clear example of the value of improved resolution in TEM is tomography. Tomography has been widely used in biology to reconstruct objects at about 1-nm resolution. Only with a resolution of about 0.5 Å will it be possible to
reconstruct objects in three dimensions at the atomic scale. This would be particularly exciting for amorphous and disordered materials, whose atomic structure is known only through statistical averages from diffraction. Instruments to enable this will require ~0.5 Å resolution combined with high specimen-tilt capability (>45°). This will become possible either with very high voltages or with aberration correction.

Electronic Structure

For many research problems in condensed-matter and materials physics, it is important to visualize the electronic structure on a near-atomic scale. STM provides direct information about electronic states at surfaces but is often used for purely structural analysis and has had tremendous impact on surface science. Examples in this report include the germanium "huts" in Figure 2.13. In general, probe microscopy combined with electron microscopy has revolutionized our understanding of thin-film growth and epitaxy (see Chapter 2). STM has been profitably used to examine surface electronic states and chemical reactions on the atomic level. Although detailed electronic-structure calculations are needed to interpret STM images in terms of atomic positions, often the electronic-structure information is directly useful. Box 6.2, for example, shows direct STM imaging of the electronic states associated with individual dopant atoms in semiconductors.

Electron energy-loss spectroscopy in TEM provides an important method for obtaining electronic structure from the interior of samples on a near-atomic level. Improvements in the sensitivity of detection, using more monochromatic field-emission electron sources and parallel detection, have led to important advances in the last decade. For example, dopant segregation at semiconductor grain boundaries has been identified.

Nanoproperties of Materials

One of the most significant developments of the last decade is the proliferation of scanning-probe techniques for measuring the nanoproperties of materials. Figure 6.1 shows the large variety of signals that are now detectable. Nanomechanical (force) measurements can be used to watch the behavior of individual dislocations; optical measurements can visualize single luminescent states; piezoelectric measurements can identify the effect of defects on ferroelectrics, which have potential for high-density nonvolatile memory; magnetic measurements can show the effect of single atoms on spin alignment in atomic layers; and ballistic electron transport can identify the electronic states associated with isolated defects inside a film. We can expect these capabilities to revolutionize our ability to characterize the physical properties of nanoscale materials.

BOX 6.2 Single Impurity Atoms Imaged in Semiconductor Layers

One critical issue, as semiconductor devices are scaled down in size for higher density and speed, is the stochastic nature of the location of dopant atoms. These atoms, which lend electronic carriers to the active semiconductor layers, are typically present in densities of only about 1 in a million. Until recent years, it was an impossible dream to identify the exact location of these dopant atoms, but this has recently proved possible with scanning-tunneling microscopy. Figure 6.2.1 shows detection of the local electronic state generated by the impurity. When a semiconductor structure is cleaved in vacuum, the individual impurity atoms near the surface are clearly visible. The image (courtesy of Lawrence Berkeley Laboratory) shows the position of Si dopants in GaAs as bright spots. Also present in the image are Ga vacancies, which appear as dark spots.

Figure 6.2.1 Local electronic states in GaAs generated by Si impurities.

Figure 6.1 Schematic drawing of the signals detected in scanning-probe microscopy. (Courtesy of the University of Illinois at Urbana-Champaign.)
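To put Box 6.2's one-in-a-million figure in perspective, a rough estimate (illustrative only, not taken from the report, and assuming a host-lattice density of a few $\times 10^{22}$ atoms/cm$^3$, typical of Si or GaAs) gives

$$
n_{\mathrm{dopant}} \sim 10^{-6} \times 4\times10^{22}\,\mathrm{cm^{-3}} = 4\times10^{16}\,\mathrm{cm^{-3}},
\qquad
\bar{d} \sim n_{\mathrm{dopant}}^{-1/3} \approx 30\ \mathrm{nm},
$$

so a device region of order $100 \times 100 \times 10\ \mathrm{nm}^3$ contains only a handful of dopant atoms, and their random positions matter greatly—which is why imaging individual dopants, as in Figure 6.2.1, has become so valuable.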

Similar developments have occurred in other imaging systems. A beautiful example of a technique known as "scanning electron microscopy with polarization analysis," which allows imaging of magnetic monolayers at surfaces in a modified scanning-electron microscope, is discussed in Chapter 1.

Atomic Manipulation

Whether intended or not, our atomic-scale characterization tools can change the structures they are examining. This can be used to our advantage in manipulating atoms on the atomic scale for making nanostructures. Figure 6.2 shows the classic example of a ring of iron atoms assembled by the tip of a scanning-tunneling microscope. The circular atomic corral shows the resonant quantum states expected from simple theory.

Figure 6.2 Atomic manipulation. The image shows the atomic-scale capability for patterning that is possible with the scanning-probe microscope. Atoms of Fe (high peaks) were arranged in a circle on the surface of Cu and caused resonant electron states (the ripples) to appear in the Cu surface. The structure is dubbed the "quantum corral." Related structures might one day be useful for electronic devices, where as many devices as there are humans in the world could be assembled on an area the size of a pinhead (1 mm²). (Courtesy of IBM Research.)
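The "simple theory" invoked for the corral can be made concrete with a short calculation. The sketch below is not from the report; the radius and effective mass are assumed, representative values for an Fe corral on a Cu surface. It treats the corral as a hard-walled circular box for surface-state electrons, whose energy levels are set by the zeros of Bessel functions:

```python
# Hard-wall circular-box estimate of a quantum corral's resonant-state energies:
#   E_{l,n} = hbar^2 * j_{l,n}^2 / (2 * m_eff * R^2),
# where j_{l,n} is the n-th zero of the Bessel function J_l.
# R and m_eff below are assumed, representative values, not taken from the report.
import numpy as np
from scipy.special import jn_zeros

hbar = 1.0546e-34      # reduced Planck constant, J*s
m_e = 9.109e-31        # free-electron mass, kg
m_eff = 0.38 * m_e     # assumed surface-state effective mass
R = 7.1e-9             # assumed corral radius, m

for l in range(3):                                        # angular momentum
    E = (hbar * jn_zeros(l, 3) / R) ** 2 / (2 * m_eff)    # first 3 radial states, J
    print(f"l={l}:", ", ".join(f"{e / 1.602e-19 * 1e3:5.1f} meV" for e in E))
```

With these assumed numbers the lowest levels fall in the range of tens of meV above the surface-state band edge; in this simple model they are the resonant states responsible for the ripples seen in the figure.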

The imagination boggles at the possibilities of related techniques. In principle, we can assemble arbitrary structures to test our understanding of the physics of nanostructures and perhaps make useful devices at unprecedented density. Two major issues will need to be addressed before these methods can reach their full potential. First, even when we place atoms where we choose, with few exceptions (such as the Fe atoms in Figure 6.2 at ultralow temperatures) they will not stay there. So, to assemble structures that retain their integrity, we need to understand the stability of materials on this scale. Second, the speed with which we can pattern structures with a single scanning probe is far too slow to allow practical device fabrication on the scale of modern semiconductor technology. Alternative methods involving massive arrays of tips, projection electron lithography, or other short-exposure techniques must be developed.

Conclusions

Atomic visualization is a crucial part of condensed-matter and materials physics. It is a thriving area in which advances, usually driven by physics and engineering, have wide impact on science and technology. Many manufacturing technologies depend on innovations enabled by atomic-visualization equipment, so research in the field has important economic value. We expect continued developments, but attention must be paid to nurturing the development of appropriate instrumentation in close connection with scientific experiments. Depending on the nature of the visualization tool, the funding scope ranges from individual investigators to small groups to national centers of excellence in instrumentation. Judging from our success in probe microscopy, it appears we are stronger at the individual-investigator level but weaker at the medium- and larger-scale instrumentation development levels.

A concern is that many new students are attracted by computer visualization rather than experimental visualization. The two methods are obviously complementary, and we are not yet near the point where we can rely only on computer experiments. Thus funding must be maintained at a level sufficient to create opportunities that will attract high-quality students into this field.

Neutron Scattering

The neutron is a particle with the mass of the proton, a magnetic moment arising from its spin of 1/2, and no electrical charge. It probes solids through the magnetic dipolar interaction with the electron spins and via the strong interaction with the atomic nuclei. These interactions are weak compared to those associated with light or electrons. They are also extremely well known, which makes it possible to use neutrons to identify spin and mass densities in solids with an accuracy that in many cases is greater than with any other particle or electromagnetic probe. The wavelengths of neutrons produced at their traditional source—nuclear research reactors with moderator blankets of light or heavy water held near room temperature—are on the order of interatomic spacings in ordinary solids. In addition, their energies are on the order of the energies of many of the most common collective excitations in solids, such as lattice vibrations.
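Both statements follow from the de Broglie relation for a massive particle (a textbook aside, not taken from the report): a neutron of wavelength $\lambda$ carries kinetic energy

$$
E = \frac{h^2}{2 m_n \lambda^2}
\quad\Longrightarrow\quad
E\,[\mathrm{meV}] \approx \frac{81.8}{(\lambda\,[\text{Å}])^2},
$$

so a 1.8 Å neutron carries about 25 meV—simultaneously matched to interatomic distances and to typical phonon or magnon energies. A scattering event is then characterized by the momentum and energy it transfers to the sample, $\hbar\mathbf{Q} = \hbar(\mathbf{k}_i - \mathbf{k}_f)$ and $\hbar\omega = E_i - E_f$, which are the quantities the experiments described next actually map out.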

To image spin and mass densities, condensed-matter physicists usually aim neutrons moving at a single velocity and in a single direction—that is, with well-specified momentum and energy—at a sample and then measure the energy and momentum distribution of the neutrons emerging from the sample. Such neutron-scattering experiments have been important for the development of condensed-matter physics over the last half century. Indeed, the impact of the technique has been such that C. Shull (Massachusetts Institute of Technology) and B. Brockhouse (McMaster University) were awarded the 1994 Nobel Prize in Physics for its development (see Table O.1). In previous decades, neutron scattering provided key evidence for many important phenomena, ranging from antiferromagnetism, as originally posited by Néel, to unique quantum oscillations (called rotons) in superfluid helium. But what has happened in the last decade in the area of neutron scattering from solids and liquids, and what is its potential for the coming decade?

The Past Decade

Overview

Three major developments of the last decade are (1) the emergence of neutron scattering as an important probe for "soft" as well as "hard" condensed matter, (2) the coming of age of accelerator-based pulsed neutron sources, and (3) the revival of neutron reflectometry. The first development has expanded the user base for neutron scattering far beyond the solid-state physicists and chemists who had been essentially the only users of neutrons. The second development is associated with a method for producing neutrons not from a self-sustaining fission reaction but from the spallation—or evaporation—that occurs when energetic protons strike a fixed target. As depicted in Figure 6.3, a spallation source consists of a proton accelerator that produces short bursts of protons with energies generally higher than 0.5 GeV, a target station containing a heavy-metal target that emits neutrons in response to proton bombardment, and surrounding moderators that slow the neutrons to the velocities appropriate for experiments. Until the mid-1980s, the leading facility of this type was the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory. In the last decade, the clear leader by a very wide margin has been the ISIS facility in the United Kingdom. Successful developments, especially at ISIS, have given the neutron-scattering field growth prospects that it has not had since the original high-flux nuclear reactor core designs of the 1960s. This follows because pulsed sources are more naturally capable of taking advantage of the information and electronics revolutions and because the cooling power required per unit of neutron flux is almost an order of magnitude less than for nuclear reactors.

The revival of neutron reflectometry seems at first glance less momentous than the emergence of neutron scattering as a soft condensed-matter probe or the emergence of accelerator-based pulsed neutron sources. However, as so much of modern condensed-matter physics and materials science revolves around surfaces and interfaces, neutron scattering could hardly be considered a vital technique

Figure 6.14 The plot shows the growth of the number of operations per second from 1940 to 2010 for the fastest available "supercomputers." Objects of different shapes are used to distinguish serial, vector, and parallel architectures. All machines until the Cray-1 were single-processor designs. The line marked "three-dimensional Navier-Stokes turbulence" shows, in rough terms, the extent to which the increased computing power has been harnessed to obtain turbulent solutions by solving the three-dimensional Navier-Stokes equations. Turbulence is used here as an example of one of the grand and difficult problems needing large computing power. The computing power limits the size of the spatial domain over which computations can be performed. The Reynolds number (marked on the right as Rλ) is an indicator of this size. (Courtesy of Los Alamos National Laboratory.)
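A standard order-of-magnitude argument (not taken from the figure) explains why the turbulence line climbs so much more slowly than the hardware curve. In three-dimensional turbulence the ratio of the largest eddies to the smallest (Kolmogorov) scale grows as $Re^{3/4}$, so a direct numerical simulation that resolves all scales requires roughly

$$
N_{\mathrm{grid}} \sim \left(\frac{L}{\eta}\right)^{3} \sim Re^{9/4}
\qquad\text{grid points,}\qquad
\text{total work} \sim Re^{9/4} \times Re^{3/4} \sim Re^{3}.
$$

A thousandfold increase in computing power therefore buys only about one order of magnitude in attainable Reynolds number.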

problems) and microprocessors were much slower than discrete component designs. Today memory density has risen enormously and prices have fallen dramatically. Microprocessors have gained several orders of magnitude in speed and gone from 8- to 64-bit word lengths. After a tumultuous history involving the exploration of different parallel architectures, shared-memory parallel systems combining many processors communicating via high-speed digital switches are now rapidly developing and have largely replaced pure vector processors. Clock speeds for microprocessors are now so high that memory access time is often far and away the greatest limitation on overall speed. One of the great software challenges now is to find algorithms that can take maximum advantage of parallel architectures consisting of many fast processors coupled together.

In addition to hardware advances, the last decade has seen some revolutionary advances in algorithms for the study of materials and quantum many-body systems. Improved algorithms are crucial to scientific computation because the combinatorial explosion of computational cost with increasing number of degrees of freedom can never be tamed by raw speed alone. (Consider the daunting fact that in a brute-force diagonalization of the lowly Hubbard model, each site added multiplies the computational cost by a factor of approximately 64.)

In the last two decades, computational condensed-matter and materials science has moved from the initial exploratory stages (in which numerical studies were often little more than curiosities) into the mainstream of activity. In some areas today, such as the study of strongly correlated low-dimensional systems, numerical methods are among the most prominent and successful methods of attack. As new generations of students trained in this field have begun to populate the community, numerical approaches have become much more common. Nevertheless, it is fair to say that computational physics is still in its infancy. Pushing the frontiers of computational physics and materials science is important in its own right, but it is also important because training students in this area provides industry and business with personnel who not only have expertise on the latest hardware architectures but also bring with them physicists' methods and points of view in analyzing and solving complex problems.

Progress in Algorithms

In spite of its great enthusiasm, the committee offers a warning before proceeding. Numerical methods have become more and more powerful over time, but they are not panaceas. Vast lists of numbers, no matter how accurate, do not necessarily lead to better or deeper understanding of the underlying physics. It is impossible to do computational physics without first being a good physicist. One needs a sense of the various scales relevant to the problem at hand, an understanding of the best available analytical and perturbative approaches to the problem, and a thorough understanding of how to formulate the interesting questions.
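A few lines of arithmetic make the brute-force Hubbard estimate quoted above concrete (an illustrative sketch, not from the report): each site contributes four local states (empty, spin-up, spin-down, doubly occupied), so the Hilbert-space dimension is 4^N, and dense diagonalization costs roughly the cube of that dimension—hence the factor of 4^3 = 64 for every added site.

```python
# Growth of the Hubbard-model Hilbert space and the rough cost of
# brute-force (dense) diagonalization, which scales as dimension**3.
# The site counts below are arbitrary illustrative choices.
for n_sites in (4, 8, 12, 16):
    dim = 4 ** n_sites                 # 4 local states per site
    work = float(dim) ** 3             # ~operations for full diagonalization
    print(f"{n_sites:2d} sites: dimension = {dim:.2e}, work ~ {work:.1e} operations")
```

Even 16 sites imply a notional operation count near $10^{29}$, which is why algorithmic advances, not raw speed, drive progress on such models.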

Electronic Structure Algorithms

The goal of electronic-structure calculations is to compute, from first principles or with approximate methods, the quantum states of electrons in solids and large molecules. This information is then used to predict the mechanical, structural, thermal, and optical properties of the materials. The outstanding problems are ones of computational efficiency for large-scale calculations and convergence to the thermodynamic limit. Indeed, the calculations are so complex and time consuming that real-time dynamics can be followed only for pico- or nanoseconds.

Perhaps the single most dramatic development in the last decade has been the advent of the Car-Parrinello method, which has enormously enhanced the efficiency of electronic-structure calculations. This method calls for adjusting the atomic positions and the electronic wave functions at the same time to optimize the Hohenberg-Kohn-Sham density functional. Additional efficiencies come from the use of fast-Fourier-transform techniques to compute the action of the Hamiltonian on the wave functions without the necessity of computing the full Hamiltonian matrix.

Another area of intensive investigation has been the search for so-called "Order-N" methods. The idea is to find approximation schemes in which the computational cost rises only as the first power of the number of atoms or electrons, as opposed to some higher power (~3), as is typically the case. So far, this has been attempted only for tight-binding models involving spatially localized orbitals for the electrons. It is not yet clear that the problem will be solvable, but research in this direction is important if we are going to be able to treat larger and more complex structures. Other techniques under investigation include adaptive-coordinate, wavelet, and direct grid/finite-element methods, which are useful in situations in which the number of plane waves needed to represent atomic orbitals is very large.

The Kohn-Sham local-density functional approximates the many-body exchange-correlation corrections to the energy by a functional of the local density. It has been very successful and is finally winning support within the computational chemistry community. An important area of current research involves generalized gradient expansion corrections to the local-density approximation. In several examples, simple local-density approximations fail to give correct structures but appropriate gradient-expansion functionals work. In general, however, it is often still difficult to obtain the chemical accuracy required.

Monte Carlo Methods

Fermion Monte Carlo techniques continue to be plagued by the "sign problem." Because of the sign reversals that occur in quantum wave functions when two particles exchange places, not all time histories have positive weights in the
Feynman path integral. This means that the weights cannot be interpreted as probabilities that can be sampled by Monte Carlo methods. The fixed-node approximation attempts to get around this problem by specifying a particular nodal structure for the wave function. This has yielded very useful results in some cases in which the nodal structure is understood a priori. Some workers are now moving beyond small atoms and molecules to simple solids and have obtained good results for lattice constants, cohesive energies, and bulk moduli. Fermion Monte Carlo path-integral methods continue to be applied successfully to lattice models such as the Hubbard model, but again the sign problem is a serious limitation. For example, it is still difficult to go to low enough temperatures to search for superconductivity, even in highly simplified models of high-Tc materials.

Bosons, which are much easier to treat numerically, also pose interesting problems. "Dirty boson" models have been used to describe helium films adsorbed on substrates and to treat the superconductor-insulator transition. With this model one makes the approximation that Cooper pairs are bosons and assumes (not necessarily justifiably) that there are no fermionic degrees of freedom at zero temperature.

Cluster Algorithms in Statistical Mechanics

One serious problem in the Monte Carlo simulation of statistical systems near critical points is the divergence of the characteristic timescales. The computer time needed to evolve the system to a new statistically independent state diverges as a power of the correlation length or system size—roughly as L^(d+z_MC), where d is the dimensionality and z_MC is the dynamical exponent. Cluster algorithms have been extremely successful on certain classes of problems (such as the Ising and XY models and certain vertex models) and are able to reduce the dynamical exponent z_MC to nearly zero. This is accomplished by constructing clusters of spins and, for each cluster, choosing a random value of spin that is assigned to the individual spins it contains. Such a move could not be implemented in an ordinary Metropolis algorithm because the Boltzmann factor would make the acceptance rate of the move essentially zero. The trick is to have the probability that a cluster grows to a particular size and shape be precisely the Boltzmann factor for the energy cost of flipping the cluster. This is a very tiny probability, but it is canceled by the fact that there are a huge number of different possible clusters that could have resulted from the random-growth process. This has been a very important advance. Unfortunately, there are still many cases (such as frustrated spin systems) for which cluster methods cannot (as yet) be applied because of technical problems similar to the fermion minus-sign problem.
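The cluster trick described above is easiest to see in code. The following is a minimal sketch (not from the report) of the Wolff single-cluster update for the two-dimensional Ising model, one standard realization of the idea: bonds between aligned neighboring spins are added to the cluster with probability p = 1 - exp(-2J/kT), and the finished cluster is flipped as a whole, with no Metropolis accept/reject step. The lattice size and coupling below are arbitrary illustrative choices.

```python
# Minimal Wolff cluster update for the 2D Ising model (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
L, beta_J = 32, 0.44                     # lattice size; coupling near criticality
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta_J)      # bond-activation probability

def wolff_step(spins):
    """Grow one cluster from a random seed site and flip it; return its size."""
    seed = (int(rng.integers(L)), int(rng.integers(L)))
    s0 = spins[seed]
    cluster, frontier = {seed}, [seed]
    while frontier:
        i, j = frontier.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = ((i + di) % L, (j + dj) % L)        # periodic boundaries
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                frontier.append(nb)
    for site in cluster:                             # flip the whole cluster at once
        spins[site] = -s0
    return len(cluster)

for step in range(5):
    size = wolff_step(spins)
    print(f"flipped a cluster of {size} spins; magnetization = {spins.mean():+.3f}")
```

Near the critical point a single such move typically flips a large fraction of the lattice, which is how the effective dynamical exponent is driven toward zero.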

Density-Matrix Renormalization Group

A revolutionary development in computational techniques for quantum systems is the "density-matrix renormalization group." The essential idea is to determine, very efficiently, which basis states are the most important to keep in order to describe the quantum ground state. The procedure is the first one ever found that gives exponentially rapid convergence as the number of basis states is increased. It applies to essentially any one-dimensional model with short-range forces, even random systems without translational symmetry and fermion systems. Using this technique it is now easy to compute ground-state energies and correlation functions to 10-digit accuracy on a desktop workstation. Ongoing work is extending the technique to excited states and to higher dimensions.

Computational Physics in a Teraflop World

In this section we contemplate questions about the future of computation and what can (optimistically) be done with the next factor of 1000 in computing power.

Glassy Systems, Disorder, and Slow Dynamics

At first sight, these problems do not seem well suited for more computer time. They are too hard. Experimentally, the phenomena are spread over fifteen decades in frequency, and even that dynamical range in the experiments is often not enough to reach firm conclusions. Current simulations span perhaps three decades, and one might think that three more won't make an overwhelming improvement. There are two reasons to be optimistic.

First, in numerical simulations it is straightforward to watch individual atoms, spins, or automata relax. The last decade has seen tremendous progress in the visualization and study of spin glasses, charge-density waves, glassy behavior in martensites, and "real" glasses. We are, however, barely into the scaling region for many of these simulations; even if the scaling region grows only logarithmically with the timescale, three more decades might make the patterns clear.

Second, there is every reason to believe that we can get around these slow timescales. There is no reason for our methods for relaxing glassy systems to be as inefficient as nature. Until now we have mainly developed techniques to mimic nature with as little wasted effort as possible. This was sensible for studying systems in which nature relaxes efficiently; when you are barely able to follow the system for a nanosecond, you study systems that relax rapidly. Now that we are turning to problems for which nature is slow (for example, glasses and phase transitions), we are making rapid strides in developing acceleration algorithms. In particular, because we are more likely to gain the next factor of 1000 in computing power by increasing the number of processors rather than through
raw speed increases, we will naturally learn new relaxation algorithms that exploit the extra processors.

Quantum Chemistry and Electronic Structure of Materials

With these problems we are not confused about the physical behavior; rather, the answers to the interesting questions inherently demand immense precision. Quantum chemistry is difficult not because the systems are complex and subtle, but because the standards are high. All of chemistry is controlled by reaction and binding energies that are tiny compared to total energies. Electronic-structure calculations for materials face exactly the same problem. We can now study only relatively simple molecules and crystal structures; with the next generations of machines and algorithms, this will change qualitatively.

Structured Systems: From Inorganic Industrial Materials to Proteins

These are systems in which huge ranges of length scales and timescales interact in nontrivial ways. We have to understand the physics and materials science on each scale and connect together the properties at different scales. The algorithms appropriate to the models at different scales can be quite different from each other.

The category of "industrial materials" includes ceramics, concrete, polycrystalline metals and alloys, and composites. Their important properties are normally almost completely removed from the world of perfect crystals and equilibrium systems often studied by mainstream physics. The wearing properties of steel, the resistance of concrete to cracks, the thermal and electrical properties of polycrystalline metals—all are dominated by the mesostructure: the detailed arrangement of domain walls, pebbles, and grains.

Three issues must be confronted to make progress. First, the materials are disordered. Second, they display history dependence; for example, the polycrystalline domains in metals depend in detail on how the metal was cast, rolled, and stamped during its manufacture. Third, the systems span a large range of scales. The dynamics of grain boundaries under external strain is determined by the dynamics of the individual line dislocations that make them up. The line dislocations interact logarithmically (in inscrutable ways), and at the current level of knowledge one can only simulate them. Their dynamics, in turn, is determined by atomic-scale motion; the diffusion of vacancies and the pinning to inhomogeneities (and to other line dislocations) are crucial to understanding their motions. It is this enormous range of scales that we can only hope to disentangle with large-scale simulations (see Figure 6.15).

Proteins and biomolecules pose similar problems. Molecular biologists separate their structures into primary, secondary, and tertiary precisely as a set of length scales on which the structure is organized. The functional behavior
on the largest scales depends in detail on the dynamics and energetics not only down to the protein level, but even down to the way in which each protein is hydrated by its aqueous environment.

Figure 6.15 Million-atom molecular dynamics simulation of ductile behavior in nanophase silicon nitride, which is being explored for its extraordinary resistance to fracturing under strain: a 30 percent strain is required to completely fracture the nanophase system, while only 3 percent is required for single-crystal silicon nitride. Shown is the system before it fractures under an applied strain of 30 percent, along with a zoom-in to the atomic scale showing that the crack front advances along disordered interfacial regions in the system. It is along the amorphous intercluster regions that the crack propagates, by coalescence of the primary crack with voids and secondary cracks. (Courtesy of Louisiana State University.)

Quantum Computers

Theoretical analysis of the quantum computer, in which computation is performed by the coherent manipulation of a pure quantum state, has advanced extremely rapidly in recent years and indicates that such a device, if it could ever be constructed, could solve some classes of computational problems now considered intractable. A quantum computer is a quantum mechanical system able to evolve coherently in isolation from the irreversible dephasing effects of the environment. The "program" is the Hamiltonian. The "input data" is the initial quantum state into which the system is prepared. The "output result" is the final, time-evolved state of the system. Because quantum mechanics allows a system to be in a linear superposition of a large number of different states at the same time, a quantum computer would be the ultimate "parallel" processor.

The basic requirement for quantum computation is the ability to isolate, control, and measure the time evolution of an individual quantum system, such as an atom. To achieve the goal of single-quantum sensitivity, condensed-matter experimentalists are pursuing studies of systems ranging from few-electron quantum dots to coherent squeezed photon states of lasers. When any of these reach the desired single-quantum limit, experiments to probe the action of a quantum
gate could be immediately designed. Recent theory shows in principle how to form different types of gates and provides error-correcting codes to enhance robustness. At this point it is quite unclear whether a practical system can be developed, but many clever ideas are being explored. Interesting physics is sure to result, and there is at least a remote possibility of a tremendous and revolutionary technological payoff. Several groups have reported an experimental realization of quantum computation by nuclear magnetic resonance (NMR) techniques. The race is now on to demonstrate more complex quantum algorithms, to compute with more quantum bits than the two bits of the first demonstration, and to verify error-correction techniques.

Future Directions and Research Priorities

Tools for visualizing atoms and electrons have been at the center of condensed-matter and materials physics since Bragg and von Laue first observed x-ray diffraction from crystals nearly 100 years ago. These tools will remain at the center of the field and of many others, from catalysis to biochemistry. The last decade has seen great progress in research performed using apparatus of all scales.

In the area of medium-scale infrastructure, the three important developments have been widespread access to sophisticated electron microscopes and related equipment, the exploitation of the Cornell nanofabrication center, and the reinvigoration of U.S. high-field magnet research by the founding of the National High Magnetic Field Laboratory. Access to equipment has fueled, and will doubtless continue to fuel, improved understanding and applications of bulk materials, surfaces, and interfaces. Beyond enabling U.S. academe to participate in and thereby greatly accelerate the development of mesoscale physics (between atomic and macroscopic scales), the Cornell nanofabrication center has been an extraordinarily fertile training ground for the U.S. microelectronics industry. The National High Magnetic Field Laboratory will provide access to a scientific frontier—a key site for discoveries and technological developments ranging from magnetic resonance imaging to the quantum Hall effect.

Turning finally to large-scale facilities of a type that can exist only at national laboratories, the major events have been the commissioning of third-generation synchrotrons at the Argonne and Lawrence Berkeley laboratories and the decision to recapitalize U.S. neutron science via construction of a pulsed spallation source at Oak Ridge. The synchrotrons will produce the x-rays and light necessary for the United States to compete in emerging areas such as time-resolved protein crystallography. Even though a U.S. scientist (Shull) shared the 1994 Nobel Prize for inventing neutron scattering in the 1950s (see Table O.1), the Europeans have since established a clear lead. The Oak Ridge source will reestablish U.S. competitiveness in this area, which over the last decade has
proven so vital for imaging atoms and spins in materials ranging from high-temperature superconductors to polymers.

In previous decades, key events in condensed-matter and materials physics have been the exploitation of inventions and investments in large facilities. The inventions and the facilities are devices with the special purpose of being tools for condensed-matter and materials physics. The last decade is unique in that the major event relating to such tools is actually not directly connected with inventions and facilities. Instead, it is the same phenomenon that has profoundly transformed nearly all other aspects of our society—namely, the information revolution. An obvious consequence of the information revolution for condensed-matter and materials physics is the recent progress in computational materials science. Less obvious but equally important is the ability to collect and manipulate progressively larger quantitative data sets and to reliably execute increasingly complex experimental protocols. For example, in neutron scattering, data-gathering rates and, more crucially, the meaningful information content have risen in tandem with the exponential growth of information technology.

What will happen in the next decade? Although we cannot predict inspired invention, we anticipate progress with ever-shrinking and more-brilliant probe beams and increasingly complete, sensitive, and quantitative data collection. One result will be the imaging and manipulation of steadily smaller atomic landscapes. Another will be the analysis and successful modeling of complex materials with interesting properties in fields from biology to superconductivity.

The promised performance improvements, with applications throughout materials science, will come about only if balanced development of both large-scale facilities and technology for small laboratories takes place. For example, determination of the crystal structures of complex ceramics and biological molecules is likely to remain the province of neutron and synchrotron x-ray diffraction, performed at large facilities, while defects at semiconductor surfaces will most likely remain a topic for electron and scanning-probe microscopy, carried out in individual investigators' laboratories and small facilities. Thus, the cases for large facilities and small-scale instruments are equally strong. Although the larger items such as the neutron and photon sources appear much more expensive than those that benefit a single investigator, recent European experience suggests that the costs per unit of output do not depend very strongly on the scale of the investment, provided of course that it is properly chosen, planned, and managed. Information technology is also blurring the difference between large and small facilities, as they all become nodes on the Internet. One important upshot will be that the siting of large facilities, as well as the large-versus-small facility debates, will largely cease to be of importance to scientists.

In addition to the construction of large facilities such as the SNS and APS, healthy research in instrumentation science is crucial to the development of improved tools for atomic visualization and manipulation.
Although we have impressive success stories to point to, as in the dominance of the probe-microscopy business, we strive for similar success in other areas of instrumentation that are important for both research and manufacturing. In the United States, scientific research and instrumentation have traditionally had an uncomfortable relationship. Although it is very important that instrumentation programs be science-driven and not isolated, long lead times and the need for expert research on the instrumentation itself (for example, in advanced lithography and in electron, x-ray, and neutron optics) sometimes require that special investment be allocated for instrumentation. The absence of such middle-scale investment, as well as a perceived lack of intellectual respectability, are key reasons why the nation is lagging in beam technology and science. A solution would be the development of centers of excellence in instrumentation research and education, the latter being an equally important role for such centers. A model might be the National High Magnetic Field Laboratory, which has recently revived magnet research in the United States. It is also clear that viable centers can exist within already strong centers of materials research.

The committee's list of priorities is designed to enable the United States to recapture its leadership in scientific tools for condensed-matter and materials physics and their exploitation. The goals to be achieved by the large neutron and synchrotron facilities are obvious—namely, to duplicate and then to exceed what the Europeans can do today. The recapitalization of the university laboratories will serve the similarly obvious purpose of maintaining the efficiency and quality of university research. The nanolithography investment will maintain user facilities in an area of extraordinary importance to materials research as well as to the U.S. economy. The medium-scale centers devoted to topics such as electron optics and high magnetic fields will serve not only to develop new technologies in the areas to which they are specifically devoted, but also to establish a flourishing culture of scientific instrumentation within condensed-matter and materials physics. Finally, condensed-matter and materials physics needs to take advantage of all available information technology to continue to move toward its central goal of seeing all the atoms and electrons all of the time.

Outstanding Scientific Questions

• Can we manipulate single atoms fast enough to make devices?
• Can we use computation to predict superconductivity in complex materials?
• Can we make inelastic scattering using x-rays, neutrons, and electrons as important to materials science and biology as elastic scattering is today?
• Can we image and manipulate spins on the atomic scale?
• Can we develop a nondestructive subsurface probe with nanometer resolution in three dimensions?

Priorities

• Build the Spallation Neutron Source and upgrade existing neutron sources.
• Fully instrument and exploit the existing synchrotron light sources and do R&D on the x-ray laser.
• Build state-of-the-art nanofabrication facilities staffed to run user programs for the benefit of not only the host institutions but also universities, government laboratories, and businesses that do not have such facilities.
• Recapitalize university laboratories with state-of-the-art materials fabrication and characterization equipment.
• Build medium-scale centers devoted to single issues such as high magnetic fields or electron microscopy.
• Exploit the continuing explosion in information technology to visualize and simulate materials.