
Chapter 13
Frontiers of Image Processing for Medicine

Image processing is based on two major categories of manipulation of arrays of two-dimensional data. The first category includes the restoration of one or more objects by compensating for noise, motion, shading, geometric distortions, and other sources of degradation associated with the image acquisition system. The second category involves the enhancement of information and data reduction to emphasize, measure, or visualize important features of the image. In recent years, the field of medical imaging has required that the role of image processing expand from the analysis of individual two-dimensional images to the extraction of information from three-dimensional images, multimodality images, and time sequences of three-dimensional images. This change has prompted the development of sophisticated algorithms for interpretation of multi-dimensional data, a task that is far from accomplished, while placing a strong emphasis on processing speed and available memory for the computer systems performing the analysis.

In the traditional approach, data are analyzed by visual evaluation of cross-sectional slices as represented by gray-scale images on radiological film. The data are typically acquired and filmed by technologists using predefined imaging protocols, and the results are read by radiologists. The films are also used to communicate with referring physicians and for reference during patient procedures. Despite a recent trend toward its use, digital representation of medical images has not yet become generally accepted, and its widespread application will require further developments in the hardware



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





and software for manipulation of image data. The major difficulty in interpreting cross-sectional gray-scale images is that anatomic structures look very different from their three-dimensional appearance. This discrepancy requires the physician to perform a significant mental translation of the data, a task that requires highly specialized training. Although radiologists undergo such training, the visual interpretation of the data sets becomes observer dependent, and others may have more difficulty in visualizing the data. In view of the relatively large size of a typical three-dimensional data set (e.g., 80 × 256 × 256) and the fact that a single imaging examination may include the acquisition of several such data sets, the radiologist can work more efficiently if the information from many slices is concentrated into one rendering. A composite image also facilitates communication with other clinicians and leads to the possibility of generating quantitative rather than qualitative information from the images.

These observations make it clear that for medical image analysis, the fundamental mathematical need is the derivation of procedures for extracting the clinically important features from one or more large data sets—for example, quantitative information on tumor volume for each of several studies over a time period, to help gauge the efficacy of different treatments, or a parametric map created to represent rate constants from a time series of tracer movements in the brain or heart. The procedures associated with this type of contemporary image analysis can be separated into several different classes:

· Image segmentation,
· Computational anatomy,
· Registration of multimodality images,
· Synthesis of parametric images,
· Data visualization, and
· Treatment planning.

The following discussion presents some of the key mathematical methods being considered for addressing these requirements.
See also the related discussions earlier in this report in sections 3.5, 4.3, and 7.2.3.

13.1 Image Segmentation

Segmentation refers to a subclass of enhancement methods by which a particular object, organ, or image characteristic is extracted from the image data for purposes of visualization and measurement. Segmentation involves associating a pixel with a particular object class based on the local intensity, spatial position, neighboring pixels, or prior information about the shape characteristics of the object class. The focus of research into segmentation is to determine logic rules or strategies that accomplish acceptably accurate segmentation with as little interactive analysis as possible. Segmentation is a central problem of image analysis because it is a prerequisite for the majority of analysis methods, including image registration, shape analysis, motion detection, and volume and area estimation. Unfortunately, there is no common method or class of methods applicable to even the majority of images.

Most of the segmentation methods being applied to medical images are based on the assumption that the objects of interest have intensity or edge characteristics that allow them to be separated from the background and noise, as well as from each other. When the ranges of pixel intensities associated with different physiologic features are non-overlapping or nearly so, global thresholding (accentuating or deleting all pixels above or below a demarcating threshold of intensity) may be sufficient to provide the classification required. For example, bone is rather easily segmented from x-ray images because of the wide separation of gray-scale levels between the high-signal-intensity bone and other tissues; the intensity levels of the pixels fall into two ranges, and pixels in each can be manipulated to accentuate the difference. Global thresholding is not adequate, however, for differentiating heart muscle from chest tissues or distinguishing between cerebral gray and white matter.
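The global thresholding just described can be sketched in a few lines. A minimal example follows; the intensity values and the threshold of 100 are illustrative, not clinical:

```python
import numpy as np

def global_threshold(image, t):
    """Binary segmentation by global thresholding: every pixel at or
    above the demarcating intensity t is labeled foreground (1), the
    rest background (0)."""
    return (image >= t).astype(np.uint8)

# Toy 2-D "slice": a bright, bone-like structure on a darker background.
slice_ = np.array([[10,  12, 200],
                   [11, 210, 220],
                   [ 9,  13,  14]])

mask = global_threshold(slice_, 100)  # 1 where intensity >= 100
```

When the two intensity ranges overlap, as with cerebral gray and white matter, no single threshold separates them, which is why the discussion turns to richer classifiers.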
Classification of objects based on pixel intensity can also be implemented by use of training points, neural networks, histograms, fuzzy logic, or cluster analysis. Despite a considerable body of literature in this area, relatively few reports have demonstrated reliable segmentation based on the intensity characteristics of a single three-dimensional image. Edge detection is simple to implement with operators that search for changes in intensity gradients; in practice, however, it is complicated by the fact that intensities in biomedical images typically ramp up or down from the structure of interest to the surrounding structure(s) from which segmentation is to be effected. Even statistical edge-finding techniques fail in most medical imaging applications where the gray-scale levels and textures of the target organ and surrounding tissues are similar—which is the usual case. Continuity and connectivity are strong criteria for separating noise from objects and have been exploited quite widely, either directly by the logic rules used in region growing¹ or by applying post-processing erosion and dilation to separate small islands of noise or spatially distinct objects with similar intensities.

A sophisticated and successful approach to segmentation is based on the spectral attributes of each pixel. LANDSAT image analysis uses the intensity corresponding to different wavelengths to differentiate regions of varying soil or vegetation content from one another. Similar approaches have been applied to MRI, wherein different pulse sequences bring out different characteristics of the magnetic resonance properties of tissues, so that cluster analysis can then be used to segment tissues with similar properties. This approach is very powerful but does require the acquisition of multiple images.

The common standard for validating or comparing segmentation methods is to examine hand-drawn contours on successive sectional scans, usually with some aid from region growing and thresholding. The goal of segmentation methods is to automate this tedious procedure, and one strategy is to begin with some model of the object. This model acts as a bound or guide to the processes, to help eliminate some ambiguity about intensity, local pixel values, or edges. The idea of using a priori knowledge about the possible shapes of objects has not been implemented in a successful segmentation system for organs, but this is a fruitful area for segmentation research.

13.2 Computational Anatomy

Organ shape analysis, including measurement of volumes and identification of surface morphology, has become feasible through modern non-invasive imaging, particularly MRI.
The importance and usefulness of mathematical methods for extracting and characterizing the shape and size of organs and tumors are indicated by the richness of new information to be drawn from these measurements. Neuroanatomy is an example of a field that is beginning to employ these approaches to develop quantitative descriptors of anatomic variability across subjects, age ranges, gender, and species. Knowledge of the range of variability in normal anatomy would allow the detection and quantitative characterization of pathological deviations, for example, changes in the cerebral cortex and degeneration of the frontal lobe that might be linked with mental disease. If it is found that different diseases show distinctive, abnormal patterns of morphology, these patterns may ultimately provide quantitative markers for diagnosis or for assessment of the response to treatment.

Another use of organ shape analysis is to incorporate the anatomic shape and statistical properties of shape variation into algorithms for segmentation of objects that are "isointense" with respect to surrounding tissue. Indeed, although semi-automatic segmentation methods have been applied in the brain, they are often unable to deal with organs such as the heart, where image quality tends to be degraded by motion and other artifacts. The main challenge in this area is to determine which shape information is appropriate for a particular application and to make sure that the use of such prior assumptions does not bias the estimated values of model parameters. Once a shape model has been fitted to the data, dynamic properties of the object such as degree of bulging or narrowing, local deformation, and strain relationships may be derived and used to distinguish between normal and abnormal physiologic behavior.

Computational anatomy also includes the characterization of tissue architecture or surface texture, as in, for example, the analysis of changes in trabecular bone structure associated with osteoporosis. In this case, conventional methods of parameterizing the changes in image intensity have recently been extended by the use of descriptors based on Fourier space images and by fractal analyses. At present these analyses are mainly restricted to two-dimensional slices, but there is great interest in generalizing the approaches to treat three-dimensional data.

¹ Region growing is the process of identifying some pixels in the image that are clearly associated with different structures and then adding to each its neighboring pixels with similar intensities until regions of similar pixel intensities have been built up.
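The Fourier-space texture descriptors mentioned in Section 13.2 can be illustrated with a radially averaged power spectrum of a two-dimensional patch: fine textures concentrate power at high spatial frequencies, coarse textures at low ones. This is a minimal sketch; the bin count and the striped test pattern are illustrative choices, not values from the text:

```python
import numpy as np

def power_spectrum_profile(image, nbins=8):
    """Radially averaged power spectrum of a 2-D patch: a simple
    Fourier-space texture descriptor. The mean is removed so the DC
    term does not dominate; power is binned by distance from the
    zero-frequency center and normalized to sum to 1."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)       # radial spatial frequency
    edges = np.linspace(0.0, r.max() + 1e-9, nbins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    p = power.ravel()
    prof = np.array([p[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(nbins)])
    total = prof.sum()
    return prof / total if total > 0 else prof

# A fine texture: columns alternating 0/1, i.e. power at the highest
# horizontal frequency.
fine = (np.indices((8, 8))[1] % 2).astype(float)
profile = power_spectrum_profile(fine)
```

A short profile like this is the kind of feature vector that could, in principle, be compared across patients to quantify changes such as loss of trabecular bone structure; extending it to three dimensions is the open direction noted above.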
Major factors influencing the results obtained with this type of analysis are the intrinsic resolution of the images and the ability to separate random noise from spatially coherent features. If these factors are properly taken into account, it is possible that textural parameters might be used to describe the microarchitecture of two- and three-dimensional image data and provide a mechanism for quantifying the severity of disease.

13.3 Registration of Multimodality Images

Registration of multimodality images is particularly important for planning surgical and radiation treatment, following changes in tissue morphology associated with disease progression or response to therapy, and relating anatomic information to changes in functional characteristics such as glucose uptake, blood flow, and cellular metabolism. The need to perform such registration is well established, and it has been studied quite widely for the case of registering rigid objects. The techniques that have been reported vary in detail but can be classified based on the features that are being matched. Such features include external markers that are fixed on the patient, internal anatomic markers that are identifiable on all images, the center of gravity of one or more objects in the images, crestlines of objects in the images, or gradients of intensity. One may also minimize the distance between corresponding surface points of a predefined object.

The identification of similar structures in images is a prerequisite for many image registration techniques. In some efforts this has been achieved as a manual procedure and in others by automated segmentation. When there is the possibility of tissue deformation between examinations, as is the case with soft tissue structures in the abdomen or pelvis, elastic warping is required to transform one data set into the other. The difficulty lies in defining enough common features in the images to enable specifying appropriate local deformations. Of particular interest, for example, is analysis of wall motion in the heart, which necessitates correlating positions of particular regions as a function of time in order to estimate the variations in stress and strain associated with different pathologies.

13.4 Synthesis of Parametric Images

Parametric images can be derived from any series of images using mathematical models that describe physiologic processes. Examples are flow, motion, metabolic rates, MRI relaxation times, and diffusion parameters. The motivation for producing such images is the visualization of the spatial distribution of parameters that are related to metabolic or biologically relevant tissue properties.
The ability to quantify changes in these parameters that reflect disease progression or response to therapy would be extremely valuable in assessing the effectiveness of a treatment and in providing an early indication of the need for an alternative type of therapy. Two examples of situations in which parametric images provide information that is not available from conventional anatomic images are (1) distinguishing between radiation necrosis and an active tumor through metabolite images calculated from MR spectroscopic imaging data and (2) estimating flow parameters from velocity-encoded MR images.
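As a concrete sketch of parametric-image synthesis, the fragment below fits a monoexponential decay S(t) = S0·exp(-t/T), a crude stand-in for an MRI relaxation-time map, independently at each pixel of a time series by log-linear least squares. Pixels with low mean signal are returned as NaN rather than fitted, reflecting the need to exclude low signal-to-noise regions. The model, the threshold, and the two-pixel array are illustrative assumptions:

```python
import numpy as np

def relaxation_map(series, times, min_signal):
    """Fit S(t) = S0 * exp(-t / T) at every pixel of a (T, H, W) time
    series by linear least squares on log(S), and return the map of
    time constants T. Pixels whose mean signal is below min_signal are
    masked out (returned as NaN) instead of being fitted."""
    nt, h, w = series.shape
    flat = series.reshape(nt, -1).astype(float)
    keep = flat.mean(axis=0) >= min_signal      # exclude low-SNR pixels
    out = np.full(h * w, np.nan)
    if keep.any():
        logs = np.log(flat[:, keep])
        design = np.vstack([times, np.ones_like(times)]).T
        coef, *_ = np.linalg.lstsq(design, logs, rcond=None)
        out[keep] = -1.0 / coef[0]              # slope of log S is -1/T
    return out.reshape(h, w)

# Two-pixel toy series: one clean decay with T = 2, one empty pixel.
times = np.array([0.0, 1.0, 2.0, 3.0])
series = np.zeros((4, 1, 2))
series[:, 0, 0] = 4.0 * np.exp(-times / 2.0)
tmap = relaxation_map(series, times, min_signal=0.1)
```

A robust implementation would replace the log-linear shortcut with one of the fitting schemes named in this chapter (constrained least squares, simplex, simulated annealing, or maximum entropy), which behave better for noisy data.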

The mathematical research needs for the synthesis of parametric images relate to designing parameter estimation schemes for the physiologic models being considered. The key consideration is the stability and robustness of the algorithms used to fit data at each point in space. Because of the two- or three-dimensional nature of the data, there are inevitably regions where the pixel values have very low signal-to-noise ratios. It is therefore necessary to mask out such regions or to use a fitting algorithm that behaves well for noisy data. For this reason, a wide range of different algorithms has been considered, including constrained least squares, simplex, simulated annealing, and maximum entropy.

A particularly difficult situation arises in the analysis of time series of three-dimensional MRI, positron emission tomography (PET), and single photon emission computed tomography (SPECT) data from the brain, where the differences from the baseline are significant because they represent areas of functional activation, although these differences may be on the order of just a few percent. Here, the spatial and temporal correlations may be best studied using a data reduction strategy (e.g., singular value decomposition), which could incorporate continuity and anatomic priors. A complication in the analysis of such data occurs if the subject moves significantly during the data acquisition, so that successive images are no longer in correct registration.

13.5 Data Visualization

As biomedical imaging advances in the sophistication of its data acquisition techniques, the lack of adequate tools for image processing and visualization has become a major bottleneck. The need for such tools is particularly acute for the combined interpretation of three-dimensional anatomic and physiologic or metabolic data. The challenge arises not only from the large size of the available data sets but also from the complexities of the relationships among the different data.
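One of the simplest volume-visualization operations in routine use is the maximum pixel intensity projection behind the reprojected MR angiogram: the volume is collapsed to a single view by keeping the brightest voxel along each ray, and repeating this for rotated copies of the volume yields the "cine" loop. A minimal axis-aligned sketch (a real implementation would resample along arbitrary view angles):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: collapse a 3-D volume to a 2-D
    image by keeping the brightest voxel along the chosen axis. High-
    intensity structures such as vessels survive the projection even
    when most of the volume is dark."""
    return np.asarray(volume).max(axis=axis)

# Toy 2x2x2 volume with a single bright "vessel" voxel.
vol = np.zeros((2, 2, 2))
vol[1, 0, 1] = 5.0
view = mip(vol, axis=0)   # project along the first axis
```
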
One approach to this problem is data reduction or the synthesis of parametric images, as described in previous sections. Other approaches include the use of color overlays of physiologic parameters onto anatomic structures. Such approaches are useful for making anatomic correlations but have limited scope for providing a quantitative interpretation of the relationships among different parameters.

Visualization techniques currently being investigated in computer graphics research and being applied to the analysis of biomedical data include surface-rendered anatomical displays with rotation and shading, volume-rendered cut-outs with enhanced emphasis of particular objects, transparent surfaces within surfaces with color shading and rotation, and reprojection techniques using various weightings of pixels of interest, such as maximum pixel intensity projection and depth weighting. An example that is already in routine use is the MRI angiogram, which is visualized in three dimensions by reprojecting at multiple different angles to form a sequence of images that can be played back in "cine" mode to simulate the rotation of the vessels. The design and evaluation of methods for representing biomedical image data constitute a most promising area for research, requiring close interaction between computer scientists and the clinicians who will ultimately interpret the data.

13.6 Treatment Planning

The planning of surgical procedures, hyperthermia, cryosurgery, and radiation therapy could benefit considerably from advances in high-speed computing and image processing. The sophistication of currently used techniques derives to a large extent from improvements in the capability for acquiring volumetric images and in the interactive manipulation of those data sets within the treatment room or surgical suite. The need for immediate visualization and direct spatial correlation of structures within the body is addressed in Chapter 12 in the context of the development of interventional procedures. For hyperthermia, cryosurgery, and radiation treatment planning, an additional problem is deciding how to tailor the therapy so that the greatest possible effect is obtained within the target while the surrounding normal tissue is subjected to as low an effect as possible. There are several different stages in planning and implementing such therapy.
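Before turning to those stages, the trade-off just stated (greatest possible effect within the target, as low an effect as possible outside it) can be caricatured as a scoring function applied to simulated dose distributions, one per candidate plan. The linear penalty and its weight are illustrative assumptions, not a clinical criterion:

```python
import numpy as np

def plan_score(dose, target, weight=1.0):
    """Score one simulated dose distribution: reward the mean dose
    inside the target region, penalize the mean dose delivered to the
    surrounding normal tissue. The linear trade-off is a hypothetical
    stand-in for a real "best plan" criterion."""
    return dose[target].mean() - weight * dose[~target].mean()

def best_plan(doses, target):
    """Exhaustive search over candidate plans (one simulated dose
    distribution each); return the index of the highest-scoring one."""
    return int(np.argmax([plan_score(d, target) for d in doses]))

# Two toy 2-D "dose distributions" over a 4-pixel slice, one target pixel.
target = np.array([[True, False], [False, False]])
plan_a = np.array([[10.0, 5.0], [5.0, 5.0]])   # spills dose into normal tissue
plan_b = np.array([[10.0, 1.0], [1.0, 1.0]])   # same target dose, less spill
chosen = best_plan([plan_a, plan_b], target)
```

Real planning replaces the exhaustive loop with optimization over thousands of combinations of beam geometries, wedge placements, and fractionation schedules, and replaces the scoring function with clinically validated dose-volume criteria.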
The first is to define the target to be treated, which in almost all cases requires the visualization of the lesion relative to the normal anatomy using diagnostic imaging modalities such as CT or MRI. Once the size and location of the lesion have been determined, either by manual examination of the data or by more sophisticated image segmentation, it is necessary to determine how best to deliver the therapy. The complexity of the computations required to determine optimal delivery of therapy depends on the particular therapy used. For example, when radiation is used, it is necessary to determine the combination of radiation beam geometries, wedge placements, and dose fractionation that would be likely to deliver the maximal dose to the target with a minimal dose to surrounding tissue. This assessment requires sophisticated modeling of radiation dose distributions and may involve optimization over a very large number of different therapy plans. For tissue ablation by heating or freezing, it is necessary to determine through a model the temperature distribution that would provide the maximal effect. A demanding computation is also basic to the capability to interactively monitor the implementation of the ablation plan using a temperature-sensitive parameter such as MR relaxation time.

The computational demands associated with modern treatment planning are caused mainly by the large number of different ways in which therapy can be delivered and the time required to simulate multiple three-dimensional dose or temperature distributions. Even when the simulations can be accomplished rapidly with high-performance computing, there is still the issue of identifying a criterion to define which of the many thousands of possible options constitutes the "best" plan. Whether this identification must be achieved using interactive visual refinement to direct the optimization or whether the process can be fully automated remains to be determined for all except the simplest treatment schemes. When the computational problem associated with defining the treatment plan has been solved, the coordinate system of the plan is then registered as accurately as possible with the patient's frame of reference. This registration is typically achieved using stereotactic frames, masks, or external markers and may need to be repeated many times in the case of temperature or dose fractionation.

13.7 Research Opportunities

There are numerous research opportunities in the field of contemporary biomedical image processing.
· Some of the most challenging research opportunities fall in the area of extending traditional approaches to segmentation and object classification in order to include shape information rather than merely image intensity. These techniques, when combined with the ability to accurately register deformable objects, would make a major contribution to interpretation of images from the heart, abdomen, and pelvis.

· A related area for research is deriving quantitative methods for analyzing tissue function and correlating that information directly to the anatomy. This research area includes the application of statistical approaches for identification of subtle changes in time series of three-dimensional images obtained for mapping brain function, the use of prior anatomic information to constrain the reconstruction of low signal-to-noise metabolic data, and the derivation of parametric images that accurately describe the kinetics of biologically relevant tracers.

· Improved techniques for visualizing multi-dimensional data are critical for establishing the relevance of new types of image data, making the information that they represent accessible to a wide audience, and understanding their relationship to conventional anatomic images.

· Radiation therapy and tissue ablation by heating or freezing require precise definition of the anatomic targets and a physical characterization of the processes leading to destruction of abnormal tissues while preserving nearby normal tissues. The need for simulation of the treatment process in three dimensions is a major challenge for high-performance computing and algorithm development.

13.8 Suggested Reading

1. Bezdek, J.C., Hall, L.O., and Clarke, L.P., Review of MR image segmentation techniques using pattern recognition, Am. Assoc. Phys. Med. 20 (1993), 1033-1048.
2. Bracewell, R.N., Two-Dimensional Imaging, Prentice Hall, Englewood Cliffs, N.J., 1995.
3. Maurer, C.R., and Fitzpatrick, J.M., A review of medical imaging registration, in Interactive Image-Guided Neurosurgery, R.J. Maciunas, ed., Amer. Assoc. Neurological Surgeons, Park Ridge, Ill., 17-44, 1993.
4. Robb, R.A., ed., Visualization in Biomedical Computing 1994, Proc. SPIE, vol. 2359, Bellingham, Wash., 1994.
5. Rosenmann, J., and Cullip, T., High performance computing in radiation cancer treatment, in High Performance Computing in Biomedical Research, T.C. Pilkington, B. Loftis, J.F. Thompson, S.L.-Y. Woo, T.C. Palmer, and T.F. Budinger, eds., CRC Press, Boca Raton, Fla., 465-476, 1993.

6. Thirion, J.-P., Fast Non-Rigid Matching of 3D Medical Images, INRIA research report 2547, INRIA, Le Chesnay, France, 1995.
7. Udupa, J.K., and Herman, G.T., eds., 3D Imaging in Medicine, CRC Press, Boca Raton, Fla., 1991.
8. Udupa, J.K., and Samarasekera, S., Fuzzy connectedness and object definition, Proc. Med. Imaging 2431 (1995), 2-11.
