Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Program Assessment

D  The Life Cycle of Technology, Systems, and Programs

As noted in Chapter 2, the framework articulated for evaluating and deploying information-based technologies, programs, and systems acknowledges that the proposed inquiries are unlikely to yield definitive answers (i.e., "yes" or "no") at a given point in time, and also that the answers may well change with time due to changes in the operational environment. This reality suggests that the policy regime (that is, what to make of and do with the answers to the questions provided by the framework) must be linked to the program life cycle. In principle, the complete program life cycle begins with research, goes through development and deployment and then into operations, maintenance, and upgrade, and ends with program retirement.

Mature models exist in other application areas that provide some guidance for how to proceed in this domain. For example, before new pharmaceuticals are approved by the Food and Drug Administration, they must pass through multiple stages of testing designed to assess drug efficacy (therapeutic benefit) as well as safety (acceptable risks) in clinical trials. After approval is obtained for deployment, ongoing monitoring evaluates effectiveness and risks in the real-world environment; drugs may be recalled if they fall below acceptable standards. Similarly, product development programs typically rely on increasingly constrained testing regimes that, prior to deployment of a new system, mimic the real-world operating environment as nearly as possible. Even product recall is not uncommon if, after deployment, a product is deemed to be ineffective or otherwise unacceptable (e.g., for safety reasons).
Similar processes exist to guide software development programs. For example, the National Aeronautics and Space Administration has for many years relied on independent verification and validation (IV&V) for safety-critical software applications, and the Software Engineering Institute and others have defined guidelines for verification and validation of large software applications. But these processes do not effectively address the complexities inherent in this class of information-based programs.

Multiple versions of what constitutes a program life cycle can be found in the literature; here the committee describes a generic model with the following phases:

Identification of needs. Analyze the current environment and the solutions or processes currently in use; identify capability gaps or unmet needs.

Research and technology development. Develop potential solutions to meet the identified needs.

Systems development and demonstration. Develop and demonstrate the integrated system.

Operational deployment. Complete production and full deployment of the program.

Operational monitoring. Provide for ongoing monitoring to ensure that the deployed capability remains both effective and acceptable.

Systems evolution. Institute upgrades to improve or enhance system functionality.

An effective policy regime should address each of these phases in turn, as indicated below.

Identification of needs. During this phase, questions 1 and 2 from the summary of framework criteria for evaluating effectiveness in Section 2.5.1 of Chapter 2 should be addressed; that is, the research should proceed only if a clear purpose and a rational basis are established and documented. Measures of effectiveness (benefit) and measures of performance (risk) should be drafted during this phase.

Research and technology development.
During this phase, testing should occur in a controlled laboratory setting, the equivalent of animal testing in the drug development process or of developmental test and evaluation (DT&E) in traditional technology development programs. A key issue in testing information-based programs is access to data sets that adequately simulate real-world data so that algorithm efficacy can be evaluated. Ideally, standardized (and anonymized) data sets should be generated and maintained to support this phase of testing; the data sets maintained by the National Institute of Standards and Technology for use in evaluating fingerprint and other biometric detection algorithms may serve as a useful model.1 The program should proceed beyond this phase only after demonstration that a sound experimental basis exists in a laboratory setting; measures of effectiveness and measures of performance will likely require refinement during this phase of program development.

Systems development and demonstration. During this phase, the program should be field-tested, subjected to the equivalent of human subject trials in the drug development process or of operational test and evaluation (OT&E) in traditional technology development programs. The test environment must mimic real-world conditions as nearly as possible, and so both the simulation environment and the requisite data sets must be designed and implemented with appropriate oversight. If it is necessary, for example, to use real-world data, then the test regime must provide appropriate protections to guard against inappropriate use of either the data or the results. During this phase of testing, the various elements of question 3 of the effectiveness criteria summary in Section 2.5.1 should be addressed (Has the program been field-tested? Has it been tested to take into account real-world conditions? Has it been successful in predicting historical events? Have experimental successes been replicated?), as should questions 4, 6, and 7 (scalability, capability for integration with relevant systems and tools, and robustness in the field and against countermeasures). In addition, the development team should respond to questions 8 and 9 (guarantees regarding the appropriateness and reliability of data, and provision of appropriate data stewardship). Also, given the class of programs under consideration in this report, a requirement for IV&V is needed at this phase of the life cycle. The IV&V process should review results from prior phases of testing and address the inquiries in question 10 (objectivity).
Measures of effectiveness and measures of performance should be finalized for use in ongoing monitoring of the program if it is subsequently deployed operationally.

Operational deployment. The final gate prior to operational deployment is an agency-level review of all items delineated in the summary of criteria for evaluating consistency with laws and values in Section 2.5.2, assurance that an ongoing monitoring process is in place, and definition of the conditions for operational deployment (e.g., threshold values for key measures). This review process should ensure that compliance is documented and reviewed in accordance with question 12 of the effectiveness criteria summary in Section 2.5.1.

1 See http://www.itl.nist.gov/iad/894.03/databases/defs/dbases.html#finglist for more information.
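The "final gate" logic described above can be sketched in code. This is an illustrative sketch only; the report prescribes no implementation, and the measure names and threshold values below are invented for the example.

```python
# Hypothetical sketch of an agency-level deployment gate: approve
# deployment only if every key measure is present and meets or exceeds
# its agency-defined threshold. Measure names and values are invented.

def deployment_gate(measures, thresholds):
    """Return True only if all required measures meet their thresholds."""
    return all(
        name in measures and measures[name] >= floor
        for name, floor in thresholds.items()
    )

# Hypothetical measures of effectiveness/performance drawn from OT&E:
thresholds = {"detection_rate": 0.90, "replication_score": 0.80}

print(deployment_gate({"detection_rate": 0.93, "replication_score": 0.85},
                      thresholds))  # True: both thresholds met
print(deployment_gate({"detection_rate": 0.93}, thresholds))
# False: a required measure was never finalized or reported
```

The point of the sketch is only that the review defines its go/no-go conditions before deployment, so that the same thresholds can later drive operational monitoring.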
Operational monitoring. Once the system is deployed, ongoing monitoring against the established measures is vital to ensure that it remains both effective and acceptable. If results in real-world operations suggest that, due either to changes in the external environment or to a lack of fidelity in the OT&E environment, system performance does not meet the established thresholds, an immediate agency-level review should be conducted to determine whether operational authorization should be revoked.

Systems evolution. In general, information systems either evolve or become obsolete. They evolve for many reasons: for example, new technologies may become available whose adoption can make the system more usable, new applications may be required in a new operating environment, or new capabilities previously planned but not yet incorporated may be deployed. Because system evolution results in new functionality, reapplication of the framework described in Chapter 2 is usually warranted.
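The operational-monitoring trigger described above, where sub-threshold performance prompts an immediate agency-level review, can be sketched as follows. This is a hypothetical illustration; the report does not specify how monitoring is computed, and the windowing rule, measure, and values are all assumptions.

```python
# Hypothetical monitoring check: flag an agency-level review when the
# mean of the most recent observations of a key measure falls below its
# established threshold. Window size and data are invented for the example.

def needs_agency_review(history, threshold, window=3):
    """Return True if the mean of the last `window` observations is
    below `threshold` (and enough observations exist to judge)."""
    tail = history[-window:]
    return len(tail) == window and sum(tail) / len(tail) < threshold

# e.g., a monthly detection-rate measure drifting downward in operation:
history = [0.92, 0.91, 0.85, 0.79, 0.76]
print(needs_agency_review(history, threshold=0.85))  # True: review warranted
```

Averaging over a window rather than reacting to a single reading is one way to distinguish a genuine environmental shift from ordinary month-to-month noise; an agency would tune that choice to its own measures.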