The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.
A Collaborative Contracting Strategy

In this report, the panel has recommended rigorous evaluation strategies for assessing the projects of community-based organizations (CBOs) and counseling and testing sites. In making these recommendations, we have also noted that some projects may be unable or unwilling to participate in evaluation research because they do not have the funding, time, or appropriately trained personnel to undertake the necessary tasks. In this appendix, the panel lays out a possible strategy that would ensure that projects have the necessary resources to conduct evaluation research and that would also lead to separate but mutually informing communities of evaluation experts.

The proposed tactic for evaluating projects located in CBOs and in counseling and testing sites is to use a contract bidding procedure rather than the request-for-proposal process. The contract to be bid on would describe the following:

- demographic characteristics of the target population;
- the scope of the program;
- endpoint objectives (behavioral, psychological, or biological);
- program content objectives;
- policy objectives;
- evaluation objectives, for formative and process evaluations and for evaluating whether the project makes a difference and what works better; and
- evaluation methods.

A prospective contractor, in collaboration with a competent evaluation group, would explain in detail its approach to designing a program to meet the contract requirements. The prospective contractor would bid on the contract, relating the bid to program design and to the contract requirements. The evaluation procedure, responsibilities, and budget would be predetermined and included as a contract requirement. The evaluation processes, including random assignment, monitoring, and data collection and analyses, would be dictated primarily by the scope of the program and by whether outcomes are to be assessed internally (as the responsibility of the contractor) or externally (as the responsibility of an outside evaluator who analyzes multiple CBO contractors).

The contracting option also engenders further choices. Among these is deciding whether to develop separate contractual arrangements with each project that agrees to evaluation or to develop a single large contract to cover the evaluation of a sample of sites. Developing separate contractual arrangements may involve an independent evaluation team submitting evaluation proposals that show how the CBO would collaborate with such a team and how the evaluation would be carried out. The independence of the evaluation team is justified on grounds of credibility and scientific integrity; however, collaboration with the CBO is essential. For CDC to contract separately with six to eight CBO-evaluator groups would be feasible but managerially burdensome. Nevertheless, the strategy arguably is sensible on scientific grounds. In effect, over the long run the approach builds separate but mutually informing communities of experts. The current dependence of AIDS research on only a few universities and research institutes is often a sound strategy for massive evaluation but does little to develop local capacity for routinely and locally generated high-quality evaluation.
Local evaluative capacity avoids dependence on a single principal investigator who makes decisions about the evaluation of a range of complex projects. Campbell (1987:402) argues that splitting large studies into two or more parallel studies is desirable on grounds that it increases the "size and autonomy of a mutually monitoring scientific community." The latter is essential in building scientific understanding in prevention program research and evaluation.

Contracting with one evaluation group that collaborates with, perhaps, six to eight sites is also feasible, to judge from work by Hubbard and colleagues (1988), among others. This approach is managerially less burdensome than contracting independently with evaluator-project combinations, but it has the disadvantages of being vulnerable to the will of a single decision maker (i.e., the principal investigator) and of not building

local expertise in the design, implementation, and analysis of such evaluations.

REFERENCES

Campbell, D. T. (1987) Guidelines for monitoring the scientific competence of preventive intervention research centers: An exercise in the sociology of scientific validity. Knowledge: Creation, Diffusion, Utilization 8:389-430.

Hubbard, R. L., Marsden, M. E., Cavanaugh, E., Rachal, J. V., and Ginzburg, H. M. (1988) Role of drug-abuse treatment in limiting the spread of AIDS. Reviews of Infectious Diseases 10:377-384.