ever has been successful,8 although at least one committee member believes that the general level of intrahospital correlation is probably underestimated.
Assuming that some of the above issues have been adequately addressed, one arrives at the question of the content and appearance of publicly disclosed information. The structure, level of detail, and other properties of such information will differ by the disclosure media used, by the nature of the information, by the type of provider or practitioner under consideration, and by the level of confidence that can be placed in the numbers, statistics, and inferences to be presented. Some of the more problematic factors in presenting data are noted here. The committee does not take a formal stand on how these matters might be resolved, however, because it believes that those decisions need to be governed by local considerations.
One difficulty in presenting data involves how and in what order HDOs elect to identify or list institutions, clinicians, or other providers. The most obvious choice is to do so alphabetically. This option has the advantage of making it easy to find a given provider and is probably the approach of choice when publicly disclosed information is purely descriptive. It has the related disadvantage, however, of complicating the task of comparison when the issues of interest involve evaluative information.
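The trade-off between the two orderings can be made concrete with a minimal sketch. The provider names, bed counts, and mortality figures below are entirely hypothetical, and the field names are assumptions chosen for illustration; the point is only that the same listing supports easy lookup when sorted alphabetically and easier comparison when sorted on an evaluative measure.

```python
# Illustrative sketch: two orderings of a hypothetical provider listing.
# All names and figures are invented for demonstration purposes.
providers = [
    {"name": "Mercy Hospital", "beds": 220, "risk_adjusted_mortality": 0.034},
    {"name": "County General", "beds": 410, "risk_adjusted_mortality": 0.029},
    {"name": "Lakeside Medical Center", "beds": 150, "risk_adjusted_mortality": 0.041},
]

# Alphabetical order: makes it easy to find a given provider.
by_name = sorted(providers, key=lambda p: p["name"])

# Ordered on an evaluative measure: eases side-by-side comparison,
# but the ranking itself invites inferences the data may not support.
by_outcome = sorted(providers, key=lambda p: p["risk_adjusted_mortality"])

print([p["name"] for p in by_name])
print([p["name"] for p in by_outcome])
```

Note that the evaluative ordering changes whenever the underlying measure or its risk adjustment changes, which is one reason the committee leaves the choice of presentation to local considerations.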
Other approaches are nonalphabetic. HDOs might, for instance, order providers of interest on a noncontroversial or descriptive variable; for a given region, these might be the number of beds for institutions, the number of free-standing clinics for HMOs, or the number of primary care physicians for PPOs. This method, however, does not have the advantages of alphabetic ordering and still has the disadvantages noted above. A variant is to sort providers on the basis of an essentially descriptive variable, such
As evidence of the difficulty of developing index measures in quality analyses, Cleveland Health Quality Choice (CHQC, 1993) has attempted to avoid the methodological pitfalls of trying to combine independent measures of quality. In its recent report, the Cleveland group provides data separately for various quality measures, which include intensive care mortality and length of stay (LOS), medical mortality (for acute myocardial infarction, congestive heart failure, stroke, pneumonia, and chronic obstructive pulmonary disease) and LOS, and surgical mortality and LOS. The group did report global patient satisfaction measures, in addition to separate indicators of satisfaction with such elements of care as admissions, ancillary services, billing, food, and nursing care. In that case, however, the entire approach to assessing patient satisfaction (including the estimation of a global measure) rests on established instruments with proven reliability and validity.