As in previous ACR surveys, among nuclear medicine specialists the ACR was interested only in those with major ties to radiology; a major tie to radiology was operationalized as holding American Board of Radiology (ABR) certification and/or being a member of the ACR (Sunshine et al., 2002). On this basis, approximately two-thirds of the original sample of nuclear medicine specialists were omitted from consideration.

The total sample of interest, composed of the four strata of interventionalists, all other allopathic radiologists, osteopathic radiologists, and nuclear medicine specialists of interest, consisted of 3,090 physicians. From these, 1,924 usably complete responses were received. In addition, apart from completed questionnaires, the ACR received information that 21 addressees were deceased, 6 were no longer practicing in the United States, and 6 were not radiologists. The response rate was thus (1,924 + 6)/(3,090 − 21 − 6) = 63 percent.
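The response-rate arithmetic above can be reproduced directly. This is a minimal sketch using the figures stated in the text; the variable names are my own, and the assignment of one group of 6 to the numerator versus the other to the denominator follows the formula as written:

```python
# Figures taken from the text; names are illustrative, not the ACR's.
total_sampled = 3090        # physicians in the four strata of interest
usable_responses = 1924     # usably complete questionnaires returned
deceased = 21               # addressees reported deceased
not_practicing_us = 6       # no longer practicing in the United States
not_radiologists = 6        # addressees who were not radiologists

# Per the formula: one group of 6 counts with the responses, while the
# deceased and the other group of 6 are dropped from the denominator.
numerator = usable_responses + not_practicing_us
denominator = total_sampled - deceased - not_radiologists
response_rate = numerator / denominator
print(f"{response_rate:.0%}")  # → 63%
```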

Responses were weighted so that the weighted statistics would be representative of the answers that would have been received if all physicians in the United States in the four strata had been surveyed and had responded. The weighting process has been described previously (Sunshine et al., 2002).

To begin, logistic regression analysis was employed to determine how many different sets of weights were to be used in each of the four strata. For the 2,743 physicians in the “all other allopathic radiologists” stratum, the analysis showed that ACR membership and age had statistically significant effects on the response rate, while sex, geographic region, and listing in the Masterfile as a “radiologist,” “diagnostic radiologist,” or “radiology subspecialist” did not. Accordingly, 10 weighting categories, based on whether or not a physician was an ACR member and his or her age, were used, and responses in each category were weighted by the reciprocal of the category’s response rate. A similar logistic analysis of the 202 interventionalists in the sample resulted in two weighting categories, based on whether or not the physician was an ACR member. Because logistic regression showed no statistically significant effect, only one weighting category was used for the nuclear medicine specialists of interest and one for the osteopathic radiologists.

After all responses in each weighting category were given a weight equal to the reciprocal of the response rate for that category, these weights were multiplied by the reciprocal of the sampling rate to complete the process of making responses representative of the entire U.S. population of radiologists. For example, if a weighting category had a response rate of 65 percent and it was part of a stratum that had been sampled at the general 8 percent sampling rate, then all responses in that weighting category were given a weight of (1/0.65) × (1/0.08) = 19.23.
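The two-factor weight described above can be sketched as a one-line function. The function name is my own; the worked example uses the 65 percent response rate and 8 percent sampling rate from the text:

```python
def survey_weight(category_response_rate: float, sampling_rate: float) -> float:
    """Weight = (reciprocal of the category's response rate) ×
    (reciprocal of the stratum's sampling rate)."""
    return (1 / category_response_rate) * (1 / sampling_rate)

# Worked example from the text: 65% response rate, 8% sampling rate.
w = survey_weight(0.65, 0.08)
print(round(w, 2))  # → 19.23
```

Each response in the category is then counted `w` times in the weighted statistics, so the respondents stand in for the full U.S. population of their category.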

Data Quality Improvement

Every survey has some deficient data—that is, missing items, responses not in accordance with directions given by the questionnaire, and responses that are inconsistent or have other problems. The leading tool to minimize data deficiencies in this survey was the designation of the 12 items on the questionnaire judged most crucial as “core questions.” When questionnaires were returned, CSR checked that these 12 items were indeed answered, and made three designated consistency checks involving them. If there were any problems with the core items, CSR telephoned the respondent to obtain the missing response(s) and/or resolve the consistency problems.
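The screening step above can be illustrated with a small sketch, assuming each returned questionnaire arrives as a dict keyed by question id. The question ids and the sample consistency rule are hypothetical illustrations, not the actual instrument or CSR's actual checks:

```python
# Hypothetical core-item ids; the real survey designated 12 core questions.
CORE_QUESTIONS = [f"q{i}" for i in range(1, 13)]

def needs_callback(response: dict) -> bool:
    """Flag a returned questionnaire for a follow-up telephone call if any
    core item is missing or a designated consistency check fails."""
    # Completeness: every core item must have a non-empty answer.
    missing = [q for q in CORE_QUESTIONS if response.get(q) in (None, "")]
    if missing:
        return True
    # Example consistency check (hypothetical): clinical hours reported
    # should not exceed total weekly hours reported.
    if response.get("hours_clinical", 0) > response.get("hours_total", 0):
        return True
    return False
```

A response passing both checks would be accepted as-is; one failing either check would trigger the telephone follow-up described above.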

The National Academies of Sciences, Engineering, and Medicine

Copyright © National Academy of Sciences. All rights reserved.