Treatment of Posttraumatic Stress Disorder: An Assessment of the Evidence
A Challenge to Internal Validity
The available evidence on PTSD treatment is limited in that relatively few high-quality randomized controlled trials (RCTs), whether controlled by placebo, wait list, or an equivalent condition, have been performed for most modalities. The committee excluded a large volume of studies that were case reports, case series, or controlled studies without randomization. The remaining studies varied in their adherence to current standards of design quality: many had problems with sample size, assessor blinding or independence, or high dropout rates, and many had short follow-up after treatment concluded, or none at all.
A characteristic of most studies of PTSD reviewed by the committee is a high degree of attrition of participants from assigned treatment, whether pharmacologic or psychotherapeutic. Attrition may reflect the underlying condition and patient characteristics that make adherence to any form of therapy difficult, or it may follow from improvement or worsening of symptoms. High dropout rates are common in studies of a broad range of psychological conditions. In a review of studies by Khan (2001a, b), dropout rates in trials of antidepressants averaged 37 percent and were similar between treatment and placebo arms; in trials of antipsychotics, dropout rates were in the 50 to 60 percent range, somewhat greater in treatment than in placebo arms and intermediate among active controls.
A particularly difficult challenge is the assessment of efficacy in the face of different rates of dropout for different study treatments. As an illustration of this challenge (Figure 5-1), consider a study of an intervention with identical 50 percent remission rates in the intervention and control arms. Assume that 25 percent of patients in the intervention arm, all of them patients who are not improving, fail to return for follow-up evaluation (perhaps due to treatment side effects), versus 5 percent of control subjects, again all nonimprovers. When the analysis focuses only on those with follow-up evaluations, this ineffective intervention will appear effective (67 percent remission rate versus 53 percent for controls). The point of the illustration is not that a study with dropouts is invalid, but rather that an improper analysis (in this case, among completers only) in the face of differential dropout rates that are related to the clinical course can produce a biased result.
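The arithmetic behind this illustration can be verified with a short sketch. The arm size of 100 and the helper function are illustrative assumptions, not details from the report; the sketch assumes, as in the example, that every dropout occurs among patients who did not remit.

```python
def completer_remission_rate(n_randomized, n_remitters, n_dropouts):
    """Remission rate computed among completers only, assuming all
    dropouts occur among patients who did not remit (illustrative)."""
    n_completers = n_randomized - n_dropouts
    return n_remitters / n_completers

# Intervention arm: 100 randomized, 50 remit, 25 nonimprovers drop out.
intervention = completer_remission_rate(100, 50, 25)  # 50 / 75
# Control arm: 100 randomized, 50 remit, 5 nonimprovers drop out.
control = completer_remission_rate(100, 50, 5)        # 50 / 95

print(f"{intervention:.0%} vs {control:.0%}")  # prints "67% vs 53%"
```

Although the true remission rate is 50 percent in both arms, restricting the analysis to completers inflates the apparent rate more in the arm with greater dropout, producing the spurious difference.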
If outcome data are not obtained from patients who drop out of treatment, those data will be missing from the analysis. It is critical to recognize, however, that dropout from treatment does not necessarily mean that outcome data must be missing. With aggressive and systematic follow-up procedures, outcome data can still be obtained from many subjects who discontinue treatment, as demonstrated in studies by Schnurr and colleagues (2003, 2007), in which outcomes were successfully measured in a high proportion of participants who discontinued treatment. The committee viewed missing outcome data partly as a result of choices made in study