The latest issue of Neuropsychopharmacology has a provocative article by Undurraga and Baldessarini, reporting a meta-analysis of some thirty years of depression trials. This is an area that has grown increasingly, and expensively, frustrating: trials have commonly been compromised by high placebo response rates, obscuring whatever modest treatment response might have been attributable to the drug. The authors confirm that placebo response rates have indeed increased, and attempt to explain why the divergence between drug and placebo appears weaker for newer antidepressants; in the era when the tricyclics were tested, placebo response tended to be lower, amplifying the apparent effect of the drug. They confirm that the size and complexity of clinical trials have increased, with the result that huge (our term) ‘assembly line’ trials require sites to have less-trained staff administer massive, complicated test protocols to large numbers of patients, with a resultant loss of data quality. They also note that less-depressed patients may now be enrolling, which, we would note, fits the oft-cited phenomenon of ‘ATM patients’, who enroll in trials because they need the money, not the treatment. The article further flags the problematic issue of nontraditional trial populations and the uncertain validity they add to the process (e.g., Dimebon, TC-5214).
Counter to all the trends governing clinical trial design over the past decade, the authors suggest that smaller trials, involving fewer sites, might actually have a better chance of yielding data that substantiate a treatment effect. Their guidance for the clinical trial ‘sweet spot’: 2-10 sites and just 30-75 patients. When one considers the dozens of sites, and in some cases thousands of patients, enrolled in recent mega-pivotal trials, this is a highly contrarian viewpoint. But it makes sense to those who have decried the anonymity and lack of accountability that pervade large trials, where both patients and site staff may feel more like cogs in an infernal machine than participants in an important process. Reducing the number of sites reduces variability, and allows clinical investigators a much more ‘hands-on’, personal degree of contact with the actual process.
This will doubtless stir resistance among CROs and sites whose economic livelihood has gradually come to depend upon volume. But quality work should also be valued, and some of the savings that Pharma might (gratefully) realize could be used to reward work well done. NIR has previously noted that, when it comes to clinical trials, ‘Speed kills.’ As it turns out, bigger is also not better, and the personal accountability and involvement that modestly sized trials allow could accentuate both quality and discriminating power. Of course, industry statisticians will kvetch about the anticipated loss of statistical power, but an early adopter here faces far less fiscal risk than a sponsor does when a massive trial fails. This could save time and money; it will be interesting to see who puts it to the test first.
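The power tradeoff at the heart of this argument is easy to make concrete. The back-of-the-envelope sketch below uses a standard normal-approximation power formula for a two-arm comparison; the effect sizes (0.30 for a typical drug-placebo separation diluted by high placebo response, 0.60 if tighter small-trial conduct roughly doubled that separation) are illustrative assumptions of ours, not figures from the article.

```python
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n_per_arm, z_crit=1.96):
    # Approximate power of a two-sample comparison at two-sided
    # alpha = 0.05 for standardized effect size d (normal approximation)
    return norm_cdf(d * sqrt(n_per_arm / 2) - z_crit)

# Illustrative scenarios (assumed effect sizes, not from the article):
# 75 patients per arm with a diluted effect -- power under 50%
print(power_two_sample(0.30, 75))
# the same 75 per arm if better conduct doubled the separation -- power above 90%
print(power_two_sample(0.60, 75))
```

The point is not that 75 patients per arm rescues a 0.30 effect; it plainly does not. It is that if small, closely supervised trials actually deliver the lower placebo response the authors describe, the resulting larger drug-placebo separation can buy back the power that raw sample size would otherwise have to supply.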