
"It was my pleasure to welcome forty young psychiatrists to the ECNP Oxford school again last week. It was a very stimulating occasion with high levels of interaction and almost constant sunshine. It always forces me to revamp my own thinking about the topics that others cover (which I attended) and my own talks about the use of drugs in psychiatry. It provides me with a kind of mental landmark for where I think we are in various areas." Prof Guy Goodwin

And a message from the other president

In listening to the superb session on meta-analysis, I was again struck by the ambiguous status of drug trials in psychiatry – our evidence base. There have been a lot of treatment trials in psychiatry and the many thousands of patient years they represent are available in one form or another for data synthesis. We are conditioned to think of the results in terms of a single statistic: the mean change in a rating scale for an active drug of interest versus that for a comparator (often placebo). How good is this evidence? Are clinical trials simply too artificial to be true?

The good news is that network meta-analysis seems to find that trials are generally coherent. In other words, the old canard that drug A beats drug B, drug B beats drug C, but drug C beats drug A is broadly untrue. Effect sizes can be successfully combined to rank drugs by efficacy and acceptability (drop-out). Furthermore, my own experience in relation to mania, for example, accords with the order the trials generate. This is one way in which the overall validity of trials is confirmed. If the whole ensemble were a mess of incoherent loops, it would suggest we had a serious problem. Instead we have effect sizes for different drugs that can be successfully compared, which suggests that they are derived, on average, from similarly designed trials in similar patient groups.
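To make the idea of a coherent loop concrete: in a consistent network, the indirect estimate of A versus C obtained by chaining A-versus-B and B-versus-C evidence should agree with the direct A-versus-C estimate. A minimal sketch, using invented standardized effect sizes rather than real trial data:

```python
def loop_inconsistency(d_ab: float, d_bc: float, d_ac: float) -> float:
    """Absolute disagreement between the direct A-vs-C effect (d_ac)
    and the indirect estimate chained through B (d_ab + d_bc).
    All effects are standardized mean differences on a common scale."""
    return abs(d_ac - (d_ab + d_bc))

# Hypothetical effects: A beats B by 0.2, B beats C by 0.3,
# and direct trials give A vs C as 0.45 (all numbers made up).
gap = loop_inconsistency(0.2, 0.3, 0.45)
print(gap)  # a small gap: the loop is broadly coherent
```

A large gap for the loop would be the "A beats B, B beats C, but C beats A" situation; real network meta-analysis tests this formally across many loops at once, with the uncertainty of each estimate taken into account.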

So this is supportive of our evidence base, and we might assume that the effect sizes we see are indeed generalizable to clinical practice. In the case of the antipsychotics, average values of around 0.5 seem acceptable (‘moderate’ in the terminology of traditional meta-analysis, but ‘clear to the naked eye’) and are certainly superior to those seen in many drug trials conducted in general medicine. So maybe we should just relax and keep repeating that the drugs work.
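For reference, the effect size in question is a standardized mean difference (Cohen's d): the difference in mean rating-scale change between the drug and comparator arms, divided by the pooled standard deviation. A sketch with illustrative, invented numbers chosen to land on the 0.5 figure mentioned above:

```python
import math

def cohens_d(mean_drug: float, mean_placebo: float,
             sd_drug: float, sd_placebo: float,
             n_drug: int, n_placebo: int) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2) \
                 / (n_drug + n_placebo - 2)
    return (mean_drug - mean_placebo) / math.sqrt(pooled_var)

# Invented example: drug arm improves 10 points, placebo 6 points,
# common SD of 8 on the rating scale, 100 patients per arm.
d = cohens_d(10.0, 6.0, 8.0, 8.0, 100, 100)
print(round(d, 2))  # 0.5, i.e. a 'moderate' effect
```

The same statistic is what meta-analysis pools across trials, which is why rising placebo-arm change (discussed below) shrinks the apparent effect even when the drug arm behaves the same.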

The bigger concern is the antidepressants. The antidepressant literature highlights the problem of the placebo response, or rather, the changes taking place in the placebo arm of clinical trials, not all of which are due to placebo expectation, of course. These response rates have risen remorselessly over the last 30 years. What does this mean? It seems to represent a vicious circle, in which the demand from companies to do trials quickly has driven up patient and site numbers, together with treatment response rates. This effect is most striking at US sites. The most parsimonious explanation seems to be that perverse incentives of all kinds conspire to drive into studies patients who will respond to anything. The main variable contributing to the effect size is the site, not the drug!

This has probably resulted in the failure of potentially useful drugs, and in a loss of credibility for those of us who use medicines to treat depression, because if one lumps all studies together, effect sizes are small (and getting smaller) rather than moderate. The issue for practice is where the profile of your patient maps onto the continuum between high and low placebo responding. And of course, we do not have good a priori methods to predict who sits where. So evidence-based psychiatry remains ultimately an underpinning but abstract source of reassurance in the presence of this kind of uncertainty. It tells us things we know and things we don’t know. But it cannot tell us what to do, and it currently works with average effects that are ultimately of limited interest to the individual patient.

The next step in meta-analysis must be the move to the level of individual patient data and patient stratification. The ECNP summit almost 4 years ago identified the need, but it has proved difficult to kick-start the process of data collection and curation. The opportunity is nevertheless still there. The question for the future will be not how much a drug works in a large population, but whether it will work for the person in front of me (and perhaps why). We need answers.

Best regards,

Guy Goodwin, ECNP President