How the Quest for Simple Answers Misleads Us


A political proclamation in Nature reminds us that when scientific findings are translated into simpler terms, the facts often get lost along the way. And that’s what steered the development of ICI’s Learn/Unlearn section.

When we’re experiencing emotional or mental difficulties, we’re often understandably eager to find quick and simple answers to our questions. Will an antidepressant make me feel better – yes or no? How much will stimulant drugs help my child perform better in school – a little, a lot, or not at all? How hard will it be to stop using this sleep medication I’ve been taking? 

And there are a lot of people and companies more than willing to meet this desire we have. Just a click away, for example, WebMD – one of America’s most popular medical information sites – will tell you that antidepressant drugs “lift your mood and ease the sadness and hopelessness you might feel.” Answers don’t come much simpler than that! Yet in this context, what does “lift” a mood or “ease” sadness actually mean? How much lifting and easing are we talking about, and what percentage of the time? And how were these ineffable, subjective experiences even measured? Unfortunately, WebMD has so severely oversimplified what the scientific research says that many knowledgeable experts would argue its claim is manipulative and unsupported by the facts.

Such oversimplification can be dangerous – and this week a powerful reminder of that appeared in an article in Nature, one of the world’s leading scientific journals. The article, co-signed by 854 scientists and backed by the American Statistical Association, called for a radical change in how medical and psychiatric researchers report their findings. Simultaneously, the latest issue of the American Statistical Association’s own journal published over forty articles diagnosing the problem and calling for change.

The target of these scientists’ ire is a very common and extremely influential oversimplification in medical research: statistical significance. Generally, a finding is labeled statistically significant or non-significant based on a mathematical calculation of how likely it is that a difference at least as large would have appeared through random chance alone if the treatment actually had no effect. (Writers at Vox and Bloomberg also reported on the Nature article and attempted to explain statistical significance in some detail for lay readers.) While this may seem like just gobbledygook for math geeks, if you’ve ever taken a prescription drug, then your life and health have been affected by determinations of statistical significance. This is because statistical significance calculations have become the dominant way that contemporary medical and psychiatric researchers “prove” drugs are safe or effective. If the positive impacts of a drug are statistically significant, then the drug is often declared “effective.” If the differences in adverse effects experienced by a drug group and a placebo group are “not statistically significant,” then the drug is purportedly “safe.”
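
To make that concrete, here is a minimal sketch in Python, using invented numbers rather than data from any real trial, of how such a verdict is typically produced: two groups’ improvement scores are fed into a standard statistical test, and the resulting “p-value” is compared against the conventional 0.05 cutoff.

```python
# Minimal sketch of a "statistical significance" verdict, using invented
# numbers for illustration only (not data from any actual drug trial).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical symptom-improvement scores (higher = more improvement).
drug_group = rng.normal(loc=9.0, scale=8.0, size=150)     # assumed drug response
placebo_group = rng.normal(loc=7.0, scale=8.0, size=150)  # assumed placebo response

result = stats.ttest_ind(drug_group, placebo_group)
print(f"average difference: {drug_group.mean() - placebo_group.mean():.1f} points")
print(f"p-value: {result.pvalue:.3f}")

# The binary verdict that headlines and approval decisions get built on:
if result.pvalue < 0.05:
    print("verdict: statistically significant")
else:
    print("verdict: not statistically significant")
```

Notice that nothing in that final yes-or-no line tells you how large the difference was, how it was measured, or how uncertain the estimate is.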

Statistical significance has become extremely popular with scientists, news media, businesses, government regulators, politicians, and ordinary people alike precisely because it produces these kinds of good or bad, yes or no, headline-grabbing, decision-guiding answers. Unfortunately, the experts in Nature point out, it’s all largely a mirage. The cut-off for significant versus non-significant is a completely arbitrary convention, and the methods for calculating significance often aren’t particularly reliable. Furthermore, statistical significance leaves far too much out – how the study was done, how many patients participated, how large the effects were, how different factors in the study influence each other, and much more. Worse, the authors argue, innumerable studies have shown that many medical researchers, driven by furthering their careers or satisfying their funders or making profits, are now deliberately manipulating their data to get above or below the cut-off for statistical significance.
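
One way to see what gets left out is a toy simulation (again in Python, with numbers that are entirely made up for illustration): keep the average benefit fixed at a clinically tiny half a point on a symptom scale, and simply enroll more and more patients. With enough participants, the verdict will tend to flip from “not significant” to “significant,” even though the effect itself never became any more meaningful.

```python
# Toy illustration (invented numbers, not from any study): the same tiny
# average benefit, judged only by the 0.05 cutoff, at different sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def run_trial(n_per_group, true_benefit=0.5, spread=10.0):
    """Simulate one two-arm trial and print the binary significance verdict."""
    drug = rng.normal(true_benefit, spread, n_per_group)
    placebo = rng.normal(0.0, spread, n_per_group)
    p = stats.ttest_ind(drug, placebo).pvalue
    label = "significant" if p < 0.05 else "not significant"
    print(f"n per group = {n_per_group:>6}: p = {p:.3f} -> {label}")

# The half-point average benefit never changes; only the head count does.
for n in (50, 500, 50_000):
    run_trial(n)
```

The drug is no more helpful in the largest simulated trial than in the smallest; only the number of participants changed, and with it the label.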

In an official statement on its website, the American Statistical Association shows just how strongly its members feel:

…we conclude that it is time to stop using the term ‘statistically significant’ entirely… a label of statistical significance does not mean or imply that an association or effect is highly probable, real, true, or important. Nor does a label of statistical nonsignificance lead to the association or effect being improbable, absent, false, or unimportant.

They argue that scientists should avoid simplistic yeses and nos and instead explain more precisely and clearly what was studied, what was actually found, and what the full range of reasonable interpretations could be. Unfortunately, even articles in the American Statistical Association’s own journal express doubt that their call for change will succeed against the enormous political, economic, social and psychological forces aligned against it. 

So what does this mean for the millions of people currently taking a psychiatric drug, or considering starting one? It means that the information they’re relying on to make their choice may be very unreliable or even misleading. 

It’s for this reason that we developed Inner Compass Initiative’s Learn/Unlearn section, and why we included so much “explanatory” content. For example, rather than simply saying that a drug does or doesn’t work to improve your mood, as if this were a simple statement of fact based on universally shared and unquestioned understandings, ICI’s examinations of the safety and effectiveness of different classes of psychiatric drugs always include discussions of how researchers actually measured people’s moods and inner experiences, and how much change and what types of changes typically occurred. Our goal is to let you see more clearly what researchers actually meant when they used words like “significantly improved.” We hope that this, in turn, allows you to make more informed decisions about whether you yourself think a drug is truly “safe” and “effective,” based on how you define those terms for yourself, in your own circumstances.


Rob Wipond co-founded Inner Compass Initiative and contributed research, writing and editing for the websites of both Inner Compass Initiative and The Withdrawal Project. Since 1998 he has been a freelance investigative journalist. Read more here.

Comments

 

Once opinions enter the mainstream, they become the truth. Just look at the Wikipedia pages on psychotropic drugs: there is never any real discussion, just fixed ideas that reinforce themselves in many ways. For example, I found on the website of the Beck Institute of Rome an article claiming that people with bipolar disorder have less brain volume than normal. I wrote to ask how many of the bipolar participants in the study had never taken psychotropic drugs. They did not answer. We are assured the pharmaceutical industry tests its drugs. Yes, but for at most 8 weeks and with just one drug at a time. So newspapers and internet sites can write, for every study or trial, “the study also included healthy volunteers” and so on. And once an opinion is established, no one asks where this knowledge comes from, or whether the questions that needed answering were ever really answered. Psychiatry as we know it is not based on scientific evidence. Its growth, I think, is just the result of marketing. Thanks