If it’s not statistically significant, you should not really state anything

Pharma Make The Most of A Negative Result

A misleading piece of statistical rhetoric has appeared in a paper about an experimental antidepressant treatment. The study is published in the Journal of Affective Disorders. JAD is a respectable mid-ranked psychiatry journal – yet on this occasion they seem to have dropped the ball badly.

The study examined whether the drug armodafinil (Nuvigil) improved mood in people with bipolar disorder who were in a depressive episode. In a double-blind trial, 462 patients were randomized to treat…


Scientific research usually includes some statistical measure to separate real results from random noise. In the frequentist approach, that measure is the p value: roughly, how often a result at least this extreme would turn up by chance alone if there were no real effect.

Mostly arbitrarily, researchers have settled on a p value of 0.05. So to count as statistically significant, the chance of a false positive must be less than 5 in 100.

In fact, this is quite a weak standard. Some work suggests that a much better threshold for biological systems would be a p value below 0.01 (a 1 in 100 chance).

But if that threshold were actually used, many researchers would be unable to publish. So, even though we know that a cutoff of 0.05 lets many false positives through and reduces the reproducibility of published work, it is still used.

But in this paper, the p value was 0.24. No way is that significant. Yet here is what the abstract said:

FDA-approved bipolar I depression treatments are limited. Adjunctive armodafinil 150 mg/day reduced depressive symptoms associated with bipolar I disorder to a greater extent than adjunctive placebo, although the difference failed to reach statistical significance.

No. No. No. With a p value of 0.24, you really cannot say anything. Stating that the drug is associated with reduced symptoms is misleading.

I’d never have let that phrase make it through the review process.

And then they simply ignored other associations that did not fit their preconceived notions.

Feynman warned about the many biases science can suffer from, including confirmation bias. This paper is a great example of how such biases can creep in, often unconsciously.
