Stanford study shows that other studies are often incorrect

Research shows small studies may overestimate the effects of many medical interventions 
[Via Scope Blog]

John Ioannidis, MD, DSc, chief of the Stanford Prevention Research Center, has been involved in long-standing efforts to improve preclinical studies and the ways in which biomedical research is carried out and reported. In the process, he has authored several pieces examining weaknesses of the current system.

Now new research from Ioannidis and collaborators at Yale’s School of Medicine and Oswaldo Cruz Hospital in Brazil shows that most medical interventions have only small or modest incremental effects, but that those effects are frequently overestimated by small studies.

In the study, which will be published tomorrow in the Journal of the American Medical Association, researchers examined data from the Cochrane Database of Systematic Reviews, a collection of review articles on health-care studies from around the world. Their analysis included trials that concluded a medical intervention had a very large treatment effect, defined as a positive or negative effect that was five times greater than the effect experienced by the control group. Overall, they scrutinized more than 3,000 reviews covering 85,000 meta-analyses on medical topics. Researchers’ findings are explained by my colleague in a release:


New drugs have to go through 3 layers of clinical trial before approval: Phase 1 – safety and tolerance (can people take the drug?); Phase 2 – efficacy (done with small numbers of people to see if there is any sort of benefit); and Phase 3 – does the drug really work? (done with a statistically large group of people).

This is done because small-group studies can often show an apparent benefit that disappears in larger studies. There are many reasons for this, but it is the main reason it is better to wait for the Phase 3 results before getting excited. Only about 1 out of 10 drugs makes it completely through this process to become a successful drug.

This study found that only about 9% of the small studies of medical interventions, not just drugs, held up when repeated, particularly with larger sample sizes. Only about 1 out of 10 made it through, roughly the same rate as for drug trials.

This is not too surprising to researchers, but it is nice to see it documented. First, only positive results tend to be published. Second, small studies inherently have greater error than larger ones. It is natural, then, that the studies that get published are the ones with astounding results, even though those results may really be due only to statistics. When another study is done, reversion to the mean makes that effect disappear.
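As a rough illustration (my own sketch, not part of the study), a small simulation shows how a filter that only "publishes" impressive results inflates the apparent effect of a modest real intervention. The effect size, sample size, and publication threshold below are all made-up numbers chosen for the demonstration:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # modest real benefit, in standard-deviation units (assumed)
SMALL_N = 20        # patients per arm in a "small study" (assumed)
TRIALS = 5_000      # how many small trials we simulate

def observed_effect(n, true_effect):
    """Run one simulated trial and return mean(treated) - mean(control)."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [observed_effect(SMALL_N, TRUE_EFFECT) for _ in range(TRIALS)]

# Publication bias: suppose only "astounding" results (effect > 0.5) get written up.
published = [e for e in effects if e > 0.5]

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean of all trials:       {statistics.mean(effects):.2f}")    # close to 0.2
print(f"mean of published trials: {statistics.mean(published):.2f}")  # well above 0.2
```

Across all 5,000 trials the average observed effect sits near the true value, but the "published" subset, selected precisely because it looked impressive, overstates it severalfold. That is the selection effect the study is documenting.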

Think of flipping a coin. You might get 4 heads in a row, making you believe that somehow the coin is some wonderful piece of metal. Wow, it comes up heads 100% of the time. Publish, and look out tenure, here I come.

It is only when you flip it 100 times that you see heads comes up only 50% of the time. But by that time, the researcher is on to something else to get tenure.
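The coin analogy above is easy to check numerically. This sketch (my own, with arbitrary run counts) estimates how often a fair coin hands you 4 heads in a row, and what 100 flips of the same coin look like:

```python
import random

random.seed(1)

# The "small study": how often does a fair coin give 4 heads in a row?
runs = 100_000
four_heads = sum(
    all(random.random() < 0.5 for _ in range(4))  # one run of 4 flips, all heads?
    for _ in range(runs)
)
p_four_heads = four_heads / runs
print(f"P(4 heads in a row) ~ {p_four_heads:.3f}")  # about 0.0625, i.e. 1 in 16

# The "large study": 100 flips of the same fair coin settle near 50% heads.
heads_frac = sum(random.random() < 0.5 for _ in range(100)) / 100
print(f"heads in 100 flips: {heads_frac:.2f}")
```

Roughly 1 researcher in 16 gets the "miracle coin" by chance alone, and the larger follow-up pulls the result back toward 50%. That is reversion to the mean in miniature.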

Thus it is nice that this new study is getting some press. Scientists know this, and know why: while a small study can be very interesting and worth further examination, it most likely requires a lot more study to see if the effect is real.

The initial paper is simply a ranging shot, something you need to do in order to move forward. And as we learn more, we get better with these ranging shots. Instead of having to do 10 to get one correct hit, perhaps we can get it down to 1 in 5.

Image: Kevin Dooley

One thought on “Stanford study shows that other studies are often incorrect”

  1. I would like to see a study done of all the sociological studies that are done using, maybe, 20 people and purporting to show how large groups of people will act! Entire marketing departments of businesses are built on some of these small scale studies.

Comments are closed.