
"Why Most Published Research Findings Are False"

Fascinating article from The Public Library of Science:

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

Interesting, if true, and certainly throws a new light on the work of our professional classes. But is it true?

I can't assess the methodology of the article. Readers?



Submitted by reslez on

as it should be. Your takeaway should not be that science is useless or impossible to do right. The point is that there are ways to do it right, but much of the time we are doing it wrong. We give our scientists perverse incentives (publish or perish), and innate human bias takes care of the rest. Some fields are worse than others: medicine, psychology, economics. Amusingly, these are also the fields most likely to be reported on in the popular media.

I would ignore almost all research studies mentioned in the press, particularly if they involve a very small number of research subjects. These studies are reported only because they are interesting to discuss at cocktail parties. The results are unlikely to be replicated. (Which means they are false.)

The author of the article you cited was profiled in The Atlantic: Lies, Damned Lies, and Medical Science

Here's a basic overview of problems: Warning Signs in Experimental Design and Interpretation. In my opinion, one of the biggest problems we have now is the drive for publication which leads to "Taking p too Seriously" (in the above link).

Some amusing further discussion here:

The basic nature of ‘significance’ being defined as p<0.05 means we should expect ~5% of studies or experiments to be bogus, yet there are too many positive results (psychiatry, biomedicine, biology, ecology and evolution, psychology, economics, sociology, gene-disease correlations) given effect sizes (and positive results correlate with per capita publishing...)
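The arithmetic behind that point is worth spelling out: a p < 0.05 threshold caps the false-positive *rate among true nulls* at 5%, not the share of false findings among published positives. That share also depends on how many tested hypotheses are true to begin with, and on statistical power. A minimal sketch of this positive-predictive-value calculation (my own illustration, following Ioannidis's framing; the numbers are assumptions, not from the linked discussion):

```python
def positive_predictive_value(prior, power, alpha=0.05):
    """Fraction of 'significant' results that reflect real effects.

    prior: fraction of tested hypotheses that are actually true
    power: probability a study detects a true effect
    alpha: significance threshold (false-positive rate under the null)
    """
    true_positives = prior * power          # true effects found significant
    false_positives = (1 - prior) * alpha   # null effects found significant
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true and studies have 50% power,
# nearly half of all "significant" results are false:
ppv = positive_predictive_value(prior=0.10, power=0.50)
print(f"PPV = {ppv:.2f}")  # ~0.53, i.e. ~47% of positives are false
```

So even with everyone honestly using p < 0.05, a field chasing long-shot hypotheses with underpowered studies will publish mostly false positives, before bias and flexible analysis make things worse.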


Submitted by twig on

a couple of other points. One, quite a few studies are financed by groups who want a specific outcome and don't mind bending the rules to get it. IIRC, the clinical trials on the wonders of pomegranate may be in this category, but there are many studies supported by the dairy, egg, and similar industry councils. If the sponsors aren't happy with the results, they can bury the research. Or sometimes the data can be massaged into something "positive" for them.

Two, if you read studies on the effectiveness of some medications, you'll see that the placebo was often as effective as, and sometimes more effective than, the actual medicine. As you might imagine, no one mentions that in the media reports. So people end up taking medicine (probably expensive) that most likely has some potentially serious side effects, when nothing at all would have worked just as well.

Submitted by goldberry on

I haven't found that to be the case in my particular field. The methods in the literature we use are pretty reproducible. And then there are the papers that are just not very well written; something gets lost in translation from the original Mandarin. I'm reading a paper like that right now, and it just leaves me scratching my head. The authors deposited their structures in a public database, so those are reliable; I just can't figure out the mechanism because their nomenclature is so confusing. The good thing is that the person who sent it to me, a professor at a local university, warned me it was going to be somewhat vague. So if he found it somewhat lacking, then I'm not totally to blame for having to reread the same paragraph three times before it makes sense.

That is not to say that there aren't some stinkers out there. Derek Lowe posted something on resveratrol today that looks like a stinker, and he has documented other examples. The comment he makes about western blots is an inside joke for chemists who have to sit through biologists' presentations, where it looks like the same picture of a gel gets passed around. Too funny. Er, I guess you had to be there. Usually it's the poor lab rat who has to follow the method, unsuccessfully, who finds out that the results were fabricated.

But what is really troublesome is the incredible amount of filler out there. As reslez says, it's publish or perish for most of us. That used to be said only of academia, but now industry demands a lot of publications from scientists as the price of staying employed. That leaves us in a bind: we can't publish work that hasn't been patented yet or approved by the lawyers. What remains is application development, and sometimes that's just very narrow, with few earth-shaking results. It's really unfair to the industrial scientist, whose job is quite different from an academic's. Doing work that leads to a publication frequently takes you away from what you need to do to solve a particular problem that can't be published yet. So there are a lot of bits and pieces in the literature that aren't of much value. It can be mind-numbing.

Submitted by Aeryl on

On how the popularization of studies tends to differ wildly from the actual study results, with frequent posts on the studies themselves and on how selective and predetermined they are. Interesting stuff; you should definitely read it if you aren't already.

I read one today that seems to jibe: a decades-long study, performed by the National Institutes of Health, of thousands of marijuana users and the effect marijuana has on lung function (little to none).