A new study calculates a low probability that real effects are actually being detected in psychology, neuroscience and medicine research papers — and then explains why.
Slashdot reader ananyo writes:
The average statistical power of papers culled from 44 reviews published between 1960 and 2011 was about 24%. The authors built an evolutionary computer model to suggest why and show that poor methods that get “results” will inevitably prosper. They also show that replication efforts cannot stop the degradation of the scientific record as long as science continues to reward the volume of a researcher’s publications — rather than their quality.
The article notes that in a 2015 sample of 100 psychological studies, only 36% of the results could actually be reproduced. Yet the researchers conclude that in the Darwinian hunt for funding, “top-performing laboratories will always be those who are able to cut corners.” The article’s larger argument is that until universities stop rewarding bad science, even subsequent attempts to invalidate bogus results will be “incapable of correcting the situation no matter how rigorously it is pursued.”