Solving the replication problem in social science

Today, a team of researchers made an important contribution to solving the replication problem in social science.

Previously, I wrote about what science is and why it is important (see Improving science). In that piece I used the picture below to describe some essential parts of the scientific process. Each of these parts is a link in the chain of the scientific process and plays an essential role in making science function well. Several of these links have serious weaknesses that threaten the quality of science.

One of those threats has to do with replication studies. In a replication study, a study is repeated using the same methods but by different researchers and with different subjects. Doing replication studies is of great importance to scientific research: replications contribute to the self-correcting nature of science. Here are a few ways in which replication studies make science better and more honest. First, the fact that replication studies are a normal part of the scientific process forces scientists to be open and transparent about their methods and data. Second, the fact that others will probably replicate your study reduces the temptation to fiddle with results. Third, replication studies make it possible to check whether remarkable findings are merely the result of coincidence.

A few years ago, it became clear that there is a problem with respect to replications (also see this article). One aspect of this problem was that many journals were hardly interested in publishing replication studies (which made doing them unattractive to scientists). Another aspect was that replication studies which had been conducted often demonstrated that findings from psychological studies were not robust (previously found effects were not found again in replication studies).

An international team of 270 researchers has now replicated 100 psychological and social-scientific studies which had been published in leading journals (see here). Here is the abstract of that paper, which was published today in Science:
Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams. 
Briefly put: many of the effects that were found in the original studies were not found in the replication studies, and many of the effects that were found in the replication studies were weaker than the effects that were found in the original studies.
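To make the paper's criteria concrete, here is a minimal sketch of two of the checks the abstract mentions: whether a replication result is statistically significant, and whether the original effect size falls inside the replication's 95% confidence interval. The numbers below are hypothetical, not the project's actual data, and the Fisher z-transform approach is one standard way to test correlations, assumed here for illustration.

```python
import math

# Hypothetical study pairs (NOT the project's real data):
# (r_original, n_original, r_replication, n_replication)
studies = [
    (0.40, 60, 0.20, 120),
    (0.35, 80, 0.31, 160),
    (0.50, 40, 0.05, 100),
]

def fisher_ci(r, n, z_crit=1.96):
    """95% confidence interval for a correlation via the Fisher z-transform."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

def is_significant(r, n, z_crit=1.96):
    """Two-sided test of r = 0 using the Fisher z approximation."""
    return abs(math.atanh(r)) * math.sqrt(n - 3) > z_crit

# Criterion 1: how many replications reach statistical significance?
sig_replications = sum(is_significant(rr, nr) for _, _, rr, nr in studies)

# Criterion 2: how many original effects lie inside the replication's 95% CI?
originals_in_ci = 0
for ro, _, rr, nr in studies:
    lo, hi = fisher_ci(rr, nr)
    if lo <= ro <= hi:
        originals_in_ci += 1

print(f"{sig_replications}/{len(studies)} replications significant")
print(f"{originals_in_ci}/{len(studies)} original effects inside replication 95% CI")
```

On these made-up numbers, the second and first replications are significant while the third is not, and only one original effect size lands inside its replication's confidence interval, mirroring (in miniature) the pattern the paper reports at scale.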

What does this mean? This new research is a step in the right direction. It shows how important replication studies are, and it shows that the robustness of findings in social science leaves much to be desired. Science has to be self-correcting. That faulty findings are sometimes reported is inevitable, but a well-functioning replication practice helps to identify those faulty findings. Journals must start to value replication studies more. Scientists must become more transparent about their methods (and preferably publish them even before they conduct their studies, so that fiddling will become much harder).

