Unmaskers unmasked? A different perspective on replication studies

A paper by Bryan, Yeager & O'Brien (2019) sheds new light on the replication crisis in psychology. The image that has emerged here and there, of original authors as cheats who tinker with their data and of replicators as holy defenders of scientific morality, needs revision. Replicators often take too many liberties, both in the design of their studies and in the way they analyze their data.

Replicators have an interest in replication failure

Trust in psychological science has come under pressure since 2011 due to many failed attempts to replicate previous research findings. This has led to greater skepticism among lay people about psychology and to stricter methodological demands on researchers. Understandably, much attention has been paid to the integrity and methodological quality of the original authors.

These original authors faced a conflict between the desire to do rigorous, honest science and the desire to find significant results. Finding significant results mattered because scientific journals have long had little interest in publishing non-significant results. This made it tempting for original authors to arrange their experimental designs and data analyses in such a way that the chance of significant results was as high as possible.

In recent years, the implicit assumption has been that replication researchers face no such conflict. But this assumption is incorrect. Journals are now more interested in publishing failed replications than successful ones. Replicators therefore face a conflict of their own: between the desire to do rigorous, honest science on the one hand and the desire to publish non-significant results on the other. Original authors may be tempted to show significant results where there are none (false positives), while replicators may be tempted to mask significant results (false negatives).

Replicators allow themselves too much freedom in research design and data analysis

Bryan et al. explain that the demands placed on replications currently focus mainly on the use of larger samples, comparable experimental materials, and robustness checks in the data analysis. But they argue, and demonstrate, that these requirements still leave replicators many degrees of freedom with which to produce null results. These freedoms may concern, for example, the context and time in which the study is situated, the type of respondents, and the way the respondents were recruited. The effect of such deviations from the original publication can be that no real replication takes place at all.

The authors describe in detail how two replications of a study by Bryan et al. (2011) exploited different kinds of freedom and reported null results. They then re-analyzed the data of the second study and found strong evidence for the effect reported by the original authors. This effect, present in the replication data all along, was initially masked by several incorrect choices in the replicators' analysis.

Unmaskers unmasked?

The struggle in psychology in recent years seemed to be one between discoverers and unmaskers. The discoverers were the original authors who reported an effect in their study. The unmaskers were the noble defenders of science who showed that discoverers often fiddled with results in order to get their studies published. This picture is misleading. 

It is right to look very critically at original publications. Science advances because different scientists critically assess each other's work and conduct replication research.

But the idea that only those who say yes can have opportunistic interests is wrong. Naysayers have interests too. First, replicators may have a general opportunistic interest: there is a sizeable market for debunking publications, since journals are more interested in publishing failed replications than successful ones. Second, replicators may have a specific opportunistic interest in disproving an original author's findings, namely when they want to defend an alternative model or theory that is at odds with that of the original author.

This last thought sometimes comes to mind when I read the work of researchers who claim to refute mindset research (read more) or deliberate practice research (read more). In these studies, the claims and research methods of the original authors (in these cases, Carol Dweck and Anders Ericsson) are misrepresented and then "replicated." A recent example is a replication of Mueller & Dweck (1998) by Li & Bates (2019). As Dweck & Yeager (2019) demonstrate, the replicators made essential mistakes in their research. Dweck & Yeager show that a simple re-analysis of Li & Bates' data does reveal the original effect.