Estimating the reproducibility of psychological science

If you’re a psychologist, this news has to make you a little nervous, particularly if you’re a psychologist who published articles in 2008... because a group of researchers has already begun to check your work!

The group, which includes our own Dr Cyril Pernet, is conducting what they’ve dubbed the Reproducibility Project, which aims to replicate every study from three journals (Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition) for one specific year (2008).

The project is run through the Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” Or is that a more polite way of saying “We want to see how much of what gets published turns out to be bunk”?

INTRODUCTION

Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. Scientific claims should not gain credence because of the status or authority of their originator but by the replicability of their supporting evidence. Even research of exemplary quality may have irreproducible empirical findings because of random or systematic error.

RATIONALE

There is concern about the rate and predictors of reproducibility, but limited evidence. Potentially problematic practices include selective reporting, selective analysis, and insufficient specification of the conditions necessary or sufficient to obtain the results. Direct replication is the attempt to recreate the conditions believed sufficient for obtaining a previously observed finding and is the means of establishing reproducibility of a finding with new data. We conducted a large-scale, collaborative effort to obtain an initial estimate of the reproducibility of psychological science.