It has been suggested that there is a crisis in science concerning the reproducibility of data [1]. New research findings are usually published based on data collected only by the group reporting them, which raises the probability of bias in the results and reduces their likely validity. It also creates a temptation to tamper with or falsify data, given the incentives to publish. It is unlikely that any prestigious journal would publish work simply demonstrating that previously published findings can be reproduced consistently. Yet, when researchers have tried to reproduce published experimental data, many have been unable to do so [2], which perhaps perversely makes the attempt to reproduce results publishable.

However, if no one has attempted to reproduce a published dataset, then it stands until demonstrated not to be reproducible, which implies that much of the data in the published literature could be irreproducible and hence of dubious value. This is a bigger problem than it might seem, because most scientific and technological innovation is built on the findings of fundamental research; we are building on shaky foundations if those results are not reproducible. Similarly, the transition from prototypes to reliable products depends on reproducing, in the real world, results obtained with a prototype in the laboratory.

I have been discussing these issues with a close collaborator for a number of years, and last week we published a letter in Open Research Europe summarising our views. In ‘Achieving reproducibility in the innovation process’ [3], we propose that a different approach to reproducibility is required for each phase of the innovation process, i.e., discovery, translation and application, because reproducibility has different implications in each phase. The diagram, reproduced from the paper (CC-BY-4.0), shows our ideas schematically, but follow the link to read and comment on them.
References
[1] Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533(7604), 452-454.
[2] Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Johannesson, M., … & Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637-644.
[3] Whelan, M., & Patterson, E. A. (2025). Achieving reproducibility in the innovation process. Open Research Europe, 5, 25. https://doi.org/10.12688/openreseurope.19408.1
