We question whether analytical tools such as Common Workflow Language, which aim to make computational methods “reproducible and shareable”, can stand the test of time (see Nature 573, 149–150; 2019). The long-term validity of computational results will not be testable if the original code cannot be run many years later.

Operating systems and programming languages change so rapidly that it is hard to predict how long a particular code will remain reproducible. We have therefore organized the Ten Years Reproducibility Challenge (see go.nature.com/2bwcukq). Researchers are invited to test code reproducibility by trying to rerun code they created for a scientific paper published more than ten years ago. The code can address any scientific domain (for example, statistical analysis, numerical simulation or data processing) and be written in any language.

The challenge closes in April 2020. We hope that the results will offer insights into the long-term causes of non-reproducibility.