Reproducibility
Reproducibility is one of the cornerstones of the scientific method. If I say "here is what I did, and this is what I found", then someone else should be able to retrace my steps and arrive at the same outcome. Without reproducibility, it is very difficult to verify someone else's findings, or to build on them in later work.
Over the last 10 to 20 years, many scientific disciplines have experienced a reproducibility crisis: the discovery that results everybody expected to be reproducible are in fact not. Most of these cases fall into one (or both) of two categories.
1. Statistical reproducibility is about re-doing an experiment, using a different sample (people, bacteria, electrons, ...), applying the same inference techniques, and finding close enough results. Statistical irreproducibility points to insufficient sample sizes, mistakes in applying statistical inference, inappropriate use of statistics, or various forms of fraud (data manipulation, p-hacking, ...).
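The sample-size point can be illustrated with a small simulation (a sketch, not drawn from any real study): rerunning the "same" experiment on different random samples scatters widely when the samples are small, even with no mistakes or fraud involved.

```python
import random
import statistics

def estimate_mean(seed, n):
    """One 'experiment': draw n values from N(0, 1) and report the sample mean."""
    rng = random.Random(seed)
    return statistics.fmean(rng.gauss(0, 1) for _ in range(n))

# Repeat the experiment with 50 independent samples, at two sample sizes.
small = [estimate_mean(seed, 10) for seed in range(50)]
large = [estimate_mean(seed, 10_000) for seed in range(50)]

# The estimates scatter far more at n=10: small studies can fail to
# replicate each other even when every step is carried out correctly.
print(f"spread of estimates at n=10:    {statistics.stdev(small):.3f}")
print(f"spread of estimates at n=10000: {statistics.stdev(large):.3f}")
```

With a true mean of 0, the spread of the estimates shrinks roughly as one over the square root of the sample size, which is why underpowered studies so often disagree.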
2. Computational reproducibility is about re-running a computation and getting identical results. Computational irreproducibility is due to an incomplete record of what was actually computed. That can be due to bad bookkeeping (e.g. no version control for source code), or due to the complexity of today's software stacks, which makes it very difficult to know all the software that contributed to a computation, and even more difficult to reconstruct that stack identically.
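A minimal sketch of the record-keeping idea: alongside a result, store a hash of that result together with a description of the software stack that produced it, so that a later re-run can be checked against the original. The `run_computation` function here is a hypothetical stand-in for a real analysis.

```python
import hashlib
import json
import platform
import sys

def run_computation():
    # Stand-in for a real analysis; deterministic by construction.
    return sorted(x * x % 97 for x in range(1000))

result = run_computation()

# A minimal provenance record: a hash of the result plus the software
# stack that produced it. Re-running the computation later on the same
# stack should reproduce the same hash; a mismatch signals that either
# the code or some part of the stack has changed.
record = {
    "result_sha256": hashlib.sha256(json.dumps(result).encode()).hexdigest(),
    "python": sys.version,
    "platform": platform.platform(),
}
print(json.dumps(record, indent=2))
```

Real tools record far more than this (every package version, compiler flags, input data checksums), but the principle is the same: without such a record, there is no way to know whether a re-run computed the same thing.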