Why multiple hypothesis test corrections provide poor control of false positives in the real world

2021 
Most scientific disciplines use significance testing to draw conclusions from experimental or observational data. This classical approach provides theoretical guarantees for controlling the number of false positives across a set of hypothesis tests, making it an appealing framework for scientists who wish to limit the number of false effects or associations that they claim exist. Unfortunately, these theoretical guarantees apply to few experiments, and the actual false positive rate (FPR) is much higher than the theoretical rate. In real experiments, hypotheses are often tested after finding unexpected relationships or patterns, the data are analysed in several ways, analyses may be run repeatedly as data accumulate from new experimental runs, and publicly available data are analysed by many groups. In addition, the freedom scientists have to choose the error rate to control, the collection of tests to include in the adjustment, and the method of correction provides too much flexibility for strong error control. Even worse, methods known to provide poor control of the FPR, such as Newman–Keuls and Fisher's Least Significant Difference, remain popular with researchers. As a result, adjusted p-values are too small, incorrect conclusions are often reached, and reported results are less reproducible. Here, I show why the FPR is rarely controlled in any meaningful way and argue that a single well-defined FPR does not even exist.
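
As an illustration of the flexibility the abstract describes, the following minimal Python sketch (assuming numpy and statsmodels are installed; the six p-values are invented purely for illustration) shows how two of these analyst choices, the correction method and the family of tests included in the adjustment, change the adjusted p-values and the resulting conclusions.

```python
# Minimal sketch: how analyst choices change "corrected" p-values.
# Assumes numpy and statsmodels are installed; the six p-values below
# are invented purely for illustration.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.003, 0.012, 0.021, 0.040, 0.250, 0.470])

# 1) Choice of correction method: the same six p-values yield different
#    adjusted values, and a different number of "significant" results,
#    depending on which procedure is applied.
for method in ["bonferroni", "holm", "fdr_bh"]:
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:10s} adjusted: {np.round(p_adj, 3)}  n significant: {reject.sum()}")

# 2) Choice of test family: adjusting the same p-value within all six
#    tests versus within a hand-picked subset of three changes its
#    adjusted value, and potentially the conclusion drawn from it.
for family in (pvals, pvals[:3]):
    _, p_adj, _, _ = multipletests(family, alpha=0.05, method="holm")
    print(f"family of {len(family)}: p = 0.021 adjusts to {p_adj[2]:.3f}")
```

With these example values, Bonferroni and Holm each flag one result as significant while Benjamini–Hochberg flags three, and under Holm the same p = 0.021 adjusts to roughly 0.084 within the family of six tests but only 0.024 within the subset of three, so it crosses the 0.05 threshold under one defensible analysis and not the other. This is the kind of flexibility the abstract argues undermines strong error control.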