Promoting learning from null or negative results in prevention science trials

2020 
There can be a tendency for investigators to disregard or explain away null or negative results in prevention science trials. Examples include not publicizing findings, conducting spurious subgroup analyses, or attributing the outcome post hoc to real or perceived weaknesses in trial design or intervention implementation. This is unhelpful for several reasons, not least that it skews the evidence base, contributes to research "waste", undermines respect for science, and stifles creativity in intervention development. In this paper, we identify possible policy and practice responses when interventions have null (ineffective) or negative (harmful) results, and argue that these are influenced by: the intervention itself (e.g., stage of gestation, perceived importance); trial design, conduct, and results (e.g., pattern of null/negative effects, internal and external validity); context (e.g., wider evidence base, state of policy); and individual perspectives and interests (e.g., stake in the intervention). We advance several strategies to promote more informative null or negative effect trials and enable learning from such results, focusing on changes to culture, process, intervention design, trial design, and environment.