Methodologically rigorous risk of bias tools for non-randomized studies had low reliability and high evaluator burden

2020 
Abstract

Objective: To assess the real-world inter-rater reliability (IRR), inter-consensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Non-randomized Studies (NRS) of Interventions (ROBINS-I) tool and the RoB Instrument for NRS of Exposures (ROB-NRSE).

Study design and setting: A six-center cross-sectional study with seven reviewers (2 reviewer pairs) assessing RoB using ROBINS-I (n=44 NRS) or ROB-NRSE (n=44 NRS). We used Gwet's AC1 statistic to calculate the IRR and ICR. To measure evaluator burden, we assessed the total time taken to apply the tool and reach consensus.

Results: For ROBINS-I, both IRR and ICR for individual domains ranged from poor to substantial agreement; IRR and ICR on overall RoB were poor. Evaluator burden was 48.45 min (95% CI 45.61 to 51.29). For ROB-NRSE, the IRR and ICR for the majority of domains were poor, while the rest ranged from fair to perfect agreement; IRR and ICR on overall RoB were slight and poor, respectively. Evaluator burden was 36.98 min (95% CI 34.80 to 39.16).

Conclusions: We found both tools to have low reliability, although ROBINS-I's reliability was slightly higher. Measures to increase agreement between raters (e.g., detailed training, supportive guidance material) may improve reliability and decrease evaluator burden.
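Gwet's AC1, the agreement coefficient the study used, corrects observed agreement for chance agreement in a way that is more stable than Cohen's kappa when category prevalence is skewed. A minimal sketch for two raters and categorical ratings is below; the function name and example data are hypothetical, and the chance-agreement term follows Gwet's published formula pe = (1/(K-1)) * sum_k pi_k(1 - pi_k), where pi_k is the mean proportion of ratings in category k across both raters.

```python
def gwet_ac1(ratings1, ratings2):
    """Gwet's AC1 agreement coefficient for two raters (hypothetical helper).

    ratings1, ratings2: equal-length lists of categorical ratings
    (e.g. risk-of-bias judgements) on the same set of studies.
    """
    assert len(ratings1) == len(ratings2), "raters must rate the same subjects"
    n = len(ratings1)
    categories = sorted(set(ratings1) | set(ratings2))
    k = len(categories)
    if k == 1:  # only one category ever used: agreement is trivially perfect
        return 1.0

    # Observed agreement: proportion of subjects both raters judged identically.
    pa = sum(a == b for a, b in zip(ratings1, ratings2)) / n

    # Chance agreement per Gwet: pe = (1/(K-1)) * sum_k pi_k * (1 - pi_k),
    # where pi_k is the average proportion of ratings in category k.
    pe = 0.0
    for c in categories:
        pi_k = (ratings1.count(c) + ratings2.count(c)) / (2 * n)
        pe += pi_k * (1 - pi_k)
    pe /= (k - 1)

    return (pa - pe) / (1 - pe)
```

For example, with ratings1 = ["low", "low", "high", "high"] and ratings2 = ["low", "high", "high", "high"], observed agreement is 0.75, chance agreement is about 0.469, and AC1 is about 0.53 (moderate agreement on the Landis-Koch scale the study's "poor" to "substantial" labels come from).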