Application of Coefficient for Evaluating Agreement in disordered multi-classification data

2021 
OBJECTIVE: To assess the performance of the Coefficient for Evaluating Agreement (CEA), developed on the basis of the AC1 coefficient, in evaluating the consistency between two raters for disordered multi-classification outcome data, in comparison with the Kappa coefficient.

METHODS: Diagnostic test data generated by random sampling and Monte Carlo simulation were resampled under different parameter combinations (sample size, proportion of the specified event in the population, accidental evaluation rate, and number of categories) to compare the mean square error, variance, and variance of the mean of Kappa, AC1, and CEA. The distribution of CEA was described by drawing 1000 random samples from the population.

RESULTS: Inconsistency in the accidental evaluation rate caused substantial fluctuation in the mean square error of CEA. Compared with the Kappa coefficient, AC1 and CEA were more stable when the population contained extreme proportions of the specified event. For small samples and inconsistent accidental evaluation rates, the variance and the expectation of the variance of the Kappa coefficient expanded markedly, whereas those of CEA changed only slightly. CEA was approximately normally distributed for large sample sizes.

CONCLUSION: Kappa, AC1, and CEA are all most strongly affected by the accidental evaluation rate, followed by sample size. For disordered multi-classification outcome data, CEA is more robust to variations in sample size and accidental evaluation rate.
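For context, the sketch below shows how the two standard coefficients compared in the study, Cohen's Kappa and Gwet's AC1, can be computed from a two-rater contingency table. The function names and the example table are illustrative assumptions, not code from the paper, and CEA itself is the authors' own coefficient, so it is not reproduced here.

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's Kappa from a K x K two-rater contingency table."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    po = np.trace(p)                              # observed agreement
    pe = np.sum(p.sum(axis=1) * p.sum(axis=0))    # chance agreement from marginals
    return (po - pe) / (1.0 - pe)

def gwet_ac1(table):
    """Gwet's AC1 from a K x K two-rater contingency table."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    k = p.shape[0]
    po = np.trace(p)
    pi = (p.sum(axis=1) + p.sum(axis=0)) / 2.0    # average marginal proportion per category
    pe = np.sum(pi * (1.0 - pi)) / (k - 1)        # AC1 chance-agreement term
    return (po - pe) / (1.0 - pe)

# Illustrative 3-category table (rows: rater 1, columns: rater 2)
table = [[40, 3, 2],
         [4, 30, 5],
         [1, 6, 9]]
print(cohen_kappa(table), gwet_ac1(table))
```

With highly unbalanced marginal proportions, the Kappa chance term can approach the observed agreement and deflate the coefficient, whereas the AC1 chance term stays bounded, which is consistent with the abstract's finding that AC1-based measures are more stable for extreme event proportions.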