Proper likelihood ratio based ROC curves for general binary classification problems.

2018 
Everybody writes that ROC curves, a very common tool in binary classification problems, should be optimal, and in particular concave, non-decreasing and above the 45-degree line. Everybody uses ROC curves, theoretical and especially empirical, which are not so. This work is an attempt to correct this schizophrenic behavior. Optimality stems from the Neyman-Pearson lemma, which prescribes using likelihood-ratio based ROC curves. Starting from there, we give the most general definition of a likelihood-ratio based classification procedure, which encompasses finite, continuous and even more complex data types. We point out a strict relationship with a general notion of concentration of two probability measures. We give some nontrivial examples of situations with non-monotone and non-continuous likelihood ratios. Finally, we propose the ROC curve of a likelihood ratio based Gaussian kernel flexible Bayes classifier as a proper default alternative to the usual empirical ROC curve.
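The optimality the abstract refers to can be illustrated with a small sketch (not the paper's construction): when the two class-conditional densities are known, the Neyman-Pearson lemma says that thresholding the likelihood ratio lr(x) = f1(x)/f0(x) traces out the optimal ROC curve, which is concave, non-decreasing, and above the 45-degree line. The Gaussian densities below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical example: two known Gaussian class-conditional densities.
# With equal variances the likelihood ratio is monotone in x, so
# thresholding lr(x) is equivalent to thresholding x itself.
f0, f1 = norm(0.0, 1.0), norm(1.5, 1.0)  # negatives, positives

rng = np.random.default_rng(0)
x0 = f0.rvs(10_000, random_state=rng)  # samples from the negative class
x1 = f1.rvs(10_000, random_state=rng)  # samples from the positive class

def lr(x):
    # Neyman-Pearson statistic: likelihood ratio of positive to negative.
    return f1.pdf(x) / f0.pdf(x)

# Sweep thresholds over the observed range of the likelihood ratio.
thresholds = np.quantile(lr(np.concatenate([x0, x1])), np.linspace(0, 1, 201))
fpr = [(lr(x0) >= t).mean() for t in thresholds]  # false positive rate
tpr = [(lr(x1) >= t).mean() for t in thresholds]  # true positive rate

# The empirical LR-based curve sits on or above the 45-degree line
# (up to sampling noise), as the optimal ROC curve must.
assert all(tp >= fp - 0.02 for fp, tp in zip(fpr, tpr))
```

An empirical ROC curve built directly from raw scores need not satisfy these properties, which is the mismatch the abstract describes.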