Towards Self-explainable Classifiers and Regressors in Neuroimaging with Normalizing Flows

2021 
Deep learning-based regression and classification models are used in most subareas of neuroimaging because of their accuracy and flexibility. While such models achieve state-of-the-art results in many different application scenarios, their decision-making process is usually difficult to explain. This black-box behaviour is problematic when non-technical users such as clinicians and patients need to trust them and make decisions based on their results. In this work, we propose to build self-explainable generative classifiers and regressors using a flexible and efficient normalizing flow framework. We directly exploit the invertibility of these normalizing flows to explain the decision-making process in a highly accessible way, via consistent and spatially smooth attribution maps and counterfactual images for alternate prediction results. An evaluation on more than 5000 3D MR images highlights the explainability capabilities of the proposed models and shows that they achieve a level of accuracy similar to standard convolutional neural networks on image-based brain age regression and brain sex classification tasks.
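
The paper itself does not include code, but the mechanism the abstract describes can be sketched. Below is a minimal, hypothetical illustration in plain NumPy, not the authors' implementation: an exactly invertible flow built from affine coupling layers maps an input to a latent vector, one latent coordinate is assumed to carry the prediction, and a counterfactual image is obtained by editing that coordinate and inverting the flow. All names (AffineCoupling, flow, flow_inv), the convention that z[0] encodes the prediction, and the random linear maps standing in for trained networks are assumptions made for illustration only.

```python
import numpy as np

class AffineCoupling:
    """One affine coupling layer: splits the vector into halves (a, b) and
    transforms b with a scale/shift computed from a. Exactly invertible."""

    def __init__(self, dim, rng):
        half = dim // 2
        # Toy random linear maps stand in for the learned networks.
        self.W_s = 0.1 * rng.standard_normal((half, half))
        self.W_t = 0.1 * rng.standard_normal((half, half))

    def forward(self, x):
        a, b = np.split(x, 2)
        s, t = self.W_s @ a, self.W_t @ a
        return np.concatenate([a, b * np.exp(s) + t])

    def inverse(self, z):
        a, b = np.split(z, 2)
        s, t = self.W_s @ a, self.W_t @ a
        return np.concatenate([a, (b - t) * np.exp(-s)])

rng = np.random.default_rng(0)
dim = 8                                    # stands in for a flattened 3D volume
layers = [AffineCoupling(dim, rng) for _ in range(4)]

def flow(x):
    """Invertible map image -> latent; flipping between layers mixes halves."""
    for layer in layers:
        x = np.flip(layer.forward(x))
    return x

def flow_inv(z):
    """Exact inverse, latent -> image, undoing each layer in reverse order."""
    for layer in reversed(layers):
        z = layer.inverse(np.flip(z))
    return z

x = rng.standard_normal(dim)               # hypothetical input image
z = flow(x)
prediction = z[0]                          # assumed: z[0] encodes the target

# Counterfactual: nudge the predictive coordinate, then invert the flow.
z_cf = z.copy()
z_cf[0] = prediction + 1.0                 # e.g. "what if the predicted age were higher?"
x_cf = flow_inv(z_cf)

# Attribution map: voxelwise change needed to alter the prediction.
attribution = x_cf - x
assert np.allclose(flow_inv(z), x)         # invertibility holds exactly
```

The exact invertibility is what makes this kind of explanation faithful: the counterfactual is not an approximation but precisely the input that the model maps to the edited latent code.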