An explainable deep learning framework for characterizing and interpreting human brain states

2023 
Deep learning approaches have been widely adopted in the medical image analysis field. However, most existing deep learning approaches focus on achieving promising performance on tasks such as classification, detection, and segmentation, and much less effort is devoted to explaining the designed models. Similarly, in the brain imaging field, many deep learning approaches have been designed and applied to characterize and predict human brain states, yet these models lack interpretability. In response, we propose a novel, domain-knowledge-informed, self-attention graph pooling-based (SAGPool) graph convolutional neural network to study human brain states. Specifically, the dense individualized and common connectivity-based cortical landmarks system (DICCCOL, structural brain connectivity profiles) and the holistic atlases of functional networks and interactions system (HAFNI, functional brain connectivity profiles) are integrated with the SAGPool model to better characterize and interpret brain states. Extensive experiments are designed and carried out on the large-scale Human Connectome Project (HCP) Q1 and S1200 datasets. Promising brain state classification performance is observed (e.g., an average of 93.7% for seven-task classification and 100% for binary classification). In addition, the importance of the brain regions that contribute most to accurate classification is quantified and visualized. A thorough neuroscientific interpretation suggests that these extracted brain regions, together with their importance scores computed by the self-attention graph pooling layer, offer substantial explainability.
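As a rough illustration of the kind of architecture the abstract describes (not the authors' implementation), the sketch below shows a graph classifier that stacks graph convolutions with a self-attention graph pooling (SAGPool) layer, whose per-node attention scores can be read out to quantify region importance. It uses PyTorch Geometric; the class name `SAGPoolClassifier`, the hidden size, and the pooling ratio are illustrative assumptions, and the node features would in practice come from connectivity profiles such as DICCCOL/HAFNI.

```python
# Minimal sketch of a SAGPool-based GCN for graph-level brain state
# classification (illustrative only; layer sizes and names are assumptions).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, SAGPooling, global_mean_pool

class SAGPoolClassifier(torch.nn.Module):
    def __init__(self, in_channels: int, hidden: int = 64, num_classes: int = 7):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)
        # Self-attention pooling scores each node and keeps the top fraction;
        # the scores offer a handle on which brain regions drive the decision.
        self.pool1 = SAGPooling(hidden, ratio=0.5)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x, edge_index, _, batch, perm, score = self.pool1(x, edge_index, batch=batch)
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)    # graph-level embedding
        # Return logits plus the retained node indices and attention scores,
        # which can be mapped back to cortical regions for visualization.
        return self.lin(x), perm, score
```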