Learning graph-based relationship of dual-modal features towards subject adaptive ASD assessment

2023 
Autism Spectrum Disorder (ASD) has been identified as one of the most challenging and intriguing problems in the neurodevelopment of children. Recent research suggests that conventional assessment based on explicit behavioral observation can be well complemented by evaluating intrinsic neurophysiological states through analyses of brain imaging data such as the electroencephalogram (EEG). However, before more objective and comprehensive insights for a joint ASD assessment can be obtained, two research challenges remain: (1) how to characterize the interaction relationship of features derived from recordings in different modalities; and, at the same time, (2) how to adapt to the individuality of subjects. This study develops a graph-based solution towards individualized assessment of ASD subjects by constructing the relationship of the dual-modal features of eye-tracking recordings and EEG: (1) a shallow encoding module, a variant of the Multi-Layer Perceptron (MLP), derives the initial intra- and inter-modal relationship matrix of the features in both modalities; and (2) a model based on Deep Graph Convolutional Networks (D-GCN) fuses the global information of the dual-modal features to learn the final relationship matrix, which is refined during parameter optimization under the regulation of ASD classification. The resulting sample-specific matrices can then be exploited to address the individuality of the subjects under examination. Experimental results indicate that: (1) the proposed method is superior to both single-modal and multi-modal counterparts in ASD classification; (2) it excels at mining hidden connections among features in different modalities in comparison with mainstream methods of correlation measurement; and (3) it holds potential for mitigating the uncertain variation brought by the individuality of subjects in ASD assessment.
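The two-stage pipeline described above can be sketched in miniature. The snippet below is a minimal illustration, not the authors' implementation: the feature dimensions, the choice of pairwise embedding similarity for the initial relationship matrix, and the single-sample readout head are all hypothetical assumptions made for clarity. It shows (1) a shallow encoder producing an initial intra-/inter-modal relationship matrix over the concatenated eye-tracking and EEG features, and (2) stacked GCN layers fusing global information before a classification readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 8 eye-tracking features
# and 8 EEG features, so the graph has 16 nodes, one per feature.
n_eye, n_eeg = 8, 8
n = n_eye + n_eeg

# One subject's dual-modal feature vector (eye-tracking stats + EEG features).
x = rng.standard_normal(n)

def relu(z):
    return np.maximum(z, 0.0)

# (1) Shallow encoder (a small MLP variant): embed each scalar feature, then
# take pairwise similarities as the initial intra-/inter-modal matrix A0.
W_enc = rng.standard_normal((1, 16)) * 0.1
H0 = relu(x[:, None] @ W_enc)            # (n, 16) per-feature embeddings
A0 = H0 @ H0.T                           # (n, n) initial relationship matrix
A0 = A0 / (np.abs(A0).max() + 1e-8)      # crude normalization

def gcn_layer(A, H, W):
    # Standard GCN propagation: D^(-1/2) (A + I) D^(-1/2) H W, then ReLU.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-8)))
    return relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# (2) Deep GCN: stack propagation layers over A0 to fuse global information.
W1 = rng.standard_normal((16, 16)) * 0.1
W2 = rng.standard_normal((16, 16)) * 0.1
H1 = gcn_layer(A0, H0, W1)
H2 = gcn_layer(A0, H1, W2)

# Sample-specific final relationship matrix from the fused embeddings; in the
# paper this refinement is driven end-to-end by the ASD classification loss.
A_final = H2 @ H2.T

# A classifier head reads out from pooled node embeddings (2 classes).
logits = H2.mean(axis=0) @ (rng.standard_normal((16, 2)) * 0.1)
print(A_final.shape, logits.shape)  # → (16, 16) (2,)
```

Because `A_final` is recomputed from each subject's own features, the relationship matrix is sample-specific, which is the mechanism the abstract credits for adapting to subject individuality.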