Facial emotions are accurately encoded in the neural signal of those with Autism Spectrum Disorder: A deep learning approach.

2021 
Abstract

Background: Individuals with autism spectrum disorder (ASD) exhibit frequent behavioral deficits in facial emotion recognition (FER). It remains unknown whether these deficits arise because facial emotion information is not encoded in their neural signal, or because it is encoded but fails to translate into FER behavior (deployment). This distinction has functional implications, including constraining when differences in social information processing occur in ASD and guiding interventions (i.e., developing prosthetic FER vs. reinforcing existing skills).

Methods: We utilized a discriminative and contemporary machine learning approach, deep convolutional neural networks (CNNs), to classify facial emotions viewed by individuals with and without ASD (N = 88) from concurrently recorded electroencephalography (EEG) signals.

Results: The CNN classified facial emotions with high accuracy for both the ASD and non-ASD groups, even though individuals with ASD performed more poorly on the concurrent FER task. In fact, CNN accuracy was greater in the ASD group and was not related to behavioral performance. This pattern of results replicated across three independent participant samples. Moreover, feature-importance analyses suggest that a late temporal window of neural activity (1000–1500 ms) may be uniquely important in facial emotion classification for individuals with ASD.

Conclusions: Our results reveal for the first time that facial emotion information is encoded in the neural signal of individuals with (and without) ASD. Thus, observed difficulties in behavioral FER associated with ASD likely arise from difficulties in the decoding or deployment of facial emotion information within the neural signal. Interventions should focus on capitalizing on this intact encoding rather than promoting compensation or FER prosthetics.
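The Methods describe classifying viewed facial emotions from concurrently recorded EEG with a deep CNN. The abstract does not specify the architecture, channel count, sampling rate, or epoch length, so the following is a minimal, hypothetical sketch of what such a channels-by-time EEG classifier could look like; the class name EEGEmotionCNN and all layer sizes and signal dimensions are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch of a CNN classifier for epoched EEG, assuming inputs of
# shape (batch, channels, time), e.g., 128 electrodes x 750 samples
# (1.5 s at 500 Hz). All values here are illustrative, not from the paper.
import torch
import torch.nn as nn

class EEGEmotionCNN(nn.Module):
    def __init__(self, n_channels: int = 128, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: slide filters along the time axis.
            nn.Conv1d(n_channels, 64, kernel_size=25, padding=12),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=11, padding=5),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse time to one summary value
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_timepoints)
        z = self.features(x).squeeze(-1)  # (batch, 128)
        return self.classifier(z)         # logits over emotion classes

# Usage with random data standing in for real EEG epochs:
model = EEGEmotionCNN()
epochs = torch.randn(8, 128, 750)  # 8 trials, 128 channels, 1.5 s @ 500 Hz
logits = model(epochs)             # (8, 4) emotion logits
```

A 1D temporal convolution over the full channel stack is one common design choice for EEG decoding; the actual network in the study may differ substantially.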
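The feature-importance result (a late 1000–1500 ms window mattering most in the ASD group) can be probed with a simple temporal-occlusion analysis: zero out one time window at a time and measure the resulting drop in classification accuracy. The abstract does not state which importance method the authors used, so this is a hedged sketch of one common approach, reusing the hypothetical EEGEmotionCNN, model, and epochs from the sketch above; the helper name window_importance is an assumption.

```python
# Hypothetical temporal-occlusion importance analysis: accuracy drop when
# each time window is zeroed out. Window boundaries (e.g., 1000-1500 ms)
# come from the abstract; the sampling rate and helper are assumed.
import torch

def window_importance(model, epochs, labels, windows_ms, sfreq=500.0):
    """Return the accuracy drop when each (start_ms, stop_ms) window is occluded."""
    model.eval()
    with torch.no_grad():
        base_acc = (model(epochs).argmax(1) == labels).float().mean().item()
        drops = {}
        for start_ms, stop_ms in windows_ms:
            occluded = epochs.clone()
            a = int(start_ms / 1000 * sfreq)
            b = int(stop_ms / 1000 * sfreq)
            occluded[:, :, a:b] = 0.0  # remove this window's signal
            acc = (model(occluded).argmax(1) == labels).float().mean().item()
            drops[(start_ms, stop_ms)] = base_acc - acc
    return drops

# Compare early vs. late windows, including the 1000-1500 ms window
# highlighted for the ASD group (labels here are random placeholders):
windows = [(0, 500), (500, 1000), (1000, 1500)]
labels = torch.randint(0, 4, (8,))
print(window_importance(model, epochs, labels, windows))
```

A larger accuracy drop for the late window would be consistent with that window carrying group-specific emotion information, which is the kind of inference the abstract's feature-importance claim rests on.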