Classification of visual comprehension based on EEG data using sparse optimal scoring

2021 
Objective. Understanding and differentiating brain states is an important task in cognitive neuroscience, with applications in health diagnostics (such as distinguishing neurotypical development from Autism Spectrum Disorder, or coma/vegetative state from locked-in state). Electroencephalography (EEG) is a particularly useful tool for this task because it can detect millisecond-level changes in brain activity across a range of frequencies in a non-invasive and relatively inexpensive fashion. The goal of this study is to apply machine learning methods to EEG data in order to classify visual language comprehension across multiple participants.

Approach. 26-channel EEG was recorded from 24 Deaf participants while they watched videos of sign language sentences played in time-direct and time-reversed formats, simulating interpretable and uninterpretable sign language, respectively. Sparse Optimal Scoring (SOS) was applied to the EEG data to classify which type of video a participant was watching, time-direct or time-reversed. SOS also reduced the dimensionality of the feature set, improving model interpretability.

Main results. Analysis of frequency-domain EEG data yielded an average out-of-sample classification accuracy of 98.89%, far exceeding the time-domain analysis. This high classification accuracy suggests the model can accurately identify common neural responses to visual linguistic stimuli.

Significance. The significance of this work lies in determining necessary and sufficient neural features for classifying the high-level neural process of visual language comprehension across multiple participants.
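To make the pipeline concrete, the sketch below illustrates one way frequency-domain features and a two-class sparse optimal scoring classifier could be put together in Python with SciPy and scikit-learn. It is not the authors' code: the sampling rate, frequency bands, epoch dimensions, penalty parameters, and the names band_power_features and TwoClassSOS are illustrative assumptions. It relies on the fact that, for two classes, sparse optimal scoring reduces to an elastic-net-penalized regression of optimally scored class labels, with the sparse regression weights serving as the discriminant vector.

```python
# Minimal sketch (not the authors' code) of frequency-domain feature extraction
# plus two-class sparse optimal scoring. All constants below are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import ElasticNet

FS = 256                                                        # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # illustrative frequency bands


def band_power_features(epochs):
    """Log band-power features: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
    return np.log(np.stack(feats, axis=-1)).reshape(len(epochs), -1)


class TwoClassSOS:
    """Two-class sparse optimal scoring: elastic-net regression of optimally
    scored class labels; the fitted discriminant score, compared against a
    threshold, gives the predicted class."""

    def __init__(self, alpha=0.05, l1_ratio=0.9):                # assumed penalty settings
        self.model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10_000)

    def fit(self, X, y):
        n, n1, n0 = len(y), np.sum(y == 1), np.sum(y == 0)
        # Optimal scores for the binary case: zero mean, scaled by class sizes
        theta = np.where(y == 1, np.sqrt(n0 / (n * n1)), -np.sqrt(n1 / (n * n0)))
        self.model.fit(X, theta)
        scores = X @ self.model.coef_
        # Decision threshold: midpoint between the two class-mean discriminant scores
        self.threshold_ = (scores[y == 1].mean() + scores[y == 0].mean()) / 2
        return self

    def predict(self, X):
        return (X @ self.model.coef_ > self.threshold_).astype(int)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for real recordings: 100 epochs x 26 channels x 2 s of EEG-like noise
    epochs = rng.standard_normal((100, 26, 2 * FS))
    labels = rng.integers(0, 2, size=100)   # 1 = time-direct, 0 = time-reversed
    X = band_power_features(epochs)
    clf = TwoClassSOS().fit(X[:80], labels[:80])
    print("held-out accuracy:", (clf.predict(X[80:]) == labels[80:]).mean())
```

The elastic-net penalty is what makes this a dimensionality-reduction step as well as a classifier: coefficients for uninformative channel-band features are driven to zero, so the surviving features indicate which neural responses carry the class information.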