Interpretable Deep Learning Models for Single Trial Prediction of Balance Loss

2020 
Wearable robotic devices are being designed to assist the elderly and other patients with locomotion disabilities. However, wearable robotics can also increase the risk of falling. Neuroimaging studies have provided evidence for the involvement of the frontocentral and parietal cortices in postural control, which opens up the possibility of using electroencephalography (EEG) decoders for early detection of balance loss. This study investigates whether the commonly identified components of the perturbation-evoked potential (PEP) are present when a person wears an exoskeleton. We also evaluated the feasibility of predicting loss of balance from single-trial EEG using a convolutional neural network (CNN). Overall, the model achieved a mean 5-fold cross-validation test accuracy of 75.2% across six subjects, against a chance level of 50%. We employed a gradient-weighted class activation mapping (Grad-CAM) visualization technique to interpret the decisions of the CNN and demonstrated that the network learns from PEP components present in these single trials. The high localization ability of Grad-CAM demonstrated here opens up the possibility of deploying CNNs for ERP/PEP analysis while emphasizing model interpretability.
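
As a rough sketch of the pipeline the abstract describes, the PyTorch code below pairs a small two-stage CNN for single-trial EEG epochs with a Grad-CAM pass over its last convolutional layer. This is an illustrative assumption, not the architecture or hyperparameters reported in the paper: the channel/sample counts, layer sizes, and names (`EEGConvNet`, `temporal`, `spatial`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical epoch geometry: 64 EEG channels, 512 time samples per trial.
N_CHANNELS, N_SAMPLES = 64, 512


class EEGConvNet(nn.Module):
    """Small CNN for binary single-trial EEG classification (illustrative)."""

    def __init__(self):
        super().__init__()
        # Temporal filtering along the time axis, then a spatial filter that
        # collapses the 64-channel axis into learned scalp projections.
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 65), padding="same", bias=False)
        self.spatial = nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1), bias=False)
        self.bn = nn.BatchNorm2d(16)
        self.pool = nn.AvgPool2d((1, 8))
        self.fc = nn.Linear(16 * (N_SAMPLES // 8), 2)  # balance loss vs. no loss

    def forward(self, x):                    # x: (batch, 1, channels, samples)
        x = self.temporal(x)                 # -> (batch, 8, channels, samples)
        x = F.elu(self.bn(self.spatial(x)))  # -> (batch, 16, 1, samples)
        x = self.pool(x)                     # -> (batch, 16, 1, samples // 8)
        return self.fc(x.flatten(1))         # -> (batch, 2) class logits


def grad_cam(model, x, target_class):
    """Grad-CAM relevance over the feature maps of the spatial conv layer.

    Because the spatial convolution collapses the channel axis, the map
    reduces to a relevance trace over time, which is where PEP components
    would be expected to show up.
    """
    feats, grads = {}, {}
    fh = model.spatial.register_forward_hook(lambda m, i, o: feats.update(v=o))
    bh = model.spatial.register_full_backward_hook(
        lambda m, gi, go: grads.update(v=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()
    fh.remove()
    bh.remove()
    # Weight each feature map by its spatially averaged gradient, then ReLU.
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["v"]).sum(dim=1))
    return cam.squeeze()  # (samples,) relevance over time


model = EEGConvNet()
epoch = torch.randn(1, 1, N_CHANNELS, N_SAMPLES)  # one synthetic trial
relevance = grad_cam(model, epoch, target_class=1)
print(relevance.shape)  # torch.Size([512]): per-time-sample relevance
```

On a trained decoder, peaks in this relevance trace aligned with known PEP latencies would support the paper's claim that the network learns from PEP components rather than artifacts.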