Decoding Speech-Evoked Jaw Motion from Non-invasive Neuromagnetic Oscillations

Speech decoding-based brain-computer interfaces (BCIs) are next-generation neuroprostheses with the potential to provide real-time communication assistance to patients with locked-in syndrome (fully paralyzed but aware). Recent invasive speech decoding studies have demonstrated the feasibility of speech kinematics decoding, in which articulatory movements are decoded from brain activity for speech synthesis, as an alternative to direct brain-to-speech mapping. As a starting point toward a non-invasive speech neuroprosthesis, in this study we investigated the decoding of continuous jaw kinematic trajectories directly from non-invasive neuromagnetic signals during speech production. Compensatory jaw behavior is prevalent among patients with amyotrophic lateral sclerosis (ALS); hence, accurate decoding of jaw kinematics could be a path toward efficient communicative BCIs for these patients. Using magnetoencephalography (MEG), we recorded brain signals and jaw motion simultaneously from four subjects as they spoke short phrases. We trained a long short-term memory (LSTM) regression model to map brain activity to jaw motion, achieving an average correlation score of about 0.80 across all four subjects. In addition, we examined the decoding performance of specific frequency bands within the neural signals and found that the delta (0.3–4 Hz) and high-gamma (62–125 Hz and 125–250 Hz) bands can each independently account for the major contributions to jaw motion decoding. These experimental results indicate that jaw kinematics can be successfully decoded from non-invasive neural (MEG) signals.
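The decoding pipeline described above (an LSTM regression from MEG features to a jaw-position trajectory, evaluated by a correlation score) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the single-layer LSTM forward pass with a linear readout, the hidden size, and the random weights are all assumptions for demonstration, and a real decoder would be trained on recorded MEG/kinematics pairs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_decode(X, Wx, Wh, b, Wout):
    """Run a single-layer LSTM over a (T, F) sequence of MEG features and
    read out a scalar jaw-position estimate at each time step.
    Gate weights are stacked as [input; forget; cell-candidate; output]."""
    H = Wh.shape[1]                       # hidden size
    h, c = np.zeros(H), np.zeros(H)
    y = np.empty(len(X))
    for t, x in enumerate(X):
        z = Wx @ x + Wh @ h + b           # stacked gate pre-activations (4H,)
        i = sigmoid(z[:H])                # input gate
        f = sigmoid(z[H:2 * H])           # forget gate
        g = np.tanh(z[2 * H:3 * H])       # cell candidate
        o = sigmoid(z[3 * H:])            # output gate
        c = f * c + i * g                 # cell-state update
        h = o * np.tanh(c)                # hidden state
        y[t] = Wout @ h                   # linear readout to jaw position
    return y

def correlation_score(y_true, y_pred):
    """Pearson correlation between measured and decoded trajectories."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# Shape demo with random (untrained) weights: T time steps, F MEG features.
rng = np.random.default_rng(0)
T, F, H = 50, 8, 16
X = rng.standard_normal((T, F))
y_hat = lstm_decode(X,
                    0.1 * rng.standard_normal((4 * H, F)),
                    0.1 * rng.standard_normal((4 * H, H)),
                    np.zeros(4 * H),
                    0.1 * rng.standard_normal(H))
```

A trained decoder would fit `Wx`, `Wh`, `b`, and `Wout` by gradient descent to maximize the fit between `y_hat` and the recorded jaw trajectory; `correlation_score` is the metric corresponding to the ~0.80 figure reported in the abstract.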