Neural encoding of phrases and sentences in spoken language comprehension

2021 
Speech stands out in the natural world as a biological signal that communicates formally specifiable complex meanings. However, the acoustic and physical dynamics of speech do not injectively mark the linguistic structure and meaning that we perceive. Linguistic structure must therefore be inferred through the human brain's endogenous mechanisms, which remain poorly understood. Using electroencephalography, we investigated the neural response to synthesized spoken phrases and sentences that were closely physically matched but differed in syntactic structure, under either linguistic or non-linguistic task conditions. Differences in syntactic structure were well captured in theta band (~2 to 7 Hz) phase coherence, in phase connectivity degree at low frequencies (< ~2 Hz), and in both the intensity and the degree of power connectivity of the induced neural response in the alpha band (~7.5 to 13.5 Hz). Theta-gamma phase-amplitude coupling was found when participants listened to speech, but it did not discriminate between syntactic structures. Spectral-temporal response function modelling suggested different encoding states in both temporal and spectral dimensions as a function of the amount and type of linguistic structure perceived, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain separates linguistic structures in the dynamics of neural responses, and imply that phase synchronization and strength of connectivity can be used as readouts for constituent structure, providing a novel basis for future neurophysiological research on linguistic structure in the brain.
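To make the phase-coherence measure concrete: one standard way to quantify theta-band phase alignment across trials is inter-trial phase coherence (ITPC), the mean resultant length of per-trial instantaneous phases. The sketch below is an illustrative implementation on synthetic data, not the authors' analysis pipeline; the wavelet parameters and the 4 Hz test frequency are assumptions chosen to fall inside the ~2 to 7 Hz theta band described in the abstract.

```python
import numpy as np

def itpc(trials, fs, freq=4.0, n_cycles=5.0):
    """Inter-trial phase coherence at a single frequency.

    Each trial (row of `trials`) is convolved with a complex Morlet
    wavelet to extract the instantaneous phase; the coherence at each
    time point is the mean resultant length of those phases across
    trials (0 = random phases, 1 = perfect alignment).
    Illustrative sketch only; parameters are assumed, not taken from
    the paper.
    """
    sigma = n_cycles / (2 * np.pi * freq)           # Gaussian width in seconds
    t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)  # wavelet support
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
    phase = np.angle(analytic)                      # per-trial instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Synthetic demo: 50 trials of a phase-aligned 4 Hz oscillation plus noise,
# mimicking a stimulus-locked theta response.
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 250, 50, 500
time = np.arange(n_samples) / fs
trials = np.sin(2 * np.pi * 4 * time) + 0.5 * rng.standard_normal((n_trials, n_samples))
coherence = itpc(trials, fs)
print(coherence[n_samples // 2])  # high value: theta phase is aligned across trials
```

With random (unaligned) phases, ITPC would instead hover around 1/sqrt(n_trials); contrasting conditions via this statistic is what lets phase coherence serve as a readout for structure-dependent responses.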