Learning boosts the decoding of sound sequences in rat auditory cortex

2021 
Abstract: Continuous acoustic streams, such as speech signals, can be chunked into segments containing recurring patterns (e.g., words). Noninvasive recordings of neural activity in humans suggest that chunking is underpinned by low-frequency cortical entrainment to the segment presentation rate and is modulated by prior segment experience (e.g., words belonging to a familiar language). Interestingly, previous studies suggest that nonhuman primates and rodents may also be able to chunk acoustic streams. Here, we test whether neural activity in the rat auditory cortex is modulated by previous segment experience. We recorded subdural responses using electrocorticography (ECoG) from the auditory cortex of 11 anesthetized rats. Prior to recording, four rats were trained to detect familiar triplets of acoustic stimuli (artificial syllables), three were passively exposed to the triplets, and another four had no training experience. Low-frequency neural activity showed peaks at the syllable rate, but no peaks at the triplet rate. Notably, in trained rats (but not in passively exposed or naive rats), familiar triplets could be decoded more accurately than unfamiliar triplets from neural activity in the auditory cortex. These results suggest that rats process acoustic sequences, and that their cortical activity is modulated by the training experience even under subsequent anesthesia.
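
As a rough illustration of the kind of decoding comparison described in the abstract (not the authors' actual pipeline), the Python sketch below runs cross-validated classification of triplet identity from single-trial ECoG-like responses, separately for a "familiar" and an "unfamiliar" condition, so that the two accuracies can be compared against chance. The data are synthetic, and the array shapes, classifier (LDA), and fold count are assumptions made purely for demonstration.

```python
# Hedged sketch: compare cross-validated triplet decoding accuracy between
# familiar and unfamiliar conditions. All data here are simulated; shapes,
# signal strengths, and the LDA classifier are illustrative assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials x channels x timepoints, 4 triplet identities.
n_trials, n_channels, n_timepoints, n_triplets = 120, 32, 200, 4

def simulate_condition(signal_strength):
    """Simulate trials in which triplet identity is weakly encoded in the signal."""
    labels = rng.integers(0, n_triplets, size=n_trials)
    templates = rng.normal(0, 1, size=(n_triplets, n_channels, n_timepoints))
    trials = (signal_strength * templates[labels]
              + rng.normal(0, 1, size=(n_trials, n_channels, n_timepoints)))
    # Flatten channels x time into a feature vector per trial.
    return trials.reshape(n_trials, -1), labels

def decode(features, labels):
    """Mean stratified 5-fold cross-validated decoding accuracy (LDA)."""
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, features, labels, cv=cv).mean()

# Toy assumption: familiar triplets carry a stronger neural signature than
# unfamiliar ones, mimicking the direction of the reported effect.
X_fam, y_fam = simulate_condition(signal_strength=0.4)
X_unf, y_unf = simulate_condition(signal_strength=0.1)

print(f"familiar   decoding accuracy: {decode(X_fam, y_fam):.2f}")
print(f"unfamiliar decoding accuracy: {decode(X_unf, y_unf):.2f}")
print(f"chance level: {1 / n_triplets:.2f}")
```

In this toy setup, the familiar condition should decode above the unfamiliar one by construction; in the study, such a difference emerged only in trained rats, which is the key result.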