Computational models of speech perception by cochlear implant users

2017 
Cochlear implant (CI) users have access to fewer acoustic cues than normal-hearing listeners, resulting in less-than-perfect identification of phonemes (vowels and consonants), even in quiet. These errors make it possible to develop models of phoneme identification based on CI users’ ability to discriminate along a small set of linguistically relevant continua. Vowel and consonant confusions made by CI users provide a very rich platform for testing such models. The preliminary implementation of these models used a single perceptual dimension and was closely related to the model of intensity resolution developed jointly by Nat Durlach and Lou Braida. Extensions of this model to multiple dimensions, incorporating aspects of Lou’s novel work on “crossmodal integration,” have successfully explained patterns of vowel and consonant confusions; perception of “conflicting-cue” vowels; changes in vowel identification as a function of different intensity mapping curves and frequency-to-electrode maps; adaptation (or lack ther...
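The modeling approach described above, in which identification is limited by discriminability along a small set of perceptual dimensions, can be sketched as a minimal Monte Carlo simulation. This is only an illustrative toy in the spirit of a Durlach-Braida-style decision model, not the authors' implementation: the two-dimensional space, the category locations, and the noise level are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vowel categories as points in a 2-D perceptual space
# (dimensions loosely analogous to formant-like continua; values invented).
means = {
    "i": np.array([0.0, 3.0]),
    "a": np.array([3.0, 0.5]),
    "u": np.array([0.5, 0.0]),
}

def confusion_matrix(means, sigma, n_trials=5000):
    """Estimate a phoneme confusion matrix by Monte Carlo.

    Each presentation of phoneme p yields a noisy percept
    p + N(0, sigma^2 I); the response is the nearest category mean,
    i.e., a maximum-likelihood decision under equal-variance Gaussian
    noise. Larger sigma corresponds to poorer resolution along the
    perceptual dimensions (lower d' between categories).
    """
    labels = list(means)
    mat = np.zeros((len(labels), len(labels)))
    for i, p in enumerate(labels):
        percepts = means[p] + rng.normal(0.0, sigma, size=(n_trials, 2))
        for x in percepts:
            d = [np.linalg.norm(x - means[q]) for q in labels]
            mat[i, int(np.argmin(d))] += 1
    return labels, mat / n_trials

labels, cm = confusion_matrix(means, sigma=1.0)
# Diagonal entries give per-phoneme identification accuracy;
# off-diagonal entries give specific confusion probabilities.
```

In a model of this family, manipulations such as different intensity mapping curves or frequency-to-electrode maps would be represented as changes in the category locations or the noise along each dimension, and the predicted confusion matrix would then be compared against CI users' observed confusions.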