Analysis of multisource non-linguistic sound recognition among cochlear implant subjects

2021 
Naturalistic sounds carry rich information related to situational context or subject/system properties, contributing to an acoustic awareness of the ambient environment. Sensorineural hearing loss reduces the functionality of the cochlea, auditory nerve, or central auditory pathways, leading to degraded auditory processing. Cochlear implants (CIs) are widely used to restore impaired auditory function, and research advancements have focused on improving speech recognition and overall hearing-related quality of life. However, relatively few studies have investigated non-linguistic sound recognition (SR) among CI subjects. In this study, the recognition of sounds by CI users is assessed in a competing condition involving at least two non-linguistic sound sources. Furthermore, an end-to-end audio source separation neural network, SuDoRM-RF, trained with a negative scale-invariant signal-to-distortion ratio (SI-SDR) loss and permutation invariant training, is used to recover the individual sources, and its potential to improve identification of non-linguistic sounds among CI users is comparatively assessed. Objective metrics, including classification accuracy, SI-SDR, and other audio quality measures, are compared against subjective listener testing with CI subjects. The proposed study can model multi-source non-linguistic sound problems such as the cocktail party effect and potentially provides an effective simulation of realistic listening-test scenarios. [Study supported by NIH DC010494-01A.]
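
For context, the scale-invariant signal-to-distortion ratio named above is conventionally defined as follows; this standard form is assumed to match the metric used in the study. For a zero-mean reference source $s$ and separated estimate $\hat{s}$:

```latex
s_{\text{target}} = \frac{\langle \hat{s}, s \rangle}{\lVert s \rVert^{2}}\, s, \qquad
e = \hat{s} - s_{\text{target}}, \qquad
\text{SI-SDR}(\hat{s}, s) = 10 \log_{10} \frac{\lVert s_{\text{target}} \rVert^{2}}{\lVert e \rVert^{2}}.
```

The sketch below illustrates how a negative SI-SDR loss combines with permutation invariant training (PIT) for a two-source mixture. It is a minimal NumPy illustration under these assumed conventions, not the study's actual SuDoRM-RF training code.

```python
# Hypothetical sketch: negative SI-SDR loss with permutation
# invariant training (PIT) for separated sources. The exact loss
# used to train SuDoRM-RF in the study is assumed, not quoted.
import itertools
import numpy as np

def si_sdr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-distortion ratio in dB."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to get the scaled target term.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def negative_si_sdr_pit(estimates: np.ndarray, targets: np.ndarray) -> float:
    """PIT loss: score every source-to-estimate assignment, keep the best,
    and negate it so a gradient-based optimizer can minimize it."""
    n_src = estimates.shape[0]
    best = -np.inf
    for perm in itertools.permutations(range(n_src)):
        score = np.mean([si_sdr(estimates[i], targets[p]) for i, p in enumerate(perm)])
        best = max(best, score)
    return -best

# Toy usage: two synthetic sources, with the estimates in swapped order.
rng = np.random.default_rng(0)
targets = rng.standard_normal((2, 16000))
estimates = targets[::-1] + 0.1 * rng.standard_normal((2, 16000))
print(negative_si_sdr_pit(estimates, targets))  # PIT resolves the permutation
```

Because PIT scores every source-to-estimate assignment and keeps the best one, the loss is unaffected by the arbitrary ordering of the separation network's outputs.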