Automated machine learning based speech classification for hearing aid applications and its real-time implementation on smartphone

2020 
Deep neural networks (DNNs) have been useful in solving benchmark problems in various domains, including audio. DNNs have been used to enhance several speech processing algorithms that improve speech perception for hearing-impaired listeners. To exploit the full potential of DNNs and to simplify model configuration, automated machine learning (AutoML) systems have been developed that focus on model optimization. As an application of AutoML to audio and hearing aids, this work presents an AutoML-based voice activity detector (VAD) implemented on a smartphone as a real-time application. The developed VAD can be used to boost the performance of speech processing applications, such as speech enhancement, that are widely used in hearing aid devices. The classification model generated by AutoML is computationally fast and has minimal processing delay, which enables efficient, real-time operation on a smartphone. The steps involved in the real-time implementation are discussed in detail. The key contributions of this work include the utilization of an AutoML platform for hearing aid applications and the realization of the AutoML model on a smartphone. The experimental analysis and results demonstrate the significance of using AutoML for the proposed approach. The evaluations also show improvements over state-of-the-art techniques and reflect the practical usability of the developed smartphone app in different noisy environments.
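The abstract does not specify the feature set or the classifier architecture produced by AutoML. As an illustration only, the following minimal Python sketch shows a generic frame-level VAD loop of the kind such a system would run: the frame sizes, the two spectral features, and the stand-in model with a scikit-learn-style predict() method are all hypothetical choices, not the paper's actual pipeline.

```python
import numpy as np

# Assumed framing parameters (not from the paper): 25 ms frames, 10 ms hop at 16 kHz.
SAMPLE_RATE = 16000
FRAME_LEN = 400
HOP_LEN = 160


def extract_features(frame):
    """Compute a small spectral feature vector (log energy, spectral centroid) for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    log_energy = np.log(np.sum(spectrum ** 2) + 1e-10)
    centroid = np.sum(np.arange(len(spectrum)) * spectrum) / (np.sum(spectrum) + 1e-10)
    return np.array([log_energy, centroid])


class EnergyThresholdModel:
    """Placeholder for the AutoML-generated classifier; only the predict() interface is assumed."""
    def predict(self, feature_batch):
        # Label a frame as speech (1) when its log energy exceeds a fixed threshold.
        return (feature_batch[:, 0] > -5.0).astype(int)


def vad_stream(signal, model):
    """Classify each overlapping frame of the input signal as speech (1) or noise (0)."""
    decisions = []
    for start in range(0, len(signal) - FRAME_LEN + 1, HOP_LEN):
        feats = extract_features(signal[start:start + FRAME_LEN])
        decisions.append(int(model.predict(feats[np.newaxis, :])[0]))
    return decisions


# Example usage on one second of synthetic audio.
if __name__ == "__main__":
    test_signal = np.random.randn(SAMPLE_RATE).astype(np.float32)
    print(vad_stream(test_signal, EnergyThresholdModel())[:10])
```

In a real-time smartphone deployment, the same per-frame loop would run on short audio buffers delivered by the device's audio callback, with the trained model replacing the threshold stand-in above.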