HuRAI: A brain-inspired computational model for human-robot auditory interface

2021 
Abstract The deep learning era offers immense opportunities for ubiquitous robotic applications by leveraging big data generated from widespread sensors and ever-growing computing capability. At the same time, growing demands for natural human-robot interaction (HRI), together with concerns about energy efficiency, real-time performance, and data security, motivate novel solutions. In this paper, we present a brain-inspired spiking neural network (SNN) based Human-Robot Auditory Interface, namely HuRAI. HuRAI integrates voice activity detection, speaker localization, and voice command recognition into a unified framework that can be implemented on emerging low-power neuromorphic computing (NC) devices. Our experimental results demonstrate the superior modeling capability of SNNs, achieving accurate and rapid predictions for each task. Moreover, an energy efficiency analysis reveals a compelling prospect: up to three orders of magnitude in energy savings over equivalent artificial neural networks running on a state-of-the-art Nvidia graphics processing unit (GPU). Therefore, integrating the algorithmic power of large-scale SNN models with the energy efficiency of NC devices offers an attractive solution for real-time, low-power robotic applications.
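The abstract does not specify the neuron model used by HuRAI; as background, the following is a minimal sketch of the leaky integrate-and-fire (LIF) dynamics commonly used in SNNs of this kind. All parameter names and values here (`v_thresh`, `tau`, the input current) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron (illustrative only).

    The membrane potential leaks toward the reset value while integrating
    the input current; when it crosses the threshold, the neuron emits a
    spike (1) and the potential is reset.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Leaky integration: exponential decay toward rest plus input drive.
        v = v + dt * (-(v - v_reset) / tau + i_t)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # hard reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold input yields a regular spike train; in an
# SNN-based auditory front end, the input would instead be audio features.
spike_train = lif_simulate(np.full(100, 0.1))
```

Event-driven dynamics like these are what neuromorphic hardware executes natively, which is the basis of the energy-savings claim in the abstract.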