The IANET Hardware Accelerator for Audio and Visual Data Classification

2020 
There are many situations in driving where audible information matters, yet it is often ignored. Deaf and hearing-impaired drivers are particularly vulnerable: unable to hear a siren or a vehicle horn, they must rely on the reactions of the drivers around them. Processing audio and providing feedback would be equally valuable to any driver or to an autonomous vehicle. This paper addresses this gap in existing technology by integrating acoustic (audio) and visual (image) processing units through an efficient hardware design and architecture for Convolutional Neural Networks (CNNs). These processing units are combined into a single module, IANET, which uses two CNN accelerators, one for the audio and one for the image processing unit. The hardware is implemented in several fixed-point representations to observe the accuracy and stability of the network classifiers at each representation. The hardware accelerators for image and audio classification achieve throughputs of 30 frames per second (fps) at 180 MHz and 1 fps at 20 MHz, respectively. This paper presents a power- and area-efficient hardware implementation of IANET.
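The fixed-point sweep mentioned above can be illustrated with a small sketch. The helper below is hypothetical (the paper does not specify its quantization scheme); it assumes a signed fixed-point format with a configurable split between integer and fractional bits, which is a common way to evaluate how word length affects CNN classifier accuracy:

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize a float array to a signed fixed-point value with
    `int_bits` integer bits and `frac_bits` fractional bits
    (plus an implicit sign bit), returned as a float for comparison."""
    scale = 2 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits))      # most negative code
    hi = 2 ** (int_bits + frac_bits) - 1     # most positive code
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

# Example: quantizing a few weights at a narrow word length shows
# the rounding and saturation error such a sweep would measure.
weights = np.array([0.731, -1.248, 0.004, 2.9])
print(to_fixed_point(weights, int_bits=2, frac_bits=5))
```

Rerunning the network with weights and activations passed through such a function at each candidate word length, and recording the classification accuracy, is one way to observe the accuracy/stability trade-off the abstract describes.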