Evaluation of Task Mapping on Multicore Neural Network Accelerators

2016 
Deep neural networks are widely used in many applications such as image classification, speech recognition, and natural language processing because of their high recognition rates. Since general-purpose processors such as CPUs and GPUs are not energy efficient for such networks, application-specific hardware accelerators for neural networks (a.k.a. neural network accelerators or NNAs) have been proposed to improve energy efficiency. Many studies aim to increase the energy efficiency of NNAs, but few focus on task allocation on the accelerators. This paper provides the first exploration of mapping tasks to cores within NNAs for increased performance. Intuitively, a well-tuned task mapping reduces the amount of communication between cores. To confirm this assumption, we tested two types of task mappings that generate different amounts of inter-core communication on an NNA. Our experimental results show that the amount of communication between cores strongly affects the execution cycles of the NNA, and that the most effective task mapping differs depending on the size of the neural network.
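To illustrate why the choice of task mapping changes the amount of inter-core communication, the following minimal sketch (not from the paper) compares two hypothetical mappings of a 1-D locally connected layer onto a multicore NNA. The dependency pattern, the `blocked` and `interleaved` mapping functions, and the cost model (one transfer per activation whose producer core differs from its consumer core) are all assumptions made for illustration only.

```python
# Illustrative sketch (assumptions, not the paper's method): estimate inter-core
# traffic for two task mappings of a 1-D locally connected layer, where output
# neuron i consumes input activations i-1, i, i+1.

def inter_core_transfers(num_neurons, num_cores, mapping):
    """Count input activations that must cross a core boundary."""
    transfers = 0
    for out_neuron in range(num_neurons):
        consumer = mapping(out_neuron, num_neurons, num_cores)
        for in_neuron in (out_neuron - 1, out_neuron, out_neuron + 1):
            if 0 <= in_neuron < num_neurons:
                producer = mapping(in_neuron, num_neurons, num_cores)
                if producer != consumer:
                    transfers += 1
    return transfers

def blocked(neuron, num_neurons, num_cores):
    """Assign contiguous blocks of neurons to each core."""
    block = (num_neurons + num_cores - 1) // num_cores
    return neuron // block

def interleaved(neuron, num_neurons, num_cores):
    """Assign neurons to cores in round-robin order."""
    return neuron % num_cores

if __name__ == "__main__":
    for n in (64, 1024):
        print(f"{n} neurons:",
              "blocked =", inter_core_transfers(n, 4, blocked),
              "interleaved =", inter_core_transfers(n, 4, interleaved))
```

Under these assumptions the blocked mapping keeps most dependencies local to a core, while the interleaved mapping forces nearly every input activation across a core boundary, which is the kind of difference the paper's experiments measure in execution cycles.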