Efficient Time-Multiplexed Realization of Feedforward Artificial Neural Networks

2020 
This paper presents techniques and design structures to reduce the time-multiplexed hardware complexity of a feedforward artificial neural network (ANN). After the ANN weights are determined in a training phase, a post-training stage first finds the minimum quantization value used to convert the floating-point weights to integers. Then, the integer weights of each neuron are tuned to reduce the hardware complexity of the time-multiplexed design while avoiding any loss of ANN accuracy in hardware. Also, at each layer of the ANN, the multiplications of integer weights by an input variable at each time step are realized under a shift-adds architecture using a minimum number of adders and subtractors. It is observed that applying the post-training stage yields a significant reduction in area, latency, and energy consumption of time-multiplexed designs that include multipliers. Moreover, the multiplierless design of the ANN, whose weights are found in the post-training stage, leads to a further reduction in area and energy consumption at a slight increase in latency.
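The two core ideas of the abstract, converting floating-point weights to integers with a quantization step and replacing each constant multiplication by shifts and adds/subtracts, can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the function names are hypothetical, and the shift-adds decomposition shown here uses the canonical signed-digit (CSD) representation, a standard baseline for multiplierless constant multiplication rather than the paper's optimized method.

```python
# Hypothetical sketch of post-training weight quantization and a
# shift-adds realization of constant multiplication (CSD baseline).

def quantize(weights, q):
    """Convert floating-point weights to integers using step size q."""
    return [round(w / q) for w in weights]

def csd_digits(n):
    """Canonical signed-digit digits of n >= 0 in {-1, 0, +1},
    least-significant first, with no two adjacent nonzero digits."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
        else:
            d = 2 - (n % 4)  # +1 if n % 4 == 1, -1 if n % 4 == 3
            digits.append(d)
            n -= d
        n //= 2
    return digits

def shift_add_multiply(x, w):
    """Compute w * x using only shifts, additions, and subtractions."""
    sign = -1 if w < 0 else 1
    acc = 0
    for i, d in enumerate(csd_digits(abs(w))):
        if d == 1:
            acc += x << i   # add shifted input
        elif d == -1:
            acc -= x << i   # subtract shifted input
    return sign * acc
```

For example, the weight 7 becomes the digits (-1, 0, 0, 1), i.e. 8x - x, so the product needs a single subtractor instead of a full multiplier; each nonzero CSD digit costs one adder or subtractor in hardware.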