Convolution without multiplication: A general speed up strategy for CNNs

2021 
Convolutional Neural Networks (CNNs) have achieved great success in many computer vision tasks. However, it is difficult to deploy CNN models on low-cost devices with limited power budgets, because most existing CNN models are computationally expensive. Therefore, CNN model compression and acceleration have become a hot research topic in the deep learning area. Typical schemes for speeding up the feed-forward process with a slight accuracy loss include parameter pruning and sharing, low-rank factorization, compact convolutional filters, and knowledge distillation. In this study, we propose a general acceleration scheme in which floating-point multiplication is replaced by integer addition. The motivation is that every floating-point number can be represented as the sum of an exponential series; therefore, the multiplication of two floating-point numbers can be converted into additions among the exponents. In the experiments, we directly apply the proposed scheme to AlexNet, VGG, and ResNet for image classification, and to Faster R-CNN for object detection. The results on ImageNet and PASCAL VOC show that the proposed quantized scheme achieves promising performance, even with only a single exponential term. Moreover, we analyzed the efficiency of our method on mainstream FPGAs. The experimental results show that the proposed quantized scheme can achieve acceleration on FPGA with a slight accuracy loss.
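To make the core idea concrete, below is a minimal Python sketch of the exponent-sum approximation described above; it is an illustration under stated assumptions, not the authors' implementation. Each operand is approximated by a truncated sum of signed powers of two, so every partial product 2^e · 2^f reduces to the integer addition e + f (a shift on hardware). The function names `to_exponential_terms` and `approx_multiply` and the greedy truncation strategy are hypothetical choices made for this sketch.

```python
import math

def to_exponential_terms(x, num_terms=1):
    """Greedily decompose |x| into a sum of powers of two: x ~= sign * sum_i 2**e_i.
    Returns the sign and the list of exponents e_i (largest first).
    (Hypothetical helper for illustration; not from the paper.)"""
    if x == 0:
        return 1, []
    sign = 1 if x > 0 else -1
    r = abs(x)
    exps = []
    for _ in range(num_terms):
        if r <= 0:
            break
        e = math.floor(math.log2(r))
        exps.append(e)
        r -= 2.0 ** e
    return sign, exps

def approx_multiply(a, b, num_terms=1):
    """Approximate a * b using only exponent additions:
    (sum_i 2**e_i) * (sum_j 2**f_j) = sum_{i,j} 2**(e_i + f_j)."""
    sa, ea = to_exponential_terms(a, num_terms)
    sb, eb = to_exponential_terms(b, num_terms)
    # Each partial product is a power of two, i.e. an integer exponent
    # addition followed by a shift in hardware.
    total = sum(2.0 ** (ei + fj) for ei in ea for fj in eb)
    return sa * sb * total

# Example: even one exponential term per operand gives a coarse product,
# and adding terms tightens the approximation (exact value is -8.88).
print(approx_multiply(3.7, -2.4, num_terms=1))   # -4.0
print(approx_multiply(3.7, -2.4, num_terms=3))   # about -8.31
```

In a convolution, the weights (and possibly the activations) would be decomposed once in this form, so each multiply-accumulate in the inner loop degenerates to exponent additions and shifts, which is the source of the claimed FPGA speedup.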