Improved Model Compression Method Based on Information Entropy

2021 
The rapid development of deep learning has produced increasingly complex neural network models that demand substantial computing power. Although researchers have proposed lightweight architectures such as MobileNet, SqueezeNet, and ShuffleNet, their computational cost remains large. Model compression is an effective way to further reduce the number of parameters and the amount of computation, and channel pruning is the most direct and effective means of accelerating model inference and shrinking model size. However, because channel pruning is an aggressive technique, its effect depends heavily on the criterion used to judge channel importance, and accuracy cannot be guaranteed: when every filter smaller than a set threshold is deleted outright, important parameters may be discarded. This article therefore proposes a channel-pruning model compression method based on information entropy. Experimental results demonstrate the effectiveness and practicability of the method.
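The abstract does not specify how entropy is computed, but the core idea of entropy-based channel pruning can be sketched as follows: estimate the Shannon entropy of each channel's activation distribution over a batch of inputs, then keep the high-entropy (more informative) channels instead of deleting filters purely by a magnitude threshold. The function names, the histogram-based entropy estimate, and the `keep_ratio` parameter below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def channel_entropy(features, num_bins=32):
    """Estimate the Shannon entropy of each channel's activations.

    features: array of shape (N, C, H, W) -- feature maps over a batch.
    Returns an array of C entropy values (in bits), one per channel.
    Assumption: a simple histogram estimate of the activation distribution.
    """
    c = features.shape[1]
    entropies = np.zeros(c)
    for ch in range(c):
        vals = features[:, ch].ravel()
        hist, _ = np.histogram(vals, bins=num_bins)
        p = hist / hist.sum()          # empirical probabilities per bin
        p = p[p > 0]                   # ignore empty bins (0 * log 0 = 0)
        entropies[ch] = -np.sum(p * np.log2(p))
    return entropies

def select_channels_to_keep(features, keep_ratio=0.5):
    """Keep the highest-entropy channels; prune the rest.

    Unlike a hard magnitude threshold, this ranks channels by how much
    information their activations carry, so a small-norm but informative
    filter is not automatically discarded.
    """
    ent = channel_entropy(features)
    k = max(1, int(round(keep_ratio * len(ent))))
    return np.sort(np.argsort(ent)[::-1][:k])
```

For example, a channel whose activations are constant carries zero entropy and is pruned first, while channels with varied activations are retained regardless of their weight magnitude.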