Identity-linked Group Channel Pruning for Deep Neural Networks

2021 
Channel pruning is a commonly used model compression technique for convolutional neural networks. Structured pruning with sparsity constraints can automatically learn the importance of parameters during training by imposing sparsity penalties on them. However, existing sparsity-constraint-based pruning methods cannot handle the final convolutional layer of a residual module with its complex connections: because of the residual connection, if that layer is pruned, the sparse channels of the feature map from the residual connection no longer correspond to the channels of the module's output feature map, so the parameters cannot actually be removed. This paper studies this problem and proposes an identity-linked group pruning algorithm, which we call IGP. IGP groups the parameters and channels that generate corresponding feature maps, applies Group Lasso to sparsify each group of parameters as a whole, and forces the sparsity patterns of correlated parameters to be consistent with one another. Experiments show that when IGP compresses ResNet-56 by 60% of its parameters, model performance drops only 0.36%, outperforming existing pruning methods based on sparsity constraints. At a high compression ratio, IGP compresses ResNet-50 by 87% of its parameters with a performance drop of only 0.76%, an improvement of 5.17% over existing methods.
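To illustrate the core idea described above, the following is a minimal sketch, assuming a PyTorch setting, of how output channels that are summed by an identity (residual) connection can be regularized as one group with Group Lasso so their sparsity patterns stay consistent. The function name `channel_group_lasso` and the layer setup are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

def channel_group_lasso(convs, eps=1e-8):
    """Sketch of identity-linked Group Lasso: the c-th output channels of
    every conv whose outputs are added by a residual connection form ONE
    group, so they are driven to zero (and can be pruned) together."""
    # Each weight has shape (out_channels, in_channels, kH, kW);
    # all convs in the group must share the same number of output channels.
    per_conv = [c.weight.flatten(1) for c in convs]        # each: (C, rest)
    sq_sum = sum(w.pow(2).sum(dim=1) for w in per_conv)    # (C,)
    return torch.sqrt(sq_sum + eps).sum()                  # sum of group norms

# Usage (illustrative): regularize the final convs of two residual blocks
# that feed the same identity path, then add the penalty to the task loss.
linked_convs = nn.ModuleList([nn.Conv2d(64, 64, 3, padding=1) for _ in range(2)])
penalty = channel_group_lasso(linked_convs)
# total_loss = task_loss + lam * penalty   # lam controls sparsity strength
```

Grouping the channel norms jointly across all identity-linked layers, rather than per layer, is what keeps the zeroed channel indices aligned across the residual connection, so the corresponding channels can be removed from every layer in the group.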