UFKT: Unimportant filters knowledge transfer for CNN pruning

2022 
As deep learning models have been widely deployed in recent years, there is a high demand for reducing model size in terms of memory and computation without much compromise in performance. Filter pruning is a widely adopted strategy for model compression. Existing filter pruning methods identify the unimportant filters and prune them without accounting for the resulting information loss; they then try to recover the lost information by fine-tuning the remaining filters, which limits their performance. In this paper, we tackle this problem by utilizing the knowledge from the unimportant filters before pruning to minimize information loss. First, the proposed method identifies the unimportant and important filters as those of lower and higher importance, respectively, measured by the L1-norm of the filters. Next, the proposed custom UFKT-Reg regularizer transfers the knowledge from the unimportant filters to the remaining filters, notably to a fixed number of important filters, before pruning. Hence, the proposed method minimizes the information loss caused by the removal of unimportant filters. Experiments are conducted on three benchmark datasets: MNIST, CIFAR-10, and ImageNet. The proposed filter pruning method outperforms many recent state-of-the-art filter pruning methods. An improvement over the baseline in terms of accuracy is observed even after removing 95.15%, 62.28%, and 62.39% of the Floating Point OPerations (FLOPs) from the LeNet-5, ResNet-56, and ResNet-110 architectures, respectively. After pruning 53.25% of the FLOPs from ResNet-50, drops of only 1.02% and 0.47% are observed in the top-1 and top-5 accuracies, respectively. The code used in this paper will be made publicly available.
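A minimal sketch of the two steps described above, assuming a PyTorch setting: filters of a convolutional layer are ranked by their L1-norm and split into unimportant and important sets, and a regularization term is added to the training loss before pruning. The names rank_filters_by_l1, knowledge_transfer_penalty, and lam are hypothetical, and the penalty shown is only an illustrative placeholder; the exact form of UFKT-Reg is defined in the paper.

    import torch
    import torch.nn as nn

    def rank_filters_by_l1(conv: nn.Conv2d, num_unimportant: int):
        # L1-norm of each output filter; weight shape is (out_ch, in_ch, kH, kW)
        norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
        order = torch.argsort(norms)            # ascending: smallest-norm filters first
        unimportant = order[:num_unimportant]   # candidates for pruning
        important = order[num_unimportant:]     # filters to be retained
        return unimportant, important

    def knowledge_transfer_penalty(conv: nn.Conv2d,
                                   unimportant: torch.Tensor,
                                   lam: float = 1e-3) -> torch.Tensor:
        # Illustrative placeholder, NOT the exact UFKT-Reg formula:
        # shrink the unimportant filters during the pre-pruning epochs so that
        # the task loss pushes the retained (important) filters to absorb the
        # information those filters currently carry.
        return lam * conv.weight[unimportant].abs().sum()

    # Hypothetical usage during the pre-pruning training phase:
    #   loss = task_loss + knowledge_transfer_penalty(conv, unimportant)
    # After this phase, the filters indexed by `unimportant` are removed and
    # the compact network is fine-tuned.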