Learning the number of filters in convolutional neural networks

2021 
Convolutional networks have brought the performance of many computer vision tasks to unprecedented heights, but at the cost of an enormous computational load. To reduce this cost, many model compression methods have been proposed that eliminate insignificant model structures; for example, convolution filters with small absolute weights are pruned, and the network is then fine-tuned to restore reasonable accuracy. However, most of these works rely on pre-trained models and do not specifically analyse how filters change during training, resulting in sizable model retraining costs. Unlike previous works, we interpret the change in filter behaviour during training from the perspective of the associated angle, and propose a novel filter pruning method that exploits this change rule to remove filters with similar functions in the later stages of training. With this strategy, we not only achieve model compression without fine-tuning, but also gain a novel perspective for interpreting how filter behaviour changes during training. Moreover, our approach proves effective for many advanced CNN architectures.
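
The abstract only sketches the pruning criterion. Below is a minimal illustrative example, assuming "angle" refers to the geometric angle between flattened filter weight vectors (so that a small angle, i.e. cosine similarity near 1, marks filters with similar functions); the function name `find_redundant_filters`, the `threshold` parameter, and the PyTorch framing are our own assumptions, not the paper's stated method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def find_redundant_filters(conv: nn.Conv2d, threshold: float = 0.95):
    """Return indices of filters whose weight vectors form a small angle
    (cosine similarity above `threshold`) with an earlier filter."""
    # Flatten each filter to a vector: (out_channels, in_channels * kH * kW).
    w = conv.weight.detach().flatten(1)
    # Normalise rows so the dot product equals the cosine of the angle.
    w = F.normalize(w, dim=1)
    sim = w @ w.t()  # pairwise cosine similarities between filters
    redundant = []
    n = sim.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # |cos| near 1 means a near-zero angle between filters,
            # i.e. near-duplicate functionality; keep i, drop j.
            if j not in redundant and sim[i, j].abs() > threshold:
                redundant.append(j)
    return redundant

# Example: inspect one convolutional layer.
conv = nn.Conv2d(3, 16, kernel_size=3)
print(find_redundant_filters(conv, threshold=0.95))
```

Because two filters whose weight vectors are nearly parallel compute nearly proportional feature maps, removing one of each such pair compresses the layer while approximately preserving its function, which is consistent with pruning "filters with similar functions" without a fine-tuning stage.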