COMPRESSION OF DNNS USING MAGNITUDE PRUNING AND NONLINEAR INFORMATION BOTTLENECK TRAINING

2021 
As Deep Neural Networks (DNNs) have achieved state-of-the-art performance in various scientific fields and applications, their memory footprint and computational complexity have increased concurrently. This increased complexity prevents DNNs from running on platforms with limited computational resources, which has sparked a renewed interest in parameter pruning. We propose to replace the standard cross-entropy objective, typically used in classification problems, with the Nonlinear Information Bottleneck (NIB) objective to improve the accuracy of a pruned network. We demonstrate that our proposal outperforms cross-entropy combined with global magnitude pruning at high compression rates on VGG-nets trained on CIFAR10. With approximately 97% of the parameters pruned, we obtain an accuracy of 87.63% and 88.22% for VGG-16 and VGG-19, respectively, compared with a baseline accuracy of 91.5% for the unpruned networks. We observe that the majority of biases are pruned completely, and that pruning parameters globally outperforms layer-wise pruning.
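The global magnitude pruning referred to above ranks parameter magnitudes across the entire network rather than within each layer. As a minimal sketch (not the authors' code), the snippet below shows one way to apply global unstructured magnitude pruning to a VGG-16 in PyTorch; the framework, the inclusion of biases, and the 0.97 sparsity level are assumptions chosen to mirror the abstract's "approximately 97% of the parameters pruned".

```python
# Hedged sketch: global magnitude pruning of a VGG-16, assuming PyTorch/torchvision.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import vgg16

model = vgg16(num_classes=10)  # CIFAR10 has 10 classes

# Collect every weight and bias tensor so magnitudes are compared
# globally across layers, not within each layer separately.
parameters_to_prune = []
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        parameters_to_prune.append((module, "weight"))
        if module.bias is not None:
            parameters_to_prune.append((module, "bias"))

# Zero out the 97% of parameters with the smallest absolute value,
# ranked over the whole network (global, unstructured pruning).
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.97,
)

# Report the resulting overall weight sparsity.
zeros = sum(float(torch.sum(m.weight == 0)) for m, n in parameters_to_prune if n == "weight")
total = sum(m.weight.nelement() for m, n in parameters_to_prune if n == "weight")
print(f"Global weight sparsity: {zeros / total:.2%}")
```

Because the threshold is shared across layers, layers with many small-magnitude parameters can be pruned almost entirely (as the abstract reports for biases), while more salient layers retain a larger fraction of their weights.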