Selective Fine-Tuning on a Classifier Ensemble: Realizing Adaptive Neural Networks With a Diversified Multi-Exit Architecture

2021 
Adaptive neural networks that provide a trade-off between computing costs and inference performance can be a crucial solution for edge artificial intelligence (AI) computing, where resources and energy consumption are significantly constrained. Edge AI requires a fine-tuning technique that achieves the target accuracy with less computation for models pre-trained on the cloud. However, a multi-exit network, which realizes adaptive inference costs, incurs significant training costs because it has many classifiers that must be fine-tuned. In this study, we propose a novel fine-tuning method for an ensemble of classifiers that efficiently retrains the multi-exit network. The proposed fine-tuning method exploits the individuality of the intermediate classifiers by ensembling their outputs, each trained on distinctly preprocessed data. The evaluation results show that the proposed method achieved 0.2%-5.8% and 0.2%-4.6% higher accuracy with only 77%-93% and 73%-84% of the training computation, compared with fine-tuning all classifiers, on the pre-modified CIFAR-100 and ImageNet datasets, respectively, depending on the assumed edge environment.
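To make the described architecture concrete, the following PyTorch sketch illustrates the general pattern of a multi-exit network whose intermediate classifiers are selectively fine-tuned and then ensembled at inference. This is a minimal illustration under assumed design choices, not the authors' implementation; the names MultiExitNet, selective_fine_tune, and ensemble_predict, as well as the backbone layout, are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code) of a multi-exit network:
# a backbone split into stages, each followed by a lightweight exit classifier.
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Backbone stages, each with an early-exit classifier head."""
    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # One lightweight classifier ("exit") per stage.
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, num_classes))
            for c in (32, 64, 128)
        ])

    def forward(self, x):
        logits = []
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(exit_head(x))
        return logits  # one logit tensor per exit

def selective_fine_tune(model, loaders, exit_ids, epochs=1, lr=1e-3):
    """Fine-tune only the selected exits; each exit sees its own distinctly
    preprocessed data (one DataLoader per exit, preprocessing in the loader)."""
    for p in model.parameters():
        p.requires_grad = False              # freeze backbone and untouched exits
    params = []
    for i in exit_ids:
        for p in model.exits[i].parameters():
            p.requires_grad = True
            params.append(p)
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for i in exit_ids:
            for x, y in loaders[i]:
                opt.zero_grad()
                loss = loss_fn(model(x)[i], y)  # train only exit i's output
                loss.backward()
                opt.step()

@torch.no_grad()
def ensemble_predict(model, x, exit_ids):
    """Average the softmax outputs of the selected exits (classifier ensemble)."""
    logits = model(x)
    probs = torch.stack([logits[i].softmax(dim=-1) for i in exit_ids])
    return probs.mean(dim=0).argmax(dim=-1)
```

Freezing the backbone and retraining only a subset of exit heads is one way to realize the cost saving the abstract reports: the expensive shared features are reused, and training computation scales with the number of exits selected rather than with the whole network.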