Adjustable super-resolution network via deep supervised learning and progressive self-distillation

2022 
With the rise of convolutional neural networks, Single-Image Super-Resolution (SISR) has advanced dramatically in recent years. However, all of these models must keep the same structure during training and testing. This severely limits their flexibility, making it difficult to deploy the same model on platforms of different sizes (e.g., computers, smartphones, and embedded devices). It is therefore crucial to develop a model that can adapt to different needs without retraining. To this end, we propose a lightweight Adjustable Super-Resolution Network (ASRN). Specifically, ASRN consists of a series of Multi-scale Aggregation Blocks (MABs), lightweight and efficient modules designed specifically for feature extraction. A Deep Supervised Learning (DSL) strategy is introduced to guarantee the performance of each sub-network, and a novel Progressive Self-Distillation (PSD) strategy is proposed to further improve the model's intermediate results. With the help of the DSL and PSD strategies, ASRN achieves elastic image reconstruction. To our knowledge, ASRN is the first elastic SISR model, producing good results when its size is changed directly without retraining.
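To make the two training strategies concrete, the following is a minimal, hypothetical sketch of how deep supervision and progressive self-distillation losses could be combined for a network with multiple exits. All function names, the L1 loss choice, and the toy values are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of Deep Supervised Learning (DSL) and Progressive
# Self-Distillation (PSD) losses for an adjustable-depth SISR network.
# Names and the L1 loss are assumptions; the paper may define these differently.

def l1(a, b):
    """Mean absolute error between two flat lists of pixel values."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def dsl_loss(exit_outputs, target):
    # Deep supervision: every intermediate exit is compared against the
    # ground truth, so each sub-network (truncated at exit k) stays usable.
    return sum(l1(out, target) for out in exit_outputs)

def psd_loss(exit_outputs):
    # Progressive self-distillation: each shallower exit is pulled toward the
    # next deeper exit's output, propagating quality toward early exits.
    return sum(l1(exit_outputs[k], exit_outputs[k + 1])
               for k in range(len(exit_outputs) - 1))

# Toy example: three exits producing a 4-pixel "image"; deeper exits
# approximate the target more closely.
target = [1.0, 2.0, 3.0, 4.0]
exits = [
    [0.5, 1.5, 2.5, 3.5],      # shallowest sub-network
    [0.8, 1.8, 2.8, 3.8],      # intermediate sub-network
    [0.95, 1.95, 2.95, 3.95],  # full network
]

total = dsl_loss(exits, target) + psd_loss(exits)
```

At inference time, an elastic model of this kind would simply stop at whichever exit fits the target device's budget, since every exit was trained against the ground truth.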