NasmamSR: a fast image super-resolution network based on neural architecture search and multiple attention mechanism

2021 
Although current deep-learning-based super-resolution models achieve excellent reconstruction results, their increasing depth leads to a huge number of parameters, limiting the further application of deep super-resolution models. To address this problem, we propose an efficient super-resolution model based on neural architecture search and an attention mechanism. First, we use global residual learning to limit the search to the non-linear mapping part of the network, and we add a down-sampling operation to this part to reduce the feature-map size and the computational cost. Second, we establish a lightweight search space and a joint reward for searching for the optimal network structure. The model divides the search into a macro search and a micro search, which find the optimal down-sampling position and the optimal cell structure, respectively. In addition, we introduce a Bayesian algorithm for hyper-parameter tuning, further improving the performance of the optimal sub-network found by the search. Detailed experiments show that our model achieves excellent super-resolution performance and high computational efficiency compared with some state-of-the-art models.
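The structural idea in the abstract can be illustrated with a minimal NumPy sketch: with global residual learning, the network only has to predict a residual on top of a plain interpolation of the low-resolution input, and the learned branch can down-sample internally to cut computation. All function names here (`nearest_upsample`, `avg_downsample`, `nonlinear_branch`) are hypothetical stand-ins, not the paper's actual operators or searched cells.

```python
import numpy as np

def nearest_upsample(x, scale):
    # nearest-neighbor up-sampling: repeat pixels along height and width
    return x.repeat(scale, axis=0).repeat(scale, axis=1)

def avg_downsample(x, scale):
    # average-pool down-sampling to shrink the feature map
    h, w = x.shape
    return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def nonlinear_branch(x, scale):
    # hypothetical stand-in for the searched non-linear mapping part:
    # down-sample to reduce computation (the macro search would decide where),
    # apply a cheap non-linearity, then restore the target resolution
    low = avg_downsample(x, 2)
    low = np.maximum(low, 0.0)
    return nearest_upsample(low, 2 * scale)

def super_resolve(x, scale=2):
    # global residual learning: interpolated input plus a learned residual
    return nearest_upsample(x, scale) + nonlinear_branch(x, scale)

lr = np.random.rand(8, 8)
sr = super_resolve(lr, scale=2)
print(sr.shape)  # (16, 16)
```

Because the residual branch works at reduced resolution for most of its depth, its per-layer cost drops quadratically with the down-sampling factor, which is the efficiency argument the abstract makes.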