Direct Unsupervised Super-Resolution Using Generative Adversarial Network (DUS-GAN) for Real-World Data

2021 
Deep learning models for the Single Image Super-Resolution (SISR) task have achieved considerable success in recent years. However, one of the prime limitations of existing deep learning-based SISR approaches is that they require supervised training. Specifically, the Low-Resolution (LR) images are obtained through a known degradation (for instance, bicubic downsampling) from the High-Resolution (HR) images to provide supervised data as LR-HR pairs. Such training results in a domain shift of the learnt models when real-world data with multiple degradation factors not present in the training set is provided. To address this challenge, we propose an unsupervised approach for the SISR task using a Generative Adversarial Network (GAN), which we refer to hereafter as DUS-GAN. The novel design of the proposed method accomplishes the SR task without degradation estimation of real-world LR data. In addition, a new human-perception-based quality assessment loss, i.e., a Mean Opinion Score (MOS) loss, has also been introduced to boost the perceptual quality of the SR results. The pertinence of the proposed method is validated with numerous experiments on different reference-based (i.e., the NTIRE Real-world SR Challenge validation dataset) and no-reference-based (i.e., NTIRE Real-world SR Challenge Track-1 and Track-2) testing datasets. The experimental analysis demonstrates consistent improvement of the proposed method over other state-of-the-art unsupervised SR approaches, in terms of both subjective and quantitative evaluations on different reference metrics (i.e., LPIPS, PI-RMSE graph) and no-reference quality measures such as NIQE, BRISQUE and PIQE. We also provide the implementation of the proposed approach (https://github.com/kalpeshjp89/DUSGAN) to support reproducible research.
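The abstract describes combining an adversarial objective with an MOS-based perceptual-quality term to train the generator. A minimal sketch of how such a weighted composite generator loss might be assembled (the function name, loss terms, and weights below are illustrative assumptions, not the paper's actual formulation):

```python
def generator_loss(adv_loss, content_loss, mos_loss,
                   w_adv=0.005, w_content=1.0, w_mos=0.1):
    """Weighted sum of three per-batch scalar loss terms:
    - adv_loss:     adversarial loss from the discriminator
    - content_loss: pixel/feature fidelity term (e.g. L1)
    - mos_loss:     MOS-based perceptual-quality penalty
    The weights are illustrative placeholders, not the
    values used in DUS-GAN."""
    return w_adv * adv_loss + w_content * content_loss + w_mos * mos_loss

# Example: combine per-batch scalar losses into one objective
# that the generator's optimizer would minimize.
total = generator_loss(adv_loss=0.8, content_loss=0.05, mos_loss=0.3)
print(round(total, 4))
```

In practice each term would be computed by a network (discriminator, feature extractor, quality predictor) on the generated SR image; the sketch only shows the weighting step.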