Unsupervised Single Image Super-Resolution Network (USISResNet) for Real-World Data Using Generative Adversarial Network

2020 
Current state-of-the-art Single Image Super-Resolution (SISR) techniques rely largely on supervised learning, where Low-Resolution (LR) images are synthetically generated with a known degradation (e.g., bicubic downsampling). Deep learning models trained with such synthetic datasets generalize poorly to real-world or natural data, whose degradation characteristics cannot be fully modelled. Consequently, super-resolving real LR images with such models does not yield optimal Super-Resolution (SR) results. We propose a new SR approach, USISResNet, that mitigates this issue using unsupervised learning in a Generative Adversarial Network (GAN) framework. To produce SR images of high perceptual quality, we also introduce a new loss function based on the Mean Opinion Score (MOS). The effectiveness of the proposed architecture is validated through extensive experiments on the NTIRE 2020 Real-World SR Challenge validation set (Track-1) as well as the testing sets (Track-1 and Track-2). We demonstrate the generalization ability of the proposed network by evaluating it on real-world images, in contrast to other state-of-the-art methods that rely on synthetically downsampled LR images. The proposed network was further evaluated on the NTIRE 2020 Real-World SR Challenge dataset, where it achieved reliable accuracy.
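The abstract does not detail how the MOS-based loss is combined with the adversarial objective. The sketch below is a minimal, hypothetical PyTorch illustration of one common way such a scheme can be wired together: a small no-reference quality network predicts a MOS-like score for the generated SR image, and the generator is penalized for the gap between that prediction and the best attainable score. All class names, layer shapes, target score, and loss weights here are assumptions for illustration, not the authors' published implementation.

```python
# Hedged sketch only: the exact USISResNet losses and networks are not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MOSPredictor(nn.Module):
    """Hypothetical no-reference quality network mapping an image to a scalar MOS estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        feat = self.features(x).flatten(1)
        return self.head(feat)  # higher predicted score = better perceptual quality (assumed)

def mos_loss(mos_net, sr_images, target_mos=5.0):
    """Penalize the gap between the predicted MOS of SR outputs and the best score (assumed 5.0)."""
    predicted = mos_net(sr_images)
    return torch.mean((target_mos - predicted) ** 2)

def generator_loss(discriminator, mos_net, sr, hr=None, lambda_adv=5e-3, lambda_mos=1e-2):
    """Combine a non-saturating adversarial term, the MOS term, and an optional pixel-wise
    content term when paired data happens to be available; the weights are illustrative."""
    adv = torch.mean(F.softplus(-discriminator(sr)))
    loss = lambda_adv * adv + lambda_mos * mos_loss(mos_net, sr)
    if hr is not None:
        loss = loss + F.l1_loss(sr, hr)
    return loss
```

In practice, a loss of this form would be minimized jointly with the usual discriminator update; the MOS term steers the generator toward outputs that a human-opinion-trained quality model rates highly, which is the stated motivation for the MOS-based loss in the abstract.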