Pyramid Attention Dense Network for Image Super-Resolution

2019 
Recent deep convolutional neural networks have made remarkable progress in single-image super-resolution. They achieve high Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) by improved learning of high-frequency details that enhance visual perception. However, current models usually ignore the relations between adjacent pixels. In this work, we propose a network that incorporates the gradients of adjacent pixels in addition to per-pixel loss and perceptual loss. We also employ multi-stage network learning to progressively generate high-resolution images, introducing a new inter-stage feedback into the Laplacian pyramid network structure. Furthermore, we adopt the recently proposed attention mechanism and dense block structure. The proposed Pyramid Attention Dense model for image super-resolution achieves state-of-the-art performance in experiments on four benchmark datasets.
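The adjacent-pixel gradient term described above can be sketched as a loss that penalizes differences between the first-order pixel differences of the super-resolved image and those of the ground truth. This is a minimal NumPy illustration of the general idea; the function name and the exact formulation (L1 over horizontal and vertical differences) are assumptions, not the paper's published definition.

```python
import numpy as np

def gradient_loss(sr, hr):
    """Hypothetical sketch of an adjacent-pixel gradient loss:
    L1 distance between the first differences (gradients) of the
    super-resolved image `sr` and the ground-truth image `hr`."""
    # First differences along width (horizontal) and height (vertical)
    # capture relations between adjacent pixels.
    dx_sr, dy_sr = np.diff(sr, axis=1), np.diff(sr, axis=0)
    dx_hr, dy_hr = np.diff(hr, axis=1), np.diff(hr, axis=0)
    # Mean absolute difference of the gradient maps.
    return np.abs(dx_sr - dx_hr).mean() + np.abs(dy_sr - dy_hr).mean()
```

Such a term would typically be combined with the per-pixel and perceptual losses as a weighted sum; note that it is invariant to constant intensity shifts, since those leave adjacent-pixel differences unchanged.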