Semantic Perceptual Image Compression with a Laplacian Pyramid of Convolutional Networks

2019 
Recently, deep neural network (DNN)-based image compression methods have achieved impressive results. These methods generally train their networks on thumbnail images or on small patches cropped from high-resolution images. Instead of this patch-based training mode, we propose a novel DNN-based image compression framework in this paper. We apply the Laplacian pyramid to construct a multi-scale image representation. By learning increasingly detailed representations, the proposed method is able to progressively restore an image. Furthermore, we use adversarial networks during training to encourage high perceptual quality in the reconstructed image. Particularly at low bitrates, our model can store only the global semantics of an image and automatically synthesize texture to achieve high subjective quality. Experimental results demonstrate that our method achieves state-of-the-art performance, with advantages over existing methods in terms of visual quality.
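The abstract's multi-scale representation is the classical Laplacian pyramid: each level stores the high-frequency residual lost by downsampling, and reconstruction proceeds coarse-to-fine, which is what allows progressive restoration. The sketch below illustrates that decomposition only; the pooling/upsampling choices and function names are illustrative assumptions, not the authors' learned networks or exact implementation.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(img, levels=3):
    """Decompose an image tensor (N, C, H, W) into a Laplacian pyramid.

    Each level keeps the detail (residual) lost when the image is
    downsampled; the last entry is the coarsest low-pass image.
    (Illustrative sketch; the paper replaces fixed filters with
    learned convolutional networks.)
    """
    pyramid = []
    current = img
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)            # low-pass, half resolution
        up = F.interpolate(down, size=current.shape[-2:],
                           mode="bilinear", align_corners=False)
        pyramid.append(current - up)                           # high-frequency residual
        current = down
    pyramid.append(current)                                    # coarsest approximation
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition coarse-to-fine, progressively restoring detail."""
    img = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        img = F.interpolate(img, size=residual.shape[-2:],
                            mode="bilinear", align_corners=False) + residual
    return img

# Example: a random 1x3x256x256 image round-trips through the pyramid.
x = torch.rand(1, 3, 256, 256)
pyr = laplacian_pyramid(x, levels=3)
assert torch.allclose(reconstruct(pyr), x, atol=1e-6)
```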