Domain Adaptation for Synthesis of Hazy Images

2021 
Most existing learning-based image dehazing methods generalize poorly to real hazy images. An important reason is that they are trained on synthetic hazy images whose distribution differs from that of real hazy images. To alleviate this issue, this paper proposes a new hazy scene generation model based on domain adaptation, which uses a variational autoencoder to encode synthetic hazy image pairs and real hazy images into a shared latent space for adaptation. The synthetic hazy image pairs guide the model to learn the mapping from clear images to hazy images, while the real hazy images are used, through a generative adversarial loss, to adapt the latent space of the synthetic hazy images toward that of the real hazy images, so that the distribution of the generated hazy images is as close as possible to that of real hazy images. Compared with a traditional physical scattering model and Adobe Lightroom CC, the hazy images generated by our model are more realistic. Our end-to-end domain adaptation model also synthesizes hazy images conveniently, without requiring depth maps. When a traditional dehazing method is applied to the hazy images synthesized by our model, both SSIM and PSNR improve, which demonstrates the effectiveness of our method. A non-reference haze density evaluation algorithm and other quantitative evaluations further illustrate the advantages of our method for synthesizing hazy images.
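To make the described architecture concrete, the sketch below shows one possible PyTorch layout under the stated idea: a VAE encoder/decoder learns the clear-to-hazy mapping from paired synthetic data, while a latent-space discriminator supplies the adversarial loss that pulls synthetic latents toward the distribution of real hazy images. This is a minimal illustrative sketch, not the authors' code; all module names, layer sizes, and loss weights are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Encode a 3-channel image into latent mean and log-variance maps (illustrative sizes)."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.to_mu = nn.Conv2d(64, latent_ch, 3, padding=1)
        self.to_logvar = nn.Conv2d(64, latent_ch, 3, padding=1)

    def forward(self, x):
        h = self.features(x)
        return self.to_mu(h), self.to_logvar(h)

class Decoder(nn.Module):
    """Decode a latent map back to a (hazy) image."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

class LatentDiscriminator(nn.Module):
    """Judge whether a latent map came from a real hazy image or a synthetic one."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)  # patch-level real/synthetic logits

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def generator_step(enc, dec, disc, clear, synth_hazy, kl_weight=1e-3, adv_weight=1e-2):
    """One illustrative generator-side step: paired synthetic data supervises the
    clear -> hazy reconstruction, and the adversarial term encourages the synthetic
    latents to look like latents of real hazy images (discriminator trained separately
    on encodings of real hazy images vs. synthetic ones)."""
    mu_s, logvar_s = enc(clear)
    z_s = reparameterize(mu_s, logvar_s)
    recon = dec(z_s)
    rec_loss = F.l1_loss(recon, synth_hazy)
    kl_loss = -0.5 * torch.mean(1 + logvar_s - mu_s.pow(2) - logvar_s.exp())

    # Adversarial term: make synthetic latents indistinguishable from real-hazy latents.
    adv_logits = disc(z_s)
    adv_loss = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))

    return rec_loss + kl_weight * kl_loss + adv_weight * adv_loss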