Depth-guided learning light field angular super-resolution with edge-aware inpainting

2021 
High angular resolution light fields (LFs) enable exciting applications such as depth estimation, virtual reality, and augmented reality. Although many light field angular super-resolution methods have been proposed, reconstructing LFs with a wide baseline remains far from solved. In this paper, we propose an end-to-end learning-based approach to angular super-resolution of wide-baseline light fields. Our model consists of three components. First, we train a convolutional neural network to predict a depth map for each sub-aperture view. The estimated depth maps are then used to warp the input views to the target angular positions. In the final component, a convolutional neural network fuses the initial warped light fields, and an edge-aware inpainting network corrects the inaccurate pixels in near-edge regions. To this end, we design an EdgePyramid structure that contains multi-scale edges to guide the inpainting of near-edge pixels. Moreover, we introduce a novel loss function that reduces artifacts and better measures similarity in near-edge regions. Experimental results on various light field datasets, including large-baseline light field images, show that our method outperforms state-of-the-art light field angular super-resolution methods, especially in terms of visual quality near edges.
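The second component, depth-guided view warping, follows the standard disparity-based backward-warping formulation: a pixel (x, y) in the target view at angular offset (du, dv) from a source view is sampled from (x + du·d, y + dv·d) in that source, where d is the estimated disparity. The sketch below is a minimal PyTorch illustration of this step only; the function name `warp_view` and the (du, dv) parameterization are illustrative assumptions, as the abstract does not give implementation details.

```python
import torch
import torch.nn.functional as F

def warp_view(src_view, disparity, du, dv):
    """Backward-warp a source sub-aperture view to a target angular
    position offset by (du, dv), guided by a per-pixel disparity map
    estimated for the target view. (Illustrative sketch, not the
    authors' code.)

    src_view:  (B, C, H, W) source sub-aperture image
    disparity: (B, 1, H, W) disparity map for the target view
    du, dv:    angular offsets (target minus source), in view units
    """
    B, _, H, W = src_view.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=src_view.dtype, device=src_view.device),
        torch.arange(W, dtype=src_view.dtype, device=src_view.device),
        indexing="ij",
    )
    # Shift sampling positions by disparity scaled by the angular baseline.
    x_src = xs + du * disparity[:, 0]
    y_src = ys + dv * disparity[:, 0]
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * x_src / (W - 1) - 1.0, 2.0 * y_src / (H - 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(src_view, grid, align_corners=True,
                         padding_mode="border")
```

The abstract describes the EdgePyramid only as a structure containing multi-scale edges. One plausible construction, shown below, stacks Sobel edge magnitudes over a downsampled image pyramid; the choice of Sobel kernels and the level count are assumptions for illustration, not the authors' stated design.

```python
import torch
import torch.nn.functional as F

def edge_pyramid(image, num_levels=3):
    """Build multi-scale edge maps at successively halved resolutions.
    Hypothetical stand-in for the paper's EdgePyramid: Sobel magnitude
    is used here as an assumed edge detector.

    image: (B, 1, H, W) grayscale view
    returns: list of (B, 1, H/2^k, W/2^k) edge maps, k = 0..num_levels-1
    """
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]],
                           dtype=image.dtype,
                           device=image.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    pyramid, level = [], image
    for _ in range(num_levels):
        gx = F.conv2d(level, sobel_x, padding=1)
        gy = F.conv2d(level, sobel_y, padding=1)
        pyramid.append(torch.sqrt(gx ** 2 + gy ** 2 + 1e-8))
        level = F.avg_pool2d(level, kernel_size=2)  # halve resolution
    return pyramid
```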