Unsupervised disparity estimation from light field using plug-and-play weighted warping loss

2022 
We investigated disparity estimation from a light field using a convolutional neural network (CNN). Most existing methods implement a supervised learning framework, in which the predicted disparity map is compared directly to the corresponding ground-truth disparity map during training. However, light field data accompanied by ground-truth disparity maps are insufficient and rarely available for real-world scenes. This lack of training data limits the generality of the methods trained on them. To tackle this problem, we took a simple plug-and-play approach to remake a supervised method into an unsupervised (self-supervised) one. We replaced the loss function of the original method with one that does not depend on ground-truth disparity maps. More specifically, our loss function is designed to indirectly evaluate the accuracy of the disparity map by using warping errors among the input light field views. We designed pixel-wise weights to properly evaluate the warping errors in the presence of occlusions, and an edge loss to encourage edge alignment between the image and the disparity map. Thanks to this unsupervised learning framework, our method can use more abundant training datasets (even those without ground-truth disparity maps) than the original supervised method. Our method was evaluated on computer-generated scenes (4D Light Field Benchmark) and real-world scenes captured by Lytro Illum cameras. It achieved state-of-the-art performance among unsupervised methods on the benchmark. We also demonstrated that it estimates disparity maps more accurately than the original supervised method on various real-world scenes.
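The core idea, a warping loss that scores a predicted disparity map by how well side views re-project onto the centre view, can be sketched in a few lines. The snippet below is a minimal PyTorch illustration, not the paper's exact formulation: the function names (`warp_view`, `weighted_warping_loss`), the softmax-based occlusion weights, and the sign convention for the angular offsets are assumptions for the example, and the edge loss is omitted.

```python
# Minimal sketch of a weighted warping loss for unsupervised light-field
# disparity estimation (assumed PyTorch setup; not the paper's exact loss).
import torch
import torch.nn.functional as F


def warp_view(view, disparity, du, dv):
    """Warp a side view toward the centre view using the predicted disparity.

    view:      (B, C, H, W) sub-aperture image at angular offset (du, dv)
    disparity: (B, 1, H, W) disparity predicted for the centre view
    The sign of (du, dv) follows the usual light-field parameterization and
    may need flipping depending on the dataset convention.
    """
    b, _, h, w = view.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=view.device, dtype=view.dtype),
        torch.arange(w, device=view.device, dtype=view.dtype),
        indexing="ij",
    )
    # Shift sampling positions proportionally to disparity and angular offset.
    xs = xs.unsqueeze(0) + du * disparity[:, 0]
    ys = ys.unsqueeze(0) + dv * disparity[:, 0]
    # Normalise to [-1, 1] as required by grid_sample.
    grid = torch.stack((2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1)
    return F.grid_sample(view, grid, align_corners=True, padding_mode="border")


def weighted_warping_loss(center, side_views, offsets, disparity):
    """Photometric warping loss with per-pixel weights for occlusions.

    side_views: list of (B, C, H, W) tensors
    offsets:    list of (du, dv) angular offsets of each side view
    """
    errors = []
    for view, (du, dv) in zip(side_views, offsets):
        warped = warp_view(view, disparity, du, dv)
        errors.append((warped - center).abs().mean(dim=1, keepdim=True))
    errors = torch.cat(errors, dim=1)  # (B, V, H, W): one error map per view
    # Illustrative occlusion handling (an assumption, not the paper's weights):
    # down-weight views whose warping error is large relative to the other
    # views at the same pixel, since those pixels are likely occluded there.
    weights = torch.softmax(-errors / 0.1, dim=1).detach()
    return (weights * errors).sum(dim=1).mean()
```

In use, `disparity` would be the output of the CNN for the centre view, and the loss would be backpropagated through the warping, so no ground-truth disparity map is needed.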