Edge-preserving Smoothing Regularization for Monocular Depth Estimation

2021 
Monocular depth estimation has been a fundamental challenge since the early days of computer vision and has many real-world applications. Recently, the introduction of deep convolutional neural networks (CNNs) has brought significant improvements to this problem. Many solutions for scene depth estimation focus on obtaining high-quality depth maps from a given RGB image. Inserting prior information by adding a smoothing regularization term has improved the results; however, the smoothing of surfaces comes with a certain degradation of the edges. The goal of this paper is to compare various regularization terms used in either supervised or self-supervised learning methods. In addition, we modify the regularization term currently used in self-supervised methods so that it works in a supervised setting. Experimental results on NYU-Depth v2 show that the regularization based on the L1 norm of the gradient is the best and that the modified self-supervised term outperforms the rest. Finally, rather than relying only on common evaluation metrics, we use an additional accuracy measure for the edges of the estimated depth maps, based on the Steerable Pyramid and the Kullback-Leibler divergence (KLD), which is more sensitive to positional errors of the edges.
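For orientation, the two families of smoothing terms discussed above are commonly written as a plain L1 penalty on the depth gradients and as an image-gradient-weighted, edge-aware variant popularized by self-supervised methods. The following is a minimal PyTorch sketch of both; the function names, the exponential weighting, and the per-channel averaging reflect common practice and are assumptions for illustration, not the paper's exact definitions.

```python
import torch

def l1_gradient_smoothness(depth):
    """Plain L1 norm of the depth gradients (finite differences).

    depth: tensor of shape (N, 1, H, W).
    """
    dx = torch.abs(depth[:, :, :, 1:] - depth[:, :, :, :-1])
    dy = torch.abs(depth[:, :, 1:, :] - depth[:, :, :-1, :])
    return dx.mean() + dy.mean()

def edge_aware_smoothness(depth, image):
    """Edge-aware variant: depth gradients are down-weighted by
    exp(-|image gradient|), so the penalty is relaxed at image edges.

    depth: (N, 1, H, W), image: (N, 3, H, W), both aligned.
    """
    d_dx = torch.abs(depth[:, :, :, 1:] - depth[:, :, :, :-1])
    d_dy = torch.abs(depth[:, :, 1:, :] - depth[:, :, :-1, :])
    # Average the image gradient magnitude over color channels.
    i_dx = torch.mean(torch.abs(image[:, :, :, 1:] - image[:, :, :, :-1]),
                      dim=1, keepdim=True)
    i_dy = torch.mean(torch.abs(image[:, :, 1:, :] - image[:, :, :-1, :]),
                      dim=1, keepdim=True)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()
```

In training, such a term is typically added to the data loss as L = L_data + lambda * L_smooth, with lambda a small weight; the weighting scheme here is an assumption about typical usage, not a value taken from the paper.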