3D Face Reconstruction and Semantic Annotation from Single Depth Image

2020 
We introduce a novel data-driven approach that takes a single-view noisy depth image as input and infers a detailed 3D face with per-pixel semantic labels. The key strength of our method is its ability to handle depth completion across varying extents of geometric detail, estimating expressive 3D faces by combining a low-dimensional linear subspace with dense displacement-field-based non-rigid deformations. We devise a deep neural network-based coarse-to-fine framework for 3D face reconstruction and semantic annotation that produces high-quality facial geometry while preserving large-scale context and semantics. We evaluate the semantic consistency constraint and the generative model for 3D face reconstruction and depth annotation in an extensive series of experiments. The results demonstrate that the proposed approach outperforms the compared methods not only in face reconstruction with high-quality geometric details, but also in semantic annotation performance with respect to segmentation and landmark localization.
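The coarse-to-fine geometry described above can be sketched in a minimal form: a coarse face is expressed as a point in a low-dimensional linear subspace (a 3DMM-style model), then refined by a dense per-vertex displacement field. All dimensions, variable names, and the random placeholders below are illustrative assumptions, not the paper's actual model or trained network.

```python
import numpy as np

# Hypothetical sizes (not from the paper): N mesh vertices, K basis components.
N, K = 5000, 80

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((N, 3))           # mean face geometry
basis = rng.standard_normal((N, 3, K)) * 0.01      # linear shape/expression basis
coeffs = rng.standard_normal(K)                    # low-dimensional coefficients

# Coarse stage: the face is a point in a low-dimensional linear subspace.
coarse = mean_shape + basis @ coeffs               # (N, 3)

# Fine stage: a dense per-vertex displacement field adds non-rigid detail
# (in the paper, such a field would be regressed by the network).
displacement = rng.standard_normal((N, 3)) * 1e-3
fine = coarse + displacement                       # (N, 3)

print(coarse.shape, fine.shape)
```

This separation is what lets the coarse stage capture large-scale identity and expression robustly from noisy depth, while the displacement field recovers fine geometric detail the linear subspace cannot represent.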