Robust and Accurate 3D Self-Portraits in Seconds

2021 
In this paper, we propose an efficient method for robust and accurate 3D self-portraits using a single RGBD camera. Our method generates detailed and realistic 3D self-portraits in seconds and can handle subjects wearing extremely loose clothes. To achieve highly efficient and robust reconstruction, we propose PIFusion, which combines learning-based 3D recovery with volumetric non-rigid fusion to generate accurate sparse partial scans of the subject. Meanwhile, a non-rigid volumetric deformation method is proposed to continuously refine the learned shape prior. Moreover, a lightweight bundle adjustment algorithm is proposed to guarantee that all the partial scans not only form loop closures with each other but also remain consistent with selected live key observations. Finally, to generate even more realistic portraits, we propose a non-rigid texture optimization method to improve texture quality. Additionally, we contribute a benchmark for single-view 3D self-portrait reconstruction: an evaluation dataset containing 10 single-view RGBD sequences of self-rotating performers wearing various clothes, together with the ground-truth 3D model for the first frame of each sequence. Results and experiments on this dataset show that the proposed method outperforms state-of-the-art methods in accuracy, efficiency, and generality.
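The abstract names volumetric non-rigid fusion as the geometric core of PIFusion. As a minimal, self-contained illustration of how such fusion typically warps geometry into the live frame, the sketch below implements a DynamicFusion-style embedded-deformation warp in NumPy. This is a generic instance of the technique, not the paper's published API: the function name warp_points, the dense Gaussian weighting, and the parameter sigma are illustrative assumptions, and the paper's exact parameterization may differ.

```python
import numpy as np

def warp_points(points, nodes, node_rots, node_trans, sigma=0.05):
    """Warp points with an embedded deformation graph (DynamicFusion-style).

    Each point is moved by a weighted blend of per-node rigid transforms,
    with Gaussian weights based on its distance to the graph nodes.

    points:     (P, 3) points to warp
    nodes:      (N, 3) deformation-graph node positions
    node_rots:  (N, 3, 3) per-node rotation matrices
    node_trans: (N, 3) per-node translations
    sigma:      influence radius of each node (illustrative value)
    """
    # Squared distances between every point and every node: (P, N)
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12  # normalized skinning weights

    # Apply each node's rigid transform about the node's own position,
    # then blend: x' = sum_j w_j * (R_j (x - g_j) + g_j + t_j)
    local = points[:, None, :] - nodes[None, :, :]           # (P, N, 3)
    rotated = np.einsum('nij,pnj->pni', node_rots, local)    # (P, N, 3)
    per_node = rotated + nodes[None, :, :] + node_trans[None, :, :]
    return (w[:, :, None] * per_node).sum(axis=1)            # (P, 3)

if __name__ == "__main__":
    # Sanity check: an identity graph (no rotation, no translation)
    # must leave the points unchanged.
    pts = np.random.rand(5, 3)
    nodes = np.random.rand(4, 3)
    R = np.tile(np.eye(3), (4, 1, 1))
    t = np.zeros((4, 3))
    assert np.allclose(warp_points(pts, nodes, R, t), pts)
```

Normalizing the Gaussian weights keeps the blended warp smooth across node boundaries; practical fusion systems usually restrict each point to its k nearest nodes for sparsity and solve for the per-node transforms by minimizing a depth-alignment energy, which the sketch above deliberately omits.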