Estimation of 3D human pose using prior knowledge

2021 
Estimating three-dimensional (3D) human poses from the positions of two-dimensional (2D) joints has shown promising results. However, using 2D joint coordinates as input discards more information than image-based approaches and introduces ambiguity. To overcome this problem, we combine bone lengths and camera parameters with the 2D joint coordinates as input. This combination is more discriminative than 2D joint coordinates alone: it improves the accuracy of the model's depth predictions and alleviates the ambiguity that arises from projecting 3D coordinates into 2D space. Furthermore, we introduce direction constraints, which better measure the difference between the ground truth and the output of the proposed model. Experimental results on the Human3.6M dataset show that the proposed method outperforms other state-of-the-art 3D human pose estimation approaches. The code is available at: https://github.com/XTU-PR-LAB/ExtraPose/.
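The abstract gives no implementation details, so the PyTorch sketch below is only one plausible reading of its two ideas: (a) concatenating bone lengths and camera intrinsics with the 2D joint coordinates to form the network input, and (b) expressing a direction constraint as a cosine penalty on bone vectors. The function names, tensor shapes, and the (1 − cosine) form of the loss are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def build_input_features(joints_2d, bone_lengths, cam_intrinsics):
    """Hypothetical input encoding: concatenate 2D joints, bone lengths, and
    camera parameters into a single feature vector per sample.

    joints_2d:      (B, J, 2) pixel coordinates of J joints
    bone_lengths:   (B, J-1)  length of each bone in the kinematic tree
    cam_intrinsics: (B, 4)    e.g. focal lengths and principal point (fx, fy, cx, cy)
    """
    B = joints_2d.shape[0]
    return torch.cat([joints_2d.reshape(B, -1), bone_lengths, cam_intrinsics], dim=1)

def direction_loss(pred_3d, gt_3d, parent_idx):
    """Hypothetical direction constraint: penalize the angular difference
    between predicted and ground-truth bone direction vectors.

    pred_3d, gt_3d: (B, J, 3) 3D joint positions
    parent_idx:     (J,) long tensor; parent_idx[j] is the parent joint of j
    """
    child = torch.arange(1, pred_3d.shape[1])          # skip the root joint
    pred_dir = pred_3d[:, child] - pred_3d[:, parent_idx[child]]
    gt_dir = gt_3d[:, child] - gt_3d[:, parent_idx[child]]
    cos = F.cosine_similarity(pred_dir, gt_dir, dim=-1)  # (B, J-1)
    return (1.0 - cos).mean()
```

In such a setup, the direction loss would typically be added to a standard joint-position loss (e.g. MSE) with a weighting factor; the weighting and the exact combination used in the paper are not stated in the abstract.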