First-Person View Hand Parameter Estimation Based on Fully Convolutional Neural Network

2020 
In this paper, we propose a real-time framework that can not only estimate the locations of hands within an RGB image but also simultaneously recover their 3D joint coordinates and determine whether each hand is left or right. Most recent methods for hand pose analysis from monocular images focus only on the 3D coordinates of hand joints, which does not tell the full story to users or applications. Moreover, to meet the demands of applications such as virtual reality or augmented reality, a first-person-viewpoint hand pose dataset is needed to train our proposed CNN. We therefore collect a synthetic RGB dataset captured from an egocentric view with the help of Unity, a 3D engine. The synthetic dataset is composed of hands with various postures, skin colors, and sizes. For each hand within an image, we provide 21 joint annotations, including 3D coordinates, 2D locations, and the corresponding hand side (left or right).
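To make the described annotation scheme concrete, here is a minimal sketch of a per-hand annotation record in Python. Only the facts from the abstract are assumed (21 joints per hand, 3D coordinates, 2D pixel locations, and a left/right label); the class and field names, array shapes, and file format are hypothetical, as the paper's actual data layout is not specified here.

```python
from dataclasses import dataclass
from enum import Enum

import numpy as np

NUM_JOINTS = 21  # per-hand joint count stated in the abstract


class HandSide(Enum):
    LEFT = 0
    RIGHT = 1


@dataclass
class HandAnnotation:
    """Annotation for one hand in a single egocentric RGB frame (hypothetical layout)."""
    joints_3d: np.ndarray  # shape (21, 3): 3D joint coordinates
    joints_2d: np.ndarray  # shape (21, 2): 2D pixel locations in the image
    side: HandSide         # left- or right-hand label

    def __post_init__(self) -> None:
        # Validate the shapes implied by the 21-joint annotation scheme.
        assert self.joints_3d.shape == (NUM_JOINTS, 3)
        assert self.joints_2d.shape == (NUM_JOINTS, 2)


# Example: a dummy annotation for a right hand.
ann = HandAnnotation(
    joints_3d=np.zeros((NUM_JOINTS, 3)),
    joints_2d=np.zeros((NUM_JOINTS, 2)),
    side=HandSide.RIGHT,
)
```

A network trained on such records would regress `joints_3d` and `joints_2d` and classify `side` jointly, matching the three outputs the framework is described as producing.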