Robust Driver Head Pose Estimation in Naturalistic Conditions from Point-Cloud Data

2020 
Head pose estimation has long been a key task in computer vision, since a broad range of applications requires accurate information about the orientation of the head. Achieving this goal with regular RGB cameras is challenging in automotive applications due to occlusions, extreme head poses, and sudden changes in illumination. Most of these challenges can be attenuated with algorithms that rely on depth cameras. This paper proposes a novel point-cloud-based deep learning approach to estimate the driver's head pose from depth camera data, addressing these challenges. The proposed algorithm is inspired by the PointNet++ framework, where points are sampled and grouped before extracting discriminative features. We demonstrate the effectiveness of our algorithm by evaluating our approach on a naturalistic driving database consisting of 22 drivers, where the benchmark for the orientation of the driver's head is obtained with the Fi-Cap device. The experimental evaluation demonstrates that our proposed approach relying on point-cloud data achieves predictions that are almost always more reliable than state-of-the-art head pose estimation methods based on regular cameras. Furthermore, our approach provides predictions even for extreme rotations, which is not the case for the baseline methods. To the best of our knowledge, this is the first study to propose head pose estimation using deep learning on point-cloud data.
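To make the "sample and group" idea concrete, the sketch below shows a minimal PointNet++-style set-abstraction stage (farthest point sampling, neighbor grouping, a shared MLP with max pooling) followed by a small regressor that outputs three head rotation angles. This is not the authors' implementation; the single-stage design, layer sizes, neighborhood size, and k-nearest-neighbor grouping are assumptions made purely for illustration.

```python
# Illustrative sketch only: a PointNet++-style sample-and-group stage with a
# head pose regressor. Architecture choices here are assumptions, not the
# paper's actual network.
import torch
import torch.nn as nn


def farthest_point_sample(xyz, n_samples):
    """Iteratively pick points that are far from those already chosen.
    xyz: (B, N, 3) point cloud; returns indices of shape (B, n_samples)."""
    B, N, _ = xyz.shape
    idx = torch.zeros(B, n_samples, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), float("inf"), device=xyz.device)
    farthest = torch.zeros(B, dtype=torch.long, device=xyz.device)
    batch = torch.arange(B, device=xyz.device)
    for i in range(n_samples):
        idx[:, i] = farthest
        centroid = xyz[batch, farthest].unsqueeze(1)              # (B, 1, 3)
        dist = torch.minimum(dist, ((xyz - centroid) ** 2).sum(-1))
        farthest = dist.argmax(-1)
    return idx


def group_knn(xyz, centroids, k):
    """Group the k nearest neighbors of each sampled centroid and express
    them in local coordinates relative to that centroid."""
    d = torch.cdist(centroids, xyz)                               # (B, S, N)
    knn_idx = d.topk(k, largest=False).indices                    # (B, S, k)
    B = xyz.size(0)
    batch = torch.arange(B, device=xyz.device).view(B, 1, 1)
    grouped = xyz[batch, knn_idx]                                 # (B, S, k, 3)
    return grouped - centroids.unsqueeze(2)


class HeadPosePointNet(nn.Module):
    """One set-abstraction level (sample, group, shared MLP, max-pool)
    plus a regressor producing three rotation angles (yaw, pitch, roll)."""
    def __init__(self, n_samples=128, k=32):
        super().__init__()
        self.n_samples, self.k = n_samples, k
        self.local_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.global_mlp = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 3),                # yaw, pitch, roll
        )

    def forward(self, xyz):                   # xyz: (B, N, 3)
        idx = farthest_point_sample(xyz, self.n_samples)
        batch = torch.arange(xyz.size(0), device=xyz.device).unsqueeze(1)
        centroids = xyz[batch, idx]                               # (B, S, 3)
        grouped = group_knn(xyz, centroids, self.k)               # (B, S, k, 3)
        local_feat = self.local_mlp(grouped).max(dim=2).values    # (B, S, 128)
        global_feat = local_feat.max(dim=1).values                # (B, 128)
        return self.global_mlp(global_feat)                       # (B, 3)


if __name__ == "__main__":
    cloud = torch.randn(2, 1024, 3)           # two random depth-derived clouds
    print(HeadPosePointNet()(cloud).shape)    # torch.Size([2, 3])
```

In practice such a network would be trained with a regression loss (e.g., mean absolute error) against head orientation labels such as those provided by the Fi-Cap device; that training setup is likewise only sketched here.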