Discriminative Recognition of Point Cloud Gesture Classes through One-Shot Learning

2019 
In this paper, we introduce a one-shot learning approach that extends learned point cloud gesture categories to newly introduced categories after the original model has been trained. The approach learns the discrimination between gesture classes of point cloud data, making it possible to recognize new classes without retraining the original deep neural network (DNN) model. We develop a temporal variant of the PointNet model, referred to as Temporal PointNet or TPoinNet, which consumes a sequence of raw point clouds from a low-cost depth sensor and outputs the class of the sequence. We use a multitask strategy in which the model jointly learns to classify a point cloud sequence and to discriminate, in Euclidean space, between the classified sequence and a sequence of a different class. The resulting model can both classify point cloud sequence inputs and map them into a Euclidean space where the distances between gesture sequences correspond to gesture similarities. We present results on a point cloud dataset and on the MSR Action 3D dataset, showing discrimination of new gesture categories with high precision.
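The abstract describes a multitask objective combining sequence classification with Euclidean-space discrimination between sequences of different classes. The sketch below is a minimal illustration of that idea, not the paper's implementation: the encoder `GestureEmbedNet`, its layer sizes, the GRU-based temporal aggregation, and the contrastive margin are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GestureEmbedNet(nn.Module):
    """Hypothetical stand-in for a temporal PointNet-style encoder: consumes a
    sequence of raw point clouds and returns class logits plus a Euclidean
    embedding of the whole sequence."""
    def __init__(self, embed_dim=64, num_classes=10):
        super().__init__()
        # Shared per-point MLP (PointNet-style), applied to each frame.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Temporal aggregation over per-frame features (assumed GRU here).
        self.temporal = nn.GRU(input_size=128, hidden_size=128, batch_first=True)
        self.embed_head = nn.Linear(128, embed_dim)
        self.cls_head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, num_points, 3) raw point cloud sequence
        feats = self.point_mlp(x)                 # (batch, seq_len, num_points, 128)
        frame_feats = feats.max(dim=2).values     # symmetric max-pool over points
        _, h = self.temporal(frame_feats)         # h: (1, batch, 128)
        emb = self.embed_head(h.squeeze(0))       # Euclidean embedding of the sequence
        logits = self.cls_head(emb)
        return logits, emb

def multitask_loss(logits, emb_a, emb_b, labels, same_class, margin=1.0):
    """Classification loss plus a contrastive term that pulls same-class
    sequence embeddings together and pushes different-class ones apart."""
    ce = F.cross_entropy(logits, labels)
    dist = F.pairwise_distance(emb_a, emb_b)
    contrastive = torch.where(
        same_class,
        dist.pow(2),
        F.relu(margin - dist).pow(2),
    ).mean()
    return ce + contrastive

# Usage sketch on random data: a batch of 4 sequences of 16 frames, 256 points each.
model = GestureEmbedNet()
seq_a = torch.randn(4, 16, 256, 3)
seq_b = torch.randn(4, 16, 256, 3)
logits_a, emb_a = model(seq_a)
_, emb_b = model(seq_b)
labels = torch.randint(0, 10, (4,))
same_class = torch.zeros(4, dtype=torch.bool)   # here all pairs are different-class
loss = multitask_loss(logits_a, emb_a, emb_b, labels, same_class)
```

Once such an embedding space is trained, a new gesture class can in principle be recognized from a single example by nearest-neighbor matching in that space, without retraining the network, which is the one-shot recognition setting the abstract targets.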