Human Action Recognition via Learning Joint Points Information toward Big AI system

2019 
Abstract — Human action recognition plays an important role in modern intelligent systems, such as video surveillance, somatosensory games, and action analysis. However, it remains a challenging task due to sophisticated environments. To improve the performance of human action recognition, in this paper we propose a novel method for recognizing human actions in sophisticated sports scenes based on learning human joint point information for big data. More specifically, we first leverage a Kinect device to acquire human joint point information under big data and construct 3-D spatial vectors to describe that information. Afterward, we calculate the angles and length ratios of the spatial vectors to describe human actions. Subsequently, a human action similarity index (ASIM) is proposed to measure the distance between the test action template and the reference action template. Our proposed method can automatically recognize different human actions in big data. Five human action datasets are used in our experiments, and the experimental results demonstrate the effectiveness of the proposed method.
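The pipeline described in the abstract (joint points → 3-D spatial vectors → angles and ratios → template matching) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the joint positions are hypothetical sample data, and since the abstract does not give the ASIM formula, a plain Euclidean distance between feature vectors is used as a stand-in for the template comparison.

```python
import numpy as np

def joint_vector(p_start, p_end):
    # 3-D spatial vector pointing from one joint position to another
    return np.asarray(p_end, dtype=float) - np.asarray(p_start, dtype=float)

def joint_angle(v1, v2):
    # angle (radians) between two joint vectors
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def length_ratio(v1, v2):
    # ratio of the two vectors' lengths
    return np.linalg.norm(v1) / np.linalg.norm(v2)

def template_distance(test_feats, ref_feats):
    # placeholder for the paper's ASIM: Euclidean distance between the
    # angle/ratio feature vectors of the test and reference templates
    return np.linalg.norm(np.asarray(test_feats) - np.asarray(ref_feats))

# Hypothetical joint positions (e.g., shoulder, elbow, wrist) in 3-D space
shoulder, elbow, wrist = [0.0, 1.5, 0.0], [0.3, 1.2, 0.1], [0.6, 1.4, 0.1]
upper_arm = joint_vector(shoulder, elbow)
forearm = joint_vector(elbow, wrist)

test_feats = [joint_angle(upper_arm, forearm), length_ratio(upper_arm, forearm)]
ref_feats = [1.2, 1.0]  # hypothetical reference template features
print(template_distance(test_feats, ref_feats))
```

A smaller distance indicates that the test action more closely matches the reference template; the actual ASIM would replace `template_distance` with the measure defined in the paper.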