MOPED25: A multimodal dataset of full-body pose and motion in occupational tasks

2020 
Abstract In recent years, the workplace safety and ergonomics community has increasingly used images and deep neural network-based computer vision algorithms to perform postural evaluation. The performance of these algorithms, however, depends heavily on the generalizability of the posture dataset used for training. Current open-access posture datasets from the computer vision community focus mainly on the pose and motion of daily activities and lack workplace context. This study presents a new posture dataset, MOPED25 (Multimodal Occupational Posture Dataset with 25 tasks). The dataset includes full-body kinematics data and synchronized videos of 11 participants performing tasks commonly seen in workplaces. All data have been made publicly available online. This dataset can serve as a benchmark for developing more robust computer vision algorithms for postural evaluation in workplaces.
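The abstract notes that the dataset pairs full-body kinematics with synchronized video. The paper does not specify file formats or sampling rates here, so the following is only a hedged sketch of one common way to align two such streams: nearest-timestamp matching between a higher-rate motion-capture stream and lower-rate video frames. The rates (100 Hz mocap, 30 fps video) and the function name are illustrative assumptions, not details from the dataset.

```python
import numpy as np

def align_pose_to_video(pose_times, video_times):
    """For each video frame timestamp, return the index of the
    nearest pose sample (nearest-neighbor synchronization).

    Assumes both timestamp arrays are sorted ascending and share
    a common clock -- an assumption, not a stated property of MOPED25.
    """
    pose_times = np.asarray(pose_times)
    video_times = np.asarray(video_times)
    # Insertion points of each video timestamp into the pose timeline
    idx = np.searchsorted(pose_times, video_times)
    idx = np.clip(idx, 1, len(pose_times) - 1)
    # Pick whichever neighbor (left or right) is closer in time
    left = pose_times[idx - 1]
    right = pose_times[idx]
    idx -= (video_times - left) < (right - video_times)
    return idx

# Hypothetical rates: 100 Hz kinematics, 30 fps video, 1 s of data
pose_t = np.arange(0, 1.0, 0.01)    # 100 pose samples
video_t = np.arange(0, 1.0, 1 / 30)  # 30 video frames
matches = align_pose_to_video(pose_t, video_t)
```

Each entry of `matches` gives the pose sample to display alongside the corresponding video frame; a benchmark pipeline could use the same indexing to pair 2D images with 3D ground-truth poses for training.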