Look into Multi-Person: A New Benchmark for Pose Estimation and Human Parsing

2020 
Human parsing and pose estimation, regarded as two fundamental tasks for analyzing humans in the wild, form the basis of higher-level tasks such as human action recognition and person re-identification. The lack of a comprehensive multi-person dataset containing annotations for both human part labels and skeleton keypoint labels means that most joint-learning work on pose estimation and human parsing can only focus on the single-person scene. To fill this gap, we propose a comprehensive multi-person dataset, Look into Multi-Person (LIMP), with 10,082 multi-person images. We also adopt data augmentation strategies to enrich the dataset's diversity; it contains more texture, more occlusion, and more in-the-wild images than CIHP. To the best of our knowledge, this is the first comprehensive multi-person dataset for both pose estimation and human parsing.