Self-Correction for Human Parsing.

2021 
Labeling pixel-level masks for fine-grained semantic segmentation tasks, e.g., human parsing, remains challenging. Ambiguous boundaries between semantic parts, together with categories of similar appearance, often confuse annotators, leading to incorrect labels in the ground-truth masks. Such label noise inevitably harms the training process and degrades the performance of the learned models. To tackle this, we introduce a noise-tolerant method, called Self-Correction for Human Parsing (SCHP), which progressively improves the reliability of both the supervised labels and the learned models. In particular, starting from a model trained with inaccurate annotations, we design a cyclical learning scheduler that infers more reliable pseudo masks by iteratively aggregating the current model with the former sub-optimal one in an online manner. Moreover, the corrected labels in turn boost model performance. In this way, the models and the labels reciprocally become more robust and accurate over the self-correction learning cycles. SCHP is model-agnostic and can be applied to any human parsing model to further enhance its performance. Benefiting from SCHP, we achieve new state-of-the-art results on 6 benchmarks and win 1st place in all human parsing tracks of the 3rd LIP Challenge.
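The online aggregation step described above can be sketched as a running average over self-correction cycles. The snippet below is a minimal, hypothetical simplification: parameters are represented as a flat dict of floats, and the same averaging rule would apply to both model weights and soft pseudo-masks; the function name and the exact weighting are illustrative assumptions, not the paper's exact formulation.

```python
def aggregate(prev_params, new_params, cycle):
    """Running average across self-correction cycles (illustrative sketch).

    At cycle m, the aggregate keeps weight m/(m+1) on the previous
    aggregate and 1/(m+1) on the newly trained parameters, so every
    cycle contributes equally over time.
    """
    factor = 1.0 / (cycle + 1)
    return {
        name: (1.0 - factor) * prev_params[name] + factor * new_params[name]
        for name in prev_params
    }

# Toy example: a single scalar "parameter" per layer.
w_prev = {"conv1": 0.0, "conv2": 2.0}
w_new = {"conv1": 1.0, "conv2": 0.0}
w_agg = aggregate(w_prev, w_new, cycle=1)  # -> {"conv1": 0.5, "conv2": 1.0}
```

In the paper's scheme, the aggregated model then re-infers pseudo masks, which supervise the next training cycle, so labels and weights improve together.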