Task-Specific Loss for Robust Instance Segmentation with Noisy Class Labels

2021 
Deep learning methods have achieved significant progress on instance segmentation when datasets are correctly annotated. However, object classes in large-scale datasets are sometimes ambiguous, which easily causes confusion, and the limited experience and knowledge of annotators can lead to mislabeled object classes. To address this issue, we propose a novel method that accounts for the different roles noisy class labels play in different sub-tasks. Our method rests on two observations: first, the foreground-background annotation of a sample remains correct even when its class label is noisy; second, symmetric loss improves model robustness to noisy labels but harms the learning of hard samples, whereas cross-entropy loss behaves the opposite way. Accordingly, in the foreground-background sub-task we use cross-entropy loss to fully exploit the correct gradient guidance, while in the foreground-instance sub-task we use symmetric loss to suppress the incorrect gradient guidance introduced by noisy class labels. Furthermore, we apply a contrastive self-supervised loss to update the features of all foreground samples, compensating for the insufficient guidance provided by partially correct labels, especially in highly noisy settings. Extensive experiments on three popular datasets (Pascal VOC, Cityscapes and COCO) demonstrate the effectiveness of our method across a wide range of noisy class label scenarios.
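To make the loss design concrete, the sketch below shows one common form of symmetric loss, symmetric cross entropy (standard cross entropy plus a reverse cross-entropy term). This is a minimal illustrative implementation, not the paper's exact formulation; the function name, weights `alpha`/`beta`, and clipping constant are assumptions for illustration. The reverse term bounds the gradient contribution of a mislabeled sample, which is why such losses are more robust to noisy class labels than plain cross entropy.

```python
import numpy as np

def symmetric_cross_entropy(probs, labels, alpha=0.1, beta=1.0, clip=1e-4):
    """Illustrative symmetric cross entropy (hypothetical sketch).

    probs:  (N, C) predicted class probabilities (rows sum to 1)
    labels: (N,)   integer class ids (possibly noisy)
    """
    one_hot = np.eye(probs.shape[1])[labels]
    # Standard CE: -sum_k q_k log p_k; sensitive to noisy labels
    ce = -np.mean(np.sum(one_hot * np.log(np.clip(probs, clip, 1.0)), axis=1))
    # Reverse CE: -sum_k p_k log q_k; log(0) is clipped, which bounds the
    # penalty a wrong label can impose and makes the loss noise-tolerant
    rce = -np.mean(np.sum(probs * np.log(np.clip(one_hot, clip, 1.0)), axis=1))
    return alpha * ce + beta * rce
```

Under this design, plain cross entropy would still be used unchanged for the foreground-background sub-task, where the labels are trusted; only the class-prediction branch would swap in the symmetric loss.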