Single Annotated Pixel based Weakly Supervised Semantic Segmentation under Driving Scenes

2021 
Abstract: Weakly supervised semantic segmentation has been proposed to lighten the labeling process. For simple images that contain only a few categories, methods based on image-level annotations have achieved acceptable performance. In complex scenes, however, each image contains many classes, so learning visual appearance from image tags becomes challenging and image-level annotations carry little useful information. We therefore set up a new task in which a single annotated pixel is provided for each category over the whole dataset. Based on this more lightweight yet more informative supervision, a three-step process is built for pseudo-label generation, which progressively performs optimal feature representation for each class, image inference, and context- and location-based refinement. In particular, since high-level semantics and low-level imaging features have different discriminative abilities for each class under driving scenes, we divide categories into "object" and "scene" types and apply separate operations to each type. Further, an alternating iterative structure is established to gradually improve segmentation performance; it combines CNN-based inter-image common-semantic learning with an imaging-prior-based intra-image modification process. Experiments on the Cityscapes dataset demonstrate that the proposed method provides a feasible way to solve weakly supervised semantic segmentation under complex driving scenes.
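The abstract outlines a three-step pseudo-label pipeline (a per-class representation taken from the single annotated pixel, per-image inference, and context/location-based refinement) followed by an alternating inter-image/intra-image loop. The paper's implementation is not given here, so the following is a minimal NumPy sketch under stated assumptions: the seed-pixel prototypes, the cosine-similarity inference rule, the sky location prior, and the prototype re-estimation step (standing in for retraining a segmentation CNN on the pseudo labels) are all hypothetical illustrations of the structure, not the authors' method.

```python
import numpy as np


def class_prototype(features, seed_yx):
    """Step 1 (sketch): use the feature vector at the single annotated
    pixel as the class representation; the abstract provides one
    annotated pixel per class for the whole dataset."""
    y, x = seed_yx
    return features[:, y, x]


def infer_image(features, prototypes, class_names):
    """Step 2 (sketch): label every pixel with its most similar class
    prototype; cosine similarity is an assumed choice."""
    c, h, w = features.shape
    flat = features.reshape(c, -1)
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    scores = []
    for name in class_names:
        p = prototypes[name]
        scores.append((p / (np.linalg.norm(p) + 1e-8)) @ flat)
    return np.argmax(np.stack(scores), axis=0).reshape(h, w)


def refine(labels, class_names):
    """Step 3 (sketch): context/location-based refinement. As a toy
    stand-in for the paper's "scene"-class handling, forbid sky in the
    lower half of a driving image (255 = assumed ignore label)."""
    refined = labels.copy()
    if "sky" in class_names:
        sky = class_names.index("sky")
        lower = refined[refined.shape[0] // 2:, :]  # view into refined
        lower[lower == sky] = 255
    return refined


def alternate(features_list, seeds, class_names, rounds=3):
    """Alternating loop (sketch): intra-image inference plus refinement,
    then inter-image prototype re-estimation from the current pseudo
    labels as a stand-in for retraining the segmentation CNN."""
    # Round 0 prototypes come from the single annotated pixels.
    protos = {name: class_prototype(features_list[i], yx)
              for name, (i, yx) in seeds.items()}
    pseudo = []
    for _ in range(rounds):
        # Intra-image step: infer and refine pseudo labels per image.
        pseudo = [refine(infer_image(f, protos, class_names), class_names)
                  for f in features_list]
        # Inter-image step: average each class's features over its
        # pseudo-labeled pixels across all images.
        for k, name in enumerate(class_names):
            means = [f[:, lab == k].mean(axis=1)
                     for f, lab in zip(features_list, pseudo)
                     if (lab == k).any()]
            if means:
                protos[name] = np.mean(means, axis=0)
    return pseudo


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = [rng.standard_normal((8, 16, 32)) for _ in range(2)]
    names = ["road", "sky", "car"]
    # seeds: class -> (image index, (y, x)) of its single annotated pixel
    seeds = {"road": (0, (14, 5)), "sky": (0, (1, 10)), "car": (1, (8, 20))}
    print(alternate(feats, seeds, names)[0].shape)  # (16, 32)
```

In the paper's setting the inter-image step trains a CNN on the pseudo labels; prototype averaging is used above only to keep the sketch self-contained and runnable.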