BASeg: Boundary aware semantic segmentation for autonomous driving

2023 
Semantic segmentation is a critical component of street-scene understanding in autonomous driving. Existing methods either focus on constructing the inner consistency of objects by aggregating global or multi-scale context information, or simply combine semantic features with boundary features to refine object details. Despite impressive results, most of them neglect the long-range dependencies between object interiors and boundaries. To this end, we present a Boundary Aware Network (BASeg) for semantic segmentation that exploits boundary information as a significant cue to guide context aggregation. Specifically, a Boundary Refined Module (BRM) is proposed in BASeg to refine coarse low-level boundary features from a Canny detector with high-level multi-scale semantic features from the backbone; based on the refined boundary features, a Context Aggregation Module (CAM) is further proposed to capture long-range dependencies between boundary regions and object interior pixels, achieving mutual gains and enhancing intra-class consistency. Moreover, our method can be plugged into other CNN backbones for higher performance with a minor computation budget, and obtains 45.72%, 81.2%, and 77.3% mIoU on the ADE20K, Cityscapes, and CamVid datasets, respectively. Extensive experiments and comparisons with state-of-the-art ResNet101-based segmentation methods demonstrate the effectiveness of our method. Our code is available at .
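The abstract only outlines the two modules at a high level, so the following is a minimal PyTorch-style sketch of how a BRM/CAM pair might look: the BRM fuses a Canny edge map with projected backbone features, and the CAM lets object-interior pixels attend to boundary features through cross-attention. All module names, channel sizes, and layer choices here are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of the two modules described in the abstract.
# Names, channels, and the exact fusion/attention layout are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoundaryRefineModule(nn.Module):
    """Refine a coarse (Canny-style) boundary map with high-level semantic features."""

    def __init__(self, sem_channels: int, out_channels: int = 64):
        super().__init__()
        self.sem_proj = nn.Conv2d(sem_channels, out_channels, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels + 1, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.boundary_head = nn.Conv2d(out_channels, 1, kernel_size=1)

    def forward(self, boundary_map: torch.Tensor, sem_feat: torch.Tensor):
        # Upsample semantic features to the boundary-map resolution, then fuse
        # them with the single-channel Canny edge map.
        sem = self.sem_proj(sem_feat)
        sem = F.interpolate(sem, size=boundary_map.shape[2:],
                            mode="bilinear", align_corners=False)
        refined = self.fuse(torch.cat([sem, boundary_map], dim=1))
        return refined, torch.sigmoid(self.boundary_head(refined))


class ContextAggregationModule(nn.Module):
    """Cross-attention: interior pixels attend to boundary-region features.

    Assumes sem_feat and boundary_feat share the same channel count.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, sem_feat: torch.Tensor, boundary_feat: torch.Tensor):
        b, c, h, w = sem_feat.shape
        q = self.query(sem_feat).flatten(2).transpose(1, 2)        # B x HWq x C/2
        k = self.key(boundary_feat).flatten(2)                     # B x C/2 x HWk
        v = self.value(boundary_feat).flatten(2).transpose(1, 2)   # B x HWk x C
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1) # long-range affinities
        context = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return sem_feat + context                                  # residual fusion
```

Because the attention is computed over flattened spatial positions, the boundary features and semantic features may come from different resolutions; only the channel counts must agree for the residual fusion at the end.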