DXNet: An Encoder-Decoder Architecture with XSPP for Semantic Image Segmentation in Street Scenes

2019 
Semantic image segmentation plays a crucial role in scene understanding. In autonomous driving, vehicle motion causes objects in the street scene to appear at varying scales. Although multi-scale features can be learned by concatenating multiple atrous-convolved features, it remains difficult to accurately segment pedestrians from only partial feature information when, for example, they are occluded. We therefore propose Xiphoid Spatial Pyramid Pooling (XSPP), a method that integrates detailed information: while concatenating features produced by atrous convolutions at multiple rates, it retains image-level features that carry target boundary information. Building on this method, we design an encoder-decoder architecture called DXNet. The encoder consists of a deep convolutional neural network and two XSPP modules; the decoder decodes the high-level features through up-sampling operations and skip connections to gradually restore target boundaries. We evaluate the effectiveness of our approach on the Cityscapes dataset. Experimental results show that our method performs better under occlusion, and its mean intersection-over-union score outperforms that of several representative works.
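The core idea the abstract describes, concatenating features from atrous (dilated) convolutions at several rates alongside a global image-level branch, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the helper `dilated_conv2d`, the toy feature map, and the choice of rates (1, 2, 3) are all illustrative assumptions, shown here only to make the multi-branch fusion pattern concrete.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Valid-mode 2D convolution with a dilated (atrous) kernel.

    Sampling the input with stride `rate` inside each window enlarges
    the receptive field without adding kernel parameters.
    """
    kh, kw = kernel.shape
    # effective kernel size grows with the dilation rate
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:rate, j:j + ew:rate]  # dilated sampling
            out[i, j] = np.sum(patch * kernel)
    return out

# toy single-channel feature map and a 3x3 kernel (illustrative values)
x = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((3, 3))

# parallel branches at different atrous rates; padding by `rate`
# keeps every branch at the input's spatial resolution
branches = [dilated_conv2d(np.pad(x, (r, r)), k, rate=r) for r in (1, 2, 3)]

# image-level branch: global average pooling broadcast back to full size,
# the piece XSPP retains alongside the atrous branches
image_level = np.full_like(branches[0], x.mean())

# fuse all branches along a new channel axis
fused = np.stack(branches + [image_level])
```

Because each branch is padded to match the input resolution, the fused tensor here has shape `(4, 8, 8)`: three atrous branches plus one image-level branch, ready for a subsequent 1x1 convolution as in ASPP-style designs.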