Superdense-scale network for semantic segmentation

2022 
Great progress has been made in semantic segmentation based on deep convolutional neural networks. However, semantic segmentation in complex scenes remains challenging due to the large-scale variation problem. To handle this problem, existing methods usually employ multiple receptive fields to capture multiscale features. Some works have verified that the denser the different receptive fields (scales), the easier it is to address the large-scale variation problem. To obtain denser scales, we propose a superdense-scale network (SDSNet). Specifically, we design a simple yet effective structure named the parallel-serial structure of atrous convolutions (PSSAC), in which superdense-scale high-level features are captured by explicitly adjusting the neurons' receptive fields. The PSSAC improves over ASPP and DenseASPP by employing exponentially increasing scales through serially connected multiple parallel structures. To extract more accurate features, we construct an SDSNet consisting of a modified aligned Xception-71 backbone followed by a PSSAC. Extensive semantic segmentation experiments are conducted to evaluate our SDSNet on three datasets, namely, Cityscapes, PASCAL VOC 2012, and ADE20K. Experimental results show that our SDSNet achieves state-of-the-art performance.
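To illustrate the idea of serially connecting multiple parallel atrous-convolution groups so that receptive-field combinations become much denser than in a single parallel module such as ASPP, the following is a minimal PyTorch sketch. The dilation rates, channel widths, and fusion choices are assumptions made for clarity and are not taken from the paper's exact PSSAC configuration.

```python
# Minimal sketch of a parallel-serial atrous-convolution block (PSSAC-like).
# All rates, widths, and the fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn


class ParallelAtrousStage(nn.Module):
    """One parallel stage: several 3x3 atrous convolutions with different rates."""

    def __init__(self, in_ch, out_ch, rates):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 convolution fuses the concatenated parallel branches.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class PSSACSketch(nn.Module):
    """Serial stack of parallel stages: composing stages in series combines
    their receptive fields, yielding a far denser set of effective scales
    than a single parallel module."""

    def __init__(self, in_ch=2048, mid_ch=256, stage_rates=((1, 2), (4, 8))):
        super().__init__()
        stages, ch = [], in_ch
        for rates in stage_rates:
            stages.append(ParallelAtrousStage(ch, mid_ch, rates))
            ch = mid_ch
        self.stages = nn.ModuleList(stages)

    def forward(self, x):
        outs = []
        for stage in self.stages:
            x = stage(x)
            outs.append(x)
        # Concatenate intermediate outputs so both small and large effective
        # receptive fields contribute to the final feature map.
        return torch.cat(outs, dim=1)


if __name__ == "__main__":
    feats = torch.randn(1, 2048, 33, 33)  # e.g., backbone output at stride 16
    out = PSSACSketch()(feats)
    print(out.shape)  # torch.Size([1, 512, 33, 33])
```

With exponentially increasing dilation rates across stages, each serial composition of two parallel groups produces many distinct receptive-field combinations, which is the "superdense-scale" effect the abstract describes.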