Semisupervised Semantic Segmentation of Remote Sensing Images With Consistency Self-Training

2022 
Semisupervised semantic segmentation is an effective way to reduce the expensive cost of manual annotation and exploit unlabeled data for remote sensing (RS) image interpretation. Recent research has mainly adopted two strategies: self-training and consistency regularization. Self-training tries to acquire accurate pseudo-labels to explicitly expand the training set. However, existing methods cannot accurately identify false pseudo-labels and suffer from their negative impact on model optimization. Consistency regularization constrains the model to produce consistent predictions that are robust to perturbations introduced in the sample or feature domain, but it requires a sufficient amount of training data. Therefore, we propose a strategy for semisupervised semantic segmentation of RS images. The proposed model, built in the generative adversarial network (GAN) framework, is optimized by consistency self-training and learns the distributions of both labeled and unlabeled data. The discriminator is optimized with accurate pixel-level training labels instead of image-level ones, thereby assessing the confidence of the prediction at each pixel, which is then used to reweight the loss of the unlabeled data in self-training. The generator is optimized with a consistency constraint over random perturbations of the unlabeled data, which increases sample diversity and prompts the model to learn the underlying distribution of the unlabeled data. Experimental results on the large-scale and densely annotated Instance Segmentation in Aerial Images Dataset (iSAID) and the International Society for Photogrammetry and Remote Sensing (ISPRS) datasets show that our framework outperforms several state-of-the-art semisupervised semantic segmentation methods.
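The two loss terms described above can be sketched numerically: a pseudo-label cross-entropy reweighted by the discriminator's per-pixel confidence, and a consistency penalty between predictions under two random perturbations of the same unlabeled image. This is a minimal NumPy illustration under assumed array shapes, not the authors' implementation; the function names and the choice of mean-squared error for the consistency term are our own assumptions.

```python
import numpy as np

def reweighted_self_training_loss(probs, pseudo_labels, confidence):
    """Pixel-wise cross-entropy on pseudo-labels, weighted by a
    per-pixel confidence map (e.g., from the discriminator).

    probs:         (H, W, C) softmax class probabilities
    pseudo_labels: (H, W)    integer class indices (argmax pseudo-labels)
    confidence:    (H, W)    per-pixel weights in [0, 1]
    """
    h, w, _ = probs.shape
    # Gather the probability assigned to each pixel's pseudo-label.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], pseudo_labels]
    ce = -np.log(p_true + 1e-8)  # epsilon avoids log(0)
    return float((confidence * ce).mean())

def consistency_loss(probs_a, probs_b):
    """Mean squared disagreement between predictions for two randomly
    perturbed views of the same unlabeled image (an assumed metric)."""
    return float(((probs_a - probs_b) ** 2).mean())
```

In a full training loop, the reweighted term would replace the plain self-training loss on unlabeled pixels, so pixels the discriminator deems unreliable contribute little to the gradient, while the consistency term ties together predictions across perturbations.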