Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks

2019 
Abstract This paper formulates cloud and cloud shadow detection as a semantic segmentation problem and proposes a deep convolutional neural network (CNN) based method to detect them in Landsat imagery. Unlike traditional machine learning methods, deep CNN-based methods convolve the entire input image to extract multi-level spatial and spectral features, and then deconvolve these features to produce a detailed segmentation. In this way, multi-level features from the whole image and all the bands are utilized to label each pixel as cloud, thin cloud, cloud shadow, or clear. An adaptation of SegNet with 13 convolutional layers and 13 deconvolution layers is proposed in this study. The method is applied to 38 Landsat 7 images and 32 Landsat 8 images that are globally distributed and have pixel-wise cloud and cloud shadow reference masks provided by the U.S. Geological Survey (USGS). In order to process such large images using the adapted SegNet model on a desktop computer, the Landsat Collection 1 scenes are split into non-overlapping 512 × 512 30 m pixel image blocks. 60% of these blocks are used to train the model using the backpropagation algorithm, 10% of the blocks are used to validate the model and tune its parameters, and the remaining 30% of the blocks are used for performance evaluation. Compared with the cloud and cloud shadow masks produced by CFMask, which are provided with the Landsat Collection 1 data, the overall accuracies are significantly improved from 89.88% and 84.58% to 95.26% and 95.47% for the Landsat 7 and Landsat 8 images respectively. The proposed method benefits from the multi-level spatial and spectral features, and results in more than a 40% increase in user's accuracy and more than a 20% increase in producer's accuracy for cloud shadow detection in Landsat 8 imagery. Issues for operational implementation are discussed.
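The tiling and 60/10/30 data split described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes scenes are NumPy arrays in (bands, rows, cols) layout, and the handling of partial edge tiles (discarded here) and the random seed are assumptions not specified in the abstract.

```python
import numpy as np

def split_into_blocks(scene, block_size=512):
    """Split a (bands, rows, cols) scene into non-overlapping
    block_size x block_size tiles. Partial tiles at the right and
    bottom edges are discarded (an assumption; the paper does not
    state the edge handling in the abstract)."""
    bands, rows, cols = scene.shape
    blocks = []
    for r in range(0, rows - block_size + 1, block_size):
        for c in range(0, cols - block_size + 1, block_size):
            blocks.append(scene[:, r:r + block_size, c:c + block_size])
    return blocks

def train_val_test_split(blocks, seed=0):
    """Randomly assign blocks to 60% train / 10% validation / 30% test,
    matching the proportions given in the abstract."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(blocks))
    n_train = int(0.6 * len(blocks))
    n_val = int(0.1 * len(blocks))
    train = [blocks[i] for i in idx[:n_train]]
    val = [blocks[i] for i in idx[n_train:n_train + n_val]]
    test = [blocks[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

Each resulting block keeps all spectral bands, so the segmentation network sees the full spatial and spectral context within a tile while fitting in desktop GPU memory.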