Multilabel classification of remote sensed satellite imagery

2020 
Multilabel scene classification has emerged as a critical research area in the domain of remote sensing. Contemporary classification models primarily emphasize single-object or multiobject scene classification of satellite remote sensed images. These classification models rely on feature engineering from images, deep learning, or transfer learning. By comparison, multilabel scene classification of very high resolution (V.H.R.) images is a fairly unexplored domain of research. Models trained for single-label scene classification are unsuitable for recognizing multiple objects in a single remotely sensed V.H.R. satellite image. To overcome this research gap, the current study proposes to fine-tune state-of-the-art convolutional neural network (C.N.N.) architectures for multilabel scene classification. The proposed approach pretrains the C.N.N.s on the ImageNet dataset and further fine-tunes them for the task of detecting multiple objects in V.H.R. images. To assess the efficacy of this approach, the final models are applied to a V.H.R. dataset: the UC Merced image dataset, containing 21 different terrestrial land use categories at submeter resolution. The performance of the final models is compared with the graph convolutional network-based model by Khan et al. The results on the performance metrics show that the proposed models achieve comparable results in significantly fewer epochs.
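As an illustration of the transfer-learning setup described in the abstract, the sketch below shows one common way to adapt an ImageNet-pretrained C.N.N. to multilabel prediction: the single-label softmax classifier is replaced by a per-label sigmoid head trained with a binary cross-entropy objective. This is a minimal, hypothetical example in PyTorch; the backbone choice (ResNet-50), the label count `NUM_LABELS`, and the hyperparameters are assumptions for illustration, not the exact configuration reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 17  # hypothetical: number of object labels in the multilabel annotation

# Load a CNN pretrained on ImageNet (ResNet-50 chosen here purely as an example backbone).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Replace the single-label ImageNet classifier with a multilabel head:
# one logit per label, trained with sigmoid/BCE instead of softmax.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_LABELS)

criterion = nn.BCEWithLogitsLoss()                      # independent per-label probabilities
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def training_step(images, targets):
    """One fine-tuning step; `targets` is a float tensor of shape
    (batch, NUM_LABELS) with 0/1 indicators for each object present."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(images, threshold=0.5):
    """Multilabel inference: every label whose sigmoid score exceeds the
    threshold is reported as present in the scene."""
    backbone.eval()
    with torch.no_grad():
        probs = torch.sigmoid(backbone(images))
    return (probs > threshold).int()
```

In this setup each label is scored independently, so a single V.H.R. image can be assigned any subset of the land use categories rather than exactly one.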