Scene Classification of High-Resolution Remote Sensing Image Using Transfer Learning with Multi-model Feature Extraction Framework

2018 
Remote sensing images are rich in scene information. Traditional classification methods rely on hand-crafted features, cannot effectively express high-level semantic information, and require large amounts of high-quality labeled training data. However, labeled data are usually scarce and difficult to obtain. Transfer learning is a machine learning method that uses existing knowledge to solve problems that are different but related; it can effectively address learning tasks with only a small number of labeled samples in the target domain. ImageNet and remote sensing images share similar characteristics in texture, lines, color, structure, and spatial layout. In this paper, we propose a scene classification method for high-spatial-resolution remote sensing images using transfer learning with a multi-model feature extraction network. The method combines multiple pretrained CNN models to extract features from remote sensing images and integrates the features into a single one-dimensional feature vector. This forms a deep feature extraction framework that enriches feature expression and helps capture the characteristics of remote sensing images. After feature extraction, a dropout layer and a fully connected layer are applied, followed by a classifier. The method achieves a maximum accuracy of 97.38% on the UC Merced dataset and 93.97% on the AID dataset, significantly better than existing methods.
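The pipeline described above can be sketched as follows. This is not the authors' code: the two backbone functions are stand-ins for pretrained CNN feature extractors, and the feature dimensions (512 and 2048) and number of classes are illustrative assumptions; only the fusion structure (concatenate per-model features, apply dropout and a fully connected layer, then classify) follows the abstract.

```python
# Minimal sketch (assumed, not the paper's implementation) of the
# multi-model feature-fusion framework: each pretrained CNN yields a
# feature vector for an image; the vectors are concatenated into one
# 1-D vector, then passed through dropout, a fully connected layer,
# and a softmax classifier.
import numpy as np

rng = np.random.default_rng(0)

def backbone_a(image):
    # Stand-in for a pretrained CNN feature extractor (e.g. VGG-style).
    return rng.standard_normal(512)

def backbone_b(image):
    # Stand-in for a second pretrained CNN (e.g. ResNet-style).
    return rng.standard_normal(2048)

def fuse_features(image):
    # Integrate per-model features into one 1-D feature vector.
    return np.concatenate([backbone_a(image), backbone_b(image)])

def classify(feat, W, b, drop_rate=0.5, training=False):
    if training:  # inverted dropout; identity at inference time
        mask = rng.random(feat.shape) >= drop_rate
        feat = feat * mask / (1.0 - drop_rate)
    logits = W @ feat + b            # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax class probabilities

n_classes = 21                       # UC Merced has 21 scene classes
feat = fuse_features(image=None)     # placeholder input
W = rng.standard_normal((n_classes, feat.size)) * 0.01
b = np.zeros(n_classes)
probs = classify(feat, W, b)
print(feat.size, probs.shape)
```

In practice each backbone would be a real pretrained network with its classification head removed, and only the fused fully connected layer and classifier would be trained on the small labeled remote sensing dataset.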