Pixel- and feature-level fusion of hyperspectral and lidar data for urban land-use classification

2015 
The complexity of urban areas makes it difficult for single-source remotely sensed data to meet all urban application requirements. Airborne light detection and ranging (lidar) can provide precise horizontal and vertical point cloud data, while hyperspectral images provide hundreds of narrow spectral bands that are sensitive to subtle differences in surface materials. The main objectives of this study are to explore: (1) the performance of fused lidar and hyperspectral data for urban land-use classification, especially the contribution of lidar intensity and height information to land-use classification in shadow areas; and (2) the efficiency of combined pixel- and object-based classifiers for urban land-use classification. Support vector machine (SVM), maximum likelihood classification (MLC), and object-based classifiers were used to classify the lidar and hyperspectral data and their derived features, such as the normalized digital surface model (nDSM), normalized difference vegetation index (NDVI), and texture measures, into 15 urban land-use classes. Spatial attributes and rules were used to minimize misclassification of objects showing similar spectral properties, and accuracy assessments were carried out for the classification results. Compared with hyperspectral data alone, hyperspectral–lidar data fusion improved overall accuracy (OA) by 6.8% (from 81.7% to 88.5%) when the SVM classifier was used. Meanwhile, compared with SVM alone, the combined SVM and object-based method improved OA by 7.1% (from 87.6% to 94.7%). The results suggest that hyperspectral–lidar data fusion is effective for urban land-use classification, and that the proposed combined pixel- and object-based classifiers are efficient and flexible for the fusion of hyperspectral and lidar data.
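
The abstract does not give implementation details, but a minimal sketch of the pixel-level fusion step it describes might look as follows, assuming co-registered hyperspectral and lidar rasters and using scikit-learn's SVM. The band indices, kernel settings, and toy data below are illustrative assumptions, not the study's actual configuration; the object-based refinement step is not shown.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def ndvi(nir, red, eps=1e-6):
        """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
        return (nir - red) / (nir + red + eps)

    def ndsm(dsm, dtm):
        """nDSM = DSM - DTM: above-ground height derived from lidar."""
        return dsm - dtm

    def stack_features(hsi, lidar_intensity, dsm, dtm, nir_band, red_band):
        """Pixel-level fusion: stack hyperspectral bands with lidar-derived features.

        hsi: (rows, cols, bands) hyperspectral cube
        lidar_intensity, dsm, dtm: (rows, cols) rasters co-registered with hsi
        """
        feats = [
            hsi,                                                     # spectral bands
            ndvi(hsi[..., nir_band], hsi[..., red_band])[..., None], # vegetation index
            ndsm(dsm, dtm)[..., None],                               # height above ground
            lidar_intensity[..., None],                              # lidar return intensity
        ]
        return np.concatenate(feats, axis=-1)

    # Toy example with random data; real use would load co-registered rasters.
    rows, cols, bands = 50, 50, 30
    rng = np.random.default_rng(0)
    hsi = rng.random((rows, cols, bands))
    intensity = rng.random((rows, cols))
    dsm = rng.random((rows, cols)) * 20.0
    dtm = rng.random((rows, cols)) * 2.0
    labels = rng.integers(0, 15, size=(rows, cols))   # 15 urban land-use classes

    X = stack_features(hsi, intensity, dsm, dtm, nir_band=25, red_band=15)
    X = X.reshape(-1, X.shape[-1])
    y = labels.ravel()

    # Train a pixel-wise SVM on a labeled subset, then classify the whole scene.
    train_idx = rng.choice(X.shape[0], size=500, replace=False)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X[train_idx], y[train_idx])
    pred_map = clf.predict(X).reshape(rows, cols)

In the study's workflow, a classification map like pred_map would then be refined with object-based spatial attributes and rules to separate classes that are spectrally similar.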