A Lightweight Neural Network Framework for Cross-Domain Road Matching

2019 
Matching aerial images against pre-stored road landmarks (a.k.a. Aerial-Road Matching) is a critical technology for enhancing UAS (Unmanned Aircraft System) navigation in GPS-denied urban environments. Current matching approaches typically consist of two stages: extracting roads from aerial images, and then performing road matching based on handcrafted features. Such two-stage approaches are computationally inefficient. To address this challenge, we study, for the first time, the problem of end-to-end Aerial-Road matching. Considering that UAS typically have limited computation capacity and storage space, we propose a novel lightweight convolutional neural network architecture for cross-domain Aerial-Road matching. Our framework first maps the input Aerial-OSM pair into a common feature embedding space using an asymmetric two-branch neural network. We then feed the two feature embeddings into a correlation layer, producing a correlation map that handles misalignments between Aerial-Road input pairs. The correlation map is finally fed into a fully connected layer followed by a sigmoid layer to produce the likelihood that the input pair is a true match. We conduct extensive experiments to evaluate the performance of our approach. Experimental results demonstrate the remarkable accuracy of our proposed lightweight neural network architecture in Aerial-Road matching.
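To make the described pipeline concrete, the following is a minimal PyTorch sketch of an asymmetric two-branch network with a correlation layer and a sigmoid match head. All module names, channel counts, and layer depths are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv -> BN -> ReLU -> 2x2 max-pool: a generic lightweight building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class AerialRoadMatcher(nn.Module):
    """Asymmetric two-branch network: one branch embeds aerial RGB tiles, the other
    embeds rasterized OSM road tiles; a correlation layer and an FC + sigmoid head
    then score whether the pair is a true match (hypothetical configuration)."""

    def __init__(self, feat_ch=64, feat_size=8):
        super().__init__()
        # Deeper branch for the visually complex aerial imagery (3-channel RGB).
        self.aerial_branch = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64),
            conv_block(64, feat_ch), conv_block(feat_ch, feat_ch),
        )
        # Shallower branch for the simpler rasterized road map (1 channel).
        self.road_branch = nn.Sequential(
            conv_block(1, 16), conv_block(16, 32),
            conv_block(32, feat_ch), conv_block(feat_ch, feat_ch),
        )
        # The correlation map has (feat_size * feat_size) channels over an
        # feat_size x feat_size grid, so it flattens to feat_size ** 4 values.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_size ** 4, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    @staticmethod
    def correlation(fa, fb):
        """Dense correlation: dot products between every pair of spatial locations,
        which tolerates misalignment between the two inputs."""
        b, c, h, w = fa.shape
        fa = F.normalize(fa.view(b, c, h * w), dim=1)   # (B, C, HW)
        fb = F.normalize(fb.view(b, c, h * w), dim=1)   # (B, C, HW)
        corr = torch.bmm(fb.transpose(1, 2), fa)        # (B, HW, HW)
        return corr.view(b, h * w, h, w)                # (B, HW, H, W)

    def forward(self, aerial, road):
        fa = self.aerial_branch(aerial)     # (B, C, H', W')
        fr = self.road_branch(road)         # (B, C, H', W')
        corr = self.correlation(fa, fr)     # misalignment-tolerant correlation map
        logit = self.classifier(corr)
        return torch.sigmoid(logit)         # probability that the pair is a true match


# Example: a 128x128 aerial tile paired with a 128x128 rasterized OSM road tile.
model = AerialRoadMatcher()
score = model(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
print(score.shape)  # torch.Size([2, 1])
```

The asymmetry (a deeper aerial branch, a shallower road branch) reflects the differing complexity of the two domains; the exact depths and widths used by the authors may differ from this sketch.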