Directly Obtaining Matching Points without Keypoints for Image Stitching
2020
Finding enough accurate matching points is key for image stitching. However, existing state-of-the-art algorithms fail to find enough accurate matching points when detectable features are not obvious. In this paper, a novel algorithm called CNN-MP is proposed to directly obtain matching points between two images from the feature maps extracted by a Convolutional Neural Network (CNN), skipping the keypoint-detection step entirely. CNN-MP makes five main contributions: 1) it breaks the conventional image-stitching pipeline by removing keypoint detection; 2) a feature-map matching model is built to obtain matching points between the feature maps of two images; 3) a position model is established to map the obtained matching points back to the original images; 4) the matching process is accelerated by dividing it into pre-locate and fine-locate stages; 5) a dataset is established to evaluate CNN-MP in cases where detectable features are not obvious. Experimental results show that, when detectable features are not obvious, the number of accurate matching points obtained by CNN-MP is at least 1.7 times that of the state-of-the-art algorithms ORB, SIFT, LIFT, and SuperPoint. Moreover, CNN-MP also performs well when the input images contain significant detectable features.
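The core idea of matching feature-map cells directly and then mapping them back to image coordinates can be sketched as follows. This is a minimal illustration, not the authors' implementation: the similarity measure (cosine), the mutual nearest-neighbour check, and the stride-based position mapping are all assumptions standing in for CNN-MP's feature-map matching model and position model.

```python
import numpy as np

def match_feature_maps(fmap_a, fmap_b, stride, sim_thresh=0.9):
    """Sketch of keypoint-free matching: compare feature-map cells of two
    images by cosine similarity, keep mutual nearest neighbours, and map
    each matched cell back to original-image pixel coordinates via the
    CNN's downsampling stride (a stand-in for the paper's position model).

    fmap_a, fmap_b: (H, W, C) feature maps of the two images.
    stride: total downsampling factor of the CNN layer used.
    """
    Ha, Wa, C = fmap_a.shape
    Hb, Wb, _ = fmap_b.shape
    a = fmap_a.reshape(-1, C)
    b = fmap_b.reshape(-1, C)
    # L2-normalise descriptors so a dot product equals cosine similarity
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    sim = a @ b.T                      # (Ha*Wa, Hb*Wb) similarity matrix
    best = sim.argmax(axis=1)
    matches = []
    for i, j in enumerate(best):
        # mutual nearest-neighbour check suppresses ambiguous matches
        if sim[i, j] >= sim_thresh and sim[:, j].argmax() == i:
            ya, xa = divmod(i, Wa)
            yb, xb = divmod(j, Wb)
            # map each feature-map cell centre to an original-image pixel
            matches.append(((xa * stride + stride // 2,
                             ya * stride + stride // 2),
                            (xb * stride + stride // 2,
                             yb * stride + stride // 2)))
    return matches
```

A brute-force comparison of all cell pairs like this is what the paper's pre-locate/fine-locate split would accelerate: pre-locate would restrict the search to a coarse candidate region before fine-locate refines the match.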