MegaStitch: Robust Large-Scale Image Stitching

2022 
We address fast image stitching for large image collections while remaining robust to drift from chained transformations and to minimal overlap between images. We focus on scientific applications where ground-truth accuracy is far more important than visual appearance or projection error, which can be misleading. In common large-scale stitching use cases, transformations between images are often restricted to similarity or translation; when homography is used instead, the odds of being trapped in a poor local minimum and producing unnatural results increase. Thus, for transformations up to affine, we cast stitching as globally minimizing reprojection error with linear least squares under a few simple constraints. For homography, we observe that the global affine solution provides better initialization for bundle adjustment than an alternative that initializes with a homography-based scaffolding, and at lower computational cost. We evaluate our methods on a very large translation dataset with limited overlap as well as four drone datasets. Our approach outperforms alternative methods such as MGRAPH in computational cost, scaling to large numbers of images, and robustness to drift. We also contribute ground-truth datasets for this task.
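To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of casting stitching as a global linear least-squares problem for the simplest case, translation-only transformations. Each pairwise match contributes linear rows relating two images' unknown offsets, and a single gauge-fixing constraint anchors one image at the origin; the function name, data layout, and anchoring choice are assumptions for illustration.

```python
import numpy as np

def solve_global_translations(pairs, n_images):
    """Globally estimate per-image 2-D translations from pairwise
    offsets by linear least squares, anchoring image 0 at the origin.
    `pairs` is a list of (i, j, dx, dy): measured offset t_j - t_i.
    Hypothetical sketch of the translation-only case."""
    rows, b = [], []
    # One row per measured offset and axis: t_j - t_i = d.
    for i, j, dx, dy in pairs:
        for axis, d in ((0, dx), (1, dy)):
            r = np.zeros(2 * n_images)
            r[2 * j + axis] = 1.0
            r[2 * i + axis] = -1.0
            rows.append(r)
            b.append(d)
    # Constraint: t_0 = (0, 0) removes the gauge (global shift) freedom.
    for axis in (0, 1):
        r = np.zeros(2 * n_images)
        r[axis] = 1.0
        rows.append(r)
        b.append(0.0)
    A = np.vstack(rows)
    t, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return t.reshape(n_images, 2)

# Usage: a 3-image chain with a consistent loop-closure measurement.
offsets = [(0, 1, 1.0, 0.0), (1, 2, 1.0, 0.0), (0, 2, 2.0, 0.0)]
positions = solve_global_translations(offsets, 3)
```

Because every measurement is weighted simultaneously rather than chained image-to-image, errors do not accumulate along a path, which is what gives the global formulation its robustness to drift.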