Multimodal deformable registration based on unsupervised learning

2021 
Multimodal deformable registration aims to solve for a dense spatial transformation that aligns images of two different modalities, and it is a key problem in many medical image analysis applications. Traditional multimodal registration methods solve an optimization problem for each pair of images and usually achieve excellent registration performance, but their computational cost is high and their running time is long. Deep learning methods greatly reduce the running time by learning a network that performs the registration. These learning-based methods are very effective for single-modality registration. However, the intensity relationship between images of different modalities is unknown and complex, and most existing methods rely heavily on labeled data. Facing these challenges, this paper proposes a deep multimodal registration framework based on unsupervised learning. Specifically, the framework consists of feature learning based on a matching measure and deformation field learning based on maximum a posteriori (MAP) estimation, and it realizes unsupervised training by means of a spatial transformation function and a differentiable mutual information loss function. On 3D image registration tasks involving MRI T1, MRI T2, and CT, the proposed method is compared with existing state-of-the-art multimodal registration methods. In addition, the registration performance of the proposed method is demonstrated on recent COVID-19 CT data. Extensive results show that the proposed method is competitive with other methods in registration accuracy while greatly reducing computation time. © 2021, Editorial Board of JBUAA. All rights reserved.
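The differentiable mutual information loss mentioned above can be illustrated with a minimal sketch. The paper does not publish its implementation, so the following is a hypothetical Parzen-window (soft-binned) estimator in NumPy: each voxel contributes Gaussian-weighted soft assignments to every histogram bin, making the joint histogram, and hence the mutual information, smooth in the input intensities. In a training loop one would negate this value to obtain a loss; the function names, bin count, and kernel width below are illustrative assumptions, not the authors' choices.

```python
import numpy as np

def soft_mutual_information(x, y, n_bins=16, sigma=0.1):
    """Parzen-window mutual information estimate between two intensity
    arrays scaled to [0, 1].

    Soft bin assignments keep the estimate differentiable with respect
    to the intensities, so -MI can serve as a registration loss.
    (Illustrative sketch; not the paper's implementation.)
    """
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    centers = np.linspace(0.0, 1.0, n_bins)

    # Soft bin-membership weights, shape (n_voxels, n_bins),
    # normalized so each voxel distributes unit mass over the bins.
    wx = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    wy = np.exp(-0.5 * ((y[:, None] - centers[None, :]) / sigma) ** 2)
    wx /= wx.sum(axis=1, keepdims=True)
    wy /= wy.sum(axis=1, keepdims=True)

    # Joint and marginal probabilities from the soft histograms.
    p_xy = wx.T @ wy / x.size
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)

    eps = 1e-12  # numerical guard for the logarithm
    return float(np.sum(p_xy * np.log((p_xy + eps) / (p_x @ p_y + eps))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random(5000)
    b = rng.random(5000)
    # An image shares more information with itself than with an
    # independent image, so this prints True.
    print(soft_mutual_information(a, a) > soft_mutual_information(a, b))
```

The same construction carries over to an autodiff framework (e.g. replacing NumPy arrays with framework tensors), where gradients of the loss flow back through the warped moving image into the deformation field.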