Unsupervised Cross-Modality Domain Adaptation Network for X-Ray to CT Registration

2022 
2D/3D registration that achieves high accuracy and real-time computation is one of the enabling technologies for radiotherapy and image-guided surgery. Recently, convolutional neural networks (CNNs) have been explored to significantly improve the accuracy and efficiency of 2D/3D registration. A pair of intraoperative 2D x-ray images and synthetic data rendered from a pre-operative volume are often required to model the nonconvex mapping between registration parameters and image residuals. However, collecting a large clinical dataset with accurate poses for the x-ray images can be very challenging or even impractical, while training exclusively on synthetic data frequently causes performance degradation when the model is tested on real x-rays. We therefore propose to first train a model on the source domain (i.e., synthetic data) to build the appearance-pose relationship, and then adapt it to the target domain (i.e., x-rays) with an unsupervised cross-modality domain adaptation network (UCMDAN) through adversarial learning. We narrow the significant domain gap by alignment in both pixel and feature space. In particular, image appearance transformation and domain-invariant feature learning across multiple aspects are conducted synergistically. Extensive experiments on CT and CBCT datasets show that the proposed UCMDAN outperforms existing state-of-the-art domain adaptation approaches.
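The abstract does not include implementation details, but the core idea it describes, adversarially aligning features extracted from synthetic and real x-ray images so the pose regressor transfers across domains, can be illustrated with a minimal PyTorch sketch. Everything below is hypothetical: the network shapes, names (`FeatureExtractor`, `DomainDiscriminator`), and batch sizes are illustrative assumptions, not the authors' UCMDAN architecture, and the pixel-space alignment branch is omitted for brevity.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Shared CNN encoder applied to both synthetic (DRR) and real x-ray images.
    Purely illustrative; not the paper's actual architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature vector came from the synthetic or x-ray domain."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, f):
        return self.net(f)

F = FeatureExtractor()
D = DomainDiscriminator()
opt_F = torch.optim.Adam(F.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# One adversarial step on a hypothetical pair of batches:
# `syn` stands in for synthetic images rendered from the pre-operative
# volume, `xray` for unlabeled intraoperative x-rays.
syn = torch.randn(8, 1, 128, 128)
xray = torch.randn(8, 1, 128, 128)

# 1) Train the discriminator to tell the two domains apart
#    (detach so this step does not update the extractor).
d_loss = bce(D(F(syn).detach()), torch.ones(8, 1)) + \
         bce(D(F(xray).detach()), torch.zeros(8, 1))
opt_D.zero_grad(); d_loss.backward(); opt_D.step()

# 2) Train the extractor to fool the discriminator, pushing x-ray
#    features toward the synthetic distribution (feature-space alignment).
g_loss = bce(D(F(xray)), torch.ones(8, 1))
opt_F.zero_grad(); g_loss.backward(); opt_F.step()
```

In a full pipeline these two steps would alternate inside the training loop, alongside the supervised pose-regression loss on the synthetic domain; the paper additionally aligns the domains in pixel space, which this sketch does not show.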