CCX-rayNet: A Class Conditioned Convolutional Neural Network For Biplanar X-Rays to CT Volume.

2021 
Despite advances in deep neural networks, reconstructing a 3D CT volume from its corresponding 2D X-rays remains a challenging task in computer vision. To tackle this issue, we propose a new class-conditioned network, CCX-rayNet, which recovers shapes and textures in the resulting CT volume by exploiting prior semantic information. First, we propose a Deep Feature Transform (DFT) module that spatially modulates the 2D feature maps with semantic segmentation by generating affine transformation parameters. Second, by bridging 2D and 3D features (Depth-Aware Connection), we enhance the feature representation of the X-ray image; in particular, we estimate a 3D attention mask applied to the upsampled 3D feature map, emphasizing contextual associations. Furthermore, in the biplanar-view model, we incorporate an Adaptive Feature Fusion (AFF) module that uses a similarity matrix to alleviate the registration problem arising from unconstrained input data. To the best of our knowledge, this is the first study to exploit prior semantic knowledge for 3D CT reconstruction. Both qualitative and quantitative analyses show that our proposed CCX-rayNet outperforms the baseline methods.
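To make the DFT idea concrete, the sketch below shows one plausible way to spatially modulate a 2D feature map with affine parameters predicted from a semantic segmentation prior, in the spirit of SPADE/FiLM-style conditioning. The module name, layer sizes, and use of per-pixel scale/shift maps are assumptions for illustration only; the abstract does not specify the paper's actual implementation.

```python
import torch
import torch.nn as nn


class DeepFeatureTransform(nn.Module):
    """Hypothetical DFT-style module: predicts spatial affine parameters
    (gamma, beta) from a semantic segmentation prior and uses them to
    modulate a 2D feature map. A sketch, not the paper's implementation."""

    def __init__(self, feat_channels: int, seg_channels: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-pixel scale (gamma) and shift (beta) maps.
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
        # Resize the segmentation prior to the feature-map resolution.
        seg = nn.functional.interpolate(seg, size=feat.shape[-2:], mode="nearest")
        h = self.shared(seg)
        gamma = self.to_gamma(h)
        beta = self.to_beta(h)
        # Spatially varying affine modulation of the feature map.
        return feat * (1.0 + gamma) + beta


if __name__ == "__main__":
    feat = torch.randn(1, 128, 32, 32)   # 2D X-ray feature map
    seg = torch.randn(1, 5, 128, 128)    # soft semantic segmentation prior (5 classes, illustrative)
    out = DeepFeatureTransform(128, 5)(feat, seg)
    print(out.shape)  # torch.Size([1, 128, 32, 32])
```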