Deep learning-based reconstruction of interventional tools and devices from four X-ray projections for tomographic interventional guidance.

2021 
Purpose: Image guidance for minimally invasive interventions is usually performed by acquiring fluoroscopic images with a monoplanar or biplanar C-arm system. However, the projective data provide only limited information about the spatial structure and position of interventional tools and devices such as stents, guide wires or coils. In this work we propose a deep learning-based pipeline for real-time tomographic (four-dimensional) interventional guidance at conventional dose levels.

Methods: Our pipeline comprises two steps. In the first, interventional tools are extracted from four cone-beam CT projections using a deep convolutional neural network. These projections are then Feldkamp-reconstructed and fed into a second network, which is trained to segment the interventional tools and devices in this highly undersampled reconstruction. Both networks are trained on simulated CT data and evaluated on both simulated data and C-arm cone-beam CT measurements of stents, coils and guide wires.

Results: The pipeline is capable of reconstructing interventional tools from only four X-ray projections without the need for a patient prior. At an isotropic voxel size of 100 µm, our method achieves a precision/recall within a 100 µm neighborhood of the ground truth of 93 %/98 %, 90 %/71 %, and 93 %/76 % for guide wires, stents and coils, respectively.

Conclusions: A deep learning-based approach for four-dimensional interventional guidance can overcome the drawbacks of today's interventional guidance by providing full spatiotemporal (4D) information about the interventional tools at dose levels comparable to conventional fluoroscopy.
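To make the two-stage structure of the described pipeline concrete, below is a minimal sketch, not the authors' implementation, assuming PyTorch and placeholder components. The names ProjectionSegNet, VolumeSegNet, feldkamp_reconstruct, and guidance_pipeline are hypothetical stand-ins; in practice the two networks would be full 2D/3D segmentation architectures and the FDK step would come from a cone-beam reconstruction library.

```python
# Minimal sketch of the two-stage guidance pipeline described above.
# All class/function names here are hypothetical placeholders.

import torch
import torch.nn as nn


class ProjectionSegNet(nn.Module):
    """Stage 1 (placeholder): extract interventional tools from each of the
    four cone-beam projections; a 2D U-Net-like model would be used in practice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, projections):  # (B, 4, H, W) -> (B, 4, H, W)
        b, n, h, w = projections.shape
        x = projections.reshape(b * n, 1, h, w)
        return self.net(x).reshape(b, n, h, w)


def feldkamp_reconstruct(tool_projections, volume_shape):
    """Placeholder for a Feldkamp (FDK) reconstruction of the four tool-only
    projections (e.g. via a cone-beam CT toolbox). Returns an undersampled volume."""
    b = tool_projections.shape[0]
    return torch.zeros((b, 1, *volume_shape))  # stand-in volume


class VolumeSegNet(nn.Module):
    """Stage 2 (placeholder): segment tools in the highly undersampled 3D volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, volume):  # (B, 1, D, H, W) -> (B, 1, D, H, W)
        return self.net(volume)


def guidance_pipeline(projections, volume_shape=(64, 64, 64)):
    """Run the pipeline: 2D tool extraction -> FDK reconstruction -> 3D segmentation."""
    stage1, stage2 = ProjectionSegNet(), VolumeSegNet()
    tool_projections = stage1(projections)
    volume = feldkamp_reconstruct(tool_projections, volume_shape)
    return stage2(volume)


if __name__ == "__main__":
    four_views = torch.rand(1, 4, 128, 128)   # one time frame, four X-ray projections
    tool_volume = guidance_pipeline(four_views)
    print(tool_volume.shape)                   # torch.Size([1, 1, 64, 64, 64])
```

In this sketch the first network operates per projection so that the Feldkamp step only backprojects tool signal, which is what allows a useful 3D tool reconstruction from just four views without a patient prior.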