Multi-organ segmentation network for abdominal CT images based on spatial attention and deformable convolution

2023 
Accurate multi-organ segmentation of computed tomography (CT) images is important for the diagnosis of abdominal diseases, such as cancer staging, and for surgical planning, for example to reduce damage to healthy tissue surrounding the target organ. The task is extremely challenging due to the complex background in CT and the variable sizes and shapes of the organs. In this paper, a segmentation model based on U-Net is proposed for five organs relevant to hepato-biliary-pancreatic surgery: the pancreas, duodenum, gallbladder, liver and stomach. The model has deformable receptive fields and exploits the locations and sizes of the organs to reduce interference from the complex background, making it an efficient and accurate segmentation method. A spatial attention block highlights the organ regions of interest during feature extraction by learning spatial attention maps under explicit external supervision. A deformable convolution block handles variations in shape and size by producing suitable receptive fields for different organs through additional trainable offsets. In addition, the skip-connection structure of U-Net is improved with multi-scale attention maps and high-level semantic information. The proposed model is compared with U-Net and several of its improved variants on the TCIA multi-organ segmentation dataset in terms of segmentation performance, time consumption and model parameters. The results show that the proposed model effectively improves overall segmentation performance, reaching an average Dice score of 80.46% at the cost of a 7.86% increase in model parameters. Compared with U-Net, the average Dice score is increased by 1.65%, the average Jaccard similarity coefficient (JSC) by 1.79%, and the average 95% Hausdorff distance (HD) is reduced by 4.08. It is a competitive multi-organ segmentation method with good application potential.
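The core idea of the spatial attention block, reweighting feature maps so that organ regions are emphasized over the complex background, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the weights `w` and bias `b` stand in for a learned 1×1 convolution, and the explicit external supervision that trains the attention map in the paper is omitted here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features, w, b):
    """Collapse the channel dimension to a single-channel attention map
    (equivalent to a 1x1 convolution), squash it with a sigmoid, and
    reweight the feature map spatially.

    features: (C, H, W) feature tensor
    w:        (C,) hypothetical learned 1x1-conv weights
    b:        scalar bias
    Returns the reweighted features (C, H, W) and the map (H, W).
    """
    attn = sigmoid(np.tensordot(w, features, axes=([0], [0])) + b)
    return features * attn[None, :, :], attn

# Toy feature map: near-zero background with a strong "organ" response.
feat = np.zeros((4, 8, 8))
feat[:, 2:6, 2:6] = 1.0

w = np.ones(4)  # hypothetical learned weights
out, attn = spatial_attention(feat, w, b=-2.0)
# The attention map is high inside the organ region and low elsewhere,
# so the organ response is preserved while the background is suppressed.
```

In the paper the attention maps are additionally supervised by the known organ locations and fed into the improved skip connections at multiple scales; this sketch only shows the reweighting step itself.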