Crossbar-Net: A Novel Convolutional Neural Network for Kidney Tumor Segmentation in CT Images

2019 
Due to its unpredictable location, fuzzy texture, and diverse shape, accurate segmentation of kidney tumors in CT images is an important yet challenging task. To this end, we present a cascaded trainable segmentation model termed Crossbar-Net. Our method combines two novel schemes. (1) We propose crossbar patches, which consist of two orthogonal non-square patches (i.e., a vertical patch and a horizontal patch). Crossbar patches capture both the global and local appearance of kidney tumors from the vertical and horizontal directions simultaneously. (2) With the obtained crossbar patches, we iteratively train two sub-models (i.e., a horizontal sub-model and a vertical sub-model) in a cascaded manner. During training, each sub-model is encouraged to focus automatically on the difficult parts of the tumor (i.e., mis-segmented regions): the vertical (horizontal) sub-model helps segment the regions mis-segmented by the horizontal (vertical) sub-model. The two sub-models thus complement each other and improve themselves until convergence. In the experiments, we evaluate our method on a real CT kidney tumor dataset collected from 94 patients and comprising 3,500 CT slices. Compared with state-of-the-art segmentation methods, the results demonstrate the superior performance of our method in terms of Dice similarity coefficient, true positive fraction, centroid distance, and Hausdorff distance. Moreover, to examine generalization to other segmentation tasks, we extend Crossbar-Net to two related tasks: (1) cardiac segmentation in MR images and (2) breast mass segmentation in X-ray images, with promising results on both. Our implementation is released at https://github.com/Qianyu1226/Crossbar-Net.
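To make the crossbar-patch idea concrete, the following is a minimal sketch of extracting the two orthogonal non-square patches around a pixel. The patch sizes (`long_len`, `short_len`) and the function name are illustrative assumptions; the abstract does not specify the exact dimensions used in the paper.

```python
import numpy as np

def crossbar_patches(image, center, long_len=40, short_len=16):
    """Extract a vertical and a horizontal non-square patch centered at
    `center` = (row, col). Sizes are illustrative, not from the paper."""
    r, c = center
    half_long, half_short = long_len // 2, short_len // 2
    # Vertical patch: tall and narrow (long_len x short_len),
    # capturing context along the vertical direction.
    vertical = image[r - half_long:r + half_long,
                     c - half_short:c + half_short]
    # Horizontal patch: short and wide (short_len x long_len),
    # capturing context along the horizontal direction.
    horizontal = image[r - half_short:r + half_short,
                       c - half_long:c + half_long]
    return vertical, horizontal

# Example on a synthetic 128x128 "slice":
img = np.random.rand(128, 128)
v, h = crossbar_patches(img, (64, 64))
print(v.shape, h.shape)  # (40, 16) (16, 40)
```

Each patch pair would then be fed to its corresponding sub-model (vertical patches to the vertical sub-model, horizontal patches to the horizontal sub-model) during the cascaded training described above.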