Joint fully convolutional and graph convolutional networks for weakly-supervised segmentation of pathology images.

2021 
Tissue/region segmentation of pathology images is essential for quantitative analysis in digital pathology. Previous studies usually require full supervision (e.g., pixel-level annotation), which is challenging to acquire. In this paper, we propose a weakly-supervised model using joint Fully convolutional and Graph convolutional Networks (FGNet) for automated segmentation of pathology images. Instead of using pixel-wise annotations as supervision, we employ an image-level label (i.e., the foreground proportion) as weakly-supervised information for training a unified convolutional model. Our FGNet consists of a feature extraction module (with a fully convolutional network) and a classification module (with a graph convolutional network). These two modules are connected via a dynamic superpixel operation, making joint training possible. To achieve robust segmentation performance, we propose to use mutable numbers of superpixels for both training and inference. In addition, to avoid overly strict supervision, we employ an uncertainty range constraint in FGNet to reduce the negative effect of inaccurate image-level annotations. Compared with fully-supervised methods, the proposed FGNet achieves competitive segmentation results on three pathology image datasets (i.e., HER2, KI67, and H&E) for cancer region segmentation, suggesting the effectiveness of our method. The code is made publicly available at https://github.com/zhangjun001/FGNet.
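To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' released code; see the GitHub link above for that). It assumes a small FCN backbone, mean pooling of pixel features within precomputed superpixels to form graph nodes, a single GCN layer for node classification, and a loss that penalizes the predicted foreground proportion only when it falls outside a tolerance band around the image-level label, standing in for the uncertainty range constraint. All layer sizes, the superpixel map, and the graph adjacency are placeholders.

```python
# Hypothetical sketch of an FCN + superpixel pooling + GCN pipeline with an
# image-level proportion loss. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNBackbone(nn.Module):
    """Small fully convolutional feature extractor (placeholder depth/widths)."""
    def __init__(self, in_ch=3, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        return self.net(x)                     # (B, feat_dim, H, W)

class GCNLayer(nn.Module):
    """One graph convolution: row-normalized adjacency times node features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                 # x: (N, in_dim), adj: (N, N)
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.lin((adj / deg) @ x)

def superpixel_pool(feat, labels):
    """Average pixel features within each superpixel (labels in 0..N-1)."""
    C, H, W = feat.shape
    n = int(labels.max()) + 1
    flat = feat.reshape(C, -1).t()             # (H*W, C)
    idx = labels.reshape(-1)
    pooled = torch.zeros(n, C).index_add_(0, idx, flat)
    counts = torch.bincount(idx, minlength=n).clamp(min=1).unsqueeze(1)
    return pooled / counts                     # (N, C) node features

def proportion_loss(node_probs, sizes, target, tol=0.05):
    """Penalize only when the area-weighted foreground proportion leaves
    [target - tol, target + tol] (an uncertainty-range style constraint)."""
    pred = (node_probs * sizes).sum() / sizes.sum()
    return F.relu((pred - target).abs() - tol)

# Toy forward pass on one image with an assumed superpixel map.
backbone, gcn = FCNBackbone(), GCNLayer(64, 1)
img = torch.rand(1, 3, 32, 32)
sp = torch.randint(0, 16, (32, 32))            # assumed superpixel labels
nodes = superpixel_pool(backbone(img)[0], sp)  # (16, 64) graph nodes
adj = torch.ones(16, 16)                       # assumed fully-connected graph
probs = torch.sigmoid(gcn(nodes, adj)).squeeze(1)
sizes = torch.bincount(sp.reshape(-1), minlength=16).float()
loss = proportion_loss(probs, sizes, target=torch.tensor(0.3))
```

In practice the superpixel map would come from an over-segmentation algorithm (e.g., SLIC), and varying the number of superpixels across training iterations would correspond to the mutable-superpixel strategy mentioned in the abstract.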