An Accurate Neural Network for Cytologic Whole-Slide Image Analysis

2020 
Typically, high accuracy in deep learning is achieved with large datasets carrying pixel-wise labels for segmentation or image-level labels for classification. In the biomedical domain, however, the challenge is not only the availability of image data itself, but also the acquisition of relevant annotations for these images from clinicians. In this work, we propose a novel two-stage architecture that jointly performs detection, segmentation, and classification of abnormal cells and cancer. Compared with one-step detection over all categories, our deep-learning-based framework combines the advantages of image-level and pixel-level labeling. We use lesion detection in a cervical clinical dataset as a case study for performance evaluation. In the first stage, a hybrid ResNet and U-Net architecture predicts three categories, nuclei, cytoplasm, and background, from a pixel-wise labeled segmentation map. In the second stage, a residual-learning-based model classifies the identified nuclei into subtypes. As confirmed with cytotechnologists, the proposed model is estimated to reduce the annotation burden by more than 90% compared with a fully pixel-wise labeling approach. Moreover, the proposed two-stage model outperforms a one-stage neural network in segmenting and classifying objects with highly similar appearance. Our collected real-life clinical cytology images and the source code used in the experiments are available at https://github.com/SJTU-AI-GPU/TwoStageCellSegmentation.
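The key handoff in the two-stage design is that nuclei identified by the stage-1 segmentation become the inputs to the stage-2 subtype classifier. The sketch below illustrates that linkage under stated assumptions: the class indices (`BACKGROUND`, `CYTOPLASM`, `NUCLEUS`), the function name, and the use of 4-connected components with padded bounding-box crops are all illustrative choices, not details taken from the paper's released code.

```python
import numpy as np
from collections import deque

# Assumed stage-1 class indices (illustrative, not from the paper's code)
BACKGROUND, CYTOPLASM, NUCLEUS = 0, 1, 2

def extract_nucleus_crops(seg_map, pad=1):
    """Link stage 1 to stage 2: find 4-connected nucleus regions in the
    predicted segmentation map and return padded bounding-box crops,
    which would be resized and fed to the stage-2 subtype classifier."""
    h, w = seg_map.shape
    visited = np.zeros((h, w), dtype=bool)
    crops = []
    for sy in range(h):
        for sx in range(w):
            if seg_map[sy, sx] != NUCLEUS or visited[sy, sx]:
                continue
            # BFS flood fill over one connected nucleus component
            queue = deque([(sy, sx)])
            visited[sy, sx] = True
            ys, xs = [], []
            while queue:
                y, x = queue.popleft()
                ys.append(y)
                xs.append(x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and seg_map[ny, nx] == NUCLEUS
                            and not visited[ny, nx]):
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            # Padded bounding box around the component, clipped to the image
            y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad, h - 1)
            x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad, w - 1)
            crops.append(seg_map[y0:y1 + 1, x0:x1 + 1].copy())
    return crops
```

In a full pipeline the crops would be taken from the original RGB slide image at the same coordinates; here the segmentation map itself is cropped only to keep the sketch self-contained.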