FABNet: Fusion Attention Block and Transfer Learning for Laryngeal Cancer Tumor Grading in P63 IHC Histopathology Images

2021 
Laryngeal cancer tumor (LCT) grading is a challenging task in P63 immunohistochemical (IHC) histopathology images because of the small differences between LCT grades in pathology images, the lack of precision in lesion regions of interest (LROIs), and the paucity of LCT pathology image samples. The key to solving the LCT grading problem is to transfer knowledge from other images and to identify more accurate LROIs, but two problems arise: 1) transferring knowledge without a priori experience often causes negative transfer and creates a heavy workload because of the abundance of image types, and 2) convolutional neural networks (CNNs) that build deep models by simply stacking layers cannot sufficiently identify LROIs, often deviate significantly from the LROIs that experienced pathologists focus on, and are prone to providing misleading second opinions. We therefore propose a novel fusion attention block network (FABNet) to address these problems. First, we propose a model transfer method based on clinical a priori experience and sample analysis (CPESA) that assesses transferability by integrating clinical a priori experience, using indicators such as the relationship between the cancer onset location and morphology and the texture and staining degree of cell nuclei in histopathology images; the method further validates these indicators through the probability distribution of cancer image samples. Then, we propose a fusion attention block (FAB) structure that both provides an advanced non-uniform sparse representation of images and extracts spatial relationship information between nuclei, so the identified LROIs are more accurate and more relevant to those used by pathologists. Extensive experiments show that, compared with the best baseline model, FABNet improves classification accuracy by 25%, performs well on different cancer pathology image datasets, and outperforms other state-of-the-art (SOTA) models.
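The abstract does not give implementation details for the FAB, so the snippet below is only a minimal sketch of one plausible reading: a block that fuses channel-wise re-weighting (a non-uniform emphasis on informative feature maps) with a spatial attention map (a rough proxy for spatial relationships between nuclei), in the style of CBAM. The class name `FusionAttentionBlock` and all layer choices are assumptions for illustration, not the authors' published architecture.

```python
# Hypothetical sketch of a fusion attention block (FAB) in PyTorch.
# Layer choices below are assumptions; the paper's abstract only states that
# the FAB fuses a non-uniform sparse representation with spatial information.
import torch
import torch.nn as nn


class FusionAttentionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: re-weights feature maps, giving a non-uniform
        # (sparse-like) emphasis on informative channels.
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: highlights where salient nuclei regions are.
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_attn(x)                    # channel-wise re-weighting
        avg_map = torch.mean(x, dim=1, keepdim=True)    # per-pixel mean over channels
        max_map, _ = torch.max(x, dim=1, keepdim=True)  # per-pixel max over channels
        spatial = self.spatial_attn(torch.cat([avg_map, max_map], dim=1))
        return x * spatial                              # fused attention output


# Usage example: apply the block to a backbone feature map.
if __name__ == "__main__":
    features = torch.randn(2, 256, 32, 32)  # batch of backbone features
    fab = FusionAttentionBlock(channels=256)
    print(fab(features).shape)              # torch.Size([2, 256, 32, 32])
```

In a transfer-learning setting such as the one described above, a block like this would typically be inserted after selected stages of a backbone pretrained on a source dataset chosen by the CPESA analysis, before fine-tuning on the LCT images.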