CNL-UNet: A novel lightweight deep learning architecture for multimodal biomedical image segmentation with false output suppression

2021 
Abstract Automatic biomedical image segmentation plays an important role in speeding up disease detection and diagnosis. The rapid development of deep learning has brought ground-breaking improvements in this context. However, state-of-the-art networks such as U-Net and SegNet often perform poorly on challenging domains, and most recent works are domain-specific and computationally expensive. This paper proposes a novel lightweight architecture named CNL-UNet for 2D multimodal biomedical image segmentation. The proposed CNL-UNet has a pre-trained encoder enriched with transfer learning techniques, allowing it to learn sufficiently from small amounts of data. It has modified skip connections that reduce the semantic gap between corresponding levels of the encoder and decoder. Furthermore, the architecture is enhanced with a novel Classifier and Localizer (CNL) module, which provides additional classification and localization information with greater accuracy. By fusing this information with the segmentation output, CNL-UNet can suppress false positives and false negatives. The proposed architecture has comparatively fewer parameters (11.5M) than U-Net (31M), SegNet (29M), and most recent works; it is therefore lightweight and less prone to overfitting. In addition, a pruned version of CNL-UNet can be used for simple datasets. We evaluated the proposed architecture on multimodal biomedical image datasets, namely chest X-ray, dermoscopy, microscopy, ultrasound, and MRI images. The results demonstrate the superior performance of our architecture over most existing networks. We show that our model can learn quickly, segment precisely, and automatically suppress falsely classified outputs.
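The abstract describes fusing the CNL module's image-level classification output with the pixel-wise segmentation output to suppress false positives and false negatives. The paper's exact fusion rule is not given here, so the sketch below illustrates one simple, hypothetical gating scheme: if the classifier is confident the target structure is absent, the whole predicted mask is zeroed out. The function name, threshold, and fusion logic are illustrative assumptions, not the authors' implementation.

```python
def suppress_false_output(seg_mask, cls_prob, threshold=0.5):
    """Gate a binary segmentation mask with an image-level classifier score.

    If the classifier is confident the target is absent
    (cls_prob < threshold), zero the whole mask, suppressing spurious
    positive pixels. This gating rule is an illustrative assumption;
    CNL-UNet's actual fusion may differ.
    """
    if cls_prob < threshold:
        return [[0 for _ in row] for row in seg_mask]
    return seg_mask

# A 2x2 mask with one predicted-positive pixel:
mask = [[1, 0], [0, 0]]
print(suppress_false_output(mask, cls_prob=0.2))  # [[0, 0], [0, 0]]
print(suppress_false_output(mask, cls_prob=0.9))  # [[1, 0], [0, 0]]
```

The same idea extends to false-negative suppression: a confident "present" classification can trigger a second, more sensitive decoding pass instead of accepting an empty mask.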