DSNet: Automatic Dermoscopic Skin Lesion Segmentation

2020 
Abstract

Background and Objective: Automatic segmentation of skin lesions is considered a crucial step in Computer-Aided Diagnosis (CAD) systems for melanoma detection. Despite its significance, skin lesion segmentation remains an unsolved challenge because lesions vary widely in color, texture, and shape and often have indistinct boundaries.

Methods: In this study, we present a new automatic semantic segmentation network for robust skin lesion segmentation, named Dermoscopic Skin Network (DSNet). To keep the network lightweight and reduce the number of parameters, we use depth-wise separable convolutions in lieu of standard convolutions to project the learned discriminative features onto the pixel space at different stages of the encoder. We also implement a U-Net and a Fully Convolutional Network (FCN8s) as baselines for comparison with the proposed DSNet.

Results: We evaluate the proposed model on two publicly available datasets, ISIC-2017 and PH2. The obtained mean Intersection over Union (mIoU) is 77.5% on ISIC-2017 and 87.0% on PH2, outperforming the ISIC-2017 challenge winner by 1.0% mIoU. The proposed network also outperforms U-Net and FCN8s by 3.6% and 6.8% mIoU, respectively, on the ISIC-2017 dataset.

Conclusion: The proposed network outperforms the other methods discussed in the article and produces better segmentation masks on two different test datasets, which can lead to better performance in melanoma detection. The trained model, along with the source code and predicted masks, is publicly available.
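The abstract's parameter-reduction claim rests on replacing standard convolutions with depth-wise separable ones. The sketch below is not the authors' implementation (DSNet's actual code and channel widths are not given here); it is a minimal PyTorch illustration, with arbitrary example channel sizes (64 in, 128 out), of why a depth-wise separable block needs far fewer weights than a standard 3x3 convolution of the same shape.

```python
# Minimal sketch (assumed example, not the authors' code): compare parameter
# counts of a standard 3x3 convolution and a depth-wise separable convolution.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depth-wise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        # Point-wise: 1x1 convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())


if __name__ == "__main__":
    in_ch, out_ch = 64, 128  # hypothetical channel widths for illustration
    standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
    separable = DepthwiseSeparableConv(in_ch, out_ch)

    x = torch.randn(1, in_ch, 96, 96)
    assert standard(x).shape == separable(x).shape  # same output shape

    # Standard 3x3 conv: 64*128*3*3 = 73,728 weights.
    # Separable:         64*3*3 + 64*128 = 8,768 weights (~8.4x fewer).
    print("standard :", count_params(standard))
    print("separable:", count_params(separable))
```

Used across every stage of an encoder-decoder network, this roughly order-of-magnitude saving per block is what makes such a segmentation model lightweight without shrinking its receptive field.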