Deep learning guided by an ontology for medical image classification using multimodal fusion

2021 
Brain tumors are regarded as among the most perilous diseases, and glioma is the most prevalent form of primary brain tumor. Brain tumor classification serves as a treatment guide and eases diagnosis; medical image acquisition tools provide various modalities that can be fused for this task. Existing works fuse either 2D brain MRI slices or 3D brain volumes. In this paper, we propose a novel semantic method for MRI brain tumor classification using a multimodal fusion of 2D and 3D MRI images. The proposed method raises two major challenges: semantic classification and the fusion of 2D and 3D images. It consists of three levels: preprocessing, classification, and fusion. The preprocessing level has a considerable impact on the results. At the classification level, we use two deep learning models and two heterogeneous datasets: a DenseNet model classifies 2D brain images into three brain tumor categories (glioma, meningioma, and pituitary tumor), and a 3D-CNN model grades gliomas (high-grade vs. low-grade) from 3D brain volumes. At the fusion level, a domain-specific ontology fuses the output classes of the two models. Evaluation on the test set shows good results: classification accuracy reaches 92.06% for the DenseNet model and 85% for the 3D-CNN model, and 100% at the fusion level.
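The fusion level described above can be sketched as a rule lookup over the two models' output classes. The class names and fusion rules below are illustrative assumptions for a minimal sketch, since the abstract does not specify the paper's ontology:

```python
from typing import Optional

# Minimal sketch of ontology-style fusion of the two classifiers' outputs.
# The 2D model predicts a tumor type; the 3D model predicts a glioma grade.
# Class names and fusion rules here are assumptions, not the paper's ontology.

TUMOR_TYPES = {"glioma", "meningioma", "pituitary"}
GLIOMA_GRADES = {"high-grade", "low-grade"}

def fuse(tumor_type: str, glioma_grade: Optional[str]) -> str:
    """Combine the 2D classifier's tumor type with the 3D classifier's grade."""
    if tumor_type not in TUMOR_TYPES:
        raise ValueError(f"unknown tumor type: {tumor_type}")
    if tumor_type == "glioma":
        # A grade prediction is only semantically valid for gliomas.
        if glioma_grade not in GLIOMA_GRADES:
            raise ValueError(f"unknown glioma grade: {glioma_grade}")
        return f"{glioma_grade} glioma"
    # For non-glioma types, the 3D model's grade output is ignored.
    return tumor_type

print(fuse("glioma", "high-grade"))  # high-grade glioma
print(fuse("meningioma", None))      # meningioma
```

In practice such rules would be encoded in a formal ontology (e.g. OWL classes with subsumption between "glioma" and its grades) rather than a hard-coded function; the lookup above only illustrates how the two output spaces combine into a single fused label.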