Assisted Diagnosis of Alzheimer’s Disease Based on Deep Learning and Multimodal Feature Fusion

2021 
With the development of artificial intelligence, computers can now assist in reading digital medical images. Because Alzheimer’s disease (AD) has a high incidence and a high disability rate, it has attracted the attention of many researchers, and its diagnosis and treatment have become a topic of intense study. In this paper, a multimodal diagnosis method for AD based on a three-dimensional ShuffleNet (3DShuffleNet) and a principal component analysis network (PCANet) is proposed. First, structural magnetic resonance imaging (sMRI) and functional magnetic resonance imaging (fMRI) data are preprocessed to remove effects caused by differences in image size and shape across individuals, head movement, noise, and other confounds. Then, the original two-dimensional (2D) ShuffleNet is extended to three dimensions (3D), making it better suited to extracting features from 3D sMRI data. In addition, PCANet is applied to brain functional connectivity analysis to obtain features from the fMRI data. Next, kernel canonical correlation analysis (KCCA) is used to fuse the sMRI and fMRI features. Finally, a support vector machine (SVM) classifier achieves a good classification result, demonstrating the feasibility and effectiveness of the proposed method.
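To make the fusion-and-classification stage of the pipeline concrete, the following is a minimal sketch, not the authors' implementation: the feature matrices are random stand-ins for the sMRI features (from the 3D ShuffleNet) and the fMRI connectivity features (from PCANet), and scikit-learn's linear CCA is used here as a simpler substitute for the KCCA step described in the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 120

# Hypothetical per-subject feature vectors from the two modalities.
smri_feats = rng.standard_normal((n_subjects, 256))   # stand-in for 3D ShuffleNet sMRI features
fmri_feats = rng.standard_normal((n_subjects, 128))   # stand-in for PCANet fMRI connectivity features
labels = rng.integers(0, 2, size=n_subjects)          # 0 = normal control, 1 = AD

# Project both modalities into a shared, maximally correlated subspace
# (linear CCA here; the paper uses kernel CCA).
cca = CCA(n_components=32)
smri_c, fmri_c = cca.fit_transform(smri_feats, fmri_feats)

# Fuse by concatenating the canonical components, then classify with an SVM.
fused = np.concatenate([smri_c, fmri_c], axis=1)
clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

With the random placeholder data this will score near chance; substituting real per-subject sMRI and fMRI feature vectors (and a kernelized CCA) would reproduce the structure of the method described above.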