deepManReg: a deep manifold-regularized learning model for improving phenotype prediction from multi-modal data

2021 
The biological processes from genotype to phenotype are complex, involving multi-scale mechanisms. Increasingly available multi-modal data enable a deeper understanding of the complex mechanisms underlying various phenotypes. However, integrating and interpreting such large-scale multi-modal data remains challenging, especially given the highly heterogeneous, nonlinear relationships across modalities. To address this, we developed an interpretable regularized learning model, deepManReg, to predict phenotypes from multi-modal data. First, deepManReg employs deep neural networks to learn cross-modal manifolds and then align multi-modal features onto a common latent space. This space aims to preserve both global consistency and local smoothness across modalities and to reveal higher-order nonlinear cross-modal relationships. Second, deepManReg uses the cross-modal manifolds as a feature graph to regularize classifiers, improving phenotype prediction and also prioritizing the multi-modal features and cross-modal interactions relevant to the phenotypes. We applied deepManReg to recent single-cell multi-modal data, such as Patch-seq data combining transcriptomics and electrophysiology for neuronal cells in the mouse brain. We show that deepManReg significantly improves prediction of cellular phenotypes and prioritization of genes and electrophysiological features for those phenotypes. Finally, deepManReg is open source and general for phenotype prediction from multi-modal data.
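The two-phase idea described above — aligning modalities in a shared latent space, then using feature relationships in that space as a graph penalty on a classifier — can be sketched in miniature. This is a hedged toy illustration, not the paper's implementation: it substitutes classical CCA for the deep manifold alignment, an RBF similarity for the learned feature graph, and plain gradient descent for the training procedure; all data, dimensions, and the bandwidth are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired multi-modal data: modality A (e.g., gene expression) and
# modality B (e.g., electrophysiology). Sizes are illustrative only.
n, dA, dB, k = 100, 6, 4, 2                      # samples, feature dims, latent dim
XA = rng.normal(size=(n, dA))
XB = rng.normal(size=(n, dB))
y = (XA[:, 0] + XB[:, 0] > 0).astype(float)      # toy binary phenotype label

# Phase 1 (stand-in for deep manifold alignment): classical CCA gives
# per-modality projections into a common latent space for paired samples.
XA = XA - XA.mean(0)
XB = XB - XB.mean(0)

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

WA = inv_sqrt(XA.T @ XA / n)
WB = inv_sqrt(XB.T @ XB / n)
U, s, Vt = np.linalg.svd(WA @ (XA.T @ XB / n) @ WB)
PA = WA @ U[:, :k]                               # (dA, k) projection for modality A
PB = WB @ Vt.T[:, :k]                            # (dB, k) projection for modality B

# Phase 2: each feature has an embedding (its projection row) in the shared
# space; nearby features are connected in a feature graph whose Laplacian
# regularizes a logistic classifier, smoothing weights of related features.
F = np.vstack([PA, PB])                          # (dA + dB, k) feature embeddings
S = np.exp(-((F[:, None] - F[None]) ** 2).sum(-1))  # RBF feature similarity
Lap = np.diag(S.sum(1)) - S                      # graph Laplacian over features

X = np.hstack([XA, XB])                          # concatenated multi-modal input
w = np.zeros(dA + dB)
lam, lr = 0.1, 0.1                               # penalty strength, step size
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-X @ w))             # logistic predictions
    grad = X.T @ (p - y) / n + lam * (Lap @ w)   # cross-entropy + graph penalty
    w -= lr * grad

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
print("feature weights:", np.round(w, 2))
```

The Laplacian term `w.T @ Lap @ w` (whose gradient is `Lap @ w` up to a constant) penalizes weight differences between features that sit close together in the aligned latent space, which is the mechanism by which the learned cross-modal manifold informs the downstream phenotype classifier.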