A Novel Sparse Dictionary Learning Separation (SDLS) Model With Adaptive Dictionary Mutual Incoherence Constraint for fMRI Data Analysis

2016 
Objective: Many studies have shown that the independence assumption in widely used independent component analysis (ICA) methods is not adaptive enough for brain functional network (BFN) detection, due to complex brain hemodynamics, functional integration, artifacts embedded in functional magnetic resonance imaging (fMRI) data, etc. In this paper, inspired by the sparse coding behavior of the human brain, we propose an effective BFN detection model, called sparse dictionary learning separation (SDLS).

Methods: In SDLS, to address the dilemma of huge numbers of training samples in sparse learning, an efficient spatial-domain data reduction algorithm was first designed to sharply reduce training cost and suppress noise. Then, an improved K-singular value decomposition (K-SVD) was proposed to speed up correct convergence of the dictionary learning process. Furthermore, considering the varying degrees of functional integration and sparsity of BFNs across different fMRI datasets, a minimum description length (MDL)-based framework was proposed to determine two key factors, the dictionary mutual incoherence level and the sparsity level, self-adaptively, yielding an effective model of temporal dynamics. Finally, a least-squares-based functional network reconstruction was presented to extract the final BFNs.

Results: The simulated and real data experiments demonstrated that SDLS was superior to ICA methods in spatial/temporal source identification and showed stronger spatial robustness against varying smoothing kernels.

Conclusion: SDLS was a novel data-driven BFN separation model that jointly considered multiple factors, e.g., the huge-sample dilemma, artifact removal, and the varying degrees of functional integration and sparsity of BFNs.

Significance: As an extension of current fMRI analysis methods, SDLS was a promising model that demonstrated the advantage of sparsity.
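For intuition only, the sketch below illustrates the overall shape of an SDLS-style pipeline described in the Methods: reduce the voxel-wise training samples, learn a sparse temporal dictionary, and reconstruct spatial BFN maps by least squares. It is not the authors' implementation; the k-means reduction, scikit-learn's MiniBatchDictionaryLearning (standing in for the paper's improved K-SVD), and all parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed fMRI data: time points x voxels.
n_timepoints, n_voxels = 200, 5000
Y = rng.standard_normal((n_timepoints, n_voxels))

# 1) Spatial-domain data reduction: k-means centroids of voxel time series
#    stand in for the paper's reduction step, shrinking the number of
#    training samples and averaging out noise (assumed proxy, not the
#    authors' algorithm).
n_reduced = 400
km = MiniBatchKMeans(n_clusters=n_reduced, random_state=0)
km.fit(Y.T)                                  # samples = voxel time series
Y_reduced = km.cluster_centers_              # n_reduced x n_timepoints

# 2) Sparse dictionary learning on the reduced samples. Each learned atom
#    acts as a temporal dynamic; OMP enforces the per-sample sparsity level.
n_atoms, sparsity_level = 30, 5
dl = MiniBatchDictionaryLearning(
    n_components=n_atoms,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=sparsity_level,
    random_state=0,
)
dl.fit(Y_reduced)
D = dl.components_.T                         # n_timepoints x n_atoms (temporal dictionary)

# 3) Least-squares functional network reconstruction on the full data:
#    solve D @ X ~= Y for X; each row of X is a candidate BFN spatial map.
X, *_ = np.linalg.lstsq(D, Y, rcond=None)
print("Temporal dictionary:", D.shape, "Spatial maps:", X.shape)
```

In the paper, the dictionary size, sparsity level, and dictionary mutual incoherence constraint are set self-adaptively via the MDL-based framework rather than fixed by hand as in this sketch.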