Solving the where problem in neuroanatomy: a generative framework with learned mappings to register multimodal, incomplete data into a reference brain

2020 
Mapping information from different brains, gathered using different modalities, into a common coordinate space corresponding to a reference brain is an aspirational goal of modern neuroscience, analogous in importance to mapping genomic data to a reference genome. While brain-atlas mapping workflows exist for single-modality data (3D MRI or STPT image volumes), data sets generally need to be combined across modalities with different contrast mechanisms and scales, in the presence of missing data as well as signals not present in the reference. This has so far been an unsolved problem. We solve this problem in its full generality by developing and implementing a rigorous, non-parametric generative framework that learns unknown mappings between contrast mechanisms from data and infers missing data. Our methodology permits rigorous quantification of the local scale changes between individual brains, which has so far been neglected, and quantitatively characterizes individual variation in shape. Our work establishes a quantitative, scalable and streamlined workflow for unifying a broad spectrum of multimodal whole-brain light-microscopic data volumes into a coordinate-based atlas framework, a prerequisite for the large-scale integration of whole-brain data sets in modern neuroscience.
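The local scale changes mentioned above are conventionally quantified through the Jacobian determinant of the atlas-to-subject deformation: values above 1 indicate local expansion relative to the reference brain, values below 1 indicate contraction. As a minimal, hypothetical sketch (not the authors' implementation), the log Jacobian determinant of a 2D displacement field could be computed with finite differences as follows; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def log_jacobian_determinant(disp, spacing=1.0):
    """Log determinant of the Jacobian of the map phi(x) = x + u(x).

    disp : array of shape (H, W, 2) holding the displacement u(x),
           with disp[..., 0] the axis-0 component and disp[..., 1]
           the axis-1 component.
    Returns an (H, W) array; positive values mean local expansion
    relative to the reference, negative values mean contraction.
    """
    # Finite-difference gradients of each displacement component
    # along each spatial axis (np.gradient returns one array per axis).
    du0_d0, du0_d1 = np.gradient(disp[..., 0], spacing)
    du1_d0, du1_d1 = np.gradient(disp[..., 1], spacing)
    # Jacobian of x -> x + u(x) is the identity plus the displacement
    # gradient; its 2x2 determinant gives the local area change.
    det = (1.0 + du0_d0) * (1.0 + du1_d1) - du0_d1 * du1_d0
    return np.log(det)

# Example: a uniform 10% dilation u(x) = 0.1 * x gives a constant
# area change of 1.1 * 1.1 = 1.21 everywhere.
ys, xs = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
disp = np.stack([0.1 * ys, 0.1 * xs], axis=-1).astype(float)
log_det = log_jacobian_determinant(disp)
```

In 3D the same construction applies with a 3x3 determinant, and in practice deformable-registration toolkits expose this quantity directly.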