Patch spaces and fusion strategies in patch-based label fusion

2019 
Abstract In the field of multi-atlas segmentation, patch-based approaches have shown promising results in the segmentation of biomedical images. In the most common approach, registration is used to warp the atlases to the target space, and the warped atlas labelmaps are then fused into a consensus segmentation based on local appearance information encoded in the form of patches. The registration step establishes spatial correspondence, which is important for obtaining anatomical priors. Patch-based label fusion in the target space has been shown to produce very accurate segmentations, albeit at the expense of registering all atlases to each target image. Moreover, the appearance (i.e., patches) and label information used by label fusion is extracted from the warped atlases, which are subject to interpolation errors. In this work, we revisit and extend the patch-based label fusion framework, exploring the role of extracting this information from the native space of both atlases and target images, thus avoiding interpolation artifacts, while doing so in a way that does not sacrifice the anatomical priors derived from registration. We further propose a common formulation for two widely used label fusion strategies, namely similarity-based fusion and a particular type of learning-based fusion. The proposed framework is evaluated on subcortical structure segmentation in adult brains and on tissue segmentation in fetal brain MRI. Our results indicate that using atlas patches in their native space yields superior performance to warping the atlases to the target image. The learning-based approach tends to outperform the similarity-based approach; notably, using patches in native space lessens the computational requirements of learning. In conclusion, the combination of learning-based label fusion and native atlas patches yields the best performance, with reduced test times compared to conventional similarity-based approaches.
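The abstract refers to conventional similarity-based label fusion, in which each candidate atlas patch votes for its centre-voxel label with a weight given by its appearance similarity to the target patch. The sketch below illustrates that weighted-voting idea for a single target voxel; the Gaussian kernel, the bandwidth parameter h, and the toy patch data are illustrative assumptions, not the paper's actual formulation or implementation.

```python
# Minimal sketch of similarity-based patch label fusion (weighted voting)
# for a single target voxel. Illustrative only; not the paper's method.
import numpy as np


def gaussian_weight(target_patch, atlas_patch, h=0.1):
    """Appearance similarity weight exp(-||P_t - P_a||^2 / h).
    Larger h gives flatter weights (assumed kernel and bandwidth)."""
    diff = target_patch - atlas_patch
    return np.exp(-np.dot(diff, diff) / h)


def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Each candidate atlas patch votes for its centre-voxel label,
    weighted by its similarity to the target patch; return the label
    with the highest accumulated weight."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        w = gaussian_weight(target_patch, patch, h)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 3x3x3 patches flattened to 27-dimensional intensity vectors.
    target_patch = rng.normal(0.5, 0.05, size=27)
    atlas_patches = [target_patch + rng.normal(0.0, s, size=27)
                     for s in (0.02, 0.3, 0.05)]
    atlas_labels = [1, 0, 1]  # centre-voxel labels of the candidate patches
    print(fuse_labels(target_patch, atlas_patches, atlas_labels))  # -> 1
```

In this toy example the two atlas patches that closely resemble the target dominate the vote, so the fused label follows them; the learning-based strategy discussed in the paper replaces such fixed similarity weights with weights or classifiers estimated from the atlas data.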