Systematic Evaluation of Image Tiling Adverse Effects on Deep Learning Semantic Segmentation

2020 
Convolutional neural network (CNN) models achieve state-of-the-art performance on image classification, localization, and segmentation tasks. Limitations in computer hardware, most notably the small memory size of deep learning accelerator cards, prevent relatively large images, such as those from medical and satellite imaging, from being processed as a whole in their original resolution. A fully convolutional topology, such as U-Net, is typically trained on down-sampled images and performs inference on images of their original size and resolution by dividing the larger image into smaller (typically overlapping) tiles, making predictions on these tiles, and stitching them back together as the prediction for the whole image. In this study, we show that this tiling technique, combined with the non-linear nature of CNNs, causes small but relevant differences during inference that can be detrimental to the performance of the model. Here we quantify these variations in both medical and non-medical (i.e., satellite) images and show that training a 2D U-Net model on the whole image substantially improves overall model performance. Finally, we compare 2D and 3D semantic segmentation models to show that providing CNN models with a wider field of view in all three dimensions leads to more accurate and consistent predictions. Our results suggest that tiling the input to CNN models, while perhaps necessary to overcome memory limitations in computer hardware, may lead to undesirable and unpredictable errors in the model's output that can only be adequately mitigated by increasing the model's input to the largest possible field of view.
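The tile-predict-stitch procedure the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the `predict` callable stands in for the trained segmentation network, and the tile size, overlap, and averaging-based stitching strategy are illustrative assumptions.

```python
import numpy as np

def tile_and_stitch(image, predict, tile=64, overlap=16):
    """Split a 2D `image` into overlapping square tiles, run `predict`
    on each tile, and stitch the per-tile outputs back into a full-size
    prediction map by averaging wherever tiles overlap."""
    h, w = image.shape
    step = tile - overlap  # stride between tile origins
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)  # overlap counts
    for y in range(0, h, step):
        for x in range(0, w, step):
            # Clamp tiles at the border so every tile is full-size.
            y0, x0 = min(y, h - tile), min(x, w - tile)
            pred = predict(image[y0:y0 + tile, x0:x0 + tile])
            out[y0:y0 + tile, x0:x0 + tile] += pred
            weight[y0:y0 + tile, x0:x0 + tile] += 1.0
    return out / weight
```

Because a CNN is non-linear and each tile sees only a limited spatial context, the stitched result is generally not identical to running the model on the whole image, which is exactly the discrepancy the paper quantifies.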