Cellular structure image classification with small targeted training samples

2019 
Motivation: Cell shapes provide crucial biological information on complex tissues. Different cell types often have distinct cell shapes, and collective shape changes usually indicate morphogenetic events and mechanisms. Identifying and detecting collective cell shape changes in an extensive collection of 3D time-lapse images of complex tissues is an important step in assaying such mechanisms, but it is a tedious and time-consuming task. Machine learning provides new opportunities to detect cell shape changes automatically. However, it is challenging to generate sufficient training samples for pattern identification through deep learning because of the limited number of images and annotations.

Result: We present a deep learning approach with minimal well-annotated training samples and apply it to identify multicellular rosettes from 3D live images of the Caenorhabditis elegans embryo with fluorescently labelled cell membranes. Our strategy is to combine two approaches, namely feature transfer and generative adversarial networks (GANs), to boost image classification with small training samples. Specifically, we use a GAN framework and conduct unsupervised training to capture the general characteristics of cell membrane images with 11,250 unlabelled images. We then transfer the structure of the GAN discriminator into a new AlexNet-style neural network for further learning with several dozen labelled samples. Our experiments showed that with 10-15 well-labelled rosette images and 30-40 randomly selected non-rosette images, our approach can identify rosettes with over 80% accuracy and capture over 90% of the model accuracy achieved with a training dataset that is five times larger. We also established a public benchmark dataset for rosette detection. This GAN-based transfer approach can be applied to study other cellular structures with minimal training samples.
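
The following is a minimal sketch of the transfer step described above, not the authors' implementation: a DCGAN-style discriminator is pre-trained on unlabelled membrane images, and its convolutional trunk is then reused as the feature extractor of a small two-class classifier. Image size, channel counts, and layer shapes are illustrative assumptions.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """DCGAN-style discriminator; `features` is the part transferred later."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),                          # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),   # -> 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),  # -> 8x8
        )
        self.real_fake = nn.Conv2d(256, 1, 8)  # real/fake score used during GAN pre-training

    def forward(self, x):
        return self.real_fake(self.features(x)).view(-1)

class RosetteClassifier(nn.Module):
    """Classifier that inherits the pre-trained discriminator trunk and adds
    fully connected layers for the two-class rosette decision (hypothetical sizes)."""
    def __init__(self, pretrained_disc: Discriminator):
        super().__init__()
        self.features = pretrained_disc.features  # transferred convolutional layers
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(512, 2),  # rosette / non-rosette
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: after unsupervised GAN training on the ~11k unlabelled membrane images,
# fine-tune the classifier on the small labelled set (10-15 rosettes, 30-40 non-rosettes).
disc = Discriminator()                     # assume weights loaded from GAN pre-training
model = RosetteClassifier(disc)
logits = model(torch.randn(4, 1, 64, 64))  # 4 grayscale 64x64 membrane crops
print(logits.shape)                        # torch.Size([4, 2])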