Optimising Realism of Synthetic Agricultural Images using Cycle Generative Adversarial Networks

2017 
A bottleneck of state-of-the-art machine learning methods, e.g. deep learning, for plant part image segmentation in agricultural robotics is the requirement for large manually annotated datasets. As a solution, large synthetic datasets including ground truth can be rendered that realistically reflect the empirical situation. However, a dissimilarity gap can remain between synthetic and empirical data due to incomplete manual modelling. This paper contributes to closing this gap by optimising the realism of synthetic agricultural images using unsupervised cycle generative adversarial networks (CycleGANs), enabling unpaired image-to-image translation from the synthetic to the empirical domain and vice versa. For this purpose, the Capsicum annuum (sweet or bell pepper) dataset was used, containing 10,500 synthetic and 50 empirical annotated images; an additional 225 unlabelled empirical images were also used. We hypothesised that the similarity of the synthetic images to the empirical images increases, both qualitatively and quantitatively, when they are translated to the empirical domain, and we investigated the effect of the translation on three factors: color, local texture and morphology. Results showed that the mean class color distribution correlation with the empirical dataset increased from 0.62 before translation to 0.90 after translation of the synthetic dataset. Qualitatively, synthetic images translated very well in terms of local features such as color, illumination scattering and texture. Global features such as plant morphology, however, appeared not to be translatable.
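
For readers unfamiliar with the objective, the sketch below illustrates the core CycleGAN training step used for this kind of unpaired translation: an adversarial loss per domain plus a cycle-consistency loss that forces a synthetic image to be recoverable after a round trip through the empirical domain. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the network stubs, names such as `G_s2e`, and the loss weight are assumptions (the weight of 10.0 follows the original CycleGAN paper, not necessarily this one).

```python
# Minimal CycleGAN generator training step (illustrative sketch, not the paper's code).
import torch
import torch.nn as nn

def generator():
    # Placeholder generator; real CycleGANs use ResNet- or U-Net-style generators.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

def discriminator():
    # Placeholder PatchGAN-style discriminator: one logit per spatial patch.
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 4, stride=2, padding=1),
    )

G_s2e, G_e2s = generator(), generator()        # synthetic -> empirical, and back
D_emp, D_syn = discriminator(), discriminator()

opt_G = torch.optim.Adam(
    list(G_s2e.parameters()) + list(G_e2s.parameters()), lr=2e-4)
mse, l1 = nn.MSELoss(), nn.L1Loss()
lambda_cyc = 10.0  # cycle-consistency weight (assumed CycleGAN default)

def generator_step(x_syn, x_emp):
    """One optimisation step for both generators (discriminator step omitted)."""
    fake_emp, fake_syn = G_s2e(x_syn), G_e2s(x_emp)
    # Adversarial terms: each translated image should fool the target-domain critic.
    pred_e, pred_s = D_emp(fake_emp), D_syn(fake_syn)
    adv = mse(pred_e, torch.ones_like(pred_e)) + mse(pred_s, torch.ones_like(pred_s))
    # Cycle-consistency terms: syn -> emp -> syn and emp -> syn -> emp round trips.
    cyc = l1(G_e2s(fake_emp), x_syn) + l1(G_s2e(fake_syn), x_emp)
    loss = adv + lambda_cyc * cyc
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()

# Example with random stand-in batches in [-1, 1]:
x_syn = torch.rand(4, 3, 64, 64) * 2 - 1   # synthetic renders
x_emp = torch.rand(4, 3, 64, 64) * 2 - 1   # unlabelled empirical images
print(generator_step(x_syn, x_emp))
```

The cycle term is what makes the unpaired setting feasible: since no pixel-wise correspondence exists between the 10,500 synthetic and 225 unlabelled empirical images, consistency after a round trip stands in for a paired reconstruction loss.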
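
The abstract does not spell out how the mean class color distribution correlation was computed. One plausible reading, sketched below purely as an assumption, is: for each annotated class, build a color histogram over that class's pixels in each dataset, take the Pearson correlation between the synthetic and empirical histograms, and average over the classes. All function names and the bin count are illustrative.

```python
# Assumed reconstruction of a per-class color-distribution correlation metric.
import numpy as np

def class_color_correlation(imgs_a, masks_a, imgs_b, masks_b, n_classes, bins=32):
    """Mean per-class color-histogram Pearson correlation between two datasets.

    imgs_*:  float arrays (N, H, W, 3) with values in [0, 1]
    masks_*: int arrays (N, H, W) of per-pixel class labels
    """
    def class_hist(imgs, masks, c):
        pixels = imgs[masks == c]  # (M, 3) RGB pixels belonging to class c
        hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 1)] * 3)
        return hist.ravel() / max(hist.sum(), 1)  # normalise to a distribution

    corrs = []
    for c in range(n_classes):
        h_a = class_hist(imgs_a, masks_a, c)
        h_b = class_hist(imgs_b, masks_b, c)
        corrs.append(np.corrcoef(h_a, h_b)[0, 1])  # Pearson correlation
    return float(np.mean(corrs))

# Toy sanity check: identical datasets should give a correlation of 1.0.
rng = np.random.default_rng(0)
imgs = rng.random((5, 8, 8, 3))
masks = rng.integers(0, 2, (5, 8, 8))
print(class_color_correlation(imgs, masks, imgs.copy(), masks.copy(), n_classes=2))
```

Under this reading, the reported improvement from 0.62 to 0.90 would mean the translated synthetic images' per-class color statistics moved substantially closer to the empirical ones.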