Combining Syntactic Methods With LSTM to Classify Soybean Aerial Images

2020 
Syntactic methods in computer vision represent visual patterns from a hierarchical and compositional perspective, encoding them as strings. Long short-term memory (LSTM) networks are able to learn patterns in sequences. In this letter, we propose a syntactic approach to represent visual patterns as sequences of symbols, and we use an LSTM as a classifier to learn the relationships between the symbols in those sequences. An extensive experimental evaluation using aerial images of a soybean field captured by unmanned aerial vehicles was conducted to compare our method with two deep learning architectures, one syntactic method, and one shallow learning algorithm. The results achieved by the proposed method remain stable even when trained on small data sets, suggesting that representing visual patterns compositionally, by repeating primitives, may be a viable alternative when only a limited number of samples is available for training.
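The syntactic encoding step described above can be illustrated with a minimal sketch. The choices below (intensity-quantized cells, the `abcd` alphabet, and the `patch_to_string` helper) are illustrative assumptions, not the paper's actual primitive scheme; they only show how an image patch can be turned into a symbol string suitable for a sequence classifier such as an LSTM.

```python
# Hypothetical sketch: the paper's actual primitives and alphabet are
# not specified here. Each cell x cell block of an intensity grid is
# reduced to its mean value and quantized to one symbol, producing a
# string that a sequence model (e.g. an LSTM) could consume.

def patch_to_string(patch, cell=2, alphabet="abcd"):
    """Encode a 2-D grid of intensities (0-255) as a symbol string."""
    h, w = len(patch), len(patch[0])
    symbols = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            # Gather the block, clipping at the patch border.
            block = [patch[i + di][j + dj]
                     for di in range(cell) for dj in range(cell)
                     if i + di < h and j + dj < w]
            mean = sum(block) / len(block)
            # Uniform quantization of the mean into len(alphabet) bins.
            idx = min(int(mean * len(alphabet) / 256), len(alphabet) - 1)
            symbols.append(alphabet[idx])
    return "".join(symbols)

patch = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [128, 128, 64, 64],
    [128, 128, 64, 64],
]
print(patch_to_string(patch))  # one symbol per 2x2 cell -> "adcb"
```

Strings produced this way would then be tokenized (one embedding per symbol) and fed to the LSTM classifier described in the abstract.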