Local Descriptor-based Multi-Prototype Network for Few-shot Learning

2021 
Abstract

Prototype-based few-shot learning methods are promising because they are simple yet effective for handling any-shot problems, and many prototype-based works have since been proposed. However, these traditional methods generally use only a single prototype to represent a class, which cannot effectively capture the complicated distribution of a class. To tackle this problem, we propose a novel Local descriptor-based Multi-Prototype Network (LMPNet), a well-designed framework that generates an embedding space with multiple prototypes per class. Specifically, the proposed LMPNet employs local descriptors to represent each image, which capture more informative and subtler cues than the commonly adopted image-level features. Moreover, to alleviate the uncertainty introduced by the fixed construction of prototypes (averaging over samples), we introduce a channel squeeze and spatial excitation (sSE) attention module that learns multiple local descriptor-based prototypes for each class through end-to-end training. Extensive experiments on both few-shot and fine-grained few-shot image classification tasks have been conducted on various benchmark datasets, including miniImageNet, tieredImageNet, Stanford Dogs, Stanford Cars, and CUB-200-2010. On these datasets, LMPNet shows tangible performance improvements over the baseline models.
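
The abstract's channel squeeze and spatial excitation (sSE) attention follows a known formulation: a 1x1 convolution squeezes the channel dimension and a sigmoid yields a per-location weight over the local descriptors. The sketch below is not the authors' implementation; the descriptor tensor shape (B, C, H, W) and the way the attention map re-weights descriptors before prototype construction are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SpatialExcitation(nn.Module):
    """Channel squeeze + spatial excitation (sSE): a 1x1 conv reduces the
    channel dimension to 1, and a sigmoid produces a weight per spatial
    location, which re-weights the local descriptors at that location."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) local descriptors from the embedding backbone
        attn = torch.sigmoid(self.squeeze(x))   # (B, 1, H, W) spatial weights
        return x * attn                         # spatially attended descriptors

# Usage sketch: attend over H*W local descriptors before aggregating them
# into multiple per-class prototypes (aggregation step omitted here).
feats = torch.randn(4, 64, 21, 21)              # hypothetical Conv-4 feature maps
weighted = SpatialExcitation(64)(feats)         # same shape, re-weighted
```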