Towards Robust Episodic Meta-Learning

2021 
Meta-learning learns across historical tasks with the goal of discovering a representation from which it is easy to adapt to new, unseen tasks. Episodic meta-learning simulates a realistic setting by generating small artificial tasks from a larger fixed training set for meta-training, and proceeds analogously for meta-testing. However, this (meta-)learning paradigm has recently been shown to be brittle in practice, suggesting that the inductive bias encoded in the learned representations is inadequate. In this work we propose to compose episodes so as to robustify meta-learning in the few-shot setting, in order to learn more efficiently and to generalize better to new tasks. We use active learning scoring rules to select the data included in each episode. We assume that the meta-learner receives new tasks at random, but that the data associated with each task can be selected from a larger pool of unlabeled data, and we investigate where active learning can boost the performance of episodic meta-learning. We show that, instead of selecting samples at random, it is better to select them in an active manner, especially in settings with out-of-distribution and class-imbalanced tasks. We evaluate our method with Prototypical Networks, foMAML, and protoMAML, reporting significant improvements on public benchmarks.
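The episode-composition idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes an entropy-based acquisition score over model predictions on the unlabeled pool, whereas the paper evaluates several active learning scoring rules; the function names and the per-class top-k selection protocol are hypothetical.

```python
import numpy as np

def entropy_scores(probs):
    """Predictive entropy per pool sample; higher means more uncertain.

    probs: (n_pool, n_classes) array of predicted class probabilities.
    """
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def compose_episode(pool_probs, pool_labels, n_way, k_shot, rng=None):
    """Build an episode's support set actively instead of at random.

    The task (the set of n_way classes) is still drawn at random, but the
    k_shot support samples per class are the most uncertain ones in the pool
    (labels here stand in for the oracle that annotates selected samples).
    Returns (support_indices, chosen_classes).
    """
    rng = np.random.default_rng(rng)
    classes = rng.choice(np.unique(pool_labels), size=n_way, replace=False)
    scores = entropy_scores(pool_probs)
    support = []
    for c in classes:
        idx = np.flatnonzero(pool_labels == c)
        # Take the k_shot highest-scoring (most uncertain) samples of class c.
        top = idx[np.argsort(-scores[idx])[:k_shot]]
        support.extend(top.tolist())
    return np.array(support), classes
```

Swapping `entropy_scores` for another acquisition function (e.g. margin- or diversity-based) changes only the scoring step, which is what makes it easy to compare scoring rules under a fixed episodic protocol.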