CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
2021
Most recent progress in natural language understanding (NLU) has been driven,
in part, by benchmarks such as GLUE, SuperGLUE, SQuAD, etc. In fact, many NLU
models have now matched or exceeded "human-level" performance on many tasks in
these benchmarks. Most of these benchmarks, however, give models access to
relatively large amounts of labeled data for training. As such, the models are
provided far more data than required by humans to achieve strong performance.
That has motivated a line of work that focuses on improving few-shot learning
performance of NLU models. However, there is a lack of standardized evaluation
benchmarks for few-shot NLU resulting in different experimental settings in
different papers. To help accelerate this line of work, we introduce CLUES
(Constrained Language Understanding Evaluation Standard), a benchmark for
evaluating the few-shot learning capabilities of NLU models. We demonstrate
that while recent models reach human performance when they have access to large
amounts of labeled data, there is a huge gap in performance in the few-shot
setting for most tasks. We also demonstrate differences between alternative
model families and adaptation techniques in the few-shot setting. Finally, we
discuss several principles and choices in designing the experimental settings
for evaluating the true few-shot learning performance and suggest a unified
standardized approach to few-shot learning evaluation. We aim to encourage
research on NLU models that can generalize to new tasks with a small number of
examples. Code and data for CLUES are available at
https://github.com/microsoft/CLUES.
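
As a rough illustration of what a standardized few-shot evaluation protocol can look like in practice, the sketch below fine-tunes a pretrained classifier on K labeled examples per class over several fixed random seeds and reports mean accuracy with its spread. This is a hypothetical example built on the Hugging Face transformers and PyTorch libraries, not the CLUES evaluation code; the data format (dicts with "text" and "label" fields), the roberta-base checkpoint, the hyperparameters, and the helper names are illustrative assumptions.

```python
# Hypothetical sketch of a standardized few-shot evaluation loop (not the CLUES code).
import random
from statistics import mean, stdev

import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def sample_few_shot(examples, k_per_class, seed):
    """Draw up to K examples per label with a fixed seed so runs are reproducible."""
    rng = random.Random(seed)
    by_label = {}
    for ex in examples:
        by_label.setdefault(ex["label"], []).append(ex)
    shots = []
    for label_examples in by_label.values():
        shots.extend(rng.sample(label_examples, min(k_per_class, len(label_examples))))
    return shots


def evaluate_few_shot(train_examples, test_examples, k=10, seeds=(13, 21, 42),
                      model_name="roberta-base", num_labels=2):
    """Fine-tune on K shots per class for each seed; return mean/std accuracy."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    def collate(batch):
        enc = tokenizer([ex["text"] for ex in batch], padding=True,
                        truncation=True, return_tensors="pt")
        enc["labels"] = torch.tensor([ex["label"] for ex in batch])
        return dict(enc)

    accuracies = []
    for seed in seeds:
        torch.manual_seed(seed)
        model = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=num_labels)
        shots = sample_few_shot(train_examples, k, seed)
        train_loader = DataLoader(shots, batch_size=4, shuffle=True,
                                  collate_fn=collate)
        optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

        # A handful of epochs is typical when only a few dozen examples exist.
        model.train()
        for _ in range(10):
            for batch in train_loader:
                optimizer.zero_grad()
                loss = model(**batch).loss
                loss.backward()
                optimizer.step()

        # Evaluate on the full held-out test set.
        model.eval()
        correct = 0
        with torch.no_grad():
            for ex in test_examples:
                enc = tokenizer(ex["text"], truncation=True, return_tensors="pt")
                pred = model(**enc).logits.argmax(dim=-1).item()
                correct += int(pred == ex["label"])
        accuracies.append(correct / len(test_examples))

    return mean(accuracies), stdev(accuracies)
```

Reporting the mean and variance across seeds, rather than a single run, is the kind of choice the paper argues should be standardized, since few-shot results are highly sensitive to which examples are sampled.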