FRAGE: Frequency-Agnostic Word Representation

Authors:
Chengyue Gong, Peking University
Di He, Peking University
Xu Tan, Microsoft Research Asia
Tao Qin, Microsoft Research Asia
Liwei Wang, Peking University
Tie-Yan Liu, Microsoft Research Asia

Abstract:

Continuous word representation (a.k.a. word embedding) is a basic building block in many neural network-based models used in natural language processing tasks. Although it is widely accepted that words with similar semantics should be close to each other in the embedding space, we find that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space, and the embedding of a rare word can be far from that of a popular word even if the two are semantically similar. This makes the learned embeddings ineffective, especially for rare words, and consequently limits the performance of the neural network models that use them. To mitigate this issue, in this paper we propose a neat, simple, yet effective adversarial training method that blurs the boundary between the embeddings of high-frequency and low-frequency words. We conduct comprehensive studies on ten datasets across four natural language processing tasks: word similarity, language modeling, machine translation, and text classification. The results show that our method outperforms the baselines on all tasks.
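
The adversarial scheme described in the abstract is compact enough to sketch. Below is a minimal PyTorch sketch, not the authors' released code: a discriminator tries to tell from an embedding alone whether the word is popular or rare, while the embeddings are trained both for the downstream task and to fool the discriminator. The task head, the discriminator architecture, and hyperparameters such as lambda_adv and top_k are all illustrative assumptions; the paper applies the idea to word similarity, language modeling, machine translation, and text classification models.

```python
# Minimal sketch of FRAGE-style frequency-adversarial training in PyTorch.
# Everything here is illustrative (toy task head, made-up hyperparameters),
# not the authors' released code.
import torch
import torch.nn as nn

vocab_size, emb_dim, num_classes = 10_000, 128, 5
top_k = 2_000          # assumption: word ids are sorted by frequency, so
                       # ids below top_k belong to the "popular" words

embedding = nn.Embedding(vocab_size, emb_dim)
task_head = nn.Linear(emb_dim, num_classes)      # stand-in for the task model
discriminator = nn.Sequential(                   # predicts "is this word popular?"
    nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_task = torch.optim.Adam(
    list(embedding.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
lambda_adv = 0.1       # weight of the adversarial term (a hyperparameter)

def train_step(word_ids, labels):
    emb = embedding(word_ids)                              # (batch, emb_dim)
    is_popular = (word_ids < top_k).float().unsqueeze(1)   # (batch, 1)

    # 1) Discriminator step: learn to separate popular from rare embeddings.
    #    emb is detached so this step does not move the embeddings.
    opt_disc.zero_grad()
    d_loss = bce(discriminator(emb.detach()), is_popular)
    d_loss.backward()
    opt_disc.step()

    # 2) Task step: minimize the task loss while MAXIMIZING the
    #    discriminator's loss, i.e. the embeddings try to fool it.
    opt_task.zero_grad()
    for p in discriminator.parameters():
        p.requires_grad_(False)    # freeze D; gradients still flow to emb
    task_loss = ce(task_head(emb), labels)
    adv_loss = bce(discriminator(emb), is_popular)
    (task_loss - lambda_adv * adv_loss).backward()
    for p in discriminator.parameters():
        p.requires_grad_(True)
    opt_task.step()
    return task_loss.item(), d_loss.item()

# Toy usage on random data.
word_ids = torch.randint(0, vocab_size, (32,))
labels = torch.randint(0, num_classes, (32,))
print(train_step(word_ids, labels))
```

The minimax structure mirrors GAN training: if the discriminator is driven toward chance level, word frequency is no longer recoverable from an embedding by that discriminator, which is the "blurred boundary" between high-frequency and low-frequency embeddings that the abstract describes.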
