Zi Yin, Stanford University
Yuanyuan Shen, Stanford University & Microsoft
In this paper, we provide a theoretical understanding of word embedding and its dimensionality. Motivated by the unitary-invariance of word embedding, we propose the Pairwise Inner Product (PIP) loss, a novel metric on the dissimilarity between word embeddings. Using techniques from matrix perturbation theory, we reveal a fundamental bias-variance trade-off in dimensionality selection for word embeddings. This trade-off sheds light on many previously unexplained empirical observations, such as the existence of an optimal dimensionality. Moreover, it yields new insights, for example on when and how word embeddings are robust to over-fitting. By optimizing over the bias-variance trade-off of the PIP loss, we can explicitly answer the open question of dimensionality selection for word embedding.
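To make the central quantity concrete: for embedding matrices $E_1, E_2$ (one row per word), the PIP loss compares their pairwise inner product matrices, $\|E_1 E_1^\top - E_2 E_2^\top\|_F$, and is therefore invariant to unitary transformations of either embedding, since $(EU)(EU)^\top = EE^\top$. Below is a minimal NumPy sketch of this metric; the vocabulary size and dimensionality are illustrative choices, not values from the paper.

```python
import numpy as np

def pip_loss(E1, E2):
    """PIP loss: Frobenius norm of the difference between the
    pairwise inner product (PIP) matrices of two embeddings."""
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T, ord="fro")

rng = np.random.default_rng(0)
E = rng.standard_normal((1000, 50))   # illustrative: 1000 words, dimension 50

# Unitary invariance: rotating an embedding leaves its PIP matrix,
# and hence the PIP loss, unchanged (up to floating-point error).
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # random orthogonal matrix
assert pip_loss(E, E @ U) < 1e-8
```

Because the metric depends on embeddings only through their Gram matrices, it compares the relative geometry of words rather than their coordinates, which is what makes it a suitable yardstick for embeddings trained with different dimensionalities or random seeds.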