Improved Word Representations Via Summed Target and Context Embeddings
2021
Neural embedding models are often described as having an 'embedding layer', a set of network activations that can be extracted from the model to obtain word or sentence representations. In this paper, we show via a modification of the well-known word2vec algorithm that relevant semantic information is contained throughout the entire network, not just in the commonly extracted hidden layer. This extra information can be recovered by summing embeddings from both the input and output weight matrices of a skip-gram model. Word embeddings generated this way exhibit strong semantic structure and outperform traditionally extracted word2vec embeddings on a number of analogy tasks.
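The core operation described in the abstract, summing the input (target) and output (context) weight matrices of a skip-gram model, is straightforward to reproduce. Below is a minimal sketch using gensim; the paper does not specify an implementation, so the library choice and all identifiers here are assumptions. With negative sampling, gensim exposes the input embeddings as `model.wv.vectors` and the output embeddings as `model.syn1neg`, and both matrices share the same row indexing.

```python
import numpy as np
from gensim.models import Word2Vec

# A tiny toy corpus; in practice you would train on a real corpus.
sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["the", "dog", "barks", "at", "the", "quick", "fox"],
]

# Standard skip-gram with negative sampling (sg=1, negative > 0).
model = Word2Vec(sentences, vector_size=50, window=2, sg=1,
                 negative=5, min_count=1, epochs=50, seed=0)

# Input (target) embeddings live in model.wv.vectors; the output
# (context) embeddings for negative sampling are in model.syn1neg.
# Both matrices are indexed identically via model.wv.key_to_index.
summed = model.wv.vectors + model.syn1neg

def summed_vector(word: str) -> np.ndarray:
    """Summed target+context embedding for `word` (hypothetical helper)."""
    return summed[model.wv.key_to_index[word]]

print(summed_vector("fox").shape)  # (50,)
```

The summed vectors can then be plugged into the usual cosine-similarity or analogy evaluations in place of the standard hidden-layer embeddings.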