Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion

2021 
In this paper, we leverage a Siamese textual encoder to tackle the knowledge base completion problem efficiently and effectively. Traditional graph embedding-based methods learn embeddings directly from a knowledge base’s structure but are inherently vulnerable to the graph’s sparsity and incompleteness. In contrast, previous textual encoding-based methods capture such structured knowledge from a semantic perspective and employ deep neural textual encoders to model graph triples in semantic space, but they fail to balance contextual features against model efficiency. Therefore, we propose a Siamese textual encoder that operates on each graph triple in the knowledge base: the contextual features between a head/tail entity and a relation are captured to produce relation-aware entity embeddings, while the Siamese structure avoids a combinatorial explosion of encoder passes during inference. In experiments, the proposed method reaches state-of-the-art or comparable performance on several link prediction datasets. Further analyses demonstrate that it is much more efficient than its baseline while achieving similar evaluation results.
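The following is a minimal sketch of the general Siamese textual-encoding idea described above, not the authors' released code: a shared pre-trained transformer encodes a relation-aware query ("head [SEP] relation") in one branch and candidate tail entities in the other, and triples are scored by embedding similarity. The model name `bert-base-uncased`, the input construction, and the dot-product scoring are illustrative assumptions.

```python
# Sketch only: Siamese triple scoring with a shared pre-trained transformer.
# Assumptions: BERT as the encoder, "head [SEP] relation" as the query text,
# and dot-product similarity as the score; the paper's details may differ.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")  # shared ("Siamese") weights

def encode(texts):
    """Encode a batch of strings into [CLS] embeddings with the shared encoder."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] token embedding per input

# One branch sees the relation-aware query; the other sees candidate tails.
# Because both branches share weights, candidate-entity embeddings can be
# pre-computed once and reused, so inference does not require re-encoding
# every (head, relation, tail) combination.
query = encode(["Barack Obama [SEP] place of birth"])        # hypothetical example triple
candidates = encode(["Honolulu", "Chicago", "Nairobi"])

scores = query @ candidates.T  # plausibility score for each candidate tail
print(scores.softmax(dim=-1))
```

In this setup the efficiency gain comes from the shared encoder: entity embeddings are computed once and ranked against any query by a cheap similarity operation, rather than running the transformer on every candidate triple.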