Patent2Vec: Multi-view representation learning on patent-graphs for patent classification

Patent classification has long been treated as a crucial task for supporting related services. Although considerable effort has been devoted to automatic patent classification, prior work mainly focuses on mining textual information such as titles and abstracts. Meanwhile, few studies pay attention to the metadata, e.g., the inventors and the assignee company, and the potential correlations captured by metadata-based graphs have been largely ignored. To this end, in this paper we develop a new paradigm for the patent classification task from the perspective of multi-view patent graph analysis and propose a novel framework, Patent2Vec, that learns low-dimensional representations of patents for classification. Specifically, we first apply graph representation learning to each individual graph, so that view-specific representations are learned that capture both network structure and side information. We then propose a view enhancement module that enriches single-view representations by exploiting cross-view correlation knowledge. Afterward, we deploy an attention-based multi-view fusion method to obtain a refined representation for each patent, and we further design a view alignment module that constrains the fused representation to a relational embedding space, preserving latent relational information. Empirical results demonstrate that our model improves not only classification accuracy but also the interpretability of patent classification as reflected in the multi-source data.
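To make the attention-based multi-view fusion step concrete, the following is a minimal numpy sketch of one common parameterization (a learned projection `W` and context vector `w` scoring each view, then a softmax-weighted sum). The shapes, variable names, and scoring function here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def attention_fuse(views, W, w):
    """Fuse K view-specific embeddings into one patent representation.

    views : (K, d) array, one embedding per view (text, inventor graph, ...).
    W     : (d_a, d) projection matrix (hypothetical parameter).
    w     : (d_a,) attention context vector (hypothetical parameter).
    Returns the fused (d,) embedding and the (K,) attention weights.
    """
    # Score each view: w^T tanh(W h_k), a standard additive-attention form.
    scores = np.array([w @ np.tanh(W @ h) for h in views])
    scores -= scores.max()                       # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over views
    return alpha @ views, alpha                  # weighted sum of views

# Toy usage with random data: 3 views, 8-dim embeddings, 4-dim attention space.
rng = np.random.default_rng(0)
K, d, d_a = 3, 8, 4
views = rng.normal(size=(K, d))
W = rng.normal(size=(d_a, d))
w = rng.normal(size=(d_a,))
fused, alpha = attention_fuse(views, W, w)
```

In such a scheme, the per-view weights `alpha` are what lend interpretability: they indicate how much each data source contributed to a given patent's final representation.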