Exploiting Centrality Information with Graph Convolutions for Network Representation Learning

2019 
Network embedding has proven effective for learning low-dimensional vector representations of network vertices and has recently received a tremendous amount of research attention. However, most existing network embedding methods merely focus on preserving the first- and second-order proximities between nodes, while the important property of node centrality is neglected. Various centrality measures, such as Degree, Closeness, Betweenness, Eigenvector, and PageRank centrality, have been designed to measure the importance of individual nodes. In this paper, we focus on a novel yet unsolved problem: learning low-dimensional continuous node representations that not only preserve the network structure but also retain centrality information. We propose a generalizable model, GraphCSC, that utilizes both linkage information and centrality information to learn low-dimensional vector representations for network vertices. The embeddings learned by GraphCSC preserve different kinds of centrality information of nodes. In addition, we propose GraphCSC-M, a more comprehensive model that preserves multiple kinds of centrality information simultaneously by learning several centrality-specific embeddings, and a novel attentive multi-view learning approach is developed to compress the multiple embeddings of a node into a compact vector representation. Extensive experiments demonstrate that our model preserves different centrality information of nodes and achieves better performance on several benchmark tasks than recent state-of-the-art network embedding methods.
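The abstract names five standard centrality measures. As an illustrative sketch only (not code from the paper), the snippet below uses NetworkX to compute these measures on a small example graph; in a GraphCSC-style setting, such per-node scores would serve as the centrality information the embeddings are trained to preserve alongside the linkage structure.

```python
import networkx as nx

# Small example graph; any nx.Graph would work here.
G = nx.karate_club_graph()

# The five centrality measures mentioned in the abstract.
centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    "pagerank": nx.pagerank(G, alpha=0.85),
}

# Rank nodes under each measure; different measures induce different
# importance orderings, which is why preserving several of them in a
# single embedding is non-trivial.
for name, scores in centralities.items():
    top = sorted(scores, key=scores.get, reverse=True)[:5]
    print(f"{name:12s} top-5 nodes: {top}")
```

Each measure yields a score per node; comparing the top-ranked nodes across measures shows how the notions of importance diverge, which motivates learning centrality-specific embeddings as in GraphCSC-M.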