Influence-aware graph neural networks

2021 
Abstract: Network representation learning endeavors to learn low-dimensional dense representations for the nodes of a network. With the rapid development of online social platforms, the analysis of social networks has become increasingly significant. Although network representation learning can facilitate social network analysis, most existing algorithms exploit only the explicit structure among nodes to obtain node representations. Moreover, traditional network representation learning techniques ignore the influence of nodes in a network when generating node representations. Motivated by this, we propose an influence-aware graph neural network (IAGNN) framework, which learns latent feature representations of nodes by incorporating both node influence and global structure information into the embedding process for encoding graph-structured data. The resulting low-dimensional dense node representations can be used for downstream tasks such as user classification and user behavior prediction. Specifically, we assign different weights to each node according to the different types of topology among its neighbors, and integrate these weights with the basic influence of each node to generate an intermediate matrix carrying influence information. The intermediate matrix is then encoded into a low-dimensional dense vector space by leveraging the attention mechanism and the graph convolution operation. Extensive experiments are conducted on five datasets, where IAGNN achieves an average accuracy 3% higher than the comparison algorithms on node classification and link prediction tasks. The experimental results demonstrate that our model significantly outperforms state-of-the-art network embedding methods such as GCN, GAT, GraphSAGE, and AGNN on node classification and link prediction tasks.
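The abstract outlines the core mechanism: neighbor links are reweighted by per-node influence scores, an attention mechanism normalizes the weighted links, and a graph convolution encodes the result. The sketch below is a minimal numpy illustration of that idea, not the authors' implementation; the function name, the use of a masked softmax for attention, and the specific way influence scales the adjacency are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def influence_aware_conv(A, X, W, influence):
    """Hypothetical influence-aware graph convolution layer.

    A         : (n, n) binary adjacency matrix
    X         : (n, d) node feature matrix
    W         : (d, k) learnable weight matrix
    influence : (n,)   per-node influence scores (assumed precomputed)

    Each neighbor's contribution is scaled by its influence score,
    then normalized per node via a masked softmax (a simple stand-in
    for the paper's attention mechanism), followed by ReLU(att @ X @ W).
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops, as in GCN
    A_inf = A_hat * influence[np.newaxis, :]    # scale links by neighbor influence
    # masked softmax: attend only over actual edges (and the self-loop)
    scores = np.where(A_hat > 0, A_inf, -np.inf)
    att = softmax(scores, axis=1)
    return np.maximum(att @ X @ W, 0.0)         # ReLU activation
```

In this sketch, a high-influence node contributes more to each of its neighbors' aggregated representations, which is the qualitative behavior the abstract attributes to the intermediate influence matrix.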