Semi-AttentionAE: An Integrated Model for Graph Representation Learning

2021 
Graph embedding learns low-dimensional vector representations that capture and preserve the information in the original graph. Common shallow neural networks and deep autoencoders use only the adjacency matrix as input and usually ignore node attributes and features. Shallow graph neural networks cannot propagate node feature information over a large scale, and many deep models suffer from over-smoothing. These methods therefore cannot fully incorporate network information. In this paper, we propose a novel Semi-AttentionAE model that fully exploits node features, node labels, and network structure. More specifically, we integrate a supervised graph attention network, which extracts information from both node features and network structure, with an unsupervised autoencoder, which reduces dimensionality while preserving structural information. Finally, ensemble learning is introduced to jointly train the combined model and obtain the final embedding. We conduct node classification and visualization experiments on four real-world datasets: two citation networks, one co-occurrence network, and one commodity network. The results suggest that the proposed Semi-AttentionAE model can embed both graph structure and node features, and that the integrated model matches or exceeds four well-established baselines.
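The abstract describes two branches that Semi-AttentionAE combines: a graph attention network that aggregates neighbor features, and an autoencoder that compresses features while preserving reconstructable structure. As a rough illustration only (the paper's actual architecture, head counts, and training procedure are not specified here), the sketch below shows a single-head graph-attention forward pass and a linear autoencoder on a toy graph; all weights, dimensions, and the graph itself are made up for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(X, A, W, a):
    """Single-head graph attention forward pass (illustrative).
    X: (N, F) node features, A: (N, N) adjacency with self-loops,
    W: (F, F') projection, a: (2F',) attention vector."""
    H = X @ W                                   # project features: (N, F')
    N = H.shape[0]
    e = np.empty((N, N))
    for i in range(N):                          # e_ij = LeakyReLU(a^T [h_i || h_j])
        for j in range(N):
            z = np.concatenate([H[i], H[j]]) @ a
            e[i, j] = z if z > 0 else 0.2 * z
    e = np.where(A > 0, e, -1e9)                # attend only over graph edges
    alpha = softmax(e, axis=1)                  # normalized attention coefficients
    return alpha @ H                            # neighbor-weighted embeddings

def autoencoder(X, W_enc, W_dec):
    """Linear autoencoder: encode to a low dimension, decode back."""
    Z = X @ W_enc
    return Z, Z @ W_dec

# toy ring graph with 4 nodes, self-loops included (hypothetical data)
A = np.eye(4) + np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))

H = gat_layer(X, A, rng.normal(size=(5, 3)), rng.normal(size=(6,)))
Z, X_rec = autoencoder(X, rng.normal(size=(5, 2)), rng.normal(size=(2, 5)))

# in a joint model, a supervised loss on H would be combined with
# this unsupervised reconstruction loss during training
recon_loss = np.mean((X - X_rec) ** 2)
print(H.shape, Z.shape)
```

In the full model, the two branches would be trained jointly, with the attention branch supervised by node labels and the autoencoder branch driven by the reconstruction objective.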