Interpretable Feature Learning of Graphs using Tensor Decomposition

2019 
In recent years, node embedding algorithms, which learn low-dimensional vector representations for the nodes of a graph, have been one of the key research interests of the graph mining community. Existing algorithms either rely on computationally expensive eigendecomposition of large matrices, or require tuning of word embedding-based hyperparameters as a result of representing the graph as node sequences, analogous to sentences in a document. Moreover, the latent features produced by these algorithms are hard to interpret. In this paper, we present two novel tensor decomposition-based node embedding algorithms that can learn node features from arbitrary types of graphs: undirected, directed, and/or weighted, without relying on eigendecomposition or word embedding-based hyperparameters. Both algorithms preserve the local and global structural properties of the graph by using k-step transition probability matrices to construct third-order tensors, and apply CANDECOMP/PARAFAC (CP) decomposition to produce an interpretable, low-dimensional vector space for the nodes. Our experiments encompass different types of graphs (undirected/directed, unweighted/weighted, sparse/dense) from different domains such as social networking and neuroscience. Our experimental evaluation shows our models to be interpretable with respect to the understandability of the feature space, precise with respect to network reconstruction and link prediction, and accurate with respect to node classification and graph classification.
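To make the pipeline concrete, below is a minimal sketch of the tensor construction and CP step described in the abstract. It is not the authors' implementation: it assumes the tensor slices are the 1-step through k-step transition probability matrices (row-normalized adjacency powers), uses tensorly's parafac for the CP decomposition, and takes the mode-0 factor matrix as the node embeddings; the helper name k_step_transition_tensor is illustrative.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def k_step_transition_tensor(adj, k):
    """Stack the 1..k-step transition probability matrices of a graph
    into an n x n x k tensor (assumed construction, for illustration)."""
    # Row-normalize the adjacency matrix to get the 1-step transition matrix.
    row_sums = adj.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # guard isolated nodes against division by zero
    P = adj / row_sums
    slices, P_step = [], np.eye(adj.shape[0])
    for _ in range(k):
        P_step = P_step @ P                # i-step transition probabilities
        slices.append(P_step)
    return tl.tensor(np.stack(slices, axis=-1))

# Toy undirected graph: a 4-node path.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

T = k_step_transition_tensor(adj, k=3)     # shape (4, 4, 3)
weights, factors = parafac(T, rank=2)      # CP decomposition into 2 components
node_embeddings = factors[0]               # one 2-dim vector per node (mode-0 factor)
print(node_embeddings.shape)               # (4, 2)
```

Reading the embeddings off a single factor matrix is what makes the features interpretable: each CP component corresponds to a rank-one pattern across nodes and step sizes, so a node's coordinates indicate how strongly it participates in each pattern.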