Deep Recursive Network Embedding With Regular Equivalence

Authors:
Ke Tu Tsinghua University
Peng Cui Tsinghua University
Xiao Wang Tsinghua University
Philip S. Yu University of Illinois at Chicago
Wenwu Zhu Tsinghua University

Introduction:

This paper studies network embedding. The authors propose a new approach, Deep Recursive Network Embedding (DRNE), that learns network embeddings which preserve regular equivalence.

Abstract:

Network embedding aims to preserve vertex similarity in an embedding space. Existing approaches usually define the similarity by direct links or common neighborhoods between nodes, i.e., structural equivalence. However, vertices that reside in different parts of the network may have similar roles or positions, i.e., regular equivalence, which is largely ignored by the network embedding literature. Regular equivalence is defined recursively: two regularly equivalent vertices have neighbors which are themselves regularly equivalent. Accordingly, we propose a new approach, Deep Recursive Network Embedding (DRNE), to learn network embeddings that preserve regular equivalence. More specifically, we propose a layer-normalized LSTM that represents each node by recursively aggregating the representations of its neighbors. We theoretically prove that some popular and typical centrality measures which are consistent with regular equivalence are optimal solutions of our model. This is also demonstrated by empirical results showing that the learned node representations can well predict the indices of regular equivalence and related centrality scores. Furthermore, the learned node representations can be directly used for downstream applications such as structural role classification in networks, and the experimental results show that our method consistently outperforms centrality-based methods and other state-of-the-art network embedding methods.
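The core aggregation step described in the abstract can be sketched as follows. This is an illustrative reconstruction from the abstract alone, not the authors' released code: the gate layout of the LSTM cell, the degree-based ordering of neighbors, and the exact placement of layer normalization are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-5):
    # Normalize activations across the feature dimension.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def lstm_step(x, h, c, W, U, b):
    # One LSTM step; the four gates are computed jointly and then split.
    # W: (4d, d), U: (4d, d), b: (4d,) -- hypothetical parameter shapes.
    d = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:d])        # input gate
    f = sigmoid(z[d:2*d])     # forget gate
    o = sigmoid(z[2*d:3*d])   # output gate
    g = np.tanh(z[3*d:])      # candidate cell update
    c_new = f * c + i * g
    # Layer normalization on the cell state keeps magnitudes comparable
    # across nodes with very different degrees (assumed placement).
    h_new = o * np.tanh(layer_norm(c_new))
    return h_new, c_new

def aggregate(node, adj, X, params):
    # Embed `node` by feeding its neighbors' current embeddings,
    # sorted by degree, through the layer-normalized LSTM; the final
    # hidden state is the recursive aggregation of the neighborhood.
    W, U, b = params
    d = X.shape[1]
    h, c = np.zeros(d), np.zeros(d)
    for u in sorted(adj[node], key=lambda v: len(adj[v])):
        h, c = lstm_step(X[u], h, c, W, U, b)
    return h
```

In training, one would minimize the gap between each node's embedding and this aggregation of its neighbors' embeddings, so that regularly equivalent nodes (whose neighborhoods aggregate similarly) end up close in the embedding space.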
