RUM: Network Representation Learning Using Motifs

2019 
We bring the novel idea of exploiting motifs into network embedding, in a dual-level network representation learning model called RUM (network Representation learning Using Motifs). To leverage graph motifs, which constitute higher-order organizations in a network, we propose two strategies, MotifWalk and MotifRe-weighting, for learning motif-aware network embeddings. Motif-based and node-based representations are generated simultaneously, so that both the higher-order structures and each node's individual properties are preserved in the final embeddings. We demonstrate that RUM has a strong and well-balanced capability of preserving lower-order proximities while discovering and capturing higher-order network structures. In empirical evaluation, RUM is tested on multiple public datasets, ranging from small and medium citation networks to a large social network with more than a million nodes. The results show that the use of motifs in the representation learning process brings substantial benefits in real-life tasks, yielding relative gains of up to 12% in micro-F1 and 8% in macro-F1 for node classification over the best-performing competing methods.
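The abstract names two motif-aware strategies, MotifWalk and MotifRe-weighting, without detailing them. The sketch below is a hedged illustration, not the paper's actual algorithm: it re-weights each edge by the number of triangles (the simplest nontrivial motif) it participates in, and then runs random walks whose transition probabilities favour motif-dense edges. All function names (`triangle_weights`, `motif_walk`) and the choice of triangles as the motif are assumptions made for this example.

```python
import random
from collections import defaultdict

def triangle_weights(edges):
    """Re-weight each undirected edge by the number of triangles it closes.

    This is a toy stand-in for a motif re-weighting step: an edge (u, v)
    gets weight 1 + |N(u) & N(v)|, so edges inside motif-dense regions
    are emphasised relative to bridge-like edges.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    weights = {}
    for u, v in edges:
        tri = len(adj[u] & adj[v])  # common neighbours each close one triangle
        weights[(u, v)] = weights[(v, u)] = 1 + tri
    return adj, weights

def motif_walk(adj, weights, start, length, rng):
    """One random walk whose transitions are biased toward motif-dense edges."""
    walk = [start]
    for _ in range(length - 1):
        u = walk[-1]
        nbrs = sorted(adj[u])
        if not nbrs:
            break
        w = [weights[(u, v)] for v in nbrs]
        walk.append(rng.choices(nbrs, weights=w, k=1)[0])
    return walk

# Small example: nodes 0-1-2 form a triangle; node 3 hangs off node 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
adj, weights = triangle_weights(edges)
rng = random.Random(0)
walk = motif_walk(adj, weights, 0, 5, rng)
```

In a full pipeline, many such walks would be collected and fed to a skip-gram style model to produce the node embeddings; that training step is omitted here.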