GNNVis: Visualize Large-Scale Data by Learning a Graph Neural Network Representation

2020 
Many achievements have been made in visualizing large-scale and high-dimensional data in a typically 2D or 3D space. Normally, such a process is performed through a non-parametric (unsupervised) approach, which is limited in handling unseen data. In this work, we study a parametric (supervised) model that is capable of learning a mapping between the high-dimensional data space R^d and a low-dimensional latent space R^s (s ≪ d) while preserving the similarity structure of R^d. We propose GNNVis, a framework that applies the idea of Graph Neural Networks (GNNs) to this parametric learning process; the learned mapping serves as a Visualizer (Vis) that computes the low-dimensional embeddings of unseen data online. In our framework, the features of data nodes, as well as the (hidden) information of their neighbors, are fused to conduct dimension reduction. To the best of our knowledge, no existing visualization work has studied how to combine such information into the learned representation. Moreover, the learning process of GNNVis is designed in an end-to-end manner and can easily be extended to arbitrary dimension reduction methods if the corresponding objective function is given. Based on GNNVis, three typical dimension reduction methods (t-SNE, LargeVis, and UMAP) are investigated. As a parametric framework, GNNVis is an inherently efficient Visualizer capable of computing the embeddings of large-scale unseen data. To guarantee its scalability in the training stage, a novel training strategy with Subgraph Negative Sampling (SNS) is conducted to reduce the corresponding cost. Experimental results on real datasets demonstrate the advantages of GNNVis: its visualization quality outperforms state-of-the-art parametric models and is comparable to that of non-parametric models.
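To make the core idea concrete, the following is a minimal numpy sketch of the kind of mapping the abstract describes: one GNN-style layer fuses each node's own features with an aggregate of its neighbors' features, then projects to R^s. Everything here is an assumption for illustration — the graph, the weights (untrained random matrices), and the layer shape are hypothetical; the actual GNNVis model is trained end-to-end against a t-SNE/LargeVis/UMAP-style objective, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n points in R^d, a neighbor graph, target dimension s = 2.
# The graph here is random; GNNVis would use a similarity/kNN graph.
n, d, s = 6, 10, 2
X = rng.normal(size=(n, d))                   # high-dimensional features
A = (rng.random((n, n)) < 0.4).astype(float)  # hypothetical adjacency
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                        # symmetrize

# Row-normalized aggregation: each node averages its neighbors' features.
deg = A.sum(axis=1, keepdims=True)
A_norm = A / np.maximum(deg, 1.0)

# One GNN layer fusing a node's own features with its neighbors',
# followed by a linear projection into the low-dimensional space R^s.
# W1, W2 stand in for learned parameters (random here, not trained).
W1 = rng.normal(size=(d, 8)) * 0.1
W2 = rng.normal(size=(8, s)) * 0.1
H = np.maximum((X + A_norm @ X) @ W1, 0.0)    # ReLU(fuse -> hidden)
Y = H @ W2                                    # one 2D point per data node

print(Y.shape)  # (6, 2)
```

Because the mapping is parametric, an unseen node with features in R^d and links into the existing graph can be pushed through the same `W1`/`W2` to get its embedding online, with no re-optimization of the stored embeddings — this is the efficiency advantage over non-parametric methods the abstract claims.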