Automatic Generation of High-Performance Inference Kernels for Graph Neural Networks on Multi-Core Systems

2021 
Graph neural networks are powerful in learning from high-dimensional graph-structured data, and a number of frameworks such as DGL and PyTorch Geometric have been developed to facilitate the construction, training, and deployment of such models. Unfortunately, existing systems underperform when performing inference on huge graphs on multi-core CPUs. Furthermore, traditional graph processing systems struggle with programming complexity due to their low-level interfaces. In this paper, we present Gin, a new compiler-based software framework optimized for graph neural network inference, which offers a user-friendly interface, via an intuitive programming model, for defining graph neural network models. Gin builds high-level dataflow graphs as intermediate representations, which are transformed into highly efficient code and then compiled into binary inference kernels. Our evaluation shows that Gin significantly accelerates inference on billion-edge graphs, beating three state-of-the-art solutions (DGL, TensorFlow, and PyTorch Geometric) by 31.44× on average, with much higher CPU and memory bandwidth utilization. In addition, Gin achieves considerable speedup (up to 7.6×) over the traditional graph processing system Ligra.
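The abstract does not show what the generated inference kernels compute. As a rough, hedged illustration of the sparse-aggregation workload such a compiler targets, the sketch below implements one GCN-style propagation step in NumPy: mean-aggregate in-neighbor features over a CSR graph, then apply a dense transform with ReLU. The function name, CSR layout, and mean aggregation are our assumptions for illustration, not Gin's actual API or code generation.

```python
import numpy as np

def gcn_layer(indptr, indices, X, W):
    """One GCN-style propagation step: mean-aggregate each node's
    in-neighbor features, then apply a dense transform plus ReLU.
    This sparse-then-dense pattern is the kind of kernel a GNN
    inference compiler would specialize for multi-core CPUs.
    (Illustrative sketch only; not Gin's actual interface.)"""
    n = len(indptr) - 1
    H = np.zeros_like(X)
    for v in range(n):
        nbrs = indices[indptr[v]:indptr[v + 1]]  # in-neighbors of v (CSR slice)
        if len(nbrs):
            H[v] = X[nbrs].mean(axis=0)          # sparse aggregation
    return np.maximum(H @ W, 0.0)                # dense transform + ReLU

# Toy graph with edges 0->1, 0->2, 1->2, stored as CSR of in-neighbors.
indptr = np.array([0, 0, 1, 3])
indices = np.array([0, 0, 1])
X = np.eye(3)                # one-hot node features
W = np.ones((3, 2))          # dummy weight matrix
out = gcn_layer(indptr, indices, X, W)
```

A production system would replace the Python loop with a fused, parallel sparse-dense kernel; the point here is only the computation's shape, not its performance.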