NeuGraph: Parallel Deep Neural Network Computation on Large Graphs
- Lingxiao Ma
- Zhi Yang
- Youshan Miao
- Jilong Xue
- Ming Wu
- Lidong Zhou
- Yafei Dai
2019 USENIX Annual Technical Conference (USENIX ATC '19)
Recent deep learning models have moved beyond low-dimensional regular grids such as images, video, and speech, to high-dimensional graph-structured data, such as social networks, e-commerce user-item graphs, and knowledge graphs. This evolution has led to large graph-based neural network models that go beyond what existing deep learning frameworks or graph computing systems are designed for. We present NeuGraph, a new framework that bridges the graph and dataflow models to support efficient and scalable parallel neural network computation on graphs. NeuGraph introduces graph computation optimizations into the management of data partitioning, scheduling, and parallelism in dataflow-based deep learning frameworks. Our evaluation shows that, on small graphs that fit in a single GPU, NeuGraph outperforms state-of-the-art implementations by a significant margin, while scaling to large real-world graphs that none of the existing frameworks can handle directly on GPUs.