I am happy to announce that our paper "Everything You Wanted to Know About Graph Neural Network Partitioning (But Were Afraid to Ask)" has been accepted at GRADES-NDA 2025!
The 8th Joint Workshop on Graph Data Management Experiences and Systems (GRADES) and Network Data Analytics (NDA) will be held on June 27, 2025, in Berlin, Germany, co-located with SIGMOD 2025.
The Challenge
Graph Neural Networks (GNNs) are the de facto models for deep learning on graph data, but training them on large-scale datasets remains a challenge. Data partitioning plays a critical role in distributed mini-batch GNN training, directly impacting memory usage, training speed, and model accuracy.
Our Contributions
This survey comprehensively compares prevalent partitioning strategies in the Deep Graph Library (DGL) framework across standard benchmarks and models of varying depths (a minimal partitioning sketch follows the list below). We systematically analyze:
- Partition sizes and their impact
- Training times across different strategies
- Memory overhead comparisons
- Accuracy trade-offs
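For a sense of what comparing partitioning strategies looks like in practice, here is a minimal sketch using DGL's `dgl.distributed.partition_graph` API, which supports both METIS-based and random partitioning via its `part_method` argument. The toy graph, the `toy` graph name, the partition count, and the output directories are illustrative placeholders, not the paper's actual experimental setup.

```python
import dgl
import torch

# Toy stand-in graph; the paper evaluates on standard benchmark datasets.
g = dgl.rand_graph(1000, 5000)
g = dgl.to_bidirected(g)  # METIS expects an undirected (symmetric) graph
g.ndata['feat'] = torch.randn(g.num_nodes(), 16)
g.ndata['label'] = torch.randint(0, 4, (g.num_nodes(),))

# METIS-based partitioning: tries to minimize the edge cut between parts.
dgl.distributed.partition_graph(
    g, graph_name='toy', num_parts=4,
    out_path='parts_metis', part_method='metis')

# Random partitioning: the simpler baseline such comparisons measure against.
dgl.distributed.partition_graph(
    g, graph_name='toy', num_parts=4,
    out_path='parts_random', part_method='random')
```

Each call writes the per-partition graph data plus a partition configuration JSON under `out_path`, which DGL's distributed training machinery then loads; inspecting the resulting partitions is one way to see the size, memory, and edge-cut differences the survey quantifies.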
Key Findings
Our analysis reveals surprising cases where simpler partitioning approaches outperform more sophisticated schemes. We conclude by offering practical guidelines for GNN partitioning.
This is joint work with Chongyang Xu.
