

xBS-GNN: Accelerating Billion-Scale GNN Training on FPGA
Description
Graph Neural Networks (GNNs) have been applied to a variety of challenging problems. However, training GNN models is time-consuming because their graph-structured input data incurs a high volume of irregular memory accesses; this challenge is further exacerbated in real-world applications, which often involve large-scale graphs with billions of edges. Most existing GNN accelerators cannot scale to billion-scale graphs due to memory limitations. We propose xBS-GNN, an accelerator optimized for billion-scale GNN training. To achieve high training throughput, xBS-GNN jointly exploits several optimizations: (1) a novel data placement policy, (2) a vertex-renaming technique with a memory-efficient lookup-table design for fast data retrieval, and (3) a feature quantization mechanism that reduces memory traffic. We evaluate xBS-GNN on three large datasets. xBS-GNN achieves up to 8.39x speedup over a widely used GPU baseline and up to 5.13x speedup over a state-of-the-art GNN training accelerator.
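To give a sense of how feature quantization cuts memory traffic, here is a minimal sketch of per-feature symmetric int8 quantization of node features. This is an illustrative, generic scheme, not the paper's actual mechanism: the function names and the choice of 8-bit symmetric quantization are assumptions for the example.

```python
import numpy as np

def quantize_features(feats, bits=8):
    # Per-column symmetric quantization: map float32 features to int8,
    # shrinking feature storage (and hence memory traffic) by ~4x.
    # NOTE: illustrative only; xBS-GNN's exact scheme is not specified here.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(feats).max(axis=0) / qmax
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero columns
    q = np.clip(np.round(feats / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_features(q, scale):
    # Recover approximate float32 features before feeding them to the GNN.
    return q.astype(np.float32) * scale

feats = np.random.RandomState(0).randn(1000, 64).astype(np.float32)
q, scale = quantize_features(feats)
recovered = dequantize_features(q, scale)
# int8 storage is 4x smaller than the original float32 features
assert q.nbytes * 4 == feats.nbytes
```

Storing features in int8 quarters the bytes moved per gathered neighbor, which is why quantization helps most when feature gathering, not compute, dominates training time.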