Presentation
A Sparse Approach for Translation-Based Training of Knowledge Graph Embeddings
Description
Knowledge graph (KG) learning offers a powerful framework for generating new knowledge and making inferences. Training KG embeddings can take a significant amount of time, especially on larger datasets. Our analysis shows that embedding gradient computation and vector normalization are the dominant functions in the KG embedding training loop. We address this issue by replacing the core embedding computation with SpMM (sparse-dense matrix multiplication) kernels. This allows us to unify multiple scatter (and gather) operations into a single operation, reducing both training time and memory usage. Applying this sparse approach to training the TransE model yields up to 5.7x speedup on the CPU and up to 1.7x speedup on the GPU. Distributing this algorithm across 64 GPUs, we observe up to 3.9x overall speedup per epoch. Our proposed sparse approach can also be extended to accelerate other translation-based models such as TransR and TransH.
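The abstract does not include the authors' implementation, but the core idea of unifying scatter/gather operations as one SpMM can be illustrated with a minimal sketch. In TransE, the per-triple gradient of the score ||h + r - t|| must be scattered back into the shared entity embedding table; the same accumulation can be written as a single sparse-dense product with an entity-by-triple incidence matrix. All names and dimensions below are hypothetical toy values, not from the poster:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical toy sizes (illustration only, not the authors' setup)
num_entities, num_triples, dim = 5, 3, 4
rng = np.random.default_rng(0)

heads = np.array([0, 1, 2])  # head entity index of each triple
tails = np.array([3, 4, 0])  # tail entity index of each triple

# Per-triple gradient w.r.t. (h + r - t), shape (num_triples, dim)
G = rng.standard_normal((num_triples, dim))

# Baseline: two scatter-adds into the entity gradient buffer
grad_scatter = np.zeros((num_entities, dim))
np.add.at(grad_scatter, heads, G)    # d/dh contributes +G
np.add.at(grad_scatter, tails, -G)   # d/dt contributes -G

# Sparse approach: one SpMM with a (num_entities x num_triples)
# incidence matrix holding +1 for heads and -1 for tails
rows = np.concatenate([heads, tails])
cols = np.concatenate([np.arange(num_triples)] * 2)
vals = np.concatenate([np.ones(num_triples), -np.ones(num_triples)])
B = coo_matrix((vals, (rows, cols)), shape=(num_entities, num_triples)).tocsr()
grad_spmm = B @ G  # a single kernel call replaces both scatter-adds

assert np.allclose(grad_scatter, grad_spmm)
```

Because the incidence matrix is fixed for a given triple batch, the scatter (backward) and gather (forward) passes both reduce to SpMM calls, which is where the reported CPU and GPU speedups would come from under this formulation.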

Event Type
Posters
Time
Wednesday, 20 November 2024, 10:30am - 10:45am EST
Location
B306