
Presentation

Democratizing AI: Open-Source Scalable LLM Training on GPU-Based Supercomputers
Description

Training and fine-tuning large language models (LLMs) with hundreds of billions to trillions of parameters requires tens of thousands of GPUs and a highly scalable software stack. In this work, we present a novel four-dimensional hybrid parallel algorithm implemented in a highly scalable, portable, open-source framework called AxoNN. We describe several performance optimizations in AxoNN that improve matrix multiplication kernel performance and overlap non-blocking collectives with computation, as well as performance modeling to choose performance-optimal configurations.
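
As an illustration of the kind of communication-computation overlap described above (a minimal sketch, not AxoNN's actual implementation), the following PyTorch example launches a non-blocking all-reduce and runs an independent matrix multiplication before waiting on the collective. The tensor names and shapes are hypothetical.

# Hypothetical sketch of overlapping a non-blocking collective with computation.
# Illustrative only -- not AxoNN's implementation. Launch with torchrun, which
# sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    device = torch.device(f"cuda:{int(os.environ['LOCAL_RANK'])}")

    # Placeholder tensors: a gradient shard to reduce and an unrelated GEMM.
    grad = torch.randn(4096, 4096, device=device)
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    # Launch the collective without blocking.
    work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)

    # Independent computation proceeds while the all-reduce is in flight.
    c = a @ b

    # Block only when the reduced gradient is actually needed.
    work.wait()
    grad /= dist.get_world_size()

    dist.destroy_process_group()
    return c, grad

if __name__ == "__main__":
    main()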

While the abilities of LLMs improve with the number of trainable parameters, so do the privacy and copyright risks caused by memorization of training data, which can lead to the disclosure of sensitive or private information at inference time. We highlight this side effect of scale through experiments that explore "catastrophic memorization," where models are sufficiently large to memorize training data in a single pass, and present an approach to prevent it.
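
As a hedged illustration of how memorization of training data can be probed (a generic extraction-style test, not the prevention approach presented in this work), the sketch below prompts a model with a prefix drawn from its training corpus and checks whether greedy decoding reproduces the true continuation verbatim. The model name and example strings are placeholders.

# Hypothetical memorization probe: prompt with a training-data prefix and
# check whether the model reproduces the true continuation verbatim.
# Illustrative only; the model name and text below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def is_memorized(prefix: str, true_continuation: str, max_new_tokens: int = 50) -> bool:
    """Return True if greedy decoding from `prefix` reproduces `true_continuation`."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding isolates what the model has stored
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return generated.strip().startswith(true_continuation.strip())

# Example usage with placeholder strings standing in for a training document.
print(is_memorized("The quick brown fox", "jumps over the lazy dog"))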