Presentation
Network-Offloaded Bandwidth-Optimal Broadcast and Allgather for Distributed AI
Session: Scale-Out Interconnects
Description: In the Fully Sharded Data Parallel (FSDP) training pipeline, collective operations can be interleaved to maximize communication/computation overlap. In this scenario, outstanding operations such as Allgather and Reduce-Scatter can compete for injection bandwidth and create pipeline bubbles. To address this problem, we propose a novel bandwidth-optimal Allgather collective algorithm that leverages hardware multicast. We use multicast to build a constant-time reliable Broadcast protocol, a building block for constructing an optimal Allgather schedule. Our Allgather algorithm achieves a 2x traffic reduction on a 188-node testbed. To free the host from running the protocol, we employ SmartNIC offloading: we extract the parallelism in our Allgather algorithm and map it to a SmartNIC specialized for hiding the cost of data movement. We show that our SmartNIC-offloaded collective progress engine can scale to the next generation of 1.6 Tbit/s links.
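The core idea in the abstract, an Allgather built on a multicast-based reliable Broadcast, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a lossless switch model (the paper's constant-time reliability protocol is elided), and the names Fabric, reliable_broadcast, and allgather are hypothetical.

```python
# Sketch: allgather on top of hardware multicast. Each rank injects its shard
# exactly once; the switch replicates it, so per-rank injected traffic is one
# shard, versus (N-1) shard-sends per rank in a unicast ring allgather.
from typing import Dict, List


class Fabric:
    """Models a switch with hardware multicast: one injected packet is
    replicated to every other rank by the network, not by the sender."""

    def __init__(self, num_ranks: int) -> None:
        self.num_ranks = num_ranks
        self.injected_bytes = 0  # total bytes leaving any NIC
        self.inboxes: List[List[tuple]] = [[] for _ in range(num_ranks)]

    def multicast(self, src: int, payload: bytes) -> None:
        # Sender pays the injection cost once; the switch fans out copies.
        self.injected_bytes += len(payload)
        for dst in range(self.num_ranks):
            if dst != src:
                self.inboxes[dst].append((src, payload))


def reliable_broadcast(fabric: Fabric, src: int, payload: bytes) -> None:
    # Hardware multicast is unreliable in practice; the paper layers a
    # constant-time reliability protocol on top. Here: assumed lossless.
    fabric.multicast(src, payload)


def allgather(fabric: Fabric, shards: List[bytes]) -> List[List[bytes]]:
    """Each rank broadcasts its shard once; every rank then assembles the
    full vector from its own shard plus the N-1 shards it received."""
    n = fabric.num_ranks
    for rank in range(n):
        reliable_broadcast(fabric, rank, shards[rank])
    result = []
    for rank in range(n):
        gathered: Dict[int, bytes] = {rank: shards[rank]}
        for src, payload in fabric.inboxes[rank]:
            gathered[src] = payload
        result.append([gathered[i] for i in range(n)])
    return result


if __name__ == "__main__":
    n, shard = 8, b"x" * 1024
    fabric = Fabric(n)
    out = allgather(fabric, [shard] * n)
    assert all(len(v) == n for v in out)
    print(f"injected bytes with multicast: {fabric.injected_bytes}")
```

Because the switch performs the replication, the injection bandwidth each rank consumes is independent of the node count, which is what makes offloading the remaining protocol work to a SmartNIC attractive at 1.6 Tbit/s link rates.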
Event Type: Paper
Time: Thursday, 21 November 2024, 3:30pm - 4pm EST
Location: B312-B313A
Tags: Data Compression, Data Movement and Memory, Distributed Computing, Message Passing, Network