Presentation
A Novel Gradient Compression Design with Ultra-High Compression Ratio for Communication-Efficient Federated Learning
Description
Federated learning is a privacy-preserving machine learning approach that allows numerous geographically distributed clients to collaboratively train a large model while keeping their data local. In heterogeneous device settings, limited network bandwidth is a major bottleneck that constrains system performance. In this work, we propose a novel gradient compression method for federated learning that aims to achieve communication efficiency and a low error floor. The method estimates a prototype of the gradients on both the server and client sides and transmits only the difference between the true gradient and the estimated prototype, further reducing the total number of bits required for model updates. Compared with traditional error-feedback methods, it also shifts memory cost from the client to the server: the client-side memory requirement is lighter, at the expense of a heavier server-side one. Experiments on training neural networks show that our method is more communication-efficient with little impact on training and test accuracy.
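The abstract does not give implementation details, but the prototype-difference mechanism can be illustrated with a minimal Python sketch. The sketch below assumes the shared prototype is an exponential moving average of reconstructed gradients and that the residual is compressed by top-k sparsification; the PrototypeCodec class, the beta parameter, and the top-k compressor are illustrative assumptions, not the poster's actual design.

import numpy as np

def topk_compress(residual, k):
    """Keep the k largest-magnitude entries of the residual (assumed compressor)."""
    idx = np.argpartition(np.abs(residual), -k)[-k:]
    return idx, residual[idx]

def topk_decompress(idx, values, shape):
    """Rebuild a dense residual from the sparse (index, value) pairs."""
    dense = np.zeros(shape)
    dense[idx] = values
    return dense

class PrototypeCodec:
    """Shared state kept by BOTH client and server; updates must match exactly
    so the two prototype copies never diverge."""
    def __init__(self, dim, beta=0.9):
        self.prototype = np.zeros(dim)  # current estimate of the gradient
        self.beta = beta                # EMA decay; an assumed hyperparameter

    def encode(self, grad, k):
        # Client side: compress only the difference from the shared prototype.
        idx, vals = topk_compress(grad - self.prototype, k)
        self._update(idx, vals)
        return idx, vals

    def decode(self, idx, vals):
        # Server side: reconstruct the gradient as prototype + residual.
        grad_hat = self.prototype + topk_decompress(idx, vals, self.prototype.shape)
        self._update(idx, vals)
        return grad_hat

    def _update(self, idx, vals):
        # Deterministic prototype refresh applied identically on both sides.
        recon = self.prototype + topk_decompress(idx, vals, self.prototype.shape)
        self.prototype = self.beta * self.prototype + (1.0 - self.beta) * recon

# Usage: client and server start from the same initial prototype.
client, server = PrototypeCodec(dim=1000), PrototypeCodec(dim=1000)
grad = np.random.randn(1000)
payload = client.encode(grad, k=50)   # only 50 (index, value) pairs are sent
grad_hat = server.decode(*payload)    # server-side reconstruction
assert np.allclose(client.prototype, server.prototype)

Because both sides apply the same deterministic update from the same transmitted payload, no extra synchronization traffic is needed to keep the two prototypes aligned.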

Event Type
ACM Student Research Competition: Graduate Poster
ACM Student Research Competition: Undergraduate Poster
Doctoral Showcase
Posters
Time
Tuesday, 19 November 2024, 12pm - 5pm EST
Location
B302-B305
Registration Categories
TP
XO/EX