Handling Device Heterogeneity in Federated Learning: The First Optimal Parallel SGD in the Presence of Data, Compute, and Communication Heterogeneity

Speaker: Peter Richtárik (KAUST)

Organizer: Yi-Shuai Niu (BIMSA)

Abstract: The design of efficient parallel/distributed optimization methods and the tight analysis of their theoretical properties are important research endeavors. While minimax complexities are well understood for sequential optimization methods, the theory of parallel optimization methods is surprisingly underexplored, especially in the presence of data, compute, and communication heterogeneity.

In the first part of the talk, we establish the first optimal time complexities for parallel optimization methods (Rennala SGD and Malenia SGD) that have access to an unbiased stochastic gradient oracle with bounded variance, under the assumption that the workers compute stochastic gradients at different speeds, i.e., under compute heterogeneity. We prove lower bounds and develop optimal algorithms that attain them, in both the data-homogeneous and data-heterogeneous regimes.

In the second part of the talk, we establish the first optimal time complexities for parallel optimization methods (Shadowheart SGD) that have access to an unbiased stochastic gradient oracle with bounded variance, under the same assumption of compute heterogeneity as before, with the further assumption that the worker-to-server communication times are nonzero and heterogeneous. We prove lower bounds and develop optimal algorithms that attain them, in the data-homogeneous regime only.

Time permitting, I may briefly outline some further recent results. Our results have surprising consequences for the literature on asynchronous optimization methods: in contrast with prior attempts to tame compute heterogeneity via “complete/wild” compute and update asynchronicity, our methods alternate fast asynchronous computation of a minibatch of stochastic gradients with infrequent synchronous update steps.
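
To make the last point concrete, here is a minimal single-process sketch of that computation pattern: workers with heterogeneous compute times asynchronously contribute stochastic gradients evaluated at the current iterate, and the server takes an infrequent synchronous step once a minibatch of size B has been collected, in the spirit of Rennala SGD. All names and parameters (grad_oracle, worker_times, B, lr) are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch: asynchronous gradient collection + infrequent synchronous updates.
import heapq
import numpy as np

def rennala_sgd_sketch(grad_oracle, x0, worker_times, B=8, lr=0.1, num_steps=100):
    """grad_oracle(x, rng) returns an unbiased stochastic gradient of the objective at x."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        # Each worker starts computing a gradient at the *current* iterate x.
        # Asynchrony is simulated with an event queue keyed by completion time.
        events = [(t, w) for w, t in enumerate(worker_times)]
        heapq.heapify(events)
        batch = []
        while len(batch) < B:
            clock, w = heapq.heappop(events)                      # fastest finisher next
            batch.append(grad_oracle(x, rng))                     # gradient is still taken at x
            heapq.heappush(events, (clock + worker_times[w], w))  # worker restarts at x
        # Infrequent synchronous step: average the collected minibatch and update once.
        x = x - lr * np.mean(batch, axis=0)
    return x

# Toy usage: minimize f(x) = ||x||^2 / 2 with noisy gradients and 4 workers of unequal speed.
oracle = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_final = rennala_sgd_sketch(oracle, x0=np.ones(5), worker_times=[1.0, 2.0, 5.0, 10.0])
```

Note that, in this pattern, slow workers simply contribute fewer gradients to each minibatch, rather than injecting stale updates as in fully asynchronous schemes.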

Time: 10:00 – 11:00, Monday, November 25, 2024

Location: Auditorium (18th floor) at SIMIS, Block A, No. 657 Songhu Road, Yangpu District, Shanghai
ZOOM Meeting No.: 423 317 8953 (Passcode: SIMIS)


About the Speaker:

Peter Richtárik is a Professor of Computer Science at the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, where he leads the Optimization and Machine Learning Lab. His research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, and high-performance computing. Through his work on randomized and distributed optimization algorithms, he has contributed to the foundations of machine learning, optimization, and randomized numerical linear algebra. He is one of the original developers of Federated Learning. Prof. Richtárik’s work has attracted international awards, including the Charles Broyden Prize, the SIAM SIGEST Best Paper Award, the Distinguished Speaker Award at the 2019 International Conference on Continuous Optimization, the IMA Leslie Fox Prize (three times), and a Best Paper Award at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning. Several of his papers are among the most read papers published by the SIAM Journal on Optimization and the SIAM Journal on Matrix Analysis and Applications. Prof. Richtárik serves as an Area Chair for leading machine learning conferences, including NeurIPS, ICML, and ICLR, and is an Action Editor of JMLR and an Associate Editor of Numerische Mathematik and Optimization Methods and Software. In the past, he served as an Action Editor of TMLR and an Area Editor of JOTA.
