TY - GEN
T1 - Learning-based Adaptive Quantization for Communication-efficient Distributed Optimization with ADMM
AU - Nghiem, Truong X.
AU - Duarte, Aldo
AU - Wei, Shuangqing
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11/1
Y1 - 2020/11/1
N2 - In distributed optimization schemes consisting of a group of agents coordinated by a central coordinator, the optimization algorithm often involves the agents solving private local proximal minimization subproblems and frequently exchanging data with the coordinator. Such schemes usually incur excessive communication cost, motivating the need for communication reduction in distributed optimization. Gaussian Processes (GPs) have been shown to be effective for learning the agents' proximal operators and hence for reducing the communication of the Alternating Direction Method of Multipliers (ADMM). We combine this learning-based approach with adaptive uniform quantization to achieve even greater communication reduction. Our approach exploits the probabilistic predictions of the GPs to adapt and refine the quantizers as the ADMM algorithm progresses. Moreover, following a linear minimum mean square error (LMMSE) estimation approach, we improve the GP regression and hyperparameter tuning by taking into account the statistics of the resulting quantization errors. The proposed approach can achieve significant communication reduction for ADMM without sacrificing convergence or optimality, even with a small number of quantization levels, as demonstrated in simulations of a distributed optimal power dispatch application.
AB - In distributed optimization schemes consisting of a group of agents coordinated by a central coordinator, the optimization algorithm often involves the agents solving private local proximal minimization subproblems and frequently exchanging data with the coordinator. Such schemes usually incur excessive communication cost, motivating the need for communication reduction in distributed optimization. Gaussian Processes (GPs) have been shown to be effective for learning the agents' proximal operators and hence for reducing the communication of the Alternating Direction Method of Multipliers (ADMM). We combine this learning-based approach with adaptive uniform quantization to achieve even greater communication reduction. Our approach exploits the probabilistic predictions of the GPs to adapt and refine the quantizers as the ADMM algorithm progresses. Moreover, following a linear minimum mean square error (LMMSE) estimation approach, we improve the GP regression and hyperparameter tuning by taking into account the statistics of the resulting quantization errors. The proposed approach can achieve significant communication reduction for ADMM without sacrificing convergence or optimality, even with a small number of quantization levels, as demonstrated in simulations of a distributed optimal power dispatch application.
UR - http://www.scopus.com/inward/record.url?scp=85107772568&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85107772568&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF51394.2020.9443553
DO - 10.1109/IEEECONF51394.2020.9443553
M3 - Conference contribution
AN - SCOPUS:85107772568
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 37
EP - 41
BT - Conference Record of the 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
Y2 - 1 November 2020 through 5 November 2020
ER -