TY - GEN
T1 - Distributed Experiment Design and Control for Multi-agent Systems with Gaussian Processes
AU - Le, Viet Anh
AU - Nghiem, Truong X.
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - This paper focuses on distributed learning-based control of decentralized multi-agent systems where the agents' dynamics are modeled by Gaussian Processes (GPs). Two fundamental problems are considered: the optimal design of experiments for concurrently learning the agents' GP models, and the distributed coordination of the agents given the learned models. Using a Distributed Model Predictive Control (DMPC) approach, the two problems are formulated as distributed optimization problems, where each agent's sub-problem includes both local and shared objectives and constraints. To solve the resulting complex and non-convex DMPC problems efficiently, we develop an algorithm called Alternating Direction Method of Multipliers with Convexification (ADMM-C) that combines a distributed ADMM algorithm with a Sequential Convex Programming method. We prove that, under some technical assumptions, the ADMM-C algorithm converges to a stationary point of the penalized optimization problem. The effectiveness of our approach is demonstrated in numerical simulations of a multi-vehicle formation control example.
AB - This paper focuses on distributed learning-based control of decentralized multi-agent systems where the agents' dynamics are modeled by Gaussian Processes (GPs). Two fundamental problems are considered: the optimal design of experiments for concurrently learning the agents' GP models, and the distributed coordination of the agents given the learned models. Using a Distributed Model Predictive Control (DMPC) approach, the two problems are formulated as distributed optimization problems, where each agent's sub-problem includes both local and shared objectives and constraints. To solve the resulting complex and non-convex DMPC problems efficiently, we develop an algorithm called Alternating Direction Method of Multipliers with Convexification (ADMM-C) that combines a distributed ADMM algorithm with a Sequential Convex Programming method. We prove that, under some technical assumptions, the ADMM-C algorithm converges to a stationary point of the penalized optimization problem. The effectiveness of our approach is demonstrated in numerical simulations of a multi-vehicle formation control example.
UR - http://www.scopus.com/inward/record.url?scp=85126053028&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126053028&partnerID=8YFLogxK
U2 - 10.1109/CDC45484.2021.9682906
DO - 10.1109/CDC45484.2021.9682906
M3 - Conference contribution
AN - SCOPUS:85126053028
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 2226
EP - 2231
BT - 60th IEEE Conference on Decision and Control, CDC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 60th IEEE Conference on Decision and Control, CDC 2021
Y2 - 13 December 2021 through 17 December 2021
ER -