Optimal bipartite consensus for multi-agent systems using twin Q-learning deterministic policy gradient algorithm with adaptive learning rate

Lianghao Ji, Jiali Song, Cuijuan Zhang, Shasha Yang, Jun Li

Neurocomputing, Volume 638, Article 130096. Published 2025-03-28. DOI: 10.1016/j.neucom.2025.130096
https://www.sciencedirect.com/science/article/pii/S0925231225007684
Citations: 0
Abstract
We investigate the optimal bipartite consensus control (OBCC) problem for multi-agent systems (MASs) over a signed network. Improper cooperation-competition strength (CCS) among agents can render the system unstable or even non-convergent. Recognizing the close relationship between the CCS and the training of the critic network, we propose a twin Q-learning deterministic policy gradient algorithm with an adaptive learning rate (ALR-TQDPG). First, an adaptive learning rate formula is established from the CCS and the historical variation of the temporal difference (TD) error; a weight equation dynamically rebalances these two factors as training progresses, thereby adjusting the update magnitude (i.e., the learning rate) of the critic network weights. Second, to address the underestimation of Q-values, a twin Q-learning scheme is adopted to improve system performance, and experience replay and target networks are added to enhance training stability. Lyapunov stability theory and functional analysis are used to prove the convergence of the ALR-TQDPG algorithm. Finally, numerical simulations confirm the effectiveness of the proposed approach.
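The abstract describes two mechanisms but gives neither formula, so the sketch below is only an illustrative reading, not the paper's method: a learning rate modulated jointly by a CCS term and recent TD-error variation, and a twin-critic TD target built from two Q-estimates. All names (`base_lr`, `w_ccs`, `w_td`) and the specific blending form are assumptions; note that the standard twin-critic trick (TD3) takes the minimum of the two estimates to counter overestimation, whereas this paper reports an underestimation problem, so its actual combination may differ.

```python
import numpy as np

def adaptive_lr(base_lr, ccs, td_history, w_ccs, w_td):
    """Illustrative adaptive learning rate: shrink the base rate as the
    cooperation-competition strength (CCS) and recent TD-error magnitudes
    grow. The paper's actual formula and weight equation are not given in
    the abstract; this blending form is a hypothetical stand-in."""
    recent = np.abs(td_history[-10:])            # last few TD errors
    td_term = float(np.mean(recent)) if recent.size else 0.0
    return base_lr / (1.0 + w_ccs * abs(ccs) + w_td * td_term)

def twin_q_target(reward, gamma, q1_next, q2_next):
    """Illustrative twin-critic TD target combining two next-state Q
    estimates. Taking the max is one plausible counter to the
    underestimation the abstract mentions; TD3's min targets the
    opposite (overestimation) bias."""
    return reward + gamma * max(q1_next, q2_next)
```

For example, a large recent TD error or a strong CCS both push `adaptive_lr` below `base_lr`, slowing critic updates exactly when the target is least trustworthy; the weight pair `(w_ccs, w_td)` plays the role of the paper's dynamically adjusted factor weights.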
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions to the field of neurocomputing, covering neurocomputing theory, practice, and applications.