Zhenglong Luo, Zhiyong Chen, Shijian Liu, James Welsh
Title: Multi-Agent Reinforcement Learning With Deep Networks for Diverse Q-Vectors
DOI: 10.1049/ell2.70342
Journal: Electronics Letters, vol. 61, no. 1 (Journal Article)
Published: 2025-06-24
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70342
Article page: https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/ell2.70342
Citations: 0
Abstract
In multi-agent reinforcement learning (MARL) tasks, the state-action value, commonly referred to as the Q-value, can vary among agents because of their individual rewards, resulting in a Q-vector. Determining an optimal policy is challenging, as it involves more than just maximizing a single Q-value. Various optimal policies, such as a Nash equilibrium, have been studied in this context. Algorithms like Nash Q-learning and Nash Actor-Critic have shown effectiveness in these scenarios. This paper extends this research by proposing a deep Q-networks algorithm capable of learning various Q-vectors using Max, Nash, and Maximin strategies. We validate the effectiveness of our approach in a dual-arm robotic environment, a representative human cyber-physical systems (HCPS) scenario, where two robotic arms collaborate to lift a pot or hand over a hammer to each other. This setting highlights how incorporating MARL into HCPS can address real-world complexities such as physical constraints, communication overhead, and dynamic interactions among multiple agents.
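To make the abstract's three joint-action selection strategies concrete, the sketch below applies Max, Nash, and Maximin rules to a two-agent Q-vector, represented here as a pair of payoff matrices `Q1` and `Q2` over joint actions. This is a minimal illustration of the general concepts, not the paper's deep Q-networks algorithm: the function names and the reading of "Max" as maximizing the agents' total Q-value are assumptions, and the Nash search covers pure strategies only.

```python
import numpy as np

def max_joint(Q1, Q2):
    """'Max' strategy (assumed cooperative reading): the joint action
    maximizing the summed Q-values of both agents."""
    return np.unravel_index(np.argmax(Q1 + Q2), Q1.shape)

def nash_joint(Q1, Q2):
    """First pure-strategy Nash equilibrium: a joint action from which
    neither agent can improve its own Q-value by deviating alone."""
    n_rows, n_cols = Q1.shape
    for a in range(n_rows):        # agent 1's action (row)
        for b in range(n_cols):    # agent 2's action (column)
            if Q1[a, b] >= Q1[:, b].max() and Q2[a, b] >= Q2[a, :].max():
                return (a, b)
    return None  # only mixed-strategy equilibria exist

def maximin_joint(Q1, Q2):
    """Maximin strategy: each agent independently maximizes its
    worst-case Q-value over the other agent's possible actions."""
    a = int(np.argmax(Q1.min(axis=1)))  # agent 1's safest row
    b = int(np.argmax(Q2.min(axis=0)))  # agent 2's safest column
    return (a, b)

# Example: a prisoner's-dilemma-style Q-vector (action 0 = cooperate, 1 = defect)
Q1 = np.array([[3, 0], [5, 1]])
Q2 = np.array([[3, 5], [0, 1]])
print(tuple(int(i) for i in max_joint(Q1, Q2)))  # mutual cooperation: (0, 0)
print(nash_joint(Q1, Q2))                        # mutual defection: (1, 1)
print(maximin_joint(Q1, Q2))                     # safe play: (1, 1)
```

The example highlights why the choice of strategy matters: on the same Q-vector, the cooperative Max rule and the self-interested Nash/Maximin rules select different joint actions.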
Journal Introduction:
Electronics Letters is an internationally renowned peer-reviewed rapid-communication journal that publishes short original research papers every two weeks. Its broad and interdisciplinary scope covers the latest developments in all electronic engineering related fields including communication, biomedical, optical and device technologies. Electronics Letters also provides further insight into some of the latest developments through special features and interviews.
Scope
As a journal at the forefront of its field, Electronics Letters publishes papers covering all themes of electronic and electrical engineering. The major themes of the journal are listed below.
Antennas and Propagation
Biomedical and Bioinspired Technologies, Signal Processing and Applications
Control Engineering
Electromagnetism: Theory, Materials and Devices
Electronic Circuits and Systems
Image, Video and Vision Processing and Applications
Information, Computing and Communications
Instrumentation and Measurement
Microwave Technology
Optical Communications
Photonics and Opto-Electronics
Power Electronics, Energy and Sustainability
Radar, Sonar and Navigation
Semiconductor Technology
Signal Processing
MIMO