Hoang Khoi Do, Thi Quynh Khanh Dinh, Minh Duong Nguyen, Tien Hoa Nguyen
2023 IEEE Statistical Signal Processing Workshop (SSP), published 2023-07-02. DOI: 10.1109/SSP53291.2023.10207979
Semantic Communication for Partial Observation Multi-agent Reinforcement Learning
Effective cooperation and coordination among agents is essential for success in many real-world scenarios, particularly in reinforcement learning tasks. However, partial observation, where each agent is unaware of the observations made by other agents, poses a significant obstacle to coordination. To overcome this challenge, we propose the Shared Online Multi-agent Knowledge Exchange (SOME) framework, in which agents learn to anticipate each other's observations and thereby improve their local learning, enabling better coordination and cooperation. Additionally, exchanging the output of knowledge generators instead of full observations reduces communication costs. Our experimental evaluation demonstrates that agents trained with SOME not only predict the next observations and actions of opponents and collaborators but also take appropriate actions, making SOME a promising approach to the partial observation challenge in multi-agent reinforcement learning.
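The abstract's core mechanism — a sender-side knowledge generator that compresses local observations into compact messages, and a receiver-side model that learns to reconstruct the peer's observation from those messages — can be illustrated with a minimal sketch. This is not the authors' implementation; the class names (`KnowledgeGenerator`, `ObservationPredictor`), the linear models, and the toy dimensions are hypothetical choices used only to show why transmitting a low-dimensional message can suffice when the peer's observations live on low-dimensional structure.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, MSG_DIM = 8, 3  # message is much smaller than the raw observation


class KnowledgeGenerator:
    """Sender side: encodes a local observation into a compact message.

    A fixed random linear encoder stands in for a learned generator
    (hypothetical simplification for this sketch)."""

    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(MSG_DIM, OBS_DIM))

    def encode(self, obs):
        return self.W @ obs


class ObservationPredictor:
    """Receiver side: learns to predict a peer's observation from its message."""

    def __init__(self, lr=0.05):
        self.V = rng.normal(scale=0.1, size=(OBS_DIM, MSG_DIM))
        self.lr = lr

    def predict(self, msg):
        return self.V @ msg

    def update(self, msg, true_obs):
        # One SGD step on 0.5 * ||V @ msg - true_obs||^2.
        err = self.predict(msg) - true_obs
        self.V -= self.lr * np.outer(err, msg)
        return float(np.mean(err ** 2))


# Toy training loop: the peer's 8-dim observations lie in a 3-dim subspace,
# so a 3-dim message carries enough information to reconstruct them.
gen, pred = KnowledgeGenerator(), ObservationPredictor()
basis = rng.normal(size=(OBS_DIM, MSG_DIM))

mses = []
for _ in range(2000):
    obs = basis @ rng.normal(size=MSG_DIM)  # peer's (locally hidden) observation
    mses.append(pred.update(gen.encode(obs), obs))

print(f"mean MSE, first 50 steps: {np.mean(mses[:50]):.4f}")
print(f"mean MSE, last 50 steps:  {np.mean(mses[-50:]):.4f}")
```

The prediction error drops as the receiver learns the message-to-observation mapping, which is the communication-cost argument in miniature: once peers can be anticipated from compact messages, full observations need not be exchanged.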