Cooperative Path Planning with Asynchronous Multiagent Reinforcement Learning

Jiaming Yin, Weixiong Rao, Yu Xiao, Keshuang Tang
arXiv:2409.00754 · arXiv - CS - Artificial Intelligence · Published 2024-09-01 · Citations: 0

Abstract

In this paper, we study the shortest path problem (SPP) with multiple source-destination pairs (MSD), namely MSD-SPP, to minimize the average travel time of all shortest paths. The inherent traffic-capacity limits within a road network contribute to competition among vehicles. Multi-agent reinforcement learning (MARL) models cannot offer effective and efficient path-planning cooperation under the asynchronous decision-making setting of MSD-SPP, where vehicles (i.e., agents) cannot simultaneously complete routing actions in the previous time step. To tackle the efficiency issue, we propose to divide the entire road network into multiple sub-graphs and then execute a two-stage process of inter-region and intra-region route planning. To address the asynchrony issue, in the proposed asyn-MARL framework we first design a global state that exploits a low-dimensional vector to implicitly represent the joint observations and actions of the agents. We then develop a novel trajectory-collection mechanism to decrease redundancy in training trajectories. Additionally, we design a novel actor network to facilitate cooperation among vehicles heading toward the same or nearby destinations, and a reachability graph aimed at preventing infinite loops in routing paths. On both synthetic and real road networks, our evaluation results demonstrate that our approach outperforms state-of-the-art planning approaches.
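The two-stage inter-region/intra-region decomposition described in the abstract can be sketched in plain Python. Everything below is an illustrative assumption rather than the authors' implementation: BFS stands in for the learned routing policy, the region partition is given by hand, and a simple visited set plays the role that the paper's reachability graph serves (ruling out revisits that would create loops).

```python
# Sketch of two-stage route planning on a partitioned road network.
# Assumptions (not from the paper): unweighted edges, BFS instead of a
# learned MARL policy, and a hand-supplied region partition.
from collections import deque

def shortest_path(adj, src, dst, allowed=None):
    """BFS shortest path over adj (node -> list of neighbors).
    The `prev` visited map also prevents revisiting nodes, i.e. loops."""
    if allowed is not None:
        # Restrict the search to the permitted node set.
        adj = {u: [v for v in vs if v in allowed]
               for u, vs in adj.items() if u in allowed}
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:  # walk back through predecessors
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # destination unreachable

def two_stage_route(adj, region_of, region_adj, src, dst):
    """Stage 1: coarse plan over the region graph.
    Stage 2: detailed plan restricted to the chosen regions' nodes,
    shrinking the search space versus the full network."""
    regions = shortest_path(region_adj, region_of[src], region_of[dst])
    if regions is None:
        return None
    allowed = {n for n, r in region_of.items() if r in set(regions)}
    return shortest_path(adj, src, dst, allowed)
```

For example, on a four-node line graph split into regions A = {0, 1} and B = {2, 3}, `two_stage_route` first picks the region sequence A→B and then searches only inside those regions, returning the node path 0→1→2→3. The restriction to the selected regions is what makes the two-stage scheme cheaper than one global search on large networks.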