Combining Neural Gas and Reinforcement Learning for Adaptive Traffic Signal Control

Mladen Miletić, E. Ivanjko, S. Mandzuka, Daniela Koltovska Nečoska
Published in: 2021 International Symposium ELMAR, 2021-09-13
DOI: 10.1109/ELMAR52657.2021.9550948
Citations: 1

Abstract

Travel time of vehicles in urban traffic networks can be reduced by using Adaptive Traffic Signal Control (ATSC) to change the signal program according to the current traffic situation. Modern ATSC approaches based on Reinforcement Learning (RL) can learn the optimal signal control policy. While multiple RL-based ATSC implementations are available, most suffer from high state-action complexity, leading to slow convergence and long training times. In this paper, the state-action complexity of RL-based ATSC is reduced by implementing a Growing Neural Gas learning structure as an integral part of RL, leading to a high convergence rate and system stability. The presented approach is evaluated on a simulated signalized intersection and compared with ATSC systems based on self-organizing maps and RL. The obtained results show that reducing state-action complexity in this manner improves the effectiveness of RL-based ATSC without requiring an a priori analysis of the number of neurons needed for state representation.
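The abstract does not include code, so the following is only an illustrative sketch of the general idea: a Growing Neural Gas (GNG) network quantizes a continuous traffic observation into the index of its nearest unit, and that index addresses a row of a tabular Q-function that grows with the gas. All names, parameters, and the toy observation vector are assumptions, and two canonical GNG details are simplified (all edges age each step rather than only the winner's, and isolated units are not removed, which keeps Q-table indices stable).

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal GNG quantizer after Fritzke (1995): maps a continuous
    observation to the index of its nearest unit, inserting new units
    where quantization error accumulates."""

    def __init__(self, dim, eps_b=0.05, eps_n=0.005, age_max=50,
                 grow_every=100, alpha=0.5, decay=0.995, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(2, dim))        # start with two units
        self.error = np.zeros(2)
        self.edges = {(0, 1): 0}                  # edge -> age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.age_max, self.grow_every = age_max, grow_every
        self.alpha, self.decay = alpha, decay
        self.t = 0

    def _neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def adapt(self, x):
        """Present one sample; return the index of the winning unit."""
        dist = np.linalg.norm(self.w - x, axis=1)
        order = np.argsort(dist)
        s1, s2 = int(order[0]), int(order[1])
        self.error[s1] += dist[s1] ** 2
        self.w[s1] += self.eps_b * (x - self.w[s1])        # move winner
        for n in self._neighbors(s1):                      # move neighbors
            self.w[n] += self.eps_n * (x - self.w[n])
        # Age all edges (simplified), refresh winner pair, prune old edges.
        self.edges = {e: a + 1 for e, a in self.edges.items()}
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
        self.t += 1
        if self.t % self.grow_every == 0:
            self._grow()
        self.error *= self.decay
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def _grow(self):
        q = int(np.argmax(self.error))          # highest-error unit
        nbrs = self._neighbors(q)
        if not nbrs:
            return
        f = max(nbrs, key=lambda n: self.error[n])
        new = len(self.w)                       # insert midway between q, f
        self.w = np.vstack([self.w, (self.w[q] + self.w[f]) / 2])
        self.edges.pop(tuple(sorted((q, f))), None)
        self.edges[tuple(sorted((q, new)))] = 0
        self.edges[tuple(sorted((f, new)))] = 0
        self.error[q] *= self.alpha
        self.error[f] *= self.alpha
        self.error = np.append(self.error, self.error[q])

# Hypothetical pairing with a tabular RL agent: each GNG unit indexes
# a row of a Q-table, so the state space tracks observations actually seen.
gng = GrowingNeuralGas(dim=4)
n_actions = 2                                  # e.g. keep / switch phase
Q = np.zeros((2, n_actions))
rng = np.random.default_rng(1)
for _ in range(500):
    obs = rng.uniform(size=4)                  # stand-in for queue lengths
    s = gng.adapt(obs)
    while len(Q) < len(gng.w):                 # grow the Q-table with the gas
        Q = np.vstack([Q, np.zeros(n_actions)])
```

The point of the sketch is structural: because units are inserted where error accumulates, the number of discrete RL states does not have to be chosen a priori, which is the property the abstract contrasts against fixed-size self-organizing maps.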