Mladen Miletić, E. Ivanjko, S. Mandzuka, Daniela Koltovska Nečoska
DOI: 10.1109/ELMAR52657.2021.9550948
2021 International Symposium ELMAR, 13 September 2021
Combining Neural Gas and Reinforcement Learning for Adaptive Traffic Signal Control
The travel time of vehicles in urban traffic networks can be reduced by using Adaptive Traffic Signal Control (ATSC) to change the signal program according to the current traffic situation. Modern ATSC approaches based on Reinforcement Learning (RL) can learn an optimal signal control policy. While multiple RL-based ATSC implementations are available, most suffer from high state-action complexity, leading to slow convergence and long training times. In this paper, the state-action complexity of RL-based ATSC is reduced by implementing a Growing Neural Gas learning structure as an integral part of RL, yielding a high convergence rate and system stability. The presented approach is evaluated on a simulated signalized intersection and compared with ATSC systems based on self-organizing maps and RL. The obtained results show that reducing the state-action complexity in this manner improves the effectiveness of RL-based ATSC without requiring an a priori analysis of the number of neurons needed for state representation.
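The abstract does not include code, so the following is only a hedged sketch of the general idea it describes: a Growing Neural Gas (GNG) network trained online as an adaptive state quantizer, whose node indices serve as discrete states for a tabular Q-learner. All hyperparameters, the 2-D placeholder observations, and the two-action "signal phase" setting are illustrative assumptions, not the authors' implementation; the GNG update follows Fritzke's standard algorithm, with node pruning omitted so Q-table indices stay stable.

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal Growing Neural Gas used as a state quantizer: each continuous
    traffic observation maps to the index of its nearest node, which serves
    as the discrete RL state. Node removal is omitted (a simplification) so
    that node indices remain valid rows of the Q-table."""

    def __init__(self, dim, max_nodes=30, eps_b=0.05, eps_n=0.005,
                 age_max=50, lam=100, alpha=0.5, d=0.995):
        self.nodes = [np.random.rand(dim), np.random.rand(dim)]
        self.error = [0.0, 0.0]
        self.edges = {}                       # (i, j) with i < j -> age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.age_max, self.lam = age_max, lam
        self.alpha, self.d = alpha, d
        self.max_nodes = max_nodes
        self.steps = 0

    def nearest(self, x):
        order = np.argsort([np.linalg.norm(x - w) for w in self.nodes])
        return int(order[0]), int(order[1])

    def fit_one(self, x):
        """Adapt the network to observation x; return winning node index."""
        s1, s2 = self.nearest(x)
        self.error[s1] += np.linalg.norm(x - self.nodes[s1]) ** 2
        self.nodes[s1] += self.eps_b * (x - self.nodes[s1])
        for e in list(self.edges):            # age winner's edges, nudge neighbours
            if s1 in e:
                self.edges[e] += 1
                n = e[0] if e[1] == s1 else e[1]
                self.nodes[n] += self.eps_n * (x - self.nodes[n])
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
        self.steps += 1
        if self.steps % self.lam == 0 and len(self.nodes) < self.max_nodes:
            q = int(np.argmax(self.error))     # grow where error is largest
            nbrs = [e[0] if e[1] == q else e[1] for e in self.edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda n: self.error[n])
                self.nodes.append((self.nodes[q] + self.nodes[f]) / 2)
                self.error[q] *= self.alpha
                self.error[f] *= self.alpha
                self.error.append(self.error[q])
                r = len(self.nodes) - 1
                self.edges.pop(tuple(sorted((q, f))), None)
                self.edges[tuple(sorted((q, r)))] = 0
                self.edges[tuple(sorted((f, r)))] = 0
        self.error = [e * self.d for e in self.error]
        return s1

# --- Combining the GNG quantizer with tabular Q-learning (toy loop) ---
# The 2-D observations and reward below are placeholders; in the paper the
# state would come from detector data at the simulated intersection.
rng = np.random.default_rng(0)
gng = GrowingNeuralGas(dim=2)
Q = np.zeros((gng.max_nodes, 2))              # 2 hypothetical phase actions
lr, gamma, eps = 0.1, 0.9, 0.1

s = gng.fit_one(rng.random(2))
for _ in range(1000):
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
    obs = rng.random(2)                       # placeholder next observation
    r = -obs.sum()                            # placeholder reward (e.g. -queues)
    s2 = gng.fit_one(obs)
    Q[s, a] += lr * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
```

The key point the abstract makes is visible here: the state space starts tiny (two nodes) and grows only where observations accumulate error, so no a priori choice of neuron count is needed, unlike a fixed-size self-organizing map.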