{"title":"State Complexity Reduction in Reinforcement Learning based Adaptive Traffic Signal Control","authors":"Mladen Miletić, K. Kušić, M. Gregurić, E. Ivanjko","doi":"10.1109/ELMAR49956.2020.9219024","DOIUrl":null,"url":null,"abstract":"The throughput of a signalized intersection can be increased by appropriate adjustment of the signal program using Adaptive Traffic Signal Control (ATSC). One possible approach is to use Reinforcement Learning (RL). It enables model-free learning of the control law for the reduction of the negative impacts of traffic congestion. RL based ATSC achieves good results but requires many learning iterations to train optimal control policy due to high state-action complexity. In this paper, a novel approach for state complexity reduction in RL by using Self-Organizing Maps (SOM) is presented. With SOM, the convergence rate of RL and system stability in the later stages of learning is increased. The proposed approach is evaluated against the traditional RL approach that uses Q-Learning on a simulated isolated intersection calibrated according to realistic traffic data. Presented simulation results prove the effectiveness of the proposed approach regarding learning stability and traffic measures of effectiveness.","PeriodicalId":235289,"journal":{"name":"2020 International Symposium ELMAR","volume":"78 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Symposium ELMAR","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ELMAR49956.2020.9219024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The throughput of a signalized intersection can be increased by appropriately adjusting the signal program using Adaptive Traffic Signal Control (ATSC). One possible approach is Reinforcement Learning (RL), which enables model-free learning of a control law that reduces the negative impacts of traffic congestion. RL-based ATSC achieves good results but requires many learning iterations to train an optimal control policy due to high state-action complexity. In this paper, a novel approach for state complexity reduction in RL using Self-Organizing Maps (SOM) is presented. With SOM, the convergence rate of RL is improved and system stability in the later stages of learning is increased. The proposed approach is evaluated against a traditional RL approach based on Q-Learning, using a simulated isolated intersection calibrated with realistic traffic data. The presented simulation results demonstrate the effectiveness of the proposed approach with respect to learning stability and traffic measures of effectiveness.
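
The abstract describes compressing the RL state space by mapping the traffic state onto a Self-Organizing Map and using the winning unit as the discrete state for tabular Q-Learning. The sketch below illustrates one plausible way such a pipeline could be wired together; it is not the authors' implementation, and the state features (detector readings), number of SOM units, action count, and hyperparameters are assumptions made for illustration only.

```python
import numpy as np

# Sketch only: SOM-based state aggregation feeding a tabular Q-learning agent.
# A continuous traffic-state vector (e.g., queue lengths per approach) is mapped
# to the index of its best-matching unit (BMU); that index serves as the
# discrete state, reducing the state space to n_units aggregated states.

class SOMStateAggregator:
    def __init__(self, n_units, state_dim, lr=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((n_units, state_dim))  # codebook vectors
        self.lr = lr        # SOM learning rate
        self.sigma = sigma  # neighbourhood width (1-D map for simplicity)

    def bmu(self, x):
        # Best-matching unit: codebook vector closest to the observation
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def update(self, x):
        # Pull the BMU and its neighbours towards the observation
        b = self.bmu(x)
        dist = np.abs(np.arange(len(self.weights)) - b)
        h = np.exp(-(dist ** 2) / (2 * self.sigma ** 2))  # neighbourhood weights
        self.weights += self.lr * h[:, None] * (x - self.weights)
        return b

# Tabular Q-learning over the SOM-compressed state space (values assumed)
n_units, n_actions = 50, 4                        # e.g., 4 signal-program choices
som = SOMStateAggregator(n_units, state_dim=8)    # e.g., 8 detector readings
Q = np.zeros((n_units, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def choose_action(raw_state):
    """Epsilon-greedy action selection on the SOM-aggregated state."""
    s = som.bmu(np.asarray(raw_state, dtype=float))
    if np.random.random() < eps:
        return np.random.randint(n_actions)  # explore
    return int(np.argmax(Q[s]))              # exploit

def learn(raw_state, action, reward, raw_next_state):
    """One Q-learning update using SOM unit indices as discrete states."""
    s = som.update(np.asarray(raw_state, dtype=float))
    s_next = som.bmu(np.asarray(raw_next_state, dtype=float))
    td_target = reward + gamma * Q[s_next].max()
    Q[s, action] += alpha * (td_target - Q[s, action])
```

Because many similar raw traffic states fall into the same SOM unit, each Q-table entry is updated more often, which is one way the faster convergence claimed in the abstract could arise; the exact reward definition and SOM training schedule used in the paper are not given here.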