{"title":"Intelligent traffic signal control based on reinforcement learning: a survey","authors":"Hang Xiao, Huale Li, Zhaobin Wang, Zhen Yang, Shuhan Qi, Jiajia Zhang, DingZhong Cai, JiaQi Yin","doi":"10.1007/s10462-026-11530-9","DOIUrl":null,"url":null,"abstract":"<div><p>Rapid urbanization and the surge in vehicle ownership have exacerbated traffic congestion, posing substantial economic, environmental, and social challenges. Traditional traffic signal control methods often struggle to address the dynamic complexities of modern urban traffic, frequently resulting in operational inefficiencies. Reinforcement Learning (RL), with its inherent capacity for real-time learning and adaptation, has emerged as a promising paradigm for optimizing Traffic Signal Control (TSC). RL approaches are particularly well-suited for handling complex traffic states and coordinating global optimization across multiple intersections. Despite notable progress, RL-based systems continue to face significant hurdles, including high computational costs, extensive data requirements, and issues regarding generalizability across diverse traffic scenarios. This paper synthesizes current RL-based models for TSC and highlights recent advancements in the field. It provides a comprehensive review of prominent approaches, categorizes existing studies based on their methodological frameworks, and conducts a technical evaluation of classical RL-based methods to assess their performance across varied traffic conditions. 
Finally, the remaining challenges and potential future directions for RL-based TSC are critically examined.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 5","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2026-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-026-11530-9.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-026-11530-9","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/3/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Rapid urbanization and the surge in vehicle ownership have exacerbated traffic congestion, posing substantial economic, environmental, and social challenges. Traditional traffic signal control methods often struggle to address the dynamic complexities of modern urban traffic, frequently resulting in operational inefficiencies. Reinforcement Learning (RL), with its inherent capacity for real-time learning and adaptation, has emerged as a promising paradigm for optimizing Traffic Signal Control (TSC). RL approaches are particularly well-suited for handling complex traffic states and coordinating global optimization across multiple intersections. Despite notable progress, RL-based systems continue to face significant hurdles, including high computational costs, extensive data requirements, and issues regarding generalizability across diverse traffic scenarios. This paper synthesizes current RL-based models for TSC and highlights recent advancements in the field. It provides a comprehensive review of prominent approaches, categorizes existing studies based on their methodological frameworks, and conducts a technical evaluation of classical RL-based methods to assess their performance across varied traffic conditions. Finally, the remaining challenges and potential future directions for RL-based TSC are critically examined.
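To make the RL-for-TSC setting concrete, the following is a minimal illustrative sketch (not taken from the survey): tabular Q-learning for a single toy intersection, where the state is the pair of queue lengths on the north–south and east–west approaches, the action picks which approach gets the green phase, and the reward penalizes total queued vehicles. The traffic dynamics (`ARRIVAL_P`, `DEPART`, `MAX_Q`) are assumed for illustration only; real RL-based TSC work typically uses a microscopic traffic simulator and far richer state representations.

```python
import random

MAX_Q = 5        # queues clipped to [0, MAX_Q] to keep the Q-table small
ARRIVAL_P = 0.4  # assumed per-step probability of a new car on each approach
DEPART = 2       # assumed cars released per step on the approach with green

def step(state, action, rng):
    """Advance the toy traffic model one signal interval.

    action 0 = green for north-south, action 1 = green for east-west.
    Reward is the negative total queue length after the interval.
    """
    ns, ew = state
    if action == 0:
        ns = max(0, ns - DEPART)
    else:
        ew = max(0, ew - DEPART)
    ns = min(MAX_Q, ns + (1 if rng.random() < ARRIVAL_P else 0))
    ew = min(MAX_Q, ew + (1 if rng.random() < ARRIVAL_P else 0))
    return (ns, ew), -(ns + ew)

def train(episodes=1000, steps=50, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy intersection."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> value, default 0.0
    for _ in range(episodes):
        state = (rng.randint(0, MAX_Q), rng.randint(0, MAX_Q))
        for _ in range(steps):
            if rng.random() < eps:
                action = rng.randint(0, 1)
            else:
                action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action, rng)
            best_next = max(q.get((nxt, a), 0.0) for a in (0, 1))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

q = train()
# With a saturated NS queue and an empty EW queue, the learned greedy
# policy should tend to give NS the green (action 0).
policy_action = max((0, 1), key=lambda a: q.get(((MAX_Q, 0), a), 0.0))
```

Multi-intersection coordination, highlighted in the abstract, goes beyond this single-agent sketch: each intersection becomes an agent, and approaches must handle non-stationarity introduced by neighboring agents learning simultaneously.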
About the journal
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.