{"title":"基于安全强化学习的城市轨道交通列车跟踪间隔控制","authors":"","doi":"10.1016/j.engappai.2024.109226","DOIUrl":null,"url":null,"abstract":"<div><p>In order to solve the problem of controlling the interval between trains in the new train control system, which aims to ensure the safe operation of trains and improve traffic density, the process of managing train speed is treated as a decision-making process. The utilization of Safe Reinforcement Learning is implemented to attain immediate control of the train interval within the train section. Firstly, utilizing vehicle-to-vehicle communication, the train obtains state information about its surroundings. A constrained Markov Decision Process model is created that takes into account the dynamic characteristics of both the leading and tracking trains. Secondly, by integrating the minimal safety distance and the maximum operating efficiency distance, safety and optimality are connected. An augmented Lagrange multiplier method is utilized to design and implement the safe reinforcement learning algorithm. To enhance the convergence speed of the algorithm, a dual-priority system is implemented, classifying and extracting samples based on their varying levels of importance in empirical samples. Ultimately, simulations were performed to examine various train tracking scenarios. The findings demonstrate that, in the same scenarios, this algorithm surpasses both the Lagrange-based deep deterministic policy gradient algorithm and the fixed lambda based deep deterministic policy gradient algorithm. The safety performance has been improved by 30% and 60%, and the optimality performance has been improved by 40% and 30%, respectively. This algorithm, when paired with safety experience prioritized replay, achieves faster convergence compared to the enhanced version. 
In general, this algorithm exhibits robust suitability for train tracking interval control.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Tracking interval control for urban rail trains based on safe reinforcement learning\",\"authors\":\"\",\"doi\":\"10.1016/j.engappai.2024.109226\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In order to solve the problem of controlling the interval between trains in the new train control system, which aims to ensure the safe operation of trains and improve traffic density, the process of managing train speed is treated as a decision-making process. The utilization of Safe Reinforcement Learning is implemented to attain immediate control of the train interval within the train section. Firstly, utilizing vehicle-to-vehicle communication, the train obtains state information about its surroundings. A constrained Markov Decision Process model is created that takes into account the dynamic characteristics of both the leading and tracking trains. Secondly, by integrating the minimal safety distance and the maximum operating efficiency distance, safety and optimality are connected. An augmented Lagrange multiplier method is utilized to design and implement the safe reinforcement learning algorithm. To enhance the convergence speed of the algorithm, a dual-priority system is implemented, classifying and extracting samples based on their varying levels of importance in empirical samples. Ultimately, simulations were performed to examine various train tracking scenarios. 
The findings demonstrate that, in the same scenarios, this algorithm surpasses both the Lagrange-based deep deterministic policy gradient algorithm and the fixed lambda based deep deterministic policy gradient algorithm. The safety performance has been improved by 30% and 60%, and the optimality performance has been improved by 40% and 30%, respectively. This algorithm, when paired with safety experience prioritized replay, achieves faster convergence compared to the enhanced version. In general, this algorithm exhibits robust suitability for train tracking interval control.</p></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197624013848\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624013848","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Tracking interval control for urban rail trains based on safe reinforcement learning
To control the interval between trains in the new train control system, which must both ensure safe train operation and increase traffic density, the management of train speed is treated as a decision-making process, and safe reinforcement learning is applied to achieve real-time control of the tracking interval within a train section. First, using vehicle-to-vehicle communication, the train obtains state information about its surroundings, and a constrained Markov decision process model is built that accounts for the dynamic characteristics of both the leading and the tracking train. Second, by combining the minimum safety distance with the maximum operating-efficiency distance, safety and optimality are linked, and an augmented Lagrange multiplier method is used to design and implement the safe reinforcement learning algorithm. To accelerate convergence, a dual-priority mechanism classifies and samples experiences according to their differing importance in the replay buffer. Finally, simulations of various train tracking scenarios show that, under the same conditions, the proposed algorithm outperforms both the Lagrange-based and the fixed-lambda deep deterministic policy gradient algorithms: safety performance improves by 30% and 60%, and optimality performance by 40% and 30%, respectively. Paired with safety-experience prioritized replay, the algorithm also converges faster than the variant without it. Overall, the algorithm is well suited to train tracking interval control.
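The augmented Lagrange multiplier scheme described in the abstract alternates primal updates of the policy with dual ascent on a safety multiplier. A minimal numeric sketch of that alternation follows, on a scalar stand-in problem rather than the paper's train-dynamics model; all function shapes, constants, and names here are illustrative assumptions, not the authors' implementation:

```python
# Toy augmented-Lagrangian loop for a constrained problem:
# minimise objective(x) subject to constraint(x) <= 0,
# alternating primal gradient descent with dual ascent on the multiplier.
# In safe RL the objective stands in for negative return and the
# constraint for expected safety violation (e.g. minimum-distance breach).

def objective(x):          # stand-in for -J(policy)
    return (x - 2.0) ** 2

def constraint(x):         # stand-in for safety violation; feasible when <= 0
    return x - 1.0

def grad_objective(x):
    return 2.0 * (x - 2.0)

def grad_constraint(x):
    return 1.0

def solve(rho=10.0, lr=0.01, outer=50, inner=200):
    x, lam = 0.0, 0.0
    for _ in range(outer):
        # primal phase: minimise the augmented Lagrangian in x
        for _ in range(inner):
            g = constraint(x)
            # penalty term is active only when lam + rho*g > 0
            penalty = max(0.0, lam + rho * g)
            x -= lr * (grad_objective(x) + penalty * grad_constraint(x))
        # dual phase: raise the multiplier while the constraint is violated
        lam = max(0.0, lam + rho * constraint(x))
    return x, lam

x_star, lam_star = solve()
print(x_star, lam_star)   # converges toward the constrained optimum x = 1
```

The same structure carries over to the actor-critic setting: the inner loop becomes policy-gradient steps on the augmented Lagrangian of return and safety cost, and the outer dual update adapts the multiplier instead of fixing lambda by hand, which is the distinction the abstract draws against the fixed-lambda baseline.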
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.