A DQN based approach for large-scale EVs charging scheduling
Yingnan Han, Tianyang Li, Qingzhu Wang
Complex & Intelligent Systems, published 2024-08-21. DOI: 10.1007/s40747-024-01587-w
This paper addresses the challenge of large-scale electric vehicle (EV) charging scheduling during peak demand periods, such as holidays or rush hours. The rapid growth of the EV industry has exposed the shortcomings of current scheduling plans, which struggle to manage surging large-scale charging demand effectively and thus strain EV charging management systems. Deep reinforcement learning, known for its effectiveness on complex decision-making problems, holds promise for addressing this issue. To this end, we formulate the problem as a Markov decision process (MDP) and propose a deep Q-network (DQN) based algorithm that improves EV charging service quality while minimizing the average queueing time of EVs and the average idling time of charging devices (CDs). In our methodology, we design two types of states to encompass global scheduling information and two types of rewards to reflect scheduling performance. Based on this design, we develop three modules: a fine-grained feature extraction module for effectively extracting state features, an improved noise-based exploration module for thorough exploration of the solution space, and a dueling block for enhancing Q-value evaluation. To assess the effectiveness of our proposal, we conduct three case studies within a complex urban scenario featuring 34 charging stations and 899 scheduled EVs. The results demonstrate the advantages of our proposal: it locates better solutions than current methods in the literature, and it efficiently generates feasible charging scheduling plans for large-scale EV fleets. The code and data are available at https://github.com/paperscodeyouneed/A-Noisy-Dueling-Architecture-for-Large-Scale-EV-ChargingScheduling/tree/main/EV%20Charging%20Scheduling.
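The two architectural ideas named in the abstract can be illustrated briefly. A dueling block splits Q-value estimation into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a); noise-based exploration perturbs layer weights instead of using epsilon-greedy action selection. The NumPy sketch below is not the authors' implementation — all layer sizes, weight names, and the simple Gaussian weight noise are hypothetical, chosen only to show the two mechanisms:

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(features, w_v, w_a):
    """Dueling combination: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage makes the V/A decomposition
    identifiable, which stabilizes Q-value evaluation.
    """
    v = features @ w_v          # state value, shape (1,)
    a = features @ w_a          # per-action advantages, shape (n_actions,)
    return v + a - a.mean()     # broadcasts to shape (n_actions,)

def noisy_linear(x, mu_w, sigma_w, rng):
    """Noisy layer in the NoisyNet style: sample a weight perturbation
    per forward pass, so exploration comes from the parameters
    themselves rather than from epsilon-greedy action choice."""
    eps = rng.standard_normal(mu_w.shape)
    return x @ (mu_w + sigma_w * eps)

# Hypothetical sizes: 8 extracted state features, 4 candidate
# charging stations (actions) for the EV being scheduled.
features = rng.standard_normal(8)
hidden = noisy_linear(features, mu_w=rng.standard_normal((8, 8)),
                      sigma_w=0.1 * np.ones((8, 8)), rng=rng)
w_v = rng.standard_normal((8, 1))
w_a = rng.standard_normal((8, 4))
q = dueling_q(hidden, w_v, w_a)
print(q.shape)  # one Q-value per candidate station: (4,)
```

In a full agent these pieces would sit at the head of the Q-network; the greedy action is then `q.argmax()`, with exploration supplied entirely by the weight noise.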
About the journal:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques aimed at cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.