Qiang Zhao, Chengwei Xu, Chuan Sun, Yinghua Han
Computers & Electrical Engineering, Volume 126, Article 110473
DOI: 10.1016/j.compeleceng.2025.110473
Published: 2025-06-13
https://www.sciencedirect.com/science/article/pii/S0045790625004161
Smart residential electric vehicle charging and discharging scheduling via multi-agent asynchronous-updating deep reinforcement learning
Despite the increasing penetration of electric vehicles (EVs), significant effort is still required to transition towards a low-carbon future while balancing economics and power-system stability. Coordinating EV charging and discharging scheduling in residential areas is challenging due to uncertainties in EV owners' commuting behavior, complex energy demands, and unpredictable power information. This paper formulates Residential EV Charging and Discharging Scheduling (REV-CDS) as a Markov Decision Process with an unknown transition function, and proposes a Multi-Agent Asynchronous Soft Actor-Critic (MAASAC) algorithm to solve it. The proposed method replaces synchronous updating with an asynchronous updating process, which keeps the agents' policy-update directions consistent, enabling them to learn charging and discharging strategies that improve coordination among EVs and align closely with the overall scheduling goals of the REV-CDS environment. Finally, several numerical studies compare the proposed approach with classical multi-agent reinforcement learning methods. The studies demonstrate its effectiveness in minimizing charging costs, reducing carbon emissions, alleviating charging anxiety, and preventing transformer overload.
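The paper's MAASAC implementation is not given here, so the following toy Python sketch only illustrates the general contrast the abstract draws between synchronous updating (every agent updates against the same frozen snapshot of its peers) and asynchronous updating (agents update one at a time, each seeing the peers already updated in this round). `ToyAgent`, `synchronous_round`, and `asynchronous_round` are hypothetical names invented for this illustration; a real agent would perform a soft-actor-critic gradient step where the version counter is incremented.

```python
class ToyAgent:
    """Stand-in for an EV-charging agent; tracks only a policy version."""

    def __init__(self, name):
        self.name = name
        self.policy_version = 0

    def update(self, peer_versions):
        # A real SAC agent would take a gradient step conditioned on peer
        # policies; here we just bump our version and record what we saw.
        self.policy_version += 1
        return dict(peer_versions)


def synchronous_round(agents):
    # All agents update against one shared snapshot, so no agent
    # observes another agent's update from the current round.
    snapshot = {a.name: a.policy_version for a in agents}
    return [a.update(snapshot) for a in agents]


def asynchronous_round(agents):
    # Agents update sequentially; each later agent sees the already-updated
    # policies of earlier agents, which is the consistency property the
    # abstract attributes to asynchronous updating.
    views = []
    for a in agents:
        current = {b.name: b.policy_version for b in agents}
        views.append(a.update(current))
    return views
```

In an asynchronous round over three agents, the last agent's view shows the first agent's policy already at version 1, whereas in a synchronous round every agent still sees version 0 for all peers.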
About the journal:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.