Title: Reinforcement learning as a control layer for electric vehicle interaction with multi-energy systems: A comprehensive review
Author: Anis ur Rehman
Journal: Renewable and Sustainable Energy Reviews, Volume 231, Article 116733 (Q1, Energy & Fuels)
DOI: 10.1016/j.rser.2026.116733
Publication date: 2026-05-01 (online 2026-01-23)
URL: https://www.sciencedirect.com/science/article/pii/S1364032126000328
Citations: 0
Abstract
The shift toward sustainable transport and renewable energy has transformed electric vehicles (EVs) from passive loads into active components within integrated energy systems. Their interaction with batteries, charging networks, renewables, and grid services introduces complex uncertainties that conventional methods struggle to manage. In response, reinforcement learning (RL) is emerging as a powerful adaptive control approach, and this review surveys current peer-reviewed research on its applications within the evolving energy-mobility ecosystem. It systematically examines: (i) EV powertrains and on-board energy management, (ii) hybrid energy storage systems combining batteries and supercapacitors, (iii) charging infrastructure, including fast-charging hubs and battery swapping stations, (iv) vehicle-to-grid operations, (v) fleet-level scheduling and mobility services, (vi) microgrids and distributed energy systems, (vii) renewable energy integration, and (viii) resilience and stability of coupled multi-energy systems. The review identifies persistent challenges, including reliance on simplified models, limited hardware-in-the-loop or real-vehicle validation, the computational intensity of deep RL, sensitivity to reward design, and safety risks in real-world deployment. To address these gaps, the review outlines future research directions, including physics-informed and degradation-aware RL, hybrid RL-optimization for scalable decision-making, federated and multi-agent learning for large-scale coordination, and uncertainty-aware, explainable policies. It also proposes cross-domain reward functions to capture battery degradation and thermal dynamics, and emphasizes the urgent need for hardware validation to bridge simulation and real-world application.
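To make the kind of RL-based charging control surveyed here concrete, the sketch below trains a tabular Q-learning agent on a toy price-responsive charging problem. Everything in it (horizon, prices, penalty weight, state encoding) is an illustrative assumption of this summary, not a model from the review itself; real studies cited by the review use far richer state spaces and deep RL.

```python
import random

# Toy EV charging MDP: over T hourly slots, choose to charge (1) or idle (0).
# Goal: reach TARGET_SOC energy units by departure at minimum electricity cost.
# All parameters below are hypothetical, chosen only to make the example run.
T = 6                        # scheduling horizon (hours)
TARGET_SOC = 3               # energy units required at departure
PRICES = [5, 1, 4, 1, 5, 1]  # hypothetical hourly prices

def step(t, soc, action):
    """Apply one charging decision; return (next_t, next_soc, reward)."""
    cost = PRICES[t] * action
    soc = min(soc + action, TARGET_SOC)
    t += 1
    reward = -cost
    if t == T:                               # departure: penalize missed target
        reward -= 10 * (TARGET_SOC - soc)
    return t, soc, reward

# Tabular Q-learning over (time, state-of-charge) states.
Q = {}
alpha, gamma, eps = 0.5, 1.0, 0.2
random.seed(0)
for _ in range(5000):
    t, soc = 0, 0
    while t < T:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: Q.get((t, soc, x), 0.0))
        nt, nsoc, r = step(t, soc, a)
        best_next = 0.0 if nt == T else max(
            Q.get((nt, nsoc, x), 0.0) for x in (0, 1))
        key = (t, soc, a)
        Q[key] = Q.get(key, 0.0) + alpha * (r + gamma * best_next - Q.get(key, 0.0))
        t, soc = nt, nsoc

# Greedy rollout of the learned policy.
t, soc, total_cost, policy = 0, 0, 0, []
while t < T:
    a = max((0, 1), key=lambda x: Q.get((t, soc, x), 0.0))
    policy.append(a)
    total_cost += PRICES[t] * a
    t, soc, _ = step(t, soc, a)
```

In this deterministic toy problem, the learned policy charges only during the three cheap hours, which is exactly the adaptive, price-aware behavior the review attributes to RL controllers; the challenges it highlights (reward design sensitivity, battery degradation, safety) arise when such rewards and states must capture real vehicle and grid dynamics.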
About the journal:
The mission of Renewable and Sustainable Energy Reviews is to disseminate the most compelling and pertinent critical insights in renewable and sustainable energy, fostering collaboration among the research community, private sector, and policy and decision makers. The journal aims to exchange challenges, solutions, innovative concepts, and technologies, contributing to sustainable development, the transition to a low-carbon future, and the attainment of emissions targets outlined by the United Nations Framework Convention on Climate Change.
Renewable and Sustainable Energy Reviews publishes a diverse range of content, including review papers, original research, case studies, and analyses of new technologies, all featuring a substantial review component such as critique, comparison, or analysis. Introducing a distinctive paper type, Expert Insights, the journal presents commissioned mini-reviews authored by field leaders, addressing topics of significant interest. Case studies undergo consideration only if they showcase the work's applicability to other regions or contribute valuable insights to the broader field of renewable and sustainable energy. Notably, a bibliographic or literature review lacking critical analysis is deemed unsuitable for publication.