{"title":"基于 DDPG 的可持续路径跟踪,用于城市外场景中的互联自动驾驶电动汽车","authors":"Giacomo Basile;Sara Leccese;Alberto Petrillo;Renato Rizzo;Stefania Santini","doi":"10.1109/TIA.2024.3444733","DOIUrl":null,"url":null,"abstract":"This paper addresses the path-tracking control problem for Connected Autonomous Electric Vehicles (CAEVs) moving in a smart Cooperative Connected Automated Mobility (CCAM) environment, where a smart infrastructure suggests the reference behaviour to achieve. To solve this problem, a novel energy-oriented Deep Deterministic Policy Gradient (DDPG) control strategy, able to guarantee the optimal tracking of the suggested path while minimizing the CAEVs energy consumption, is proposed. To this aim, the power autonomy, the battery state of charge (SOC), the overall power train model -comprehensive of the electric motor equations, inverter dynamics and the battery pack model- is embedded within the training process of the DDPG agent, hence letting the CAEV to travel according to the best sustainable driving policy. The training procedure and the validation phase of the proposed control method is performed via an own-made advanced simulation platform which, combining \n<italic>Matlab & Simulink</i>\n with \n<italic>Python</i>\n environment, allows the virtualization of real driving scenarios. Specifically, the training process confirms the capability of DDPG agent in learning the safe eco-driving policy, while, the numerical validation, tailored for the realistic extra-urban scenario located in Naples, Italy, discloses the capability of the DDPG-based eco-driving controller in solving the appraised CCAM control problem despite presence of external disturbances. 
Finally, a robustness analysis of the proposed strategy in ensuring the ecological path tracking control problem for different CAEV models and driving path scenarios, along with a comparison analysis with respect model-based controls, is provided to better highlights the benefits/advantages of the proposed Deep Reinforcement Learning (DRL) control.","PeriodicalId":13337,"journal":{"name":"IEEE Transactions on Industry Applications","volume":"60 6","pages":"9237-9250"},"PeriodicalIF":4.2000,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sustainable DDPG-Based Path Tracking for Connected Autonomous Electric Vehicles in Extra-Urban Scenarios\",\"authors\":\"Giacomo Basile;Sara Leccese;Alberto Petrillo;Renato Rizzo;Stefania Santini\",\"doi\":\"10.1109/TIA.2024.3444733\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper addresses the path-tracking control problem for Connected Autonomous Electric Vehicles (CAEVs) moving in a smart Cooperative Connected Automated Mobility (CCAM) environment, where a smart infrastructure suggests the reference behaviour to achieve. To solve this problem, a novel energy-oriented Deep Deterministic Policy Gradient (DDPG) control strategy, able to guarantee the optimal tracking of the suggested path while minimizing the CAEVs energy consumption, is proposed. To this aim, the power autonomy, the battery state of charge (SOC), the overall power train model -comprehensive of the electric motor equations, inverter dynamics and the battery pack model- is embedded within the training process of the DDPG agent, hence letting the CAEV to travel according to the best sustainable driving policy. 
The training procedure and the validation phase of the proposed control method is performed via an own-made advanced simulation platform which, combining \\n<italic>Matlab & Simulink</i>\\n with \\n<italic>Python</i>\\n environment, allows the virtualization of real driving scenarios. Specifically, the training process confirms the capability of DDPG agent in learning the safe eco-driving policy, while, the numerical validation, tailored for the realistic extra-urban scenario located in Naples, Italy, discloses the capability of the DDPG-based eco-driving controller in solving the appraised CCAM control problem despite presence of external disturbances. Finally, a robustness analysis of the proposed strategy in ensuring the ecological path tracking control problem for different CAEV models and driving path scenarios, along with a comparison analysis with respect model-based controls, is provided to better highlights the benefits/advantages of the proposed Deep Reinforcement Learning (DRL) control.\",\"PeriodicalId\":13337,\"journal\":{\"name\":\"IEEE Transactions on Industry Applications\",\"volume\":\"60 6\",\"pages\":\"9237-9250\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-08-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Industry Applications\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10638204/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Industry 
Applications","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10638204/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Sustainable DDPG-Based Path Tracking for Connected Autonomous Electric Vehicles in Extra-Urban Scenarios
This paper addresses the path-tracking control problem for Connected Autonomous Electric Vehicles (CAEVs) moving in a smart Cooperative Connected Automated Mobility (CCAM) environment, where a smart infrastructure suggests the reference behaviour to achieve. To solve this problem, a novel energy-oriented Deep Deterministic Policy Gradient (DDPG) control strategy is proposed, able to guarantee optimal tracking of the suggested path while minimizing the CAEV's energy consumption. To this aim, the power autonomy, the battery state of charge (SOC), and the overall powertrain model (comprising the electric motor equations, the inverter dynamics, and the battery pack model) are embedded within the training process of the DDPG agent, letting the CAEV travel according to the best sustainable driving policy. The training procedure and the validation phase of the proposed control method are performed via an in-house advanced simulation platform which, by combining Matlab & Simulink with a Python environment, allows the virtualization of real driving scenarios. Specifically, the training process confirms the capability of the DDPG agent to learn a safe eco-driving policy, while the numerical validation, tailored to a realistic extra-urban scenario located in Naples, Italy, discloses the capability of the DDPG-based eco-driving controller to solve the appraised CCAM control problem despite the presence of external disturbances. Finally, a robustness analysis of the proposed strategy in ensuring ecological path tracking for different CAEV models and driving-path scenarios, along with a comparative analysis with respect to model-based controllers, is provided to better highlight the benefits of the proposed Deep Reinforcement Learning (DRL) control.
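The abstract describes a reward that couples path-tracking accuracy with the vehicle's energy use. As a minimal sketch only, the snippet below shows one plausible shape for such an energy-aware reward: a quadratic penalty on cross-track and heading errors plus a penalty on instantaneous battery power draw. The function name, the specific terms, and the weights are illustrative assumptions, not the reward actually used in the paper.

```python
def eco_tracking_reward(cross_track_err_m, heading_err_rad, battery_power_kw,
                        w_track=1.0, w_head=0.5, w_energy=0.05):
    """Negative cost: smaller tracking errors and lower power draw score higher.

    cross_track_err_m : lateral distance from the suggested path [m]
    heading_err_rad   : heading misalignment with the path tangent [rad]
    battery_power_kw  : instantaneous battery power (positive = discharge) [kW]
    """
    # Quadratic tracking cost keeps the vehicle close to the reference path.
    tracking_cost = w_track * cross_track_err_m**2 + w_head * heading_err_rad**2
    # Penalize only discharge; regenerative braking (negative power) is free.
    energy_cost = w_energy * max(battery_power_kw, 0.0)
    return -(tracking_cost + energy_cost)


r_ideal = eco_tracking_reward(0.0, 0.0, 0.0)    # perfect tracking, no power draw
r_worse = eco_tracking_reward(0.5, 0.1, 20.0)   # off-path while drawing 20 kW
print(r_ideal > r_worse)  # → True
```

In a DDPG setup, a scalar reward of this kind is what the critic learns to predict and the actor learns to maximize; the relative weights trade tracking accuracy against energy saving.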
Journal introduction:
The scope of the IEEE Transactions on Industry Applications includes all scope items of the IEEE Industry Applications Society, that is, the advancement of the theory and practice of electrical and electronic engineering in the development, design, manufacture, and application of electrical systems, apparatus, devices, and controls to the processes and equipment of industry and commerce; the promotion of safe, reliable, and economic installations; industry leadership in energy conservation and environmental, health, and safety issues; the creation of voluntary engineering standards and recommended practices; and the professional development of its membership.