Risk-aware Energy Management of Extended Range Electric Delivery Vehicles with Implicit Quantile Network

Pengyue Wang, Yan Li, S. Shekhar, W. Northrop

2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), August 2020. DOI: 10.1109/CASE48305.2020.9216797
Model-free reinforcement learning (RL) algorithms are used to solve sequential decision-making problems under uncertainty. They are data-driven methods and do not require an explicit model of the studied system or environment. Because of this characteristic, they are widely utilized in Intelligent Transportation Systems (ITS), as real-world transportation systems are highly complex and extremely difficult to model. However, in most of the literature, decisions are made according to the expected long-term return estimated by the RL algorithm, ignoring the underlying risk. In this work, a distributional RL algorithm called the implicit quantile network (IQN) is adapted to the energy management problem of a delivery vehicle. Instead of estimating only the expected long-term return, the full return distribution is estimated implicitly. This is highly beneficial for applications in ITS, as uncertainty and randomness are intrinsic characteristics of transportation systems. In addition, risk-aware strategies are integrated into the algorithm using conditional value at risk (CVaR) as the risk measure. In this study, we demonstrate that by varying a single hyperparameter, the trade-off between fuel efficiency and the risk of running out of battery power during a delivery trip can be controlled according to different application scenarios and personal preferences.
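To illustrate the general idea behind CVaR-based action selection in distributional RL (a minimal sketch, not the paper's implementation): given sampled return quantiles per action, CVaR at level α is the mean of the worst α-fraction of returns, and a risk-aware agent picks the action maximizing that score rather than the plain mean. The function names and toy numbers below are illustrative assumptions.

```python
import numpy as np

def cvar(quantile_samples, alpha):
    """CVaR_alpha: mean of the worst alpha-fraction of sampled returns.

    quantile_samples: 1-D array of sampled return quantiles for one action.
    alpha: risk level in (0, 1]; alpha = 1 recovers the ordinary mean,
    smaller alpha weights the lower (worse) tail more heavily.
    """
    sorted_q = np.sort(quantile_samples)               # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(sorted_q))))    # size of the worst tail
    return sorted_q[:k].mean()

def risk_aware_action(per_action_quantiles, alpha):
    """Pick the action that maximizes CVaR_alpha of its return distribution."""
    scores = [cvar(q, alpha) for q in per_action_quantiles]
    return int(np.argmax(scores))
```

For example, with a "safe" action whose sampled returns are [4, 5, 6] and a "risky" one with [0, 5, 13], alpha = 1 (risk-neutral) prefers the risky action because its mean is higher, while a small alpha prefers the safe action because its worst-case return is better. In the IQN setting, the same effect is typically obtained by restricting the sampled quantile fractions to the lower tail; the alpha here plays the role of the risk-controlling hyperparameter the abstract refers to.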