Decision-making models on perceptual uncertainty with distributional reinforcement learning

Shuyuan Xu, Qiao Liu, Yuhui Hu, Mengtian Xu, Jiachen Hao
Journal: Green Energy and Intelligent Transportation, Vol. 2, Issue 2, Article 100062
DOI: 10.1016/j.geits.2022.100062
Published: 2023-04-01 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S2773153722000627
Citations: 3

Abstract


Decision-making for autonomous vehicles in the presence of obstacle occlusions is difficult because the lack of accurate information degrades judgment. Existing methods may produce overly conservative strategies or time-consuming computations that trade poorly against efficiency. We propose using distributional reinforcement learning to hedge the risk of strategies, optimize the worst cases, and improve algorithmic efficiency so that the agent learns better actions. A batch of the smaller quantile values replaces the overall mean to optimize the worst cases; combined with frame stacking, we call the resulting model the Efficient Fully parameterized Quantile Function (E-FQF). Evaluated on signal-free intersection crossing scenarios with perceptual occlusion, the model makes more efficient maneuvers and reduces the collision rate compared with conventional reinforcement learning algorithms. The model is also more robust to data loss than a method with embedded long short-term memory (LSTM).
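The abstract's core idea — replacing the mean over all return quantiles with the mean of a batch of the smaller quantile values — is a CVaR-style, risk-averse action-selection rule. The paper's actual E-FQF architecture is not detailed in the abstract, so the sketch below only illustrates that selection rule on hypothetical quantile estimates; the function name `risk_averse_action`, the array `q`, and the tail size `k` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def risk_averse_action(quantile_values: np.ndarray, k: int) -> int:
    """Select the action whose k smallest return quantiles have the
    highest mean (a risk-averse, lower-tail rule), instead of the
    full mean over all quantiles."""
    sorted_q = np.sort(quantile_values, axis=1)   # ascending per action
    lower_tail = sorted_q[:, :k].mean(axis=1)     # mean of the k worst outcomes
    return int(np.argmax(lower_tail))

# Hypothetical quantile estimates for two actions (4 quantiles each).
# Action 0 has the higher overall mean but a heavy lower tail;
# the risk-averse rule prefers the safer action 1.
q = np.array([[-10.0, 8.0, 9.0, 12.0],    # mean 4.75, worst-2 mean -1.0
              [  2.0, 3.0,  4.0,  5.0]])  # mean 3.50, worst-2 mean  2.5
print(risk_averse_action(q, k=2))       # 1 (risk-averse choice)
print(int(np.argmax(q.mean(axis=1))))   # 0 (mean-maximizing choice)
```

This shows why such a rule can hedge against rare but catastrophic outcomes (e.g. collisions behind an occlusion): a small probability mass of very bad returns drags down the lower-tail average even when the overall mean looks attractive.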

Source journal metrics: CiteScore 6.40; self-citation rate 0.00%.