Developing a Deep Q-Learning and Neural Network Framework for Trajectory Planning

Venkata Satya Rahul Kosuru, Ashwin Kavasseri Venkitaraman

European Journal of Engineering and Technology Research, vol. 7 (2022). Published 2022-12-26. DOI: 10.24018/ejeng.2022.7.6.2944. Citations: 1.

Abstract

With the recent expansion of the self-driving and autonomy field, nearly every vehicle is equipped with some form of driver-assist feature intended to improve driver comfort. Extending these systems to full autonomy is extremely complicated, since it requires planning safe paths in unstable and dynamic environments. Imitation learning and other path-learning techniques lack generalization and safety assurances. Model selection and obstacle avoidance are two difficult issues in autonomous-vehicle research. Thanks to the advent of deep feature representation, Q-learning has evolved into a potent learning framework that can acquire complicated strategies in high-dimensional contexts. This study proposes a deep Q-learning approach that uses experience replay and contextual expertise to address these issues. A path-planning strategy that runs deep Q-learning on a network edge node is proposed to enhance the driving performance of autonomous vehicles in terms of energy consumption. When connected vehicles maintain the recommended speed, the suggested approach simulates the trajectory using a proportional-integral-derivative (PID) controller; employing the PID controller to track the terminal points ensures a smooth trajectory and reduced jerk. The computational findings demonstrate that, in contrast to traditional techniques, the approach can explore a path in an unknown environment with few iterations and a higher average payoff, and can converge more quickly to an ideal strategy.
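The abstract does not specify the authors' network architecture or hyperparameters, but the core mechanism it names, Q-learning with experience replay, can be sketched in a minimal form. The replay buffer, tiny two-layer Q-network, chain environment, and all constants below are illustrative assumptions, not details from the paper:

```python
import random
from collections import deque
import numpy as np

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def push(self, s, a, r, s2, done):
        self.buf.append((s, a, r, s2, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of transitions.
        batch = random.sample(self.buf, batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        return s, a, r, s2, d

    def __len__(self):
        return len(self.buf)

def one_hot(idx, n):
    out = np.zeros((len(idx), n))
    out[np.arange(len(idx)), idx] = 1.0
    return out

class TinyQNet:
    """One-hidden-layer Q-network; update() is one SGD step on the TD error."""
    def __init__(self, n_states, n_actions, hidden=16, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_states, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.lr = lr

    def q(self, s1h):
        h = np.maximum(0.0, s1h @ self.W1)     # ReLU features
        return h, h @ self.W2                  # (features, Q-values per action)

    def update(self, s1h, a, target):
        h, q = self.q(s1h)
        err = q[np.arange(len(a)), a] - target  # TD error, chosen actions only
        g_q = np.zeros_like(q)
        g_q[np.arange(len(a)), a] = err / len(a)
        g_h = g_q @ self.W2.T
        g_h[h <= 0.0] = 0.0                     # ReLU gradient mask
        self.W2 -= self.lr * (h.T @ g_q)
        self.W1 -= self.lr * (s1h.T @ g_h)
        return float(np.mean(err ** 2))

# Demo: a 5-state chain where action 1 (right) from state 3 reaches the
# terminal state 4 with reward 1; all other transitions give reward 0.
random.seed(0)
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9
buf, net = ReplayBuffer(), TinyQNet(N_STATES, N_ACTIONS)
for _ in range(500):                            # collect random experience
    s = random.randrange(N_STATES - 1)
    a = random.randrange(N_ACTIONS)
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r, done = (1.0, True) if s2 == N_STATES - 1 else (0.0, False)
    buf.push(s, a, r, s2, done)
for _ in range(2000):                           # replayed Q-learning updates
    s, a, r, s2, d = buf.sample(32)
    _, q_next = net.q(one_hot(s2, N_STATES))
    target = r + GAMMA * q_next.max(axis=1) * (1.0 - d)
    net.update(one_hot(s, N_STATES), a, target)
```

After training, the greedy policy at state 3 prefers the rewarding action, which is the "higher average payoff in few iterations" behavior the abstract claims for replayed Q-learning, here demonstrated only on a toy problem.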

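The PID tracking step mentioned in the abstract can likewise be sketched as a textbook discrete PID loop driving a toy first-order speed model toward a recommended speed. The gains, time step, and plant model are illustrative assumptions, not values from the paper:

```python
class PID:
    """Discrete proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt            # accumulated error (I term)
        deriv = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt  # error slope (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: vehicle speed v with mild drag, v' = u - 0.1 * v,
# integrated with Euler steps of dt = 0.1 s for 30 s.
pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.1)
v, target = 0.0, 20.0                               # m/s, illustrative
for _ in range(300):
    u = pid.step(target, v)                         # control effort
    v += (u - 0.1 * v) * 0.1                        # plant update
```

The derivative term damps rapid error changes, which is what yields the smooth, low-jerk trajectory the abstract attributes to the PID tracking stage; the integral term removes the steady-state offset so the vehicle settles at the recommended speed.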