Integrating cyber-physical systems with embedding technology for controlling autonomous vehicle driving.

IF 3.5 · CAS Tier 4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
PeerJ Computer Science Pub Date : 2025-06-10 eCollection Date: 2025-01-01 DOI:10.7717/peerj-cs.2823
Manal Abdullah Alohali, Hamed Alqahtani, Abdulbasit Darem, Monir Abdullah, Yunyoung Nam, Mohamed Abouhawwash
Citations: 0

Abstract

Cyber-physical systems (CPSs) in autonomous vehicles must handle highly dynamic and uncertain settings, where unanticipated obstacles, shifting traffic conditions, and environmental changes all pose substantial decision-making challenges. Deep reinforcement learning (DRL) has emerged as a strong tool for dealing with such uncertainty, yet current DRL models struggle to ensure safety and optimal behaviour in indeterminate settings because dynamic reward systems are difficult to model. To address these limitations, this study incorporates double deep Q-networks (DDQN) to improve the agent's adaptability under uncertain driving conditions. A structured reward system is established to accommodate real-time fluctuations, resulting in safer and more efficient decision-making. The study acknowledges the technological limitations of automotive CPSs and, in addition to algorithmic enhancements, investigates hardware acceleration as a potential remedy. Because of their post-manufacturing adaptability, parallel processing capabilities, and reconfigurability, field-programmable gate arrays (FPGAs) are used to execute reinforcement learning in real time. The suggested method is thoroughly tested in the TORCS Racing Simulator using essential metrics, including collision rate, behaviour similarity, travel distance, speed control, total rewards, and timesteps. The findings show that combining FPGA-based hardware acceleration with DDQN successfully improves computational efficiency and decision-making reliability, tackling significant issues caused by uncertainty in autonomous driving CPSs. In addition to advancing reinforcement learning applications in CPSs, this work opens up possibilities for future investigations into real-world generalisation, adaptive reward mechanisms, and scalable hardware implementations to further reduce uncertainty in autonomous systems.
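The abstract names two algorithmic ingredients: a double deep Q-network (DDQN), which decouples action selection from action evaluation to reduce overestimation, and a structured reward that balances speed, lane keeping, and collision avoidance. A minimal sketch of both is below; the reward weights, state fields, and array shapes are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def ddqn_targets(rewards, dones, next_q_online, next_q_target, gamma=0.99):
    """Double DQN target: the ONLINE network selects the greedy next action,
    the TARGET network evaluates it. This decoupling is what distinguishes
    DDQN from vanilla DQN and curbs Q-value overestimation.

    rewards, dones      : shape (batch,)
    next_q_online/target: shape (batch, n_actions)
    """
    best_actions = np.argmax(next_q_online, axis=1)            # select
    evaluated = next_q_target[np.arange(len(best_actions)),    # evaluate
                              best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated

def driving_reward(speed, track_pos, collided,
                   w_speed=0.05, w_center=1.0, crash_penalty=10.0):
    """Illustrative structured driving reward (weights are assumptions):
    reward forward speed, penalise drifting from the lane centre
    (track_pos in [-1, 1], 0 = centreline), and penalise collisions hard.
    """
    if collided:
        return -crash_penalty
    return w_speed * speed - w_center * abs(track_pos)
```

In a full agent, `ddqn_targets` would feed the temporal-difference loss against the online network's Q-values, and `driving_reward` would be computed each simulator step from TORCS-style observations.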

Source journal: PeerJ Computer Science (Computer Science: General Computer Science)
CiteScore: 6.10
Self-citation rate: 5.30%
Articles per year: 332
Review time: 10 weeks
Journal description: PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.