End-to-end autonomous underwater vehicle path following control method based on improved soft actor–critic for deep space exploration

IF 10.4 · CAS Tier 1 (Computer Science) · Q1 Computer Science, Interdisciplinary Applications
Na Dong, Shoufu Liu, Andrew W.H. Ip, Kai Leung Yung, Zhongke Gao, Rongshun Juan, Yanhui Wang
{"title":"End-to-end autonomous underwater vehicle path following control method based on improved soft actor–critic for deep space exploration","authors":"Na Dong ,&nbsp;Shoufu Liu ,&nbsp;Andrew W.H. Ip ,&nbsp;Kai Leung Yung ,&nbsp;Zhongke Gao ,&nbsp;Rongshun Juan ,&nbsp;Yanhui Wang","doi":"10.1016/j.jii.2025.100792","DOIUrl":null,"url":null,"abstract":"<div><div>The vast extraterrestrial ocean is becoming a hotspot for deep space exploration of life in the future. Considering autonomous underwater vehicle (AUV) has a larger range of activities and greater flexibility, it plays an important role in extraterrestrial ocean research. To solve the problems in path following tasks of AUV, such as high training cost and poor exploration ability, an end-to-end AUV path following control method based on an improved soft actor–critic (SAC) algorithm is designed in this paper, leveraging the advancements in deep reinforcement learning (DRL) to enhance performance and efficiency. It uses sensor information to understand the environment and its state to output the policy to complete the adaptive action. Policies that consider long-term effects can be learned through continuous interaction with the environment, which is helpful in improving adaptability and enhancing the robustness of AUV control. A non-policy sampling method is designed to improve the utilization efficiency of experience transitions in the replay buffer, accelerate convergence, and enhance its stability. A reward function on the current position and heading angle of AUV is designed to avoid the situation of sparse reward leading to slow learning or ineffective learning of agents. In the meantime, we use the continuous action space instead of the discrete action space to make the real-time control of the AUV more accurate. Finally, it is tested on the gazebo simulation platform, and the results confirm that reinforcement learning is effective in AUV control, and the method proposed in this paper has faster and better following performance than traditional reinforcement learning methods.</div></div>","PeriodicalId":55975,"journal":{"name":"Journal of Industrial Information Integration","volume":"45 ","pages":"Article 100792"},"PeriodicalIF":10.4000,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Industrial Information Integration","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2452414X25000160","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

The vast extraterrestrial oceans are becoming a focus of future deep space exploration for life. Because an autonomous underwater vehicle (AUV) offers a large operating range and high flexibility, it plays an important role in extraterrestrial ocean research. To address the problems in AUV path following tasks, such as high training cost and poor exploration ability, this paper designs an end-to-end AUV path following control method based on an improved soft actor–critic (SAC) algorithm, leveraging advances in deep reinforcement learning (DRL) to enhance performance and efficiency. The method uses sensor information to perceive the environment and the vehicle's state, and outputs a policy that produces adaptive control actions. Policies that account for long-term effects can be learned through continuous interaction with the environment, which helps improve the adaptability and robustness of AUV control. A non-policy sampling method is designed to improve the utilization of experience transitions in the replay buffer, accelerate convergence, and enhance training stability. A reward function based on the AUV's current position and heading angle is designed to avoid sparse rewards that would cause slow or ineffective learning. Meanwhile, a continuous action space is used instead of a discrete one to make real-time control of the AUV more accurate. Finally, the method is tested on the Gazebo simulation platform, and the results confirm that reinforcement learning is effective for AUV control and that the proposed method achieves faster and better path following performance than traditional reinforcement learning methods.
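The abstract does not specify how the non-policy sampling method weights transitions, so the following is only a generic sketch of non-uniform replay sampling (in the spirit of prioritized experience replay), not the paper's exact rule; the buffer capacity and priority formula are illustrative assumptions.

```python
import random
from collections import deque

class WeightedReplayBuffer:
    """Illustrative non-uniform replay buffer: transitions with larger
    TD error are sampled more often. A sketch only, not the paper's
    exact non-policy sampling method."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # stores (transition, priority)

    def push(self, transition, td_error=1.0):
        # Priority grows with the TD error; the epsilon keeps it nonzero.
        self.buffer.append((transition, abs(td_error) + 1e-6))

    def sample(self, batch_size):
        transitions, priorities = zip(*self.buffer)
        # Draw indices proportionally to priority instead of uniformly,
        # so informative transitions are replayed more often.
        return random.choices(transitions, weights=priorities, k=batch_size)
```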
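A minimal sketch of a dense shaped reward of the kind the abstract describes, combining the AUV's position error relative to the reference path with its heading-angle error. The weights `w_pos` and `w_head` and the exact error definitions are hypothetical tuning choices, not the paper's formulation.

```python
import numpy as np

def path_following_reward(position, heading, target_point, path_tangent,
                          w_pos=1.0, w_head=0.5):
    """Illustrative dense reward from position and heading errors.

    position     : np.array([x, y]), current AUV position
    heading      : current yaw angle in radians
    target_point : np.array([x, y]), nearest point on the reference path
    path_tangent : desired course angle at that point, in radians
    """
    # Cross-track error: distance from the AUV to the reference path.
    cross_track_error = np.linalg.norm(position - target_point)

    # Heading error wrapped to [-pi, pi) so the penalty stays continuous.
    heading_error = (heading - path_tangent + np.pi) % (2 * np.pi) - np.pi

    # Dense negative reward: closer and better aligned means higher reward,
    # avoiding the sparse-reward problem the abstract mentions.
    return -(w_pos * cross_track_error + w_head * abs(heading_error))
```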
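On the continuous action space: standard SAC parameterizes the actor as a tanh-squashed Gaussian, which is presumably what allows continuous real-time control here. A minimal sketch of that sampling step follows; `max_thrust` is an assumed actuator bound, not a value from the paper.

```python
import torch

def sample_continuous_action(mean, log_std, max_thrust=1.0):
    """SAC-style continuous action: sample from a Gaussian, squash with
    tanh, and scale to actuator limits. A generic sketch of the standard
    SAC actor output, with max_thrust as an illustrative bound."""
    std = log_std.exp()
    raw = mean + std * torch.randn_like(mean)  # reparameterized sample
    return max_thrust * torch.tanh(raw)        # bounded continuous action
```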
Source journal
Journal of Industrial Information Integration (Decision Sciences: Information Systems and Management)
CiteScore: 22.30
Self-citation rate: 13.40%
Annual publications: 100
Journal introduction: The Journal of Industrial Information Integration focuses on industry's transition towards industrial integration and informatization, covering not only hardware and software but also information integration. It serves as an interdisciplinary forum for researchers, practitioners, and policy makers, promoting advances in industrial information integration and addressing its challenges, issues, and solutions. The journal welcomes papers on the foundational, technical, and practical aspects of industrial information integration, emphasizing the complex, cross-disciplinary topics that arise in industrial integration. Techniques from mathematical science, computer science, computer engineering, electrical and electronic engineering, manufacturing engineering, and engineering management are crucial in this context.